# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="rolpXbud5axT"
# <a href="https://colab.research.google.com/github/neurorishika/PSST/blob/master/Tutorial/Example%20Implementation%20Locust%20AL/Example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <a href="https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/neurorishika/PSST/master/Tutorial/Example%20Implementation%20Locust%20AL/Example.ipynb" target="_parent"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open in Kaggle"/></a>
# + [markdown] id="Dm8JgotGtYqz"
# ## Day 6: (Example Implementation) Into the Mind of a Locust
#
# Once we have figured out how to implement neurons and networks, we can try to simulate a model of a small part of a Locust's nervous system.
# + cellView="form" id="LJYNtkVYvCAT" colab={"base_uri": "https://localhost:8080/"} outputId="f81c144f-ae3b-4da3-dee1-ec9463104a68"
#@markdown Import required files and code from previous tutorials
# !wget --no-check-certificate \
# "https://raw.githubusercontent.com/neurorishika/PSST/master/Tutorial/Example%20Implementation%20Locust%20AL/tf_integrator.py" \
# -O "tf_integrator.py"
# + [markdown] id="u8uG0NNNtYq1"
# ### A Locust Antennal Lobe
#
# To an insect, the sense of smell is an essential source of information about its surroundings. Whether it is attraction to a potential mate or locating a source of nutrition, odorants are known to trigger a variety of behaviors in insects. Because of their simpler nervous systems and their remarkable ability to detect and track odors, insects are widely used to study olfaction. Most striking is their ability to track odors even in very noisy natural environments. To understand this neurobiology, we look at the dynamics of the networks that process chemosensory information in the insect brain.
#
# Odorants are detected by receptors on olfactory receptor neurons (ORNs), which depolarize in response. The antennal lobe (AL) in the insect brain is considered the primary structure that receives input from the ORNs in the antennae. Since different sets of ORNs are activated by different odors, the consensus is that the coding of odor identity is combinatorial in nature. Inside the antennal lobe, the input from the ORNs is transformed into long-lived dynamic transients and complex interacting signals that are sent on to the mushroom body. Moreover, the network in the locust antennal lobe appears to be a random network rather than a genetically specified network with a fixed topology.
#
# The Locust AL has two types of neurons:
# 1. **Inhibitory Local Interneurons (LNs):** They form GABAergic synapses, i.e., GABAa (fast) and metabotropic GABA (slow). They synapse onto other local interneurons and onto projection neurons. A subset of them receives input from the ORNs.
# 2. **Projection Neurons (PNs):** They form cholinergic synapses, i.e., acetylcholine. They synapse onto local interneurons and also project to the mushroom body outside the AL, acting as the output of the AL. A subset of them also receives input from the ORNs.
#
#
# #### A Model of the Locust AL
#
# The model described here is based on Bazhenov et al. (2001b).
#
# <img src="https://raw.githubusercontent.com/neurorishika/PSST/master/Tutorial/Example%20Implementation%20Locust%20AL/model.png" width="400"/>
#
# * **Total Number of Neurons** = 120
# * **Number of PNs** = 90
# * **Number of LNs** = 30
#
# The connectivity is random with different connection probabilities as described below:
#
# * **Probability(PN to PN)** = 0.0
# * **Probability(PN to LN)** = 0.5
# * **Probability(LN to LN)** = 0.5
# * **Probability(LN to PN)** = 0.5
#
# 33% of all neurons receive input from ORNs.
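The connection probabilities above can be realized as random 0/1 adjacency matrices. A minimal sketch with NumPy (a generic probability parameter `p` is an assumption for illustration; the notebook itself uses a fixed 50% choice later on):

```python
import numpy as np

def random_connectivity(n_pre, n_post, p):
    """Return an (n_post, n_pre) matrix whose entries are 1 with probability p."""
    return np.random.choice([0.0, 1.0], size=(n_post, n_pre), p=[1 - p, p])

# e.g. LN -> PN connections with probability 0.5 (90 PNs, 30 LNs)
ln_to_pn = random_connectivity(30, 90, 0.5)
print(ln_to_pn.shape)
```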
#
# ##### Projection Neurons
#
# The projection neurons follow Hodgkin-Huxley-type dynamics as described below:
#
# $$C_m \frac{dV_{PN}}{dt} = I_{stim} - I_{L} - I_{Na} - I_{K} - I_{A} - I_{KL} - I_{GABA_a} -I_{Ach}$$
#
# It expresses voltage-gated Na$^+$ channels, voltage-gated K$^+$ channels, transient K$^+$ channels, K$^+$ leak channels, and a general leak channel. The acetylcholine current is always zero in this model since PNs do not synapse onto other PNs.
#
# The PN currents and differential equation for dynamics are described below:
# <img src="https://raw.githubusercontent.com/neurorishika/PSST/master/Tutorial/Example%20Implementation%20Locust%20AL/PNeq.png" width="1000"/>
#
# ##### Local Interneurons
#
# The local interneurons follow dynamics somewhat similar to Hodgkin-Huxley, as described below:
#
# $$C_m \frac{dV_{LN}}{dt} = I_{stim} - I_{L} - I_{Ca} - I_{K} - I_{K(Ca)} - I_{KL} - I_{GABA_a} -I_{Ach}$$
#
# Unlike the projection neurons, the local interneurons do not fire the usual sodium/potassium action potentials. They express voltage-gated Ca$^{2+}$ channels, voltage-gated K$^+$ channels, calcium-dependent K$^+$ channels, K$^+$ leak channels, and a general leak channel. Here, the opposing interactions between the K$^+$ and Ca$^{2+}$ channels produce longer, extended action potentials. The dynamics of intracellular Ca$^{2+}$ alter the K(Ca) channel dynamics, allowing for adaptive behavior in these neurons.
#
# The LN currents and differential equation for dynamics are described below:
# <img src="https://raw.githubusercontent.com/neurorishika/PSST/master/Tutorial/Example%20Implementation%20Locust%20AL/LNeq.png" width="1000"/>
#
# ##### Synaptic Dynamics
#
# Finally, the equation of the dynamics of the two types of synapses are given below.
#
# <img src="https://raw.githubusercontent.com/neurorishika/PSST/master/Tutorial/Example%20Implementation%20Locust%20AL/syneq.png" width="1000"/>
#
# #### Importing Nerveflow
#
# Once the Integrator is saved in tf_integrator.py in the same directory as the Notebook, we can start importing the essentials including the integrator.
# + id="4FR_xgwGtYq4" colab={"base_uri": "https://localhost:8080/"} outputId="e6ac048b-88ff-474f-808f-90a5751d49c0"
# For TensorFlow 1.x:
# import tensorflow as tf
# For TensorFlow 2.x, use v1 compatibility mode instead:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import tf_integrator as tf_int
import matplotlib.pyplot as plt
import seaborn as sns
# + [markdown] id="54nRA3GPtYq6"
# #### Defining Simulation Parameters
#
# Now we can start defining the parameters of the model. First, we define the length of the simulation and the time resolution and create the time array t.
# + id="M2C-ovQ3tYq7"
sim_time = 1000 # simulation time (in ms)
sim_res = 0.01 # simulation resolution (in ms)
t = np.arange(0.0, sim_time, sim_res) # time array
# + [markdown] id="ZWMsFJv1tYq8"
# Now we start implementing the details of the network. Since there are two different cell types, they may have different properties/parameters. As you might remember, our parallelization paradigm lets us assign different parameter values to different neurons/synapses by using parameter vectors. To make the information easy to manipulate, it is best to adopt a convention for splitting parameters by cell type.
#
# Here, we follow the convention that the first 90 values of each common parameter vector are for the PN cell type and the remaining 30 are for the LN cell type. Unique parameters are defined wherever necessary.
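To illustrate this convention with the K$^+$ leak conductance values used below: a shared parameter vector is built PN-first, so a simple slice recovers each cell type's values.

```python
p_n, l_n = 90, 30                     # number of PNs and LNs
g_KL = [0.05] * p_n + [0.02] * l_n    # PN values first, LN values after

pn_part = g_KL[:p_n]   # parameters for the 90 PNs
ln_part = g_KL[p_n:]   # parameters for the 30 LNs
```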
# + id="mMmxq2xJtYq9"
# Defining Neuron Counts
n_n = int(120) # number of neurons
p_n = int(90) # number of PNs
l_n = int(30) # number of LNs
C_m = [1.0]*n_n # Capacitance
# Defining Common Current Parameters #
g_K = [10.0]*n_n # K conductance
g_L = [0.15]*n_n # Leak conductance
g_KL = [0.05]*p_n + [0.02]*l_n # K leak conductance (first 90 for PNs and next 30 for LNs)
E_K = [-95.0]*n_n # K Potential
E_L = [-55.0]*p_n + [-50.0]*l_n # Leak Potential (first 90 for PNs and next 30 for LNs)
E_KL = [-95.0]*n_n # K Leak Potential
# Defining Cell Type Specific Current Parameters #
## PNs
g_Na = [100.0]*p_n # Na conductance
g_A = [10.0]*p_n # Transient K conductance
E_Na = [50.0]*p_n # Na Potential
E_A = [-95.0]*p_n # Transient K Potential
## LNs
g_Ca = [3.0]*l_n # Ca conductance
g_KCa = [0.3]*l_n # Ca dependent K conductance
E_Ca = [140.0]*l_n # Ca Potential
E_KCa = [-90]*l_n # Ca dependent K Potential
A_Ca = [2*(10**(-4))]*l_n # Ca outflow rate
Ca0 = [2.4*(10**(-4))]*l_n # Equilibrium Calcium Concentration
t_Ca = [150]*l_n # Ca recovery time constant
## Defining Firing Thresholds ##
F_b = [0.0]*n_n # Fire threshold
## Defining Acetylcholine Synapse Connectivity ##
ach_mat = np.zeros((n_n,n_n)) # Ach Synapse Connectivity Matrix
ach_mat[p_n:,:p_n] = np.random.choice([0.,1.],size=(l_n,p_n)) # 50% probability of PN -> LN
np.fill_diagonal(ach_mat,0.) # Remove all self connection
## Defining Acetylcholine Synapse Parameters ##
n_syn_ach = int(np.sum(ach_mat)) # Number of Acetylcholine (Ach) Synapses
alp_ach = [10.0]*n_syn_ach # Alpha for Ach Synapse
bet_ach = [0.2]*n_syn_ach # Beta for Ach Synapse
t_max = 0.3 # Maximum Time for Synapse
t_delay = 0 # Axonal Transmission Delay
A = [0.5]*n_n # Synaptic Response Strength
g_ach = [0.35]*p_n+[0.3]*l_n # Ach Conductance
E_ach = [0.0]*n_n # Ach Potential
## Defining GABAa Synapse Connectivity ##
fgaba_mat = np.zeros((n_n,n_n)) # GABAa Synapse Connectivity Matrix
fgaba_mat[:,p_n:] = np.random.choice([0.,1.],size=(n_n,l_n)) # 50% probability of LN -> LN/PN
np.fill_diagonal(fgaba_mat,0.) # No self connection
## Defining GABAa Synapse Parameters ##
n_syn_fgaba = int(np.sum(fgaba_mat)) # Number of GABAa (fGABA) Synapses
alp_fgaba = [10.0]*n_syn_fgaba # Alpha for fGABA Synapse
bet_fgaba = [0.16]*n_syn_fgaba # Beta for fGABA Synapse
V0 = [-20.0]*n_n # Decay Potential
sigma = [1.5]*n_n # Decay Time Constant
g_fgaba = [0.8]*p_n+[0.8]*l_n # fGABA Conductance
E_fgaba = [-70.0]*n_n # fGABA Potential
# + [markdown] id="0VoNnGEstYq-"
# #### Visualizing the Connectivity
#
# Now that we have connectivity matrices, we can visualize the connectivity as a heatmap. To do this, we combine the two synaptic connectivity matrices into a single matrix with +1 for excitatory connections and -1 for inhibitory connections, and then use the seaborn library to draw a clean heatmap.
# + id="bp2GktHOtYrA" colab={"base_uri": "https://localhost:8080/", "height": 413} outputId="214de6bc-eb7c-45a4-c5ae-2906ba2b779f"
plt.figure(figsize=(7,6))
sns.heatmap(ach_mat-fgaba_mat, cmap="RdBu")
plt.xlabel("Presynaptic Neuron Number")
plt.ylabel("Postsynaptic Neuron Number")
plt.title("Network Connectivity")
plt.show()
# + [markdown] id="yDYz8NAEtYrC"
# #### Defining the functions to evaluate dynamics in gating variables
#
# Now, we define the functions that evaluate the equilibrium values and time constants of the gating variables for the different channels.
# + id="NdxwrQJDtYrD"
def K_prop(V):
T = 22
phi = 3.0**((T-36.0)/10)
V_ = V-(-50)
alpha_n = 0.02*(15.0 - V_)/(tf.exp((15.0 - V_)/5.0) - 1.0)
beta_n = 0.5*tf.exp((10.0 - V_)/40.0)
t_n = 1.0/((alpha_n+beta_n)*phi)
n_0 = alpha_n/(alpha_n+beta_n)
return n_0, t_n
def Na_prop(V):
T = 22
phi = 3.0**((T-36)/10)
V_ = V-(-50)
alpha_m = 0.32*(13.0 - V_)/(tf.exp((13.0 - V_)/4.0) - 1.0)
beta_m = 0.28*(V_ - 40.0)/(tf.exp((V_ - 40.0)/5.0) - 1.0)
alpha_h = 0.128*tf.exp((17.0 - V_)/18.0)
beta_h = 4.0/(tf.exp((40.0 - V_)/5.0) + 1.0)
t_m = 1.0/((alpha_m+beta_m)*phi)
t_h = 1.0/((alpha_h+beta_h)*phi)
m_0 = alpha_m/(alpha_m+beta_m)
h_0 = alpha_h/(alpha_h+beta_h)
return m_0, t_m, h_0, t_h
def A_prop(V):
T = 36
phi = 3.0**((T-23.5)/10)
m_0 = 1/(1+tf.exp(-(V+60.0)/8.5))
h_0 = 1/(1+tf.exp((V+78.0)/6.0))
tau_m = 1/(tf.exp((V+35.82)/19.69) + tf.exp(-(V+79.69)/12.7) + 0.37) / phi
t1 = 1/(tf.exp((V+46.05)/5.0) + tf.exp(-(V+238.4)/37.45)) / phi
t2 = (19.0/phi) * tf.ones(tf.shape(V),dtype=V.dtype)
tau_h = tf.where(tf.less(V,-63.0),t1,t2)
return m_0, tau_m, h_0, tau_h
def Ca_prop(V):
m_0 = 1/(1+tf.exp(-(V+20.0)/6.5))
h_0 = 1/(1+tf.exp((V+25.0)/12))
tau_m = 1.5
tau_h = 0.3*tf.exp((V-40.0)/13.0) + 0.002*tf.exp((60.0-V)/29)
return m_0, tau_m, h_0, tau_h
def KCa_prop(Ca):
T = 26
phi = 2.3**((T-23.0)/10)
alpha = 0.01*Ca
beta = 0.02
tau = 1/((alpha+beta)*phi)
return alpha*tau*phi, tau
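Each of these functions returns a steady state $x_0$ and a time constant $\tau$, and the corresponding gating variable relaxes as $dx/dt = -(x - x_0)/\tau$, whose closed-form solution is $x(t) = x_0 + (x(0) - x_0)e^{-t/\tau}$. A quick numerical check of that behavior with forward Euler (fixed $x_0$ and $\tau$ chosen purely for illustration):

```python
import math

x0, tau = 0.8, 5.0   # illustrative steady state and time constant (ms)
dt, x = 0.01, 0.0    # step size (ms) and initial value
t = 0.0
while t < 25.0:      # integrate for five time constants
    x += dt * (-(x - x0) / tau)
    t += dt

exact = x0 + (0.0 - x0) * math.exp(-25.0 / tau)
print(abs(x - exact))  # forward Euler stays close to the closed form
```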
# + [markdown] id="JO8i_jXgtYrE"
# #### Defining Channel Currents, Synaptic Potentials, and Input Currents (Injection Current)
#
# Just as for an ordinary Hodgkin-Huxley network, we define the functions that evaluate the various neuronal currents.
# + id="fZA3r8sgtYrE"
# NEURONAL CURRENTS
# Common Currents #
def I_K(V, n):
return g_K * n**4 * (V - E_K)
def I_L(V):
return g_L * (V - E_L)
def I_KL(V):
return g_KL * (V - E_KL)
# PN Currents #
def I_Na(V, m, h):
return g_Na * m**3 * h * (V - E_Na)
def I_A(V, m, h):
return g_A * m**4 * h * (V - E_A)
# LN Currents #
def I_Ca(V, m, h):
return g_Ca * m**2 * h * (V - E_Ca)
def I_KCa(V, m):
T = 26
phi = 2.3**((T-23.0)/10)
return g_KCa * m * phi * (V - E_KCa)
# SYNAPTIC CURRENTS
def I_ach(o,V):
o_ = tf.Variable([0.0]*n_n**2,dtype=tf.float64)
ind = tf.boolean_mask(tf.range(n_n**2),ach_mat.reshape(-1) == 1)
o_ = tf.scatter_update(o_,ind,o)
o_ = tf.transpose(tf.reshape(o_,(n_n,n_n)))
return tf.reduce_sum(tf.transpose((o_*(V-E_ach))*g_ach),1)
def I_fgaba(o,V):
o_ = tf.Variable([0.0]*n_n**2,dtype=tf.float64)
ind = tf.boolean_mask(tf.range(n_n**2),fgaba_mat.reshape(-1) == 1)
o_ = tf.scatter_update(o_,ind,o)
o_ = tf.transpose(tf.reshape(o_,(n_n,n_n)))
return tf.reduce_sum(tf.transpose((o_*(V-E_fgaba))*g_fgaba),1)
# OR for TensorFlow v2.x and above (note: if both versions are run, these definitions shadow the ones above)
def I_ach(o,V):
o_ = tf.zeros([n_n**2],dtype=tf.float64)
ind = tf.reshape(tf.boolean_mask(tf.range(n_n**2),ach_mat.reshape(-1) == 1),[-1,1])
o_ = tf.tensor_scatter_nd_update(o_,ind,o)
o_ = tf.transpose(tf.reshape(o_,(n_n,n_n)))
return tf.reduce_sum(tf.transpose((o_*(V-E_ach))*g_ach),1)
def I_fgaba(o,V):
    o_ = tf.zeros([n_n**2],dtype=tf.float64)
    ind = tf.reshape(tf.boolean_mask(tf.range(n_n**2),fgaba_mat.reshape(-1) == 1),[-1,1])
    o_ = tf.tensor_scatter_nd_update(o_,ind,o)
    o_ = tf.transpose(tf.reshape(o_,(n_n,n_n)))
    return tf.reduce_sum(tf.transpose((o_*(V-E_fgaba))*g_fgaba),1)
# INPUT CURRENTS
def I_inj_t(t):
return tf.constant(current_input.T,dtype=tf.float64)[tf.to_int32(t*100)]
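In `I_inj_t`, the factor of 100 is simply `1/sim_res` for the 0.01 ms resolution used here: it converts a simulation time into an index on the time grid. A small sketch of this mapping (assuming the same `np.arange` grid as above; the time point is arbitrary):

```python
import numpy as np

sim_res = 0.01                       # ms, as defined earlier
t_arr = np.arange(0.0, 1000.0, sim_res)

t = 253.75                           # an arbitrary time point on the grid
idx = int(t * 100)                   # equivalent to int(t / sim_res)
print(t_arr[idx])                    # recovers the same time point
```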
# + [markdown] id="rqKPEPDztYrF"
# #### Putting Together the Dynamics
#
# The construction of the function that evaluates the dynamical equations is similar to the normal HH model, with one small difference: not all neurons share the same dynamical equations; they have some common terms and some unique terms. The way we deal with this is briefly explained below.
#
# $$\frac{d\vec{X_A}}{dt} = f(\vec{X_{A_\alpha}},\vec{X_{A_\beta}}...)+g_A(\vec{X_{A_\gamma}}...)\ \ \ \ \ \ \frac{d\vec{X_B}}{dt} = f(\vec{X_{B_\alpha}},\vec{X_{B_\beta}}...)+g_B(\vec{X_{B_\gamma}}...)$$
# $$\text{To evaluate } \frac{d\vec{X}}{dt} \text{ where } \vec{X}=[\vec{X_A},\vec{X_B}]\text{, find:}$$
#
# $$\Big(\frac{dX_A}{dt}\Big)_{unique}=g_A(\vec{X_{A_\gamma}}..)\ \ \ \ \ \ \ \ \ \ \ \ \ \Big(\frac{dX_B}{dt}\Big)_{unique}=g_B(\vec{X_{B_\gamma}}..)$$
#
# $$\Big(\frac{dX}{dt}\Big)_{unique}=\Big[\Big(\frac{dX_A}{dt}\Big)_{unique},\Big(\frac{dX_B}{dt}\Big)_{unique}\Big]$$
#
#
# $$\frac{dX}{dt}=\Big(\frac{dX}{dt}\Big)_{unique}+f(\vec{X_{\alpha}},\vec{X_{\beta}}...)$$
#
# This way, we can minimize the number of computations required for the simulation, and all the computations are highly parallelizable by TensorFlow. As for the state vector, we follow the convention given below:
#
# * Voltage(PN)[90]
# * Voltage(LN)[30]
# * K-gating(ALL)[120]
# * Na-activation-gating(PN)[90]
# * Na-inactivation-gating(PN)[90]
# * Transient-K-activation-gating(PN)[90]
# * Transient-K-inactivation-gating(PN)[90]
# * Ca-activation-gating(LN)[30]
# * Ca-inactivation-gating(LN)[30]
# * K(Ca)-gating(LN)[30]
# * Ca-concentration(LN)[30]
# * Acetylcholine Open Fraction[n_syn_ach]
# * GABAa Open Fraction[n_syn_fgaba]
# * Fire-times[120]
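The slice boundaries implied by this ordering can be checked arithmetically; in particular, the gating/concentration block ends exactly at index `6*n_n`, where the synaptic block begins. A sketch using the counts from this model (the synapse counts are whatever the random matrices produce, so hypothetical values are used here):

```python
n_n, p_n, l_n = 120, 90, 30
n_syn_ach, n_syn_fgaba = 1350, 2250   # hypothetical synapse counts (the real ones are random)

sizes = [p_n, l_n,            # V(PN), V(LN)
         n_n,                 # K gating (all)
         p_n, p_n, p_n, p_n,  # Na m/h, transient-K m/h (PN)
         l_n, l_n, l_n, l_n,  # Ca m/h, K(Ca) m, [Ca] (LN)
         n_syn_ach, n_syn_fgaba,
         n_n]                 # fire times

total = sum(sizes)
# voltages + gating + concentration add up to 6*n_n, the offset used in dXdt
print(p_n + l_n + n_n + 4 * p_n + 4 * l_n == 6 * n_n)
```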
# + id="IJ1hvNv6tYrG"
def dXdt(X, t): # X is the state vector
V_p = X[0 : p_n] # Voltage(PN)
V_l = X[p_n : n_n] # Voltage(LN)
n_K = X[n_n : 2*n_n] # K-gating(ALL)
m_Na = X[2*n_n : 2*n_n + p_n] # Na-activation-gating(PN)
h_Na = X[2*n_n + p_n : 2*n_n + 2*p_n] # Na-inactivation-gating(PN)
m_A = X[2*n_n + 2*p_n : 2*n_n + 3*p_n] # Transient-K-activation-gating(PN)
h_A = X[2*n_n + 3*p_n : 2*n_n + 4*p_n] # Transient-K-inactivation-gating(PN)
m_Ca = X[2*n_n + 4*p_n : 2*n_n + 4*p_n + l_n] # Ca-activation-gating(LN)
h_Ca = X[2*n_n + 4*p_n + l_n: 2*n_n + 4*p_n + 2*l_n] # Ca-inactivation-gating(LN)
m_KCa = X[2*n_n + 4*p_n + 2*l_n : 2*n_n + 4*p_n + 3*l_n] # K(Ca)-gating(LN)
Ca = X[2*n_n + 4*p_n + 3*l_n: 2*n_n + 4*p_n + 4*l_n] # Ca-concentration(LN)
o_ach = X[6*n_n : 6*n_n + n_syn_ach] # Acetylcholine Open Fraction
o_fgaba = X[6*n_n + n_syn_ach : 6*n_n + n_syn_ach + n_syn_fgaba] # GABAa Open Fraction
fire_t = X[-n_n:] # Fire-times
V = X[:n_n] # Overall Voltage (PN + LN)
# Evaluate Differentials for Gating variables and Ca concentration
n0,tn = K_prop(V)
dn_k = - (1.0/tn)*(n_K-n0)
m0,tm,h0,th = Na_prop(V_p)
dm_Na = - (1.0/tm)*(m_Na-m0)
dh_Na = - (1.0/th)*(h_Na-h0)
m0,tm,h0,th = A_prop(V_p)
dm_A = - (1.0/tm)*(m_A-m0)
dh_A = - (1.0/th)*(h_A-h0)
m0,tm,h0,th = Ca_prop(V_l)
dm_Ca = - (1.0/tm)*(m_Ca-m0)
dh_Ca = - (1.0/th)*(h_Ca-h0)
m0,tm = KCa_prop(Ca)
dm_KCa = - (1.0/tm)*(m_KCa-m0)
dCa = - np.array(A_Ca)*I_Ca(V_l,m_Ca,h_Ca) - (Ca - Ca0)/t_Ca
# Evaluate differential for Voltage
# The dynamical equation for voltage has both unique and common parts.
# Thus, as discussed above, we first evaluate the unique parts of Cm*dV/dT for LNs and PNs.
CmdV_p = - I_Na(V_p, m_Na, h_Na) - I_A(V_p, m_A, h_A)
CmdV_l = - I_Ca(V_l, m_Ca, h_Ca) - I_KCa(V_l, m_KCa)
# Once we have that, we merge the two into a single 120-vector.
CmdV = tf.concat([CmdV_p,CmdV_l],0)
# Finally we add the common currents and divide by Cm to get dV/dt.
dV = (I_inj_t(t) + CmdV - I_K(V, n_K) - I_L(V) - I_KL(V) - I_ach(o_ach,V) - I_fgaba(o_fgaba,V)) / C_m
# Evaluate dynamics in synapses
A_ = tf.constant(A,dtype=tf.float64)
T_ach = tf.where(tf.logical_and(tf.greater(t,fire_t+t_delay),tf.less(t,fire_t+t_max+t_delay)),A_,tf.zeros(tf.shape(A_),dtype=A_.dtype))
T_ach = tf.multiply(tf.constant(ach_mat,dtype=tf.float64),T_ach)
T_ach = tf.boolean_mask(tf.reshape(T_ach,(-1,)),ach_mat.reshape(-1) == 1)
do_achdt = alp_ach*(1.0-o_ach)*T_ach - bet_ach*o_ach
T_fgaba = 1.0/(1.0+tf.exp(-(V-V0)/sigma))
T_fgaba = tf.multiply(tf.constant(fgaba_mat,dtype=tf.float64),T_fgaba)
T_fgaba = tf.boolean_mask(tf.reshape(T_fgaba,(-1,)),fgaba_mat.reshape(-1) == 1)
do_fgabadt = alp_fgaba*(1.0-o_fgaba)*T_fgaba - bet_fgaba*o_fgaba
# Set change in fire-times as zero
dfdt = tf.zeros(tf.shape(fire_t),dtype=fire_t.dtype)
# Combine to a single vector
out = tf.concat([dV, dn_k,
dm_Na, dh_Na,
dm_A, dh_A,
dm_Ca, dh_Ca,
dm_KCa,
dCa, do_achdt,
do_fgabadt, dfdt ],0)
return out
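As a sanity check on the synaptic open-fraction equation $do/dt = \alpha(1-o)T - \beta o$: while the transmitter level $T$ is held fixed, $o$ settles to the steady state $o_\infty = \alpha T/(\alpha T + \beta)$. A quick check with the acetylcholine values used above ($\alpha = 10$, $\beta = 0.2$, $T = A = 0.5$ during firing):

```python
alp, bet = 10.0, 0.2   # alpha and beta for the Ach synapse, as above
T = 0.5                # transmitter level while the presynaptic cell fires (A = 0.5)

o_inf = alp * T / (alp * T + bet)   # setting do/dt = 0 and solving for o
print(round(o_inf, 4))              # close to fully open while T is high
```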
# + [markdown] id="PVA94WUstYrH"
# #### Creating the Current Input and Run the Simulation
#
# We will run a 1000 ms simulation in which we give input to a randomly chosen 33% of the neurons. The input rises in a saturating fashion toward 10 between 100 ms and 600 ms, and decays exponentially to 0 after 600 ms.
# + id="pw75DOE_tYrH"
current_input = np.zeros((n_n,t.shape[0]))
# Create the input shape
y = np.where(t<600,(1-np.exp(-(t-100)/75)),0.9996*np.exp(-(t-600)/150))
y = np.where(t<100,np.zeros(t.shape),y)
# Randomly choose 33% indices from 120
p_input = 0.33
input_neurons = np.random.choice(np.array(range(n_n)),int(p_input*n_n),replace=False)
# Assign input shape to chosen indices
current_input[input_neurons,:]= 10*y
# + [markdown] id="EvI85GvYtYrH"
# #### Create the initial state vector and add some jitter to break symmetry
#
# We create the initial state vector, initializing the voltage to $-70\ mV$, the gating variables and open fractions to $0.0$, and the calcium concentration to $2.4\times10^{-4}$, and then add 1% Gaussian noise to break the symmetry.
# + id="UdyPHD7LtYrH"
state_vector = [-70]* n_n + [0.0]* (n_n + 4*p_n + 3*l_n) + [2.4*(10**(-4))]*l_n + [0]*(n_syn_ach+n_syn_fgaba) + [-(sim_time+1)]*n_n
state_vector = np.array(state_vector)
state_vector = state_vector + 0.01*state_vector*np.random.normal(size=state_vector.shape)
init_state = tf.constant(state_vector, dtype=tf.float64)
# + [markdown] id="rBokuhpstYrI"
# #### Running the simulation and Interpreting the Output
# + id="5GPgv5OltYrI" colab={"base_uri": "https://localhost:8080/"} outputId="0391e0bb-c97e-44f7-c1a0-10242de9cb79"
state = tf_int.odeint(dXdt, init_state, t, n_n, F_b)
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    state = sess.run(state)  # the with-block closes the session automatically
# + [markdown] id="acMNvxDbtYrJ"
# To visualize the data, we plot the voltage traces of the PNs (neurons 0 to 89) and LNs (neurons 90 to 119) as voltage-vs-time heatmaps; the peaks in the data are the action potentials.
# + id="IGdYkVrDtYrJ" colab={"base_uri": "https://localhost:8080/", "height": 441} outputId="8d5d5127-0c29-40f4-fa2c-06c682d388cf"
plt.figure(figsize=(12,6))
sns.heatmap(state[::100,:90].T,xticklabels=100,yticklabels=5,cmap='Greys')
plt.xlabel("Time (in ms)")
plt.ylabel("Projection Neuron Number")
plt.title("Voltage vs Time Heatmap for Projection Neurons (PNs)")
plt.tight_layout()
plt.show()
# + id="Psi6nmqStYrK" colab={"base_uri": "https://localhost:8080/", "height": 441} outputId="ed8f55cf-51d2-435f-e347-c5e808ec219b"
plt.figure(figsize=(12,6))
sns.heatmap(state[::100,90:120].T,xticklabels=100,yticklabels=5,cmap='Greys')
plt.xlabel("Time (in ms)")
plt.ylabel("Local Interneuron Number")
plt.title("Voltage vs Time Heatmap for Local Interneurons (LNs)")
plt.tight_layout()
plt.show()
# + [markdown] id="yhq1s5yItYrK"
# Thus, we are capable of building models of complex, realistic networks of neurons found in the brains of living organisms. As an **exercise**, implement the batch and caller-runner system for this example, as taught in Day 5. Use it to simulate successive box-shaped inputs to one third of the neurons over a period of 2 seconds. Also try introducing variability in the parameters of the neurons.
# Source: Tutorial/Example Implementation Locust AL/Example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python3.6
# language: python
# name: python3.6
# ---
# # Eventalign_collapse usage
# ## Bash command line usage
# + [markdown] heading_collapsed=true
# #### From an existing nanopolish event align output to another file
# + hidden=true
NanopolishComp Eventalign_collapse -i ./data/nanopolish_reads.tsv -o ./output/nanopolish_collapsed_reads.tsv
tail ./output/nanopolish_collapsed_reads.tsv
# + [markdown] heading_collapsed=true
# #### From an existing nanopolish event align output to the standard output
# + hidden=true
NanopolishComp Eventalign_collapse -i ./data/nanopolish_reads_samples.tsv | tail
# + [markdown] heading_collapsed=true
# #### Directly from nanopolish to the standard output.
# + hidden=true
nanopolish eventalign -t 1 --samples --scale-events --reads ./data/reads.fastq --bam ./data/aligned_reads.bam --genome ./data/reference.fa --signal-index | NanopolishComp Eventalign_collapse | tail
# + [markdown] heading_collapsed=true
# #### Directly from nanopolish to a file
# + hidden=true
nanopolish eventalign -t 1 --samples --scale-events --reads ./data/reads.fastq --bam ./data/aligned_reads.bam --genome ./data/reference.fa --signal-index | NanopolishComp Eventalign_collapse > output/collapsed_nanopolish.tsv
# -
# ## Python API usage
# + [markdown] heading_collapsed=true
# #### Import the package
# + hidden=true
from NanopolishComp.Eventalign_collapse import Eventalign_collapse
# + [markdown] heading_collapsed=true
# #### Example including sample writing
# + hidden=true
Eventalign_collapse(input_fn="./data/nanopolish_reads_samples.tsv", output_fn="./output/collapsed_nanopolish.tsv", write_samples=True, verbose=True)
# ! head output/collapsed_nanopolish.tsv
# + [markdown] heading_collapsed=true
# #### Example without sample writing
# + hidden=true
Eventalign_collapse(input_fn="./data/nanopolish_reads_samples.tsv", output_fn="./output/collapsed_nanopolish.tsv", verbose=True)
# ! head output/collapsed_nanopolish.tsv -n 15
# + [markdown] heading_collapsed=true
# #### Minimal
# + hidden=true
Eventalign_collapse(input_fn="./data/nanopolish_reads.tsv", output_fn="./output/collapsed_nanopolish.tsv", verbose=True)
# ! head output/collapsed_nanopolish.tsv -n 15
# Source: tests/NanopolishComp_usage.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# -*- coding: utf-8 -*-
import sys
import chseg
import numpy as np
import tensorflow as tf
from collections import Counter
from cnn_seq_label import Tagger
from sklearn.metrics import classification_report
SEQ_LEN = 50
N_CLASS = 4 # B: 0, M: 1, E: 2, S: 3
N_EPOCH = 1
BATCH_SIZE = 128
sample = '我来到大学读书,希望学到知识'
py = int(sys.version[0])
def to_train_seq(*args):
data = []
for x in args:
data.append(iter_seq(x))
return data
def to_test_seq(*args):
data = []
for x in args:
x = x[: (len(x) - len(x) % SEQ_LEN)]
data.append(np.reshape(x, [-1, SEQ_LEN]))
return data
def iter_seq(x, text_iter_step=10):
return np.array([x[i : i+SEQ_LEN] for i in range(0, len(x)-SEQ_LEN, text_iter_step)])
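`iter_seq` slides a window of length `SEQ_LEN` over the corpus with a stride of `text_iter_step`, so consecutive training sequences overlap. A scaled-down sketch of the same construction (window 5, stride 2, and toy data are assumptions for illustration):

```python
import numpy as np

def windows(x, seq_len=5, step=2):
    # same construction as iter_seq, with small toy sizes
    return np.array([x[i: i + seq_len] for i in range(0, len(x) - seq_len, step)])

toy = list(range(20))
w = windows(toy)
print(w.shape)   # each row is one length-5 training sequence
```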
if __name__ == '__main__':
x_train, y_train, x_test, y_test, vocab_size, word2idx, idx2word = chseg.load_data()
X_train, Y_train = to_train_seq(x_train, y_train)
X_test, Y_test = to_test_seq(x_test, y_test)
print('Vocab size: %d' % vocab_size)
clf = Tagger(vocab_size, N_CLASS, SEQ_LEN)
clf.fit(X_train, Y_train, n_epoch=N_EPOCH, batch_size=BATCH_SIZE)
y_pred = clf.predict(X_test, batch_size=BATCH_SIZE)
print(classification_report(Y_test.ravel(), y_pred.ravel(), target_names=['B', 'M', 'E', 'S']))
chars = list(sample) if py == 3 else list(sample.decode('utf-8'))
_test = [word2idx[w] for w in sample] + [0] * (SEQ_LEN-len(sample))
labels = clf.infer(_test, len(sample))
labels = labels[:len(sample)]
res = ''
for i, l in enumerate(labels):
c = sample[i] if py == 3 else sample.decode('utf-8')[i]
if l == 2 or l == 3:
c += ' '
res += c
print(res)
# -
# Source: src_nlp/tensorflow/depreciated/cnn_seq_label_chseg_test.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Setup
# %reload_ext autoreload
# %autoreload 2
import matplotlib
from matplotlib import pyplot as plt
# %matplotlib inline
matplotlib.__version__
import os
from pathlib import Path
import pandas as pd
from sklearn.model_selection import GroupKFold
import cv2
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import transforms
torch.__version__
# +
from albumentations import Compose, JpegCompression, CLAHE, RandomRotate90, Transpose, ShiftScaleRotate, \
Blur, OpticalDistortion, GridDistortion, HueSaturationValue, Flip, VerticalFlip
import pretrainedmodels as pm
from kekas import Keker, DataOwner, DataKek
from kekas.transformations import Transformer, to_torch, normalize
from kekas.metrics import accuracy
from kekas.modules import Flatten, AdaptiveConcatPool2d
from kekas.callbacks import Callback, Callbacks, DebuggerCallback
# -
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
device
# # Prepare Data
data_path = Path('GS')
# +
row_list = []
for ds_path in data_path.iterdir():
for cl in ['on', 'off']:
for p in (ds_path / cl).iterdir():
row_list.append([str(p), ds_path.name, cl])
ds_df = pd.DataFrame(row_list, columns=['fpath', 'group', 'label'])
ds_df.shape
# -
ds_df.head()
train_inds, other_inds = next(GroupKFold(n_splits=4).split(ds_df.fpath, groups=ds_df.group))
valid_inds, test_inds = next(GroupKFold(n_splits=2).split(ds_df.fpath, groups=ds_df.group))
train_inds.shape, valid_inds.shape, test_inds.shape
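GroupKFold guarantees that all samples sharing a group (here, a dataset directory) land on the same side of each split, so no dataset leaks between train and validation. The idea in miniature (pure Python with toy group labels; this is not sklearn's exact fold-assignment algorithm):

```python
groups = ["ds1", "ds1", "ds2", "ds3", "ds3", "ds4"]   # toy group label per sample
held_out = {"ds2", "ds4"}                             # groups reserved for validation

train_idx = [i for i, g in enumerate(groups) if g not in held_out]
val_idx = [i for i, g in enumerate(groups) if g in held_out]

# no group appears on both sides of the split
print(set(groups[i] for i in train_idx) & set(groups[i] for i in val_idx))
```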
# +
from kekas.transformations import Transformer, to_torch, normalize
from torchvision import transforms
# create train and val datasets using the DataKek class - a pytorch Dataset that uses
# a pandas DataFrame as its data source
# first, we need to create a reader function that defines how an image will be opened
def reader_fn(i, row):
# it always gets i and row as parameters
# where i is the dataframe index and row is the corresponding dataframe row
image = cv2.imread(row["fpath"])
if row["label"] == "on":
label = 0
else:
label = 1
return {"image": image, "label": label}
# Then we should create transformations/augmentations
# We will use awesome https://github.com/albu/albumentations library
def augs(p=0.5):
return Compose([
CLAHE(),
RandomRotate90(),
Transpose(),
ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.50, rotate_limit=15, p=.75),
Blur(blur_limit=3),
OpticalDistortion(),
GridDistortion(),
HueSaturationValue()
], p=p)
def get_transforms(dataset_key, size, p):
# we need to use a Transformer class to apply transformations to DataKeks elements
# dataset_key is an image key in dict returned by reader_fn
PRE_TFMS = Transformer(dataset_key, lambda x: cv2.resize(x, (size, size)))
AUGS = Transformer(dataset_key, lambda x: augs()(image=x)["image"])
NRM_TFMS = transforms.Compose([
Transformer(dataset_key, to_torch()),
Transformer(dataset_key, normalize())
])
train_tfms = transforms.Compose([PRE_TFMS, AUGS, NRM_TFMS])
val_tfms = transforms.Compose([PRE_TFMS, NRM_TFMS]) # because we don't want to augment val set yet
return train_tfms, val_tfms
# +
train_tfms, val_tfms = get_transforms("image", 224, 0.5)
train_dk = DataKek(df=ds_df.iloc[train_inds], reader_fn=reader_fn, transforms=train_tfms)
val_dk = DataKek(df=ds_df.iloc[valid_inds], reader_fn=reader_fn, transforms=val_tfms)
# +
# and DataLoaders
batch_size = 32
workers = 8
train_dl = DataLoader(train_dk, batch_size=batch_size, num_workers=workers, shuffle=True, drop_last=True)
val_dl = DataLoader(val_dk, batch_size=batch_size, num_workers=workers, shuffle=False)
# -
# # Train Model
# +
# create a simple neural network using pretrainedmodels library
# https://github.com/Cadene/pretrained-models.pytorch
class Net(nn.Module):
def __init__(
self,
num_classes: int,
p: float = 0.5,
pooling_size: int = 2,
last_conv_size: int = 2048,
arch: str = "se_resnext50_32x4d",
pretrained: str = "imagenet") -> None:
"""A simple model to finetune
Args:
num_classes: the number of target classes, the size of the last layer's output
p: dropout probability
pooling_size: the size of the result feature map after adaptive pooling layer
last_conv_size: size of the flatten last backbone conv layer
arch: the name of the architecture form pretrainedmodels
pretrained: the mode for pretrained model from pretrainedmodels
"""
super().__init__()
net = pm.__dict__[arch](pretrained=pretrained)
modules = list(net.children())[:-2] # delete last layers: pooling and linear
# add custom head
modules += [nn.Sequential(
# AdaptiveConcatPool2d is a concat of AdaptiveMaxPooling and AdaptiveAveragePooling
AdaptiveConcatPool2d(size=pooling_size),
Flatten(),
nn.BatchNorm1d(2 * pooling_size * pooling_size * last_conv_size),
nn.Dropout(p),
nn.Linear(2 * pooling_size * pooling_size * last_conv_size, num_classes)
)]
self.net = nn.Sequential(*modules)
def forward(self, x):
logits = self.net(x)
return logits
# +
# [s for s in list(pm.__dict__) if not s.startswith('__')]
# -
model = pm.__dict__['resnet34']()
modules = list(model.children())
len(modules)
model = pm.__dict__['se_resnext50_32x4d']()
modules = list(model.children())[:-2]
len(modules)
# +
# the three whales of your pipeline are: the data, the model and the loss (hi, Jeremy)
# the data is represented in Kekas by DataOwner. It is a namedtuple with three fields:
# 'train_dl', 'val_dl', 'test_dl'
# For the training process we need at least two of them; we can skip 'test_dl' for now
# and initialize it with a `None` value.
dataowner = DataOwner(train_dl, val_dl, None)
# model is just a pytorch nn.Module that we created before
model = Net(num_classes=2, arch='resnet34', last_conv_size=512)
# loss or criterion is also a pytorch nn.Module. For multiloss scenarios it can be a list of nn.Modules
# for our simple example let's use the standard cross entropy criterion
criterion = nn.CrossEntropyLoss()
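Since DataOwner is described as a namedtuple, its behavior can be sketched in a couple of lines (assuming the real kekas class matches the description above; field names as listed):

```python
from collections import namedtuple

# a minimal stand-in for kekas's DataOwner, for illustration only
DataOwnerSketch = namedtuple("DataOwnerSketch", ["train_dl", "val_dl", "test_dl"])

owner = DataOwnerSketch(train_dl=[1, 2], val_dl=[3], test_dl=None)
```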
# +
# We also need to specify what the model will do with each batch of data on each iteration
# We should define a `step_fn` function
# The code below mirrors `keker.default_step_fn` to illustrate the concept of a step function
def step_fn(model: torch.nn.Module,
batch: torch.Tensor) -> torch.Tensor:
"""Determine what your model will do with your data.
Args:
model: the pytorch module to pass input in
batch: the batch of data from the DataLoader
Returns:
The model's forward pass results
"""
# you could define here whatever logic you want
inp = batch["image"] # here we get an "image" from our dataset
return model(inp)
# +
# the previous preparations were mostly outside the scope of the Kekas library (except the DataKek creation)
# Now let's dive into kekas a little bit
# firstly, we create a Keker - the core Kekas class, that provides all the keks for your pipeline
keker = Keker(model=model,
dataowner=dataowner,
criterion=nn.CrossEntropyLoss(),
step_fn=step_fn, # the previously defined step function
target_key="label", # remember, we defined it in the reader_fn for DataKek?
metrics={"acc": accuracy}, # optional, you can not specify any metrics at all
opt=torch.optim.Adam, # optimizer class; if not specified,
# SGD is used by default
opt_params={"weight_decay": 1e-5}, # optimizer kwargs in dict format (optional too)
device=device)
# Actually, there are a lot of params for the Keker, but they are beyond the scope of this example
# you can read about them in Keker's docstring (but who really reads the docs, huh?)
# -
# before the start of the finetuning procedure let's freeze all the layers except the last one - the head
# the `freeze` method is mostly inspired (or stolen) from fastai
# but you should define a model's attribute to deal with
# for example, our model is actually model.net, so we need to specify the 'net' attr
# also this method does not freeze batchnorm layers by default. To change this set `freeze_bn=True`
keker.freeze(model_attr="net")
# +
# let's find an 'optimal' learning rate with learning rate find procedure
# for details please see the fastai course and these articles:
# https://arxiv.org/abs/1803.09820
# https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html
# NOTE: this is an optional step and you can skip it and use your favorite learning rate
# you MUST specify the logdir to see graphics
# keker will write a tensorboard logs into this folder
# to see them start a tensorboard with `--logdir /path/to/logdir`
# keker.kek_lr(final_lr=0.1, logdir="/tmp/tensorboard")
# +
lr = 5e-4
epochs = 5
keker.kek_one_cycle(max_lr=lr, # the maximum learning rate
cycle_len=epochs, # number of epochs, actually, but not exactly
momentum_range=(0.95, 0.85), # range of momentum changes
div_factor=25, # max_lr / min_lr
increase_fraction=0.3) # the part of cycle when learning rate increases
# If you don't understand these parameters, read this - https://sgugger.github.io/the-1cycle-policy.html
# NOTE: you cannot use schedulers and early stopping with one cycle!
# the other options are the same as for the `kek` method
# -
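To make the one-cycle parameters above concrete, here is a rough standalone sketch of the schedule (linear warmup and decay; fastai/kekas typically use cosine annealing for the decay phase, so treat this as an approximation of the shape, not the exact curve):

```python
def one_cycle_lr(step, total_steps, max_lr, div_factor=25, increase_fraction=0.3):
    """Learning rate at `step`: linear warmup from max_lr/div_factor for the
    first `increase_fraction` of the cycle, then linear decay back down."""
    min_lr = max_lr / div_factor
    up = int(total_steps * increase_fraction)
    if step < up:
        return min_lr + (max_lr - min_lr) * step / up
    frac = (step - up) / (total_steps - up)
    return max_lr - (max_lr - min_lr) * frac
```

With `div_factor=25` and `increase_fraction=0.3` (the values passed to `kek_one_cycle` above), the rate climbs from `max_lr/25` to `max_lr` over the first 30% of iterations, then falls back.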
test_dk = DataKek(df=ds_df.iloc[test_inds], reader_fn=reader_fn, transforms=val_tfms)
test_dl = DataLoader(test_dk, batch_size=batch_size, num_workers=workers, shuffle=False)
test_outputs = keker.predict_loader(test_dl)
# NOTE: this thresholds only the class-1 logit (sigmoid(x) > 0.5 is just x > 0);
# for a two-logit softmax model, argmax over both logits is the conventional choice
test_preds = (torch.sigmoid(torch.from_numpy(test_outputs[:,1])) > 0.5).numpy()
import numpy as np
test_classes = np.array([row[1].label == 'off' for row in test_dk.data], dtype=np.uint8)
(test_classes == test_preds).mean()
# +
# keker.kek(lr=1e-5,
# epochs=5,
# opt=torch.optim.Adam,
# opt_params={"weight_decay": 1e-5},
# sched=torch.optim.lr_scheduler.StepLR,
# sched_params={"step_size":1, "gamma": 0.9},
# logdir="/path/to/logdir",
# cp_saver_params={
# "savedir": "/path/to/save/dir",
# "metric": "acc",
# "n_best": 3,
# "prefix": "kek",
# "mode": "max"},
# early_stop_params={
# "patience": 3,
# "metric": "acc",
# "mode": "min",
# "min_delta": 0
# })
# -
|
notebooks/train-kekas.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="32-zMt7tZr3R"
# # Project: Image Data Augmentation
#
# [](https://colab.research.google.com/github/ShawnHymel/computer-vision-with-embedded-machine-learning/blob/master/2.3.5%20-%20Project%20-%20Data%20Augmentation/project_image_data_augmentation.ipynb)
#
# This is an example for creating an augmented dataset. It will transform input images to create a series of augmented samples that are saved in a new output directory.
#
# Create a folder named "dataset" in the /content directory and upload your images there. The images should be divided into their respective classes, where each class has its own folder with the name of the class. For example:
#
# <pre>
# /content
# |- dataset
# |- background
# |- capacitor
# |- diode
# |- led
# |- resistor
# </pre>
#
# The original images along with their transforms will be saved in the output directory. Each output file will be the original filename appended with "_{num}" where {num} is some incrementing value based on the total number of transforms performed per image.
#
# For example, if you have a file named "0.png" in /content/dataset/resistor, it will become "0_0.png" in /content/output/resistor. The first transform will be "0_1.png", the second transform will be "0_2.png" and so on.
#
# Run each of the cells paying attention to their contents and output. Fill out the necessary parts of the functions where you find the following comment:
#
# ```
# # >>> ENTER YOUR CODE HERE <<<
# ```
#
# Author: EdgeImpulse, Inc.<br>
# Date: August 3, 2021<br>
# License: [Apache-2.0](apache.org/licenses/LICENSE-2.0)<br>
# + id="9RTimcB-ZoIT"
import numpy as np
import matplotlib.pyplot as plt
import random
import os
import PIL
import skimage.transform
import skimage.util
# + id="8zJCNZmEaCCN"
### Settings
# Location of dataset and output folder
DATASET_PATH = "/content/dataset"
OUT_PATH = "/content/output"
OUT_ZIP = "augmented_dataset.zip"
# File format to use for new dataset
IMG_EXT = ".png"
# You are welcome to change the seed to get different augmentation effects
SEED = 42
random.seed(SEED)
# + id="siLr8t4-qR9K"
### Create output directory
try:
os.makedirs(OUT_PATH)
except FileExistsError:
print("WARNING: Output directory already exists. Check to make sure it is empty.")
# + [markdown] id="KAZYWLFeB9vR"
# ## Transform Functions
#
# Create one or more functions that transform an input image.
# + id="kxh8f4JXgnTa"
### Example: Function to create 3 new flipped images of the input
def create_flipped(img):
# Create a list of flipped images
flipped = []
flipped.append(np.fliplr(img))
flipped.append(np.flipud(img))
flipped.append(np.flipud(np.fliplr(img)))
return flipped
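Translations are a natural next transform (a `create_translations` helper is hinted at further down). One possible sketch using `np.roll`, which wraps shifted pixels around the edges; padding with zeros or edge values would be an equally valid design choice:

```python
import numpy as np

def create_translations(img, max_shift=2):
    # Shift the image by +/- max_shift pixels along each axis, wrapping
    # pixels around the edges. Returns 4 shifted copies of the input.
    shifted = []
    for dy, dx in [(max_shift, 0), (-max_shift, 0), (0, max_shift), (0, -max_shift)]:
        shifted.append(np.roll(img, shift=(dy, dx), axis=(0, 1)))
    return shifted
```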
# + id="YAIqA5xdtt_6"
# >>> ENTER YOUR CODE HERE <<<
# Create one or more functions that create transforms of your images
# + [markdown] id="vaJR7hAOCEID"
# ## Perform Transforms
#
# Call your functions to create a set of augmented data.
# + id="J9ryKQeQaOKE"
### Function to open image and create a list of new transforms
# NOTE: You will need to call your functions here!
def create_transforms(file_path):
# Open the image
img = PIL.Image.open(file_path)
# Convert the image to a Numpy array (keep all color channels)
img_array = np.asarray(img)
# Add original image to front of list
img_tfs = []
img_tfs.append([img_array])
# Perform transforms (call your functions)
img_tfs.append(create_flipped(img_array))
# >>> ENTER YOUR CODE HERE <<<
# e.g. img_tfs.append(create_translations(img_array, 2))
# Flatten list of lists (to create one long list of images)
img_tfs = [img for img_list in img_tfs for img in img_list]
return img_tfs
# + id="y3ZEsAGUAvUS"
### Load all images, create transforms, and save in output directory
# Find the directories in the dataset folder (skip the Jupyter Notebook checkpoints hidden folder)
for label in os.listdir(DATASET_PATH):
class_dir = os.path.join(DATASET_PATH, label)
if os.path.isdir(class_dir) and label != ".ipynb_checkpoints":
# Create output directory
out_path = os.path.join(OUT_PATH, label)
os.makedirs(out_path, exist_ok=True)
# Go through each image in the subfolder
for filename in os.listdir(class_dir):
# Skip the Jupyter Notebook checkpoints folder that sometimes gets added
if filename != ".ipynb_checkpoints":
# Get the root of the filename before the extension
file_root = os.path.splitext(filename)[0]
# Do all transforms for that one image
file_path = os.path.join(DATASET_PATH, label, filename)
img_tfs = create_transforms(file_path)
# Save images to new files in output directory
for i, img in enumerate(img_tfs):
# Create a Pillow image from the Numpy array
img_pil = PIL.Image.fromarray(img)
# Construct filename (<original>_<transform_num>.<EXT>)
out_file_path = os.path.join(out_path, file_root + "_" + str(i) + IMG_EXT)
# Save the image as a file
img_pil.save(out_file_path)
# + id="wWwxvzKxDJ18"
### Zip our new dataset (use '!' to call Linux commands)
# !zip -r -q "{OUT_ZIP}" "{OUT_PATH}"
|
2.3.5 - Project - Data Augmentation/project_image_data_augmentation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Analysis of the results
#
# This notebook investigates the results of all the model runs in the directory `results/runs`.
#
# ## Imports and hardcoded variables
# +
import os
import arviz as az
import numpy as np
import pandas as pd
import xarray
from matplotlib import pyplot as plt
from pprint import pprint
RESULTS_DIR = os.path.join("results", "runs")
ARVIZ_STYLE = "arviz-redish"
# -
# ## Loading InferenceData objects
#
# The results of the analysis are stored as [`InferenceData`](https://arviz-devs.github.io/arviz/api/generated/arviz.InferenceData.html#arviz.InferenceData) objects in [netcdf](https://www.unidata.ucar.edu/software/netcdf/) files. The next cell loads these files.
# +
run_dirs = [
os.path.join(RESULTS_DIR, d)
for d in os.listdir(RESULTS_DIR)
if os.path.isdir(os.path.join(".", RESULTS_DIR, d))
]
priors = {}
posteriors = {}
for run_dir in run_dirs:
prior_file = os.path.join(run_dir, "prior.nc")
posterior_file = os.path.join(run_dir, "posterior.nc")
if os.path.exists(prior_file):
priors[os.path.basename(run_dir)] = az.from_netcdf(prior_file)
if os.path.exists(posterior_file):
posterior = az.from_netcdf(posterior_file)
posteriors[os.path.basename(run_dir)] = posterior
priors["interaction"]
# -
# Some of the runs may also have results of exact cross-validation, also saved in netcdf files.
#
# While it's convenient to store the cross-validation files separately, for analysis it's nice to have them in the same place as their posteriors, so the next cell loads the cross-validation netcdfs and adds them to the matching posterior `InferenceData`s.
for posterior_name, posterior in posteriors.items():
llik_cv_file = os.path.join(RESULTS_DIR, posterior_name, "llik_cv.nc")
if os.path.exists(llik_cv_file):
llik_cv = xarray.load_dataset(llik_cv_file)
posterior.add_groups({"log_likelihood_cv": llik_cv})
posteriors["interaction"]
# ## Comparing predictions
#
# This cell uses arviz's [`compare`](https://arviz-devs.github.io/arviz/api/generated/arviz.compare.html) function to calculate the approximate leave-one-out expected log predictive density for each `InferenceData` object in the `posteriors` dictionary.
#
# It then calculates the same quantity using exact k-fold cross-validation.
# +
posterior_loo_comparison = az.compare(posteriors)
posterior_kfold_comparison = pd.Series(
{
posterior_name:
float(
posterior.get("log_likelihood_cv")["llik"]
.mean(dim=["chain", "draw"])
.sum()
)
for posterior_name, posterior in posteriors.items()
if "log_likelihood_cv" in posterior.groups()
}, name="kfold"
)
posterior_comparison = posterior_loo_comparison.join(posterior_kfold_comparison)
posterior_comparison
# -
# ## Graphs
# The last cell uses arviz to plot each posterior predictive distribution and saves the result to the `plots` directory.
# +
az.style.use(ARVIZ_STYLE)
x = xarray.DataArray(np.linspace(0, 1, 100))
f, axes = plt.subplots(1, 3, figsize=[20, 5], sharey=True)
axes = axes.ravel()
for (posterior_name, posterior), ax in zip(posteriors.items(), axes):
az.plot_lm(
y="y",
x=x,
idata=posterior,
y_hat="yrep",
axes=ax,
kind_pp="hdi",
y_kwargs={"markersize": 6, "color":"black"},
grid=False
)
ax.legend(frameon=False)
ax.set(title=posterior_name.replace("_", " ").capitalize(), ylabel="")
ax.set_xticks([], [])
axes[0].set_ylabel("y")
f.suptitle("Marginal posterior predictive distributions")
f.savefig(os.path.join("results", "plots", "posterior_predictive_comparison.png"))
|
{{cookiecutter.repo_name}}/investigate.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: omd
# language: python
# name: omd
# ---
# -*- coding: utf-8 -*-
"""
Created on %(date)s
@author: %(<NAME>)s
"""
from __future__ import print_function
import sys
import os
# # !{sys.executable} -m pip install atomic_parameters
from pymatgen.core import lattice, structure
from pymatgen.io.cif import CifParser
from pymatgen.io.cif import CifWriter
from pymatgen.core.structure import Molecule
from pymatgen.io.xyz import XYZ
import pymatgen.io.xyz as xyzio
import numpy as np
import shlex
import shutil
import time
import math
import random
# from atomic_parameters import atoms as ap
import json
import argparse
import itertools
# +
PI = np.pi
def main():
parser = argparse.ArgumentParser(description='Split file into batches')
parser.add_argument('-p', '--parameter_file', nargs='?',
help='Name of parameters file',
default='parameters.txt')
parser.add_argument('-s', '--summary_file', nargs='?',
help='Name of summary file', default='summary.out')
parser.add_argument('-f', '--folder_in', nargs='?',
help='Folder with structure files',
default='./')
parser.add_argument('-o', '--folder_out', nargs='?',
help='Folder with structure files', default='output')
parser.add_argument('-c', '--continue_run', dest='continue_run',
action='store_true')
parser.add_argument('--attach_sorbate', dest='attach_sorbate',
action='store_true')
parser.add_argument('-m', '--max_structures', nargs='?', default=10000000,
const=10000000, type=int,
help='The maximum number of structures to run')
parser.add_argument('-mf', '--molecule_file', nargs='?',
help='xyz coordinates of molecule',default='ethane.txt')
parser.set_defaults(continue_run=False)
parser.set_defaults(attach_sorbate=False)
args = parser.parse_args()
cont = args.continue_run
attach_ads = args.attach_sorbate
params_filename = args.parameter_file
source_folder = args.folder_in+'/'
target_folder = args.folder_out+'/'
number_of_structures = args.max_structures
sfile = args.summary_file
molecule_file = args.molecule_file
tolerance = dict()
tolerance['plane'] = 25 # 30 # 35
tolerance['plane_5l'] = 25 # 30
tolerance['tetrahedron'] = 10
tolerance['plane_on_metal'] = 12.5
t0 = time.time()
with open(params_filename, 'r') as params_file:
if not cont:
clear_files(sfile, target_folder)
for i, struc in enumerate(params_file):
t0s = time.time()
line_elements = shlex.split(struc)
filename = line_elements[0]
analyze_structure(filename, sfile, cont, source_folder,
target_folder, attach_ads, molecule_file,tolerance)
t1s = time.time()
print('Time:', t1s-t0s)
if i+1 >= number_of_structures:
break
params_file.close()
t1 = time.time()
print('\n Total Time', t1-t0)
def clear_files(sfile, target_folder):
make_folder(target_folder)
open(target_folder+'open_metal_mofs.out', 'w').close()
open(target_folder+'problematic.gcd', 'w').close()
open(target_folder+sfile, 'w').close()
def make_folder(folder):
if not os.path.exists(folder):
os.makedirs(folder)
def delete_folder(folder_path):
if os.path.exists(folder_path):
for file_object in os.listdir(folder_path):
file_object_path = os.path.join(folder_path, file_object)
if os.path.isfile(file_object_path):
os.unlink(file_object_path)
else:
shutil.rmtree(file_object_path)
def analyze_structure(filename, sfile, cont, source_folder,
target_folder, attach_ads,molecule_file, tolerance):
filetype = filename.split('.')[-1]
mof_name = filename.split('.'+filetype)[0]
molecule_filetype = molecule_file.split('.')[-1]
output_folder = target_folder+mof_name
json_file_out = output_folder+'/'+mof_name+'.json'
if cont and os.path.exists(json_file_out):
print(mof_name, 'has been processed already... skipping')
return
delete_folder(output_folder)
make_folder(output_folder)
make_folder(target_folder+'open_metal_mofs')
make_folder(target_folder+'problematic_metal_mofs')
open_metal_mofs = open(target_folder+'open_metal_mofs.out', 'a')
problematic_mofs = open(target_folder+'problematic.out', 'a')
summary_mofs = open(target_folder+sfile, 'a')
if filetype == 'cif':
lattice, system = make_system_from_cif(source_folder+filename)
else:
sys.exit('Do not know this filetype')
print("\n", filename)
metal, organic = split_structure_to_organic_and_metal(system)
if metal.num_sites == 0:
print(mof_name+' : No metal was found in structure', end="",
file=summary_mofs)
print(mof_name+' : No metal was found in structure', end="")
summary_mofs.close()
return
m_sa_frac, m_surface_area = 0.0, 0.0
# m_sa_frac, m_surface_area = get_metal_surface_areas(metal, system)
first_coordnation_structure_each_metal = find_all_coord_spheres(metal,
system)
output_json = get_output_dict(mof_name, m_surface_area, m_sa_frac,
system.volume)
dist_all = system.lattice.get_all_distances(system.frac_coords,
system.frac_coords)
all_coord_spheres = []
for a in range(0, len(system)):
all_coord_spheres.append(find_coord_sphere_using_dist(a, system, dist_all[a])[0])
oms_cs_list = [] # list of coordination sequences for each open metal found
cms_cs_list = [] # list of coordination sequences for each closed metal found
for m, omc in enumerate(first_coordnation_structure_each_metal):
# site_dict is a dictionary holding all the information for a metal site;
# it is updated by check_if_open
op, pr, site_dict = check_if_open(omc, tolerance)
m_index = match_index(omc, system)
cs = find_coordination_sequence(m_index, system, all_coord_spheres)
if op:
cs_list = oms_cs_list
else:
cs_list = cms_cs_list
m_id, new_site = find_metal_id(cs_list, cs)
site_dict["oms_id"] = m_id
if new_site:
print('New site found')
site_dict["unique"] = True
cs_list.append(cs)
if op and attach_ads and new_site:
# general way to add adsorbate (still debugging)
# unique_site_simple(m_index, m_id, system, cs_list,
# output_folder, mof_name, molecule_file)
unique_site(m_index, m_id, system, output_folder, mof_name,
molecule_file)
if op and not output_json['metal_sites_found']:
output_json['metal_sites_found'] = True
if pr:
output_json['problematic'] = True
output_json['metal_sites'].append(site_dict)
xyz_name = mof_name+'first_coordination_sphere'+str(m)+'.xyz'
output_fname = output_folder+'/'+xyz_name
xyzio.XYZ(omc).write_file(output_fname)
print('Checking for OMSs done. Writing files')
summary = write_summary(output_json)
# if open_metal_site:
if output_json['metal_sites_found']:
print(summary, end="", file=open_metal_mofs)
shutil.copyfile(source_folder+filename,
target_folder+'open_metal_mofs/'+filename)
print(summary, end="\n", file=summary_mofs)
# if problematic_structure:
if output_json['problematic']:
print(mof_name.split('_')[0], file=problematic_mofs)
shutil.copyfile(source_folder+filename,
target_folder+'problematic_metal_mofs/'+filename)
write_xyz_file(output_folder+'/'+mof_name+'_metal.xyz', metal)
write_xyz_file(output_folder+'/'+mof_name+'_organic.xyz', organic)
write_xyz_file(output_folder+'/'+mof_name+'.xyz', system)
system.make_supercell(2)
write_xyz_file(output_folder+'/'+mof_name+'_super.xyz', system)
open_metal_mofs.close()
summary_mofs.close()
problematic_mofs.close()
with open(json_file_out, 'w') as outfile:
json.dump(output_json, outfile, indent=3)
def get_output_dict(mof_name, m_surface_area, m_sa_frac, volume):
output_json = dict()
output_json['material_name'] = mof_name
output_json['max_surface_area'] = m_surface_area
output_json['max_surface_area_frac'] = m_sa_frac
output_json['uc_volume'] = volume
output_json['metal_sites_found'] = False
output_json['problematic'] = False
output_json['metal_sites'] = list()
return output_json
def update_output_dict(open_metal_candidate, site_dict, op, pr, t, tf, min_dih,
all_dih):
spec = str(open_metal_candidate.species[0])
open_metal_candidate_number = open_metal_candidate.num_sites-1
site_dict["is_open"] = op
site_dict["t_factor"] = tf
site_dict["metal"] = spec
site_dict["type"] = 'closed'
site_dict["number_of_linkers"] = open_metal_candidate_number
site_dict["min_dihedral"] = min_dih
site_dict["all_dihedrals"] = all_dih
site_dict["unique"] = False
site_dict["problematic"] = pr
for ti in t:
if t[ti]:
if site_dict["type"] == 'closed':
site_dict["type"] = str(ti)
else:
site_dict["type"] = site_dict["type"]+','+str(ti)
return site_dict
def unique_site(oms_index, oms_id, system, output_folder, mof_name, molecule_file):
# cs = find_coordination_sequence(oms_index, system, all_coord_spheres)
# oms_id, new_site = find_metal_id(cs_list, cs)
mof_filename = output_folder+'/'+mof_name
# if new_site:
end_to_end = 2.32
eles = ['O', 'O', 'C']
ads = add_co2_simple(system, oms_index, end_to_end, eles)
mof_with_co2 = merge_structures(ads, system)
cif = CifWriter(ads)
cif.write_file(mof_filename+'_co2_'+str(oms_id)+'.cif')
cif = CifWriter(mof_with_co2)
output_filename = mof_filename
output_filename += '_first_coordination_sphere_with_co2_'
output_filename += str(oms_id)+'.cif'
cif.write_file(output_filename)
end_to_end = 1.1
eles = ['N', 'N']
ads = add_co2_simple(system, oms_index, end_to_end, eles)
mof_with_co2 = merge_structures(ads, system)
cif = CifWriter(ads)
cif.write_file(mof_filename+'_n2_'+str(oms_id)+'.cif')
cif = CifWriter(mof_with_co2)
output_filename = mof_filename
output_filename += 'first_coordination_sphere_with_n2'
output_filename += str(oms_id)+'.cif'
cif.write_file(output_filename)
# end_to_end = 1.54
# eles = ['C','C']
# ads = add_co2_simple(system,oms_index,end_to_end,eles)
# mof_with_ethane = merge_structures(ads,system)
# cif = CifWriter(ads)
# cif.write_file(mof_filename+'_ethane_'+str(oms_id)+'.cif')
# cif = CifWriter(mof_with_ethane)
# output_filename = mof_filename
# output_filename += 'first_coordination_sphere_with_ethane'
# output_filename += str(oms_id)+'.cif'
# cif.write_file(output_filename)
return
def find_metal_id(cs_list, cs):
"""Check if a given site is unique based on its coordination sequence"""
for i, cs_i in enumerate(cs_list):
if compare_lists(cs_i, cs):
return i, False
return len(cs_list), True
def compare_lists(l1, l2):
if len(l1) != len(l2):
return False
for i, j in zip(l1, l2):
if i != j:
return False
return True
def write_summary(output_json):
counter_om = 0
om_details = " "
for d in output_json['metal_sites']:
if d["unique"]:
counter_om += 1
if d["is_open"]:
om_details = om_details+d['metal']
om_details = om_details+','+str(d['number_of_linkers'])+'L'
om_details = om_details+','+str(d['t_factor'])
om_details = om_details+','+str(d['min_dihedral'])
output_json['open_metal_density'] = counter_om/output_json['uc_volume']
open_metal_str = 'no'
if output_json['metal_sites_found']:
open_metal_str = 'yes'
output_string = output_json['material_name']+' '+str(counter_om)
output_string += ' '+str(output_json['open_metal_density'])
output_string += ' '+str(output_json['max_surface_area_frac'])+' '
output_string += str(output_json['max_surface_area'])+' '
output_string += open_metal_str+' '+om_details
return output_string
def write_xyz_file(filename, system):
if not filename[-4:] == '.xyz':
filename += '.xyz'
xyzio.XYZ(system).write_file(filename)
def find_all_coord_spheres(centers, structure):
coord_spheres = []
for i, c in enumerate(centers):
s_ = Structure(structure.lattice, [centers.species[i]],
[centers.frac_coords[i]])
c_index = match_index(s_, structure)
coord_spheres.append(find_coord_sphere(c_index, structure)[1])
return coord_spheres
def find_coord_sphere(center, structure):
dist = structure.lattice.get_all_distances(structure.frac_coords[center],
structure.frac_coords)
res = find_coord_sphere_using_dist(center, structure, dist[0])
coord_sphere, coord_sphere_structure = res
return coord_sphere, coord_sphere_structure
def find_coord_sphere_using_dist(center, structure, dist):
if dist[center] > 0.0000001:
sys.exit('The self distance appears to be non-zero')
# ligands_structure_new = None
ligands_structure_new = Structure(structure.lattice,
[structure.species[center]],
[structure.frac_coords[center]])
max_bond = ap.get_max_bond()
coord_sphere = [center]
for i, dis in enumerate(dist):
if i == center:
continue
if dis > max_bond:
continue
species_one = str(structure.species[center])
species_two = str(structure.species[i])
tol = ap.get_bond_tolerance(species_one, species_two)
bond_tol = tol
if ap.bond_check(species_one, species_two, dis, bond_tol):
coord_sphere.append(i)
ligands_structure_new.append(species_two, structure.frac_coords[i])
ligands = ap.keep_valid_bonds(ligands_structure_new, 0)
#TODO remove extra atoms from coordination sphere after keeping only valid bonds
# coord_sphere_kept = []
# for i, l in enumerate(ligands_structure_new):
# for j, lv in enumerate(ligands):
# if l == lv:
# coord_sphere_kept.append(coord_sphere[i])
# continue
# ligands.insert(0, structure.species[center], structure.frac_coords[center])
coord_sphere_structure = center_around_metal(ligands)
return coord_sphere, coord_sphere_structure
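`find_coord_sphere_using_dist` relies on pymatgen's `lattice.get_all_distances`, which handles arbitrary triclinic cells under periodic boundary conditions. For intuition, here is a minimal minimum-image distance for the special case of an orthorhombic cell (a hypothetical standalone helper, not a pymatgen API):

```python
import numpy as np

def pbc_distance(frac_a, frac_b, cell_lengths):
    # Minimum-image distance between two fractional coordinates in an
    # orthorhombic cell: wrap each component of the difference into
    # [-0.5, 0.5), then scale by the cell edge lengths.
    d = np.asarray(frac_a, dtype=float) - np.asarray(frac_b, dtype=float)
    d -= np.round(d)
    return float(np.linalg.norm(d * np.asarray(cell_lengths, dtype=float)))
```

So two atoms at fractional x = 0.95 and 0.05 in a 10 Å cell are 1 Å apart through the boundary, not 9 Å.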
def find_first_coordination_sphere(metal, structure_full):
tol = 0.3
first_coordination_structure = Structure(metal.lattice, metal.species,
metal.frac_coords)
first_coord_struct_each_metal = []
for m, m_f_coor in enumerate(metal.frac_coords):
structure = structure_full.copy()
tmp1 = []
tmp2 = []
tmp1.append(str(metal.species[m]))
tmp2.append([m_f_coor[0], m_f_coor[1], m_f_coor[2]])
first_coord_struct_each_metal.append(Structure(metal.lattice,
tmp1, tmp2))
dist = metal.lattice.get_all_distances(m_f_coor, structure.frac_coords)
for i, d in enumerate(dist[0]):
if d < 0.01 and str(structure.species[i]) == str(metal.species[m]):
structure.__delitem__(i)
dist = metal.lattice.get_all_distances(m_f_coor, structure.frac_coords)
min_dist = min(dist[0])
increase = 1.0
while True:
bonds_found = 0
first_coordnation_list_species = []
first_coord_list_coords = []
for i, dis in enumerate(dist[0]):
species_one = str(metal.species[m])
species_two = str(structure.species[i])
bond_tol = ap.get_bond_tolerance(species_one,
species_two)*increase
if ap.bond_check(species_one, species_two, dis, bond_tol):
first_coordnation_list_species.append(structure.species[i])
first_coord_list_coords.append(structure.frac_coords[i])
bonds_found += 1
# if check_if_enough_bonds(bonds_found,
# increase,str(metal.species[m])):
check_structure = Structure(metal.lattice,
first_coordnation_list_species,
first_coord_list_coords)
# print increase,bond_tol,bonds_found
if ap.check_if_valid_bonds(check_structure, bond_tol, increase):
break
else:
increase -= 0.5
if increase < 0:
print('something went terribly wrong')
input()
print('Increased bond_tolerance by ', increase)
for cls, clc in zip(first_coordnation_list_species,
first_coord_list_coords):
first_coord_struct_each_metal[m].append(cls, clc)
first_coordination_structure.append(cls, clc)
first_coord_struct_each_metal[m] = \
center_around_metal(first_coord_struct_each_metal[m])
return first_coordination_structure, first_coord_struct_each_metal
# def check_if_enough_bonds(bonds_found,increase,metal):
# if ap.is_lanthanide_or_actinide(metal):
# min_bonds=5
# else:
# min_bonds=4
# if bonds_found < min_bonds and increase <= 1.5:
# return False
# else:
# return True
def make_system_from_cif(ciffile):
cif = CifParser(ciffile)
system = cif.get_structures(primitive=False)
return system[0].lattice, system[0]
def make_latice_from_xyz(xyz):
lattice = Lattice.from_parameters(xyz.uc_params[0], xyz.uc_params[1],
xyz.uc_params[2], xyz.uc_params[3],
xyz.uc_params[4], xyz.uc_params[5])
return lattice
def make_system_from_xyz(xyz):
lattice = make_latice_from_xyz(xyz)
elements, coords = xyz.return_mp_structure_lists()
structure = Structure(lattice, elements, coords, coords_are_cartesian=True)
return lattice, structure
def split_structure_to_organic_and_metal(structure):
coords = structure.frac_coords
elements = structure.species
coords_metal = []
coords_organic = []
elements_metal = []
elements_organic = []
for element, coord in zip(elements, coords):
if ap.check_if_metal(str(element)):
elements_metal.append(element)
coords_metal.append(coord)
else:
elements_organic.append(element)
coords_organic.append(coord)
structure_metal = Structure(structure.lattice, elements_metal, coords_metal)
structure_organic = Structure(structure.lattice,
elements_organic, coords_organic)
return structure_metal, structure_organic
def match_index(metal, system):
ele = str(metal.species[0])
f_coords = metal.frac_coords[0]
dist = system.lattice.get_all_distances(f_coords, system.frac_coords)
for i, d in enumerate(dist[0]):
if d < 0.001 and str(system.species[i]) == ele:
return i
def check_if_6_or_more(system):
return system.num_sites > 6
def check_if_open(system, tolerance):
site_dict = dict()
tf = get_t_factor(system)
problematic = False
test = dict()
test['plane'] = False
test['same_side'] = False
test['metal_plane'] = False
test['3_or_less'] = False
test['non_TD'] = False
open_metal_mof = False
num = system.num_sites
num_l = system.num_sites - 1
min_cordination = 3
if ap.is_lanthanide_or_actinide(str(system.species[0])):
min_cordination = 5
if num_l < min_cordination:
problematic = True
if num_l <= 3: # min_cordination:
open_metal_mof = True
test['3_or_less'] = True
min_dihid = 0.0
all_dihidrals = 0.0
# return open_metal_mof, problematic, test, tf, 0.0, 0.0
else:
open_metal_mof, test, min_dihid, all_dihidrals = \
check_non_metal_dihedrals(system, test, tolerance)
site_dict = update_output_dict(system,
site_dict,
open_metal_mof,
problematic,
test,
tf,
min_dihid,
all_dihidrals)
# return open_metal_mof, problematic, test, tf, min_dihid, all_dihidrals, site_dict
return open_metal_mof, problematic, site_dict
def get_t_factor(coordination_sphere):
num = coordination_sphere.num_sites
index_range = range(1, num)
all_angles = []
for i in itertools.combinations(index_range, 2):
angle = coordination_sphere.get_angle(i[0], 0, i[1])
all_angles.append([angle, i[0], i[1]])
all_angles.sort(key=lambda x: x[0])
# beta is the largest angle and alpha is the second largest angle
# in the coordination sphere; using the same convention as Yang et al.
# DOI: 10.1039/b617136b
if num > 3:
beta = all_angles[-1][0]
alpha = all_angles[-2][0]
if num-1 == 6:
max_indeces_all = all_angles[-1][1:3]
l3_l4_angles = [x for x in all_angles if x[1] not in max_indeces_all and
x[2] not in max_indeces_all]
max_indeces_all_3_4 = max(l3_l4_angles, key=lambda x: x[0])[1:3]
l5_l6_angles = [x for x in l3_l4_angles
if x[1] not in max_indeces_all_3_4 and
x[2] not in max_indeces_all_3_4]
gamma = max(l5_l6_angles, key=lambda x: x[0])[0]
tau = get_t6_factor(gamma)
elif num-1 == 5:
tau = get_t5_factor(alpha, beta)
elif num-1 == 4:
tau = get_t4_factor(alpha, beta)
else:
tau = -1
return tau
def get_t4_factor(a, b):
return (360-(a+b))/141.0
def get_t5_factor(a, b):
return (b-a)/60.0
def get_t6_factor(c):
return c/180
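# As a quick sanity check on the τ-factor conventions above (a standalone sketch that copies the same formulas), the ideal coordination geometries map to the textbook values: a tetrahedron gives τ₄ = 1 and a square plane τ₄ = 0; a trigonal bipyramid gives τ₅ = 1 and a square pyramid τ₅ = 0; an octahedron gives τ₆ = 1.

```python
# Standalone copies of the tau-factor formulas above, checked against
# ideal coordination geometries (angles in degrees).
def t4(a, b):
    return (360 - (a + b)) / 141.0

def t5(a, b):
    return (b - a) / 60.0

def t6(c):
    return c / 180

print(round(t4(109.5, 109.5), 2))  # tetrahedron -> 1.0
print(t4(180, 180))                # square planar -> 0.0
print(t5(120, 180))                # trigonal bipyramid -> 1.0
print(t6(180))                     # octahedron -> 1.0
```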
def check_metal_dihedrals(system, oms_test, tolerance):
num = system.num_sites
open_metal_mof = False
crit = dict()
tol = dict()
crit['plane'] = 180
tol['plane'] = tolerance['plane'] # 30
all_dihedrals, all_indices = obtain_metal_dihedrals(num, system)
number_of_planes = 0
for dihedral, indices in zip(all_dihedrals, all_indices):
[i, j, k, l] = indices
if abs(dihedral - crit['plane']) < tol['plane'] \
or abs(dihedral - crit['plane']+180) < tol['plane']:
number_of_planes += 1
            other_indices = find_other_indeces([0, j, k, l], num)
            if check_if_atoms_on_same_site(other_indices, system, j, k, l):
open_metal_mof = True
                oms_test['metal_plane'] = True
return open_metal_mof, oms_test
return open_metal_mof, oms_test
def check_non_metal_dihedrals(system, oms_test, tolerance):
num = system.num_sites
num_l = system.num_sites - 1
crit = dict()
tol = dict()
crit['plane'] = 180
tol['plane'] = tolerance['plane'] # 35
crit['plane_5l'] = 180
tol['plane_5l'] = tolerance['plane_5l'] # 30
crit['tetrahedron'] = 70.528779 # 70
tol['tetrahedron'] = tolerance['tetrahedron'] # 10
open_metal_mof = False
if num_l == 4:
test_type = 'plane' # 'tetrahedron'
om_type = 'non_TD'
elif num_l == 5:
test_type = 'plane_5l'
om_type = 'plane_5l'
elif num_l > 5:
test_type = 'plane'
om_type = 'same_side'
all_dihedrals, all_indeces = obtain_dihedrals(num, system)
min_dihedral = min(all_dihedrals)
for dihedral, indeces in zip(all_dihedrals, all_indeces):
[i, j, k, l] = indeces
if num_l == -4:
if not (abs(dihedral - crit[test_type]) < tol[test_type]
or abs(dihedral - crit[test_type]+180) < tol[test_type]):
oms_test.update(dict.fromkeys([om_type, test_type], True))
open_metal_mof = True
else:
if abs(dihedral - crit[test_type]) < tol[test_type] \
or abs(dihedral - crit[test_type]+180) < tol[test_type]:
if num_l <= 5:
oms_test.update(dict.fromkeys([om_type, test_type], True))
open_metal_mof = True
elif num_l > 5:
if check_if_plane_on_metal(0, [i, j, k, l],
system, tolerance):
other_indeces = find_other_indeces([0, i, j, k, l],
num)
if check_if_atoms_on_same_site(other_indeces,
system, j, k, l):
oms_test.update(dict.fromkeys([om_type, test_type],
True))
open_metal_mof = True
if (num_l >= 4) and not open_metal_mof:
open_metal_mof, test = check_metal_dihedrals(system, oms_test, tolerance)
return open_metal_mof, oms_test, min_dihedral, all_dihedrals
def check_if_atoms_on_same_site(other_indeces, system, j, k, l):
dihedrals_other = []
for o_i in other_indeces:
dihedrals_other.append(system.get_dihedral(j, k, l, o_i))
if not (check_positive(dihedrals_other) and
check_negative(dihedrals_other)):
return True
return False
def obtain_dihedrals(num, system):
all_dihedrals = []
indeces = []
indices_1 = range(1, num)
indices_2 = range(1, num)
for i, l in itertools.combinations(indices_1, 2):
for j, k in itertools.combinations(indices_2, 2):
if len({i, j, k, l}) == 4:
dihedral = abs(system.get_dihedral(i, j, k, l))
all_dihedrals.append(dihedral)
indeces.append([i, j, k, l])
return all_dihedrals, indeces
def obtain_metal_dihedrals(num, system):
all_dihedrals = []
indeces = []
indices_1 = range(1, num)
i = 0
for l in indices_1:
for j, k in itertools.permutations(indices_1, 2):
if len({i, j, k, l}) == 4:
dihedral = abs(system.get_dihedral(i, j, k, l))
all_dihedrals.append(dihedral)
indeces.append([i, j, k, l])
return all_dihedrals, indeces
def obtain_dihedrals_old(num, system):
all_dihedrals = []
indeces = []
for i in range(1, num):
for j in range(1, num):
for k in range(1, num):
for l in range(1, num):
if i == j or i == k or i == l or j == k or j == l or k == l:
pass
else:
dihedral = abs(system.get_dihedral(i, j, k, l))
all_dihedrals.append(dihedral)
indeces.append([i, j, k, l])
all_dihedrals = [round(x, 5) for x in all_dihedrals]
all_dihedrals_new = []
indices_1 = range(1, num)
indices_2 = range(1, num)
for i, l in itertools.combinations(indices_1, 2):
for j, k in itertools.combinations(indices_2, 2):
if len({i, j, k, l}) == 4:
dihedral = abs(system.get_dihedral(i, j, k, l))
all_dihedrals_new.append(dihedral)
# indeces.append([i, j, k, l])
all_dihedrals_new = [round(x, 5) for x in all_dihedrals_new]
for ang in all_dihedrals:
if ang not in all_dihedrals_new:
print('ERROR')
input()
return all_dihedrals, indeces
def obtain_metal_dihedrals_old(num, system):
all_dihedrals = []
indeces = []
i = 0
for j in range(1, num):
for k in range(1, num):
for l in range(1, num):
if i == j or i == k or i == l or j == k or j == l or k == l:
pass
else:
dihedral = abs(system.get_dihedral(i, j, k, l))
all_dihedrals.append(dihedral)
indeces.append([i, j, k, l])
all_dihedrals = [round(x, 5) for x in all_dihedrals]
all_dihedrals_new = []
indices_1 = range(1, num)
i = 0
for j, k, l in itertools.combinations(indices_1, 3):
if len({i, j, k, l}) == 4:
dihedral = abs(system.get_dihedral(i, j, k, l))
all_dihedrals_new.append(dihedral)
# indeces.append([i, j, k, l])
    all_dihedrals_new = [round(x, 5) for x in all_dihedrals_new]
for ang in all_dihedrals:
if ang not in all_dihedrals_new:
print('ERROR METAL')
input()
return all_dihedrals, indeces
def add_co2(v1, v2, v3, v4, system):
ads_dist = 2.2
    # old method for adding co2 (kept for reference; never executed).
    if False:
p1 = calc_plane(v1, v2, v3)
p2 = calc_plane(v2, v3, v4)
p_avg_up = [ads_dist*(p_1+p_2)/2 for p_1, p_2 in zip(p1, p2)]
p_avg_down = [-p for p in p_avg_up]
p_avg_up = p_avg_up+system.cart_coords[0]
p_avg_down = p_avg_down+system.cart_coords[0]
p_avg_up_f = system.lattice.get_fractional_coords(p_avg_up)
p_avg_down_f = system.lattice.get_fractional_coords(p_avg_down)
dist_up = min(system.lattice.get_all_distances(p_avg_up_f,
system.frac_coords)[0])
dist_down = min(system.lattice.get_all_distances(p_avg_down_f,
system.frac_coords)[0])
if dist_up < dist_down:
p_avg_f = p_avg_down_f
direction = -1
else:
p_avg_f = p_avg_up_f
direction = 1
co2_vector_c = [direction*(ads_dist+1.16)*(p_1+p_2)/2 for p_1, p_2
in zip(p1, p2)]+system.cart_coords[0]
co2_vector_o = [direction*(ads_dist+2*1.16)*(p_1+p_2)/2 for p_1, p_2
in zip(p1, p2)]+system.cart_coords[0]
co2_vector_C_f = system.lattice.get_fractional_coords(co2_vector_c)
co2_vector_O_f = system.lattice.get_fractional_coords(co2_vector_o)
def add_co2_simple(structure, oms_index, end_to_end, eles):
ads_dist = 2.2
# end_to_end = 2.32
bond = end_to_end/2
# eles = ['O', 'O', 'C']
adsorption_site = []
adsorption_pos = []
ads_vector = []
ads_vector_f = []
ads_vector.append(find_adsorption_site(structure,
structure.cart_coords[oms_index],
ads_dist))
ads_vector.append(find_adsorption_site(structure, ads_vector[0],
end_to_end))
if len(eles) > 2:
r = [i-j for i, j in zip(ads_vector[1], ads_vector[0])]
r_l = np.linalg.norm(np.array(r))
r_unit = [i/r_l for i in r]
pos = list(rr*bond for rr in r_unit)
ads_vector.append([j-i for i, j in zip(pos, ads_vector[1])])
ads_vector_f = [structure.lattice.get_fractional_coords(ads_v)
for ads_v in ads_vector]
for e in eles:
adsorption_site.append(e)
for a in ads_vector_f:
adsorption_pos.append(a)
dists = []
for s, p in zip(adsorption_site, adsorption_pos):
dists.append(min(structure.lattice.
get_all_distances(p, structure.frac_coords)[0]))
print('Min adsorbate distance from framework:', min(dists), max(dists))
return Structure(structure.lattice, adsorption_site, adsorption_pos)
def unique_site_simple(oms_index, oms_id, system, output_folder, mof_name,
molecule_file):
mof_filename = output_folder+'/'+mof_name
    molecule_name = molecule_file.split('.prm')[0]
ads = add_adsorbate_simple(system,oms_index,molecule_file)
mof_with_adsorbate = merge_structures(ads, system)
cif = CifWriter(mof_with_adsorbate)
cif.write_file(mof_filename+'_'+molecule_name+str(oms_id)+'_1.cif')
cif = CifWriter(mof_with_adsorbate)
output_filename = mof_filename
output_filename += '_first_coordination_sphere_with_'+molecule_name
output_filename += str(oms_id)+'_1.cif'
cif.write_file(output_filename)
return
def adsorbate_placement(system, molecule_file, ads_vector):
coords, coords2, atom_type, ads_frac, com_frac, mol_com_rotated = [], [], [], [], [], []
# Read a pre-defined molecule file
mol = Molecule.from_file(molecule_file)
com_xyz = mol.center_of_mass # get CoM in cartesian
diff_com_ads_vector = np.array([c - a for c, a in zip(com_xyz, ads_vector)])
# shift com of molecule to the position of ads_vector
mol.translate_sites([i for i in range(0, len(mol))], -diff_com_ads_vector)
# RANDOM ROTATION OF MOLECULE AROUND CoM
mol_com_origin, rotated_xyz = [], []
# mol = Molecule(atom_type, coords2)
com_origin = mol.center_of_mass
mol.translate_sites([i for i in range(0, len(mol))], -com_origin)
# for line in coords2:
# x = float(line[0]) - float(com_origin[0])
# y = float(line[1]) - float(com_origin[1])
# z = float(line[2]) - float(com_origin[2])
# mol_com_origin.append([x, y, z])
# building rotation matrix R
R = np.zeros((3, 3), float)
R[:, 0] = RandomNumberOnUnitSphere()
m = RandomNumberOnUnitSphere()
R[:, 1] = m - np.dot(m, R[:, 0]) * R[:, 0] # subtract of component along co1 so it is orthogonal
R[:, 1] = R[:, 1] / np.linalg.norm(R[:, 1])
R[:, 2] = np.cross(R[:, 0], R[:, 1])
R = R.tolist()
# rotate the molecule
rotated_xyz = np.dot(mol.cart_coords, R) + com_origin
    rotated_molecule = Structure(system.lattice, mol.species, rotated_xyz,
                                 coords_are_cartesian=True)
    # put adsorbate inside the simulation cell in fractional coordinates
    inverse_matrix = system.lattice.inv_matrix
    for j, line in enumerate(rotated_xyz):
        s_x = inverse_matrix[0][0] * line[0] + inverse_matrix[0][1] * line[1] + inverse_matrix[0][2] * line[2]
        s_y = inverse_matrix[1][0] * line[0] + inverse_matrix[1][1] * line[1] + inverse_matrix[1][2] * line[2]
        s_z = inverse_matrix[2][0] * line[0] + inverse_matrix[2][1] * line[1] + inverse_matrix[2][2] * line[2]
        ads_frac.append([float(s_x), float(s_y), float(s_z)])
    atom_type = [str(e) for e in rotated_molecule.species]
    return rotated_molecule.frac_coords, atom_type
def adsorbate_framework_overlap(system, com_frac, atom_type):
for i, dist in enumerate(com_frac):
distances = system.lattice.get_all_distances(com_frac[i],
system.frac_coords)
for j, dist2 in enumerate(distances[0]):
if (dist2 - (ap.get_vdf_radius(str(atom_type[i])) + ap.get_vdf_radius(str(system.species[j])))) < -1e-4:
return True
return False
def add_adsorbate_simple(system, oms_index, molecule_file):
ads_dist = 3.0
ads_vector = []
overlap = True
counter = 0
print("Initial adsorption site is ", ads_dist, "Å away from OMS.")
while overlap is True:
counter += 1
print("insertion attempts: ", counter)
# find a position *ads_dist* away from oms
ads_vector = find_adsorption_site(system,
system.cart_coords[oms_index],
ads_dist) # cartesian coordinate as an output
ads_frac, atom_type = adsorbate_placement(system, molecule_file,
ads_vector)
overlap = adsorbate_framework_overlap(system, ads_frac, atom_type)
ads = Structure(system.lattice, atom_type, ads_frac)
mof_with_adsorbate = merge_structures(ads, system)
cif = CifWriter(mof_with_adsorbate)
cif.write_file(str(ads_dist) + str(counter) + '.cif')
if overlap is True:
if counter > 4:
ads_dist += 0.5 # increase the distance from adsorption site by 0.5 Å
counter = 0 # reset the counter
print("New Site is ", ads_dist, "Å away from OMS.")
else:
continue
else:
break
mol = Structure(system.lattice, atom_type, ads_frac)
return mol
def RandomNumberOnUnitSphere():
    # Sample a uniformly distributed point on the unit sphere.
    theta = 2*PI*np.random.random_sample()
    phi = np.arccos(2*np.random.random_sample()-1.0)
    x = np.cos(theta)*np.sin(phi)
    y = np.sin(theta)*np.sin(phi)
    z = np.cos(phi)
    return x, y, z
def find_adsorption_site(system, center, prob_dist):
# find the adsorption site by maximizing the distance
# from all the atoms while keep the distance fixed
# at some predefined distance
tries = 1000
sum_distance = []
probe_positions = []
for i in range(0, tries):
# probe_pos=generate_random_position(system.cart_coords[0],center,(prob_dist)-atom.get_vdf_radius(center_e))
probe_pos = generate_random_position(center, prob_dist)
probe_pos_f = system.lattice.get_fractional_coords(probe_pos)
sum_distance.append(sum([1.0/(r**12)
for r in system.
lattice.
get_all_distances(probe_pos_f,
system.frac_coords)[0]]))
probe_positions.append(probe_pos)
new_position = probe_positions[sum_distance.index(min(sum_distance))]
return new_position
def calc_plane(x, y, z):
v1 = [y[0] - x[0], y[1] - x[1], y[2] - x[2]]
v2 = [z[0] - x[0], z[1] - x[1], z[2] - x[2]]
cross_product = [v1[1]*v2[2]-v1[2]*v2[1],
v1[2] * v2[0] - v1[0] * v2[2],
v1[0] * v2[1] - v1[1] * v2[0]]
d = cross_product[0] * x[0] - cross_product[1] * x[1] \
+ cross_product[2] * x[2]
a = cross_product[0]
b = cross_product[1]
c = cross_product[2]
plane_v = [a, b, c]
plane_v_norm = plane_v/np.linalg.norm(plane_v)
return plane_v_norm
def check_if_plane_on_metal(m_i, indeces, system, tolerance):
crit = 180
    # Set to 12.5 so that ferrocene-type coordination spheres are detected
    # correctly, e.g. BUCROH.
tol = tolerance['plane_on_metal'] # 12.5
# tol = 25.0
for i in range(1, len(indeces)):
for j in range(1, len(indeces)):
for k in range(1, len(indeces)):
if i == j or i == k or j == k:
pass
else:
dihedral = abs(system.get_dihedral(m_i, indeces[i],
indeces[j], indeces[k]))
if abs(dihedral-crit) < tol or abs(dihedral-crit+180) < tol:
return True
return False
def check_positive(n_list):
    for n in n_list:
        if n > 0:
            return True
    return False
def check_negative(n_list):
    for n in n_list:
        if n < 0:
            return True
    return False
def find_other_indeces(indeces, num):
other_indeces = []
for index in range(0, num):
if index not in indeces:
other_indeces.append(index)
return other_indeces
def center_around_metal(system):
# return system
center = system.frac_coords[0]
tmp1 = []
tmp2 = []
tmp1.append(str(system.species[0]))
tmp2.append([system.frac_coords[0][0], system.frac_coords[0][1],
system.frac_coords[0][2]])
system_centered = Structure(system.lattice, tmp1, tmp2)
for i in range(1, system.num_sites):
c_i = system.frac_coords[i]
dist_vector = center-c_i
dist_vector_r = []
for j in range(0, 3):
dist_vector_r.append(round(dist_vector[j]))
dist_before = np.linalg.norm(system.lattice.get_cartesian_coords(center)
- system.lattice.get_cartesian_coords(c_i))
c_i_centered = c_i+dist_vector_r
dist_after = np.linalg.norm(system.lattice.get_cartesian_coords(center)
- system.lattice.
get_cartesian_coords(c_i_centered))
if dist_after > dist_before:
for j in range(0, 3):
dist_vector_r[j] = np.rint(dist_vector[j])
c_i_centered = c_i+dist_vector_r
if dist_after > dist_before:
c_i_centered = c_i
system_centered.append(system.species[i], c_i_centered)
return system_centered
def merge_structures(s1, s2):
    sites = []
    positions = []
    for e, c in zip(s1.species, s1.frac_coords):
        sites.append(e)
        positions.append(c)
    for e, c in zip(s2.species, s2.frac_coords):
        sites.append(e)
        positions.append(c)
    if s1.lattice != s2.lattice:
        sys.exit('Trying to merge two structures with different lattices')
    return Structure(s1.lattice, sites, positions)
def get_metal_surface_areas(metal, system):
sa_list = []
for m, m_coor in enumerate(metal.frac_coords):
        # make a structure of the atoms within 5.0 Å of the metal
sub_system = make_subsystem(m_coor, system, 5.0)
sa_list.append(get_metal_surface_area(m_coor,
str(metal.species[m]),
sub_system))
s_max = max(sa_list)
return s_max
def make_subsystem(coord, system, dist_check):
distances = system.lattice.get_all_distances(coord, system.frac_coords)
coords = []
elements = []
for i, dist in enumerate(distances[0]):
if dist < dist_check:
if dist > 0.1: # exclude the central metal
elements.append(system.species[i])
coords.append(system.frac_coords[i])
return Structure(system.lattice, elements, coords)
def get_metal_surface_area(fcenter, metal_element, system):
center = system.lattice.get_cartesian_coords(fcenter)
vdw_probe = 1.86 # 1.52 #0.25 #2.1 #1.52 #vdw radius for oxygen
# use 0.0 for vdw_probe to get sphere of metal
metal_full_surface_area = sphere_area(metal_element, vdw_probe)
count = 0
mc_tries = 5000
params_file = open('test.txt', 'w')
for i in range(0, mc_tries):
dist = ap.get_vdf_radius(metal_element)+vdw_probe
pos = generate_random_position(center, dist) # vdw_probe
# pos=generate_random_position(center,metal_element,vdw_probe) #vdw_probe
print('xx', pos[0], pos[1], pos[2], file=params_file)
pos_f = system.lattice.get_fractional_coords(pos)
if not check_for_overlap(center, pos_f, system, vdw_probe):
count += 1
sa_frac = float(count)/float(mc_tries) # metal_full_surface_area
sa = metal_full_surface_area*sa_frac
return sa_frac, sa
def check_for_overlap(center, pos, system, r_probe):
# pos=[0.0,0.0,0.0]
# print 'N 1.0',pos[0],pos[1],pos[2],'biso 1.0 N'
distances = system.lattice.get_all_distances(pos, system.frac_coords)
for i, dist in enumerate(distances[0]):
# if not check_if_center(center,system.cart_coords[i],system):
# print dist-r_probe+atom.get_vdf_radius(str(system.species[i]))
# input()
if (dist - (r_probe+ap.get_vdf_radius(str(system.species[i])))) < -1e-4:
return True
return False
def check_if_center(center, test_coords, system):
distances = system.lattice.get_all_distances(test_coords, center)
if distances[0] < 0.1:
return True
else:
return False
def sphere_area(metal_element, probe_r):
r = ap.get_vdf_radius(metal_element)
r = r+probe_r
return 4*math.pi*r*r
def generate_random_position(center, dist):
r = generate_random_vector()
# dist=atom.get_vdf_radius(metal)+vdw_probe
pos = list(rr*dist for rr in r)
pos = [i+j for i, j in zip(pos, center)]
return pos
def generate_random_vector():
    # Marsaglia's method: sample a uniformly random unit vector on the sphere.
    zeta_sq = 2.0
ran_vec = []
while zeta_sq > 1.0:
xi1 = random.random()
xi2 = random.random()
zeta1 = 1.0-2.0*xi1
zeta2 = 1.0-2.0*xi2
zeta_sq = (zeta1*zeta1+zeta2*zeta2)
ranh = 2.0*math.sqrt(1.0-zeta_sq)
ran_vec.append(zeta1*ranh)
ran_vec.append(zeta2*ranh)
ran_vec.append(1.0-2.0*zeta_sq)
return ran_vec
def find_coordination_sequence(center, structure, all_coord_spheres):
    """Compute the coordination sequence up to the Nth coordination shell.

    Takes the MOF as a pymatgen Structure and the index of the central
    metal in the Structure.
    """
# dist_all = structure.lattice.get_all_distances(structure.frac_coords,
# structure.frac_coords)
# all_coord_spheres = []
# for a in range(0, len(structure)):
# all_coord_spheres.append(find_coord_sphere_using_dist(a, structure, dist_all[a])[0])
    # The shell_list is a set with the index of each atom and its unit
    # cell index relative to a central unit cell
shell_list = {(center, (0, 0, 0))}
shell_list_prev = set([])
all_shells = set(shell_list)
n_shells = 6
cs = []
ele = [(str(structure.species[center]))]
coords = [[structure.frac_coords[center][0],
structure.frac_coords[center][1],
structure.frac_coords[center][2]]]
# coordination_structure = (Structure(structure.lattice, ele, coords))
coord_sphere_time = 0.0
count_total = 0
for n in range(0, n_shells):
c_set = set([])
for a_uc in shell_list:
a = a_uc[0]
lattice = a_uc[1]
t0 = time.time()
#TODO make finding coordination sphere faster
# coord_sphere = find_coord_sphere_using_dist(a, structure,
# dist_all[a])[0]
# print(a)
# print(coord_sphere)
# print(find_coord_sphere_using_dist(a, structure,
# dist_all[a])[1])
# input()
coord_sphere = all_coord_spheres[a]
count_total += 1
t1 = time.time()
coord_sphere_time += t1-t0
coord_sphere_with_uc = []
for c in coord_sphere:
# dist = structure.lattice.\
# get_all_distance_and_image(structure.frac_coords[a],
# structure.frac_coords[c])
#
# uc = tuple(l-nl for l, nl in zip(lattice, min(dist)[1]))
diff = structure.frac_coords[a] - structure.frac_coords[c]
new_lat_i = [round(d, 0) for d in diff]
uc = tuple(l-nl for l, nl in zip(lattice, new_lat_i))
# check = [(d-md) < 1e-5 for d, md in zip(new_lat_i, min(dist)[1])]
# if not any(check):
# print(min(dist)[1])
# print(new_lat_i)
# print(check)
# input()
coord_sphere_with_uc.append((c, uc))
coord_sphere_with_uc = tuple(coord_sphere_with_uc)
c_set = c_set.union(set(coord_sphere_with_uc))
for a in shell_list_prev:
c_set.discard(a)
for a in shell_list:
c_set.discard(a)
# for i_uc in c_set:
# i = i_uc[0]
# ele = ap().elements[n+3]
# coords = [structure.frac_coords[i][0], structure.frac_coords[i][1],
# structure.frac_coords[i][2]]
# coordination_structure.append(ele, coords)
cs.append(len(c_set))
all_shells = all_shells.union(c_set)
shell_list_prev = shell_list
shell_list = c_set
# coordination_structure = center_around_metal(coordination_structure)
# write_xyz_file('temp.xyz', coordination_structure)
# print('coordination sphere time:', coord_sphere_time, count_total, len(structure))
# print('Coordination Sphere: ', cs)
# input()
return cs
# if __name__ == '__main__':
# main()
# -
|
examples/FINAL-open_metal_detector.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Scalars
#
# Computer numbers and mathematical numbers are not the same thing. Knowing how numbers are represented on a computer can prevent unintended consequences.
#
# Integers
# - binary representation - everything in a computer is represented with a 0 or 1
# - little and big endian
# - overflow
# - integer division
#
#
# Reals
# - floating point representation
# - small values have more precision than big values
# - floating point numbers are approximations
# - underflow
# - catastrophic cancellation
# - numerically stable algorithms
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# ## Integers
# ### Binary representation of integers
format(16, 'b') # 2 to the fourth power
# If we use 32 bits
format(16, '032b')
# ### Bit shifting
format(16 >> 2, '032b')
16 >> 2
format(16 << 2, '032b')
16 << 2
# ## Endianness
#
# This refers to how the bytes that make up an integer are stored in the computer. Big Endian means that the most significant byte (8 bits) is stored at the lowest address, while Little Endian means that the least significant byte is stored at the lowest address.
#
# 
#
# For the most part you don't have to care about this unless your code involves manipulating the internal structure of integers.
x = 1234
x.to_bytes(2, 'big')
x.to_bytes(2, 'little')
int.from_bytes(x.to_bytes(2, 'big'), 'big')
# Errors occur if you misinterpret the byte order
int.from_bytes(x.to_bytes(2, 'big'), 'little')
# ### Overflow
#
# In general, the computer representation of integers has a limited range, and may overflow. The range depends on whether the integer is signed or unsigned.
#
# For example, with 8 bits, we can represent at most $2^8 = 256$ integers.
#
# - 0 to 255 unsigned
# - -128 to 127 signed
# Signed integers
np.arange(130, dtype=np.int8)[-5:]
# Unsigned integers
np.arange(130, dtype=np.uint8)[-5:]
np.arange(260, dtype=np.uint8)[-5:]
# ### Integer division
#
# In Python 2 or other languages such as C/C++, be very careful when dividing as the division operator `/` performs integer division when both numerator and denominator are integers. This is rarely what you want. In Python 3 the `/` always performs floating point division, and you use `//` for integer division, removing a common source of bugs in numerical calculations.
# + language="python2"
#
# import numpy as np
#
# x = np.arange(10)
# print(x/10)
# -
# If the code above could run, it would give all zeros due to integer division. Same as below:
x = np.arange(10)
x // 10
# Python 3 does the "right" thing.
x = np.arange(10)
x/10
# ## Real numbers
#
# Real numbers are represented as **floating point** numbers. A floating point number is stored in 3 pieces (sign bit, exponent, mantissa) so that every float is represented as $\pm \text{mantissa} \times 2^{\text{exponent}}$. Because of this, the interval between consecutive numbers is smallest (highest precision) for numbers close to 0 and largest for numbers close to the lower and upper bounds.
#
# Because exponents have to be signed to represent both small and large numbers, but it is more convenient to store unsigned numbers, the exponent is stored with an offset (also known as the exponent bias). For example, if the exponent is an unsigned 8-bit number, it can represent the range (0, 255). By using an offset of 127, it instead represents the range (-127, 128).
#
# 
#
# **Note**: Intervals between consecutive floating point numbers are not constant. In particular, the precision for small numbers is much larger than for large numbers. In fact, approximately half of all floating point numbers lie between -1 and 1 when using the `double` type in C/C++ (also the default for `numpy`).
#
# 
#
# Because of this, if you are adding many numbers, it is more accurate to first add the small numbers before the large numbers.
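# A small illustration of this summation-order effect (a sketch; the exact rounding assumes IEEE 754 doubles, numpy's default):

```python
import numpy as np

big = 1e16
small = np.ones(1000)        # the exact answer is big + 1000

naive = big
for s in small:
    naive += s               # each 1.0 is rounded away: 1e16 + 1.0 == 1e16

better = big + small.sum()   # accumulate the small terms first

naive - big, better - big    # (0.0, 1000.0) on IEEE 754 doubles
```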
# #### IEEE 754 32-bit floating point representation
#
# 
#
# See [Wikipedia](https://en.wikipedia.org/wiki/Single-precision_floating-point_format) for how a binary pattern like this one is evaluated (the article walks through 0.15625; here we use -0.15625).
from ctypes import c_int, c_float
s = c_int.from_buffer(c_float(-0.15625)).value
s = format(s, '032b')
s
rep = {
    'sign': s[:1],
    'exponent': s[1:9],
    'fraction': s[9:]
}
rep
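# The three fields can be decoded back into the value by hand. The following is a minimal sketch for normal numbers only; it ignores the special exponent patterns (all zeros and all ones), which encode subnormals, infinities, and NaN:

```python
def decode_ieee754(bits):
    """Decode a 32-bit IEEE 754 bit string (normal numbers only)."""
    sign = -1 if bits[0] == '1' else 1
    exponent = int(bits[1:9], 2) - 127            # remove the bias of 127
    mantissa = 1 + sum(int(b) * 2 ** -(i + 1)     # implicit leading 1
                       for i, b in enumerate(bits[9:]))
    return sign * mantissa * 2 ** exponent

decode_ieee754('10111110001000000000000000000000')   # -> -0.15625
```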
# ### Most base 10 real numbers are approximations
#
# This is simply because numbers are stored in finite-precision binary format.
'%.20f' % (0.1 * 0.1 * 100)
# ### Never check for equality of floating point numbers
i = 0
loops = 0
while i != 1:
i += 0.1 * 0.1
loops += 1
if loops == 1000000:
break
i
i = 0
loops = 0
while np.abs(1 - i) > 1e-6:
i += 0.1 * 0.1
loops += 1
if loops == 1000000:
break
i
# ### Associative law does not necessarily hold
6.022e23 - 6.022e23 + 1
1 + 6.022e23 - 6.022e23
# ### Distributive law does not hold
a = np.exp(1)
b = np.pi
c = np.sin(1)
a*(b+c)
a*b + a*c
# ### Catastrophic cancellation
# Consider calculating sample variance
#
# $$
# s^2 = \frac{1}{n(n-1)}\left(n\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2\right)
# $$
#
# Be careful whenever you calculate the difference of potentially big numbers.
def var(x):
"""Returns variance of sample data using sum of squares formula."""
n = len(x)
return (1.0/(n*(n-1))*(n*np.sum(x**2) - (np.sum(x))**2))
# ### Numerically stable algorithms
# #### What is the sample variance for numbers from a normal distribution with variance 1?
np.random.seed(15)
x_ = np.random.normal(0, 1, int(1e6))
x = 1e12 + x_
var(x)
np.var(x)
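# One numerically stable alternative is Welford's one-pass algorithm, which updates a running mean instead of subtracting two large sums. A sketch:

```python
import numpy as np

def welford_var(xs):
    """One-pass sample variance via Welford's algorithm."""
    n, mean, m2 = 0, 0.0, 0.0
    for v in xs:
        n += 1
        delta = v - mean
        mean += delta / n                 # update the running mean
        m2 += delta * (v - mean)          # accumulate squared deviations
    return m2 / (n - 1)

# Stable even with a huge offset, where the sum-of-squares formula fails:
data = 1e12 + np.array([1.0, 2.0, 3.0, 4.0])
welford_var(data)   # ~1.667, matching np.var(data, ddof=1)
```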
# #### Underflow
#
# We want to calculate the ratio between the products of two sets of random numbers. Problems arise because the products are too small to be represented as standard floating point numbers.
import warnings
warnings.filterwarnings('ignore')
np.random.seed(4)
xs = np.random.random(1000)
ys = np.random.random(1000)
np.prod(xs), np.prod(ys), np.prod(xs)/np.prod(ys)
# #### Prevent underflow by staying in log space
x = np.sum(np.log(xs))
y = np.sum(np.log(ys))
np.exp(x - y)
# #### Overflow
#
# Let's calculate
#
# $$
# \log(e^{1000} + e^{1000})
# $$
#
# Using basic algebra, we get the solution $\log(2) + 1000$.
x = np.array([1000, 1000])
np.log(np.sum(np.exp(x)))
np.logaddexp(*x)
# **logsumexp**
#
# This function generalizes `logaddexp` to an arbitrary number of addends and is useful in a variety of statistical contexts.
# Suppose we need to calculate a probability distribution $\pi$ parameterized by a vector $x$
#
# $$
# \pi_i = \frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}}
# $$
#
# Taking logs, we get
#
# $$
# \log(\pi_i) = x_i - \log{\sum_{j=1}^n e^{x_j}}
# $$
x = 1e6*np.random.random(100)
np.log(np.sum(np.exp(x)))
from scipy.special import logsumexp
logsumexp(x)
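# The same max-shift trick behind `logsumexp` gives a stable log-softmax directly (a sketch; subtracting the maximum cancels in the ratio, so `exp` never sees a huge argument):

```python
import numpy as np

def log_softmax(x):
    # log(pi_i) = x_i - log(sum_j exp(x_j)), computed after shifting by max(x)
    m = np.max(x)
    return x - (m + np.log(np.sum(np.exp(x - m))))

np.exp(log_softmax(np.array([1000.0, 1000.0])))   # array([0.5, 0.5])
```

The unshifted version would overflow on these inputs, just as `np.log(np.sum(np.exp(x)))` did above.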
# ### Other useful numerically stable functions
# **log1p and expm1**
np.exp(np.log(1 + 1e-6)) - 1
np.expm1(np.log1p(1e-6))
# **sinc**
x = 1
np.sin(x)/x
np.sinc(x)
x = np.linspace(0.01, 2*np.pi, 100)
plt.plot(x, np.sinc(x), label='Library function')
plt.plot(x, np.sin(x)/x, label='DIY function')
plt.legend()
pass
|
notebooks/copies/lectures/T01A_Scalars.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] hide_input=false slideshow={"slide_type": "slide"}
# # Computational Thinking
# *Lesson Developer: <NAME> <EMAIL>*
# + init_cell=true slideshow={"slide_type": "skip"} tags=["Hide"]
# This code cell starts the necessary setup for Hour of CI lesson notebooks.
# First, it enables users to hide and unhide code by producing a 'Toggle raw code' button below.
# Second, it imports the hourofci package, which is necessary for lessons and interactive Jupyter Widgets.
# Third, it helps hide/control other aspects of Jupyter Notebooks to improve the user experience
# This is an initialization cell
# It is not displayed because the Slide Type is 'Skip'
from IPython.display import HTML, IFrame, Javascript, display
from ipywidgets import interactive
import ipywidgets as widgets
from ipywidgets import Layout
import getpass # This library allows us to get the username (User agent string)
# import package for hourofci project
import sys
sys.path.append('../../supplementary') # relative path (may change depending on the location of the lesson notebook)
import hourofci
# load javascript to initialize/hide cells, get user agent string, and hide output indicator
# hide code by introducing a toggle button "Toggle raw code"
HTML('''
<script type="text/javascript" src=\"../../supplementary/js/custom.js\"></script>
<style>
.output_prompt{opacity:0;}
</style>
<input id="toggle_code" type="button" value="Toggle raw code">
''')
# + [markdown] slideshow={"slide_type": "slide"}
# Computational thinking means framing problems and their solutions in ways that computers, or computer systems, can execute. ‘Thinking like a computer’ doesn’t mean thinking in binary, but thinking in ways that enable problem solving with a computer.
# Let’s consider an example that’s certainly safer within a computer simulation than out in the real world - a forest fire!
# + [markdown] hide_input=true slideshow={"slide_type": "slide"}
# ## Forest fire simulations and visualizations
# On the next page, there is an example of a **simulation** of a forest fire. It is as if your computer is pretending to burn an actual forest, but in reality nothing is being burned; the computer is only doing what it does best: **computations**. By showing you pictures of trees, the computer gives you a **visualization** of how the forest burns. Although you cannot see the _computations_ a computer is doing, you can see the _visualizations_ it shows you. Throughout this lesson you will get a taste of how to think **computationally** to understand how a computer is able to **simulate** something like a forest fire.
# + hide_input=false slideshow={"slide_type": "slide"} tags=["Init", "Hide", "1A"]
IFrame(src="supplementary/fire-imgs.html", width=350, height=450)
# + [markdown] slideshow={"slide_type": "slide"}
# Simulations like this allow us to predict a phenomenon or occurrence in some environment or context. By representing the occurrences within the simulation, we can see how it is operating and working - and in a much safer way than trying this experiment in the real world.
# + [markdown] slideshow={"slide_type": "-"}
# **Continue the journey: [Next Example](ct-2.ipynb)**
|
beginner-lessons/computational-thinking/ct-1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
from tensorflow import keras
keras.__version__
# # A first look at a neural network
# We will now take a look at a first concrete example of a neural network, which makes use of the Python library Keras to learn to classify hand-written digits.
#
# The MNIST dataset comes pre-loaded in Keras, in the form of a set of four Numpy arrays:
# +
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# -
train_images.shape
train_images.ndim
len(train_labels)
train_labels
# +
digit = train_images[4, 7:-7, 7:-7]
digit.shape
# -
import matplotlib.pyplot as plt
plt.imshow(digit, cmap=plt.cm.binary)
plt.show()
# Our workflow will be as follows: first we will present our neural network with the training data, `train_images` and `train_labels`. The network will then learn to associate images and labels. Finally, we will ask the network to produce predictions for `test_images`, and we will verify whether these predictions match the labels from `test_labels`.
#
# Let's build our network
#
#
# +
from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28*28,)))
network.add(layers.Dense(10, activation='softmax'))
# -
# To make our network ready for training, we need to pick three more things as part of the "compilation" step:
#
# * A loss function: this is how the network will measure how good a job it is doing on its training data, and thus how it will be able to steer itself in the right direction.
# * An optimizer: this is the mechanism through which the network will update itself based on the data it sees and its loss function.
# * Metrics to monitor during training and testing. Here we will only care about accuracy (the fraction of the images that were correctly classified).
#
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
# Before training, we will preprocess our data by reshaping it into the shape that the network expects, and scaling it so that all values are in the [0, 1] interval. Previously, our training images for instance were stored in an array of shape (60000, 28, 28) of type uint8 with values in the [0, 255] interval. We transform it into a float32 array of shape (60000, 28 * 28) with values between 0 and 1.
#
# +
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
# +
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# -
tf.config.experimental_run_functions_eagerly(True)
network.fit(train_images, train_labels, epochs=5, batch_size=128)
test_loss, test_acc = network.evaluate(test_images, test_labels)
print('test_acc:', test_acc)
#
#
# Our test set accuracy turns out to be 97.8% -- that's quite a bit lower than the training set accuracy. This gap between training accuracy and test accuracy is an example of "overfitting", the fact that machine learning models tend to perform worse on new data than on their training data.
|
DeepLearning/deep-learning-with-python-notebooks/2.1_a_first_look_at_a_neural_network.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as sts
# %matplotlib inline
# define the source distribution
a, b = 0.00623093670105, 1.0062309367
reciprocal_rv=sts.reciprocal(a,b)
sample=reciprocal_rv.rvs(size=1000)
# plot a histogram and the theoretical density curve
plt.hist(sample, bins=50, normed=True, label='sample')
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
x = np.linspace(0,1,1000)
pdf = reciprocal_rv.pdf(x)
plt.plot(x, pdf, label='theoretical pdf')
plt.legend()
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
# compute the mean and variance of the source distribution
mean0 = reciprocal_rv.mean()
D0 = reciprocal_rv.var()
print 'Mean and variance of the source distribution'
print 'Mean = ', mean0, ' Variance = ', D0
# distribution of the sample mean
# for samples of size 5
mean5=np.zeros(1000)
for i in range (1000):
sample5=reciprocal_rv.rvs(size=5)
mean5[i]=np.mean(sample5)
print 'Mean of the size-5 sample means: ', mean5.mean()
print 'Mean of the source distribution: ', mean0
print 'Variance of the size-5 sample means: ', mean5.var()
print 'Source distribution variance divided by n, i.e. D/n: ', D0/5
# +
# plot a histogram and the normal distribution curve
plt.hist(mean5, normed=True, label='sample')
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
norm1=sts.norm(loc=mean0, scale=(D0/5)**0.5)
x = np.linspace(0,1,1000)
pdf = norm1.pdf(x)
plt.plot(x, pdf, label='theoretical pdf')
plt.legend()
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
# +
# distribution of the sample mean
# for samples of size 10
mean10=np.zeros(1000)
for i in range (1000):
sample10=reciprocal_rv.rvs(size=10)
mean10[i]=np.mean(sample10)
print 'Mean of the size-10 sample means: ', mean10.mean()
print 'Mean of the source distribution: ', mean0
print 'Variance of the size-10 sample means: ', mean10.var()
print 'Source distribution variance divided by n, i.e. D/n: ', D0/10
# +
# plot a histogram and the normal distribution curve
plt.hist(mean10, normed=True, label='sample')
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
norm10=sts.norm(loc=mean0, scale=(D0/10)**0.5)
x = np.linspace(0,1,1000)
pdf = norm10.pdf(x)
plt.plot(x, pdf, label='theoretical pdf')
plt.legend()
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
# +
# distribution of the sample mean
# for samples of size 50
mean50=np.zeros(1000)
for i in range (1000):
sample50=reciprocal_rv.rvs(size=50)
mean50[i]=np.mean(sample50)
print 'Mean of the size-50 sample means: ', mean50.mean()
print 'Mean of the source distribution: ', mean0
print 'Variance of the size-50 sample means: ', mean50.var()
print 'Source distribution variance divided by n, i.e. D/n: ', D0/50
# -
# plot a histogram and the normal distribution curve
plt.hist(mean50, normed=True, label='sample')
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
norm50=sts.norm(loc=mean0, scale=(D0/50)**0.5)
x = np.linspace(0,1,1000)
pdf = norm50.pdf(x)
plt.plot(x, pdf, label='theoretical pdf')
plt.legend()
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
# +
# Conclusions:
# 1) As the sample size increases, the parameters of the
# sample approach those of the normal distribution
# 2) the distribution of sample means is well described by
# a normal distribution even for n=5
# -
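The same convergence can be sketched with only the Python 3 standard library, here using the uniform distribution in place of the reciprocal one purely for illustration (the distribution choice, seed, and trial counts are assumptions, not part of the assignment):

```python
import random
import statistics

random.seed(0)

def sample_means(n, trials=1000):
    """Average n uniform(0, 1) draws, repeated `trials` times."""
    return [statistics.fmean(random.random() for _ in range(n))
            for _ in range(trials)]

# the variance of the sample mean shrinks roughly as D/n
for n in (5, 10, 50):
    means = sample_means(n)
    print(n, statistics.fmean(means), statistics.variance(means))
```

The printed variances fall roughly by a factor of n, matching the D/n comparison made above.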
|
data-science-course/course1/week5/samples/CLT.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PySpark
# language: python
# name: pyspark
# ---
# + [markdown] id="WYJPIMozU1QV"
# ## 2. Data Analyst - Run SQL on tables and plot data
#
# ### This notebook runs some SQL against BigQuery on tables and plots the data
# +
from google.cloud import bigquery
import json
client = bigquery.Client()
# -
# project_id = !gcloud config list --format 'value(core.project)' 2>/dev/null
dataset_name = project_id[0] + '-raw'
dataset_name = dataset_name.replace('-', '_')
table_path = "`" + project_id[0] + '.' + dataset_name + '.transaction_data_train`'
table_path
# **TODO**
# * Substitute **table_path** in the FROM clauses below
# #### Use Ipython magics to query BQ
# * Learn more about Ipython magics for BigQuery [[here]](https://googleapis.dev/python/bigquery/latest/magics.html)
# %%bigquery
SELECT
isFraud,
COUNT(*) as count
FROM `thatistoomuchdata.thatistoomuchdata_raw.transaction_data_train`
GROUP BY isFraud
# %%bigquery
SELECT
type,
COUNT(*) as Transactions,
AVG(amount) as Average_amount
FROM `thatistoomuchdata.thatistoomuchdata_raw.transaction_data_train`
GROUP BY type
# %%bigquery
SELECT
type,
SUM(amount)/1000000000 as total_amount_in_billion,
SUM(CASE WHEN isFraud=1 THEN amount ELSE 0 END)/1000000000 as fraud_amount_in_billion,
SUM(CASE WHEN isFraud=1 THEN 1 ELSE 0 END) as fraud_cases
FROM `thatistoomuchdata.thatistoomuchdata_raw.transaction_data_train`
GROUP BY type
# + [markdown] id="1CKY3Z8MU1QV"
# #### Use Spark to query BQ table
#
# Enable Apache Arrow to allow faster conversion from Spark DataFrame to Pandas DataFrame [[doc]](https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html)
# -
sql = f"""
SELECT
type,
COUNT(*) as Transactions,
AVG(amount) as Average_amount,
SUM(amount)/1000000000 as total_amount_in_billion,
SUM(CASE WHEN isFraud=1 THEN amount ELSE 0 END)/1000000000 as fraud_amount_in_billion,
SUM(CASE WHEN isFraud=1 THEN 1 ELSE 0 END) as fraud_cases
FROM {table_path}
GROUP BY type
"""
df_transaction_data_train = client.query(sql).to_dataframe()
type(df_transaction_data_train)
df_transaction_data_train.head()
# + [markdown] id="COb6a1XTU1QW"
# ### Plot data
# + id="x01_SPT9U1QW"
import matplotlib.pyplot as plt
# -
df_transaction_data_train.plot(x="type", y=["total_amount_in_billion", "fraud_amount_in_billion"], rot=90, kind="bar")
df_transaction_data_train.plot(x="type", y=["Transactions", "fraud_cases"], rot=90, kind="bar")
# + id="fyJvPdghU1QW" outputId="47256998-e8b7-4c73-c444-54a3e894ff19"
df_transaction_data_train.set_index('type', inplace=True)
df_transaction_data_train.head()
# -
|
2_Data_Analyst.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nams
# language: python
# name: nams
# ---
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import warnings
warnings.filterwarnings('ignore')
# ## Introduction
# +
from IPython.display import YouTubeVideo
YouTubeVideo(id="JjpbztqP9_0", width="100%")
# -
# Graph traversal is akin to walking along the graph, node by node,
# constrained by the edges that connect the nodes.
# Graph traversal is particularly useful for understanding
# the local structure of certain portions of the graph
# and for finding paths that connect two nodes in the network.
#
# In this chapter, we are going to learn how to perform pathfinding in a graph,
# specifically by looking for _shortest paths_ via the _breadth-first search_ algorithm.
# ## Breadth-First Search
#
# The BFS algorithm is a staple of computer science curricula,
# and for good reason:
# it teaches learners how to "think on" a graph,
# putting one in the position of
# "the dumb computer" that can't use a visual cortex to
# "_just know_" how to trace a path from one node to another.
# As a topic, learning how to do BFS
# additionally imparts algorithmic thinking to the learner.
#
# ### Exercise: Design the algorithm
#
# Try out this exercise to get some practice with algorithmic thinking.
#
# > 1. On a piece of paper, conjure up a graph that has 15-20 nodes. Connect them any way you like.
# > 1. Pick two nodes. Pretend that you're standing on one of the nodes, but you can't see any further beyond one neighbor away.
# > 1. Work out how you can find _a_ path from the node you're standing on to the other node, given that you can _only_ see nodes that are one neighbor away but have an infinitely good memory.
#
# If you are successful at designing the algorithm, you should get the answer below.
from nams import load_data as cf
G = cf.load_sociopatterns_network()
# +
from nams.solutions.paths import bfs_algorithm
# UNCOMMENT NEXT LINE TO GET THE ANSWER.
# bfs_algorithm()
# -
# ### Exercise: Implement the algorithm
#
# > Now that you've seen how the algorithm works, try implementing it!
# +
# FILL IN THE BLANKS BELOW
def path_exists(node1, node2, G):
"""
This function checks whether a path exists between two nodes (node1,
node2) in graph G.
"""
visited_nodes = _____
queue = [_____]
while len(queue) > 0:
node = ___________
neighbors = list(_________________)
if _____ in _________:
# print('Path exists between nodes {0} and {1}'.format(node1, node2))
return True
else:
visited_nodes.___(____)
nbrs = [_ for _ in _________ if _ not in _____________]
queue = ____ + _____
# print('Path does not exist between nodes {0} and {1}'.format(node1, node2))
return False
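For comparison, here is one possible completion of the same breadth-first idea written against a plain adjacency dict rather than a NetworkX graph (the toy graph and function name below are made up for illustration; the course's own solution is revealed in the next cell):

```python
from collections import deque

def path_exists_dict(node1, node2, adjacency):
    """BFS check for a path from node1 to node2 in an adjacency-dict graph."""
    visited = set()
    queue = deque([node1])
    while queue:
        node = queue.popleft()          # take the oldest frontier node
        neighbors = adjacency.get(node, [])
        if node2 in neighbors:          # destination is one hop away
            return True
        visited.add(node)
        # enqueue unseen neighbors only, so the search terminates
        queue.extend(n for n in neighbors
                     if n not in visited and n not in queue)
    return False

# toy graph: 0-1-2 form a chain, 3 is isolated
toy = {0: [1], 1: [0, 2], 2: [1], 3: []}
```

Using a deque keeps the frontier first-in-first-out, which is what makes this a breadth-first rather than depth-first search.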
# +
# UNCOMMENT THE FOLLOWING TWO LINES TO SEE THE ANSWER
from nams.solutions.paths import path_exists
# # path_exists??
# +
# CHECK YOUR ANSWER AGAINST THE TEST FUNCTION BELOW
from random import sample
import networkx as nx
def test_path_exists(N):
"""
N: The number of times to spot-check.
"""
for i in range(N):
        n1, n2 = sample(list(G.nodes()), 2)
assert path_exists(n1, n2, G) == bool(nx.shortest_path(G, n1, n2))
return True
assert test_path_exists(10)
# -
# ## Visualizing Paths
#
# One of the objectives of that exercise before was to help you "think on graphs".
# Now that you've learned how to do so, you might be wondering,
# "How do I visualize that path through the graph?"
#
# Well first off, if you inspect the `test_path_exists` function above,
# you'll notice that NetworkX provides a `shortest_path()` function
# that you can use. Here's what using `nx.shortest_path()` looks like.
path = nx.shortest_path(G, 7, 400)
path
# As you can see, it returns the nodes along the shortest path,
# incidentally in the exact order that you would traverse.
#
# One thing to note, though!
# If there are multiple shortest paths from one node to another,
# NetworkX will only return one of them.
#
# So how do you draw those nodes _only_?
#
# You can use the `G.subgraph(nodes)`
# to return a new graph that only has nodes in `nodes`
# and only the edges that exist between them.
# After that, you can use any plotting library you like.
# We will show an example here that uses nxviz's matrix plot.
#
# Let's see it in action:
# + tags=[]
import nxviz as nv
g = G.subgraph(path)
nv.matrix(g, sort_by="order")
# -
# _Voila!_ Now we have the subgraph (1) extracted and (2) drawn to screen!
# In this case, the matrix plot is a suitable visualization for its compactness.
# The off-diagonals also show that each node is a neighbor to the next one.
#
# You'll also notice that if you try to modify the graph `g`, say by adding a node:
#
# ```python
# g.add_node(2048)
# ```
#
# you will get an error:
#
# ```python
# ---------------------------------------------------------------------------
# NetworkXError Traceback (most recent call last)
# <ipython-input-10-ca6aa4c26819> in <module>
# ----> 1 g.add_node(2048)
#
# ~/anaconda/envs/nams/lib/python3.7/site-packages/networkx/classes/function.py in frozen(*args, **kwargs)
# 156 def frozen(*args, **kwargs):
# 157 """Dummy method for raising errors when trying to modify frozen graphs"""
# --> 158 raise nx.NetworkXError("Frozen graph can't be modified")
# 159
# 160
#
# NetworkXError: Frozen graph can't be modified
# ```
#
# From the perspective of semantics, this makes a ton of sense:
# the subgraph `g` is a perfect subset of the larger graph `G`,
# and should not be allowed to be modified
# unless the larger container graph is modified.
# ### Exercise: Draw path with neighbors one degree out
#
# Try out this next exercise:
#
# > Extend graph drawing with the neighbors of each of those nodes.
# > Use any of the nxviz plots (`nv.matrix`, `nv.arc`, `nv.circos`);
# > try to see which one helps you tell the best story.
# + tags=[]
from nams.solutions.paths import plot_path_with_neighbors
### YOUR SOLUTION BELOW
# + tags=[]
plot_path_with_neighbors(G, 7, 400)
# -
# In this case, we opted for an Arc plot because we only have one grouping of nodes but have a logical way to order them.
# Because the path follows the order, the edges being highlighted automatically look like hops through the graph.
# ## Bottleneck nodes
#
# We're now going to revisit the concept of an "important node",
# this time now leveraging what we know about paths.
#
# In the "hubs" chapter, we saw how a node that is "important"
# could be so because it is connected to many other nodes.
#
# Paths give us an alternative definition.
# If we imagine that we have to pass a message on a graph
# from one node to another,
# then there may be "bottleneck" nodes
# for which if they are removed,
# then messages have a harder time flowing through the graph.
#
# One metric that measures this form of importance
# is the "betweenness centrality" metric.
# On a graph through which a generic "message" is flowing,
# a node with a high betweenness centrality
# is one that has a high proportion of shortest paths
# flowing through it.
# In other words, it behaves like a _bottleneck_.
# ### Betweenness centrality in NetworkX
#
# NetworkX provides a "betweenness centrality" function
# that behaves consistently with the "degree centrality" function,
# in that it returns a mapping from node to metric:
# + tags=[]
import pandas as pd
pd.Series(nx.betweenness_centrality(G))
# -
# ### Exercise: compare degree and betweenness centrality
#
# > Make a scatterplot of degree centrality on the x-axis
# > and betweenness centrality on the y-axis.
# > Do they correlate with one another?
# + tags=[]
import matplotlib.pyplot as plt
import seaborn as sns
# YOUR ANSWER HERE:
# + tags=[]
from nams.solutions.paths import plot_degree_betweenness
plot_degree_betweenness(G)
# -
# ### Think about it...
#
# ...does it make sense that degree centrality and betweenness centrality
# are not well-correlated?
#
# Can you think of a scenario where a node has a
# "high" betweenness centrality
# but a "low" degree centrality?
# Before peeking at the graph below,
# think about your answer for a moment.
# + tags=[]
nx.draw(nx.barbell_graph(5, 1))
# -
# ## Recap
#
# In this chapter, you learned the following things:
#
# 1. You figured out how to implement the breadth-first-search algorithm to find shortest paths.
# 1. You learned how to extract subgraphs from a larger graph.
# 1. You implemented visualizations of subgraphs, which should help you as you communicate with colleagues.
# 1. You calculated betweenness centrality metrics for a graph, and visualized how they correlated with degree centrality.
# ## Solutions
#
# Here are the solutions to the exercises above.
# + tags=[]
from nams.solutions import paths
import inspect
print(inspect.getsource(paths))
|
notebooks/02-algorithms/02-paths.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import logging
logger = logging.getLogger("logger_name")
logger.debug("Logging at debug")
logger.info("Logging at info")
logger.warning("Logging at warning")
logger.error("Logging at error")
logger.fatal("Logging at fatal")
system = "moon"
for number in range(3):
...
logger.warning("%d errors reported in %s", number, system)
|
Chapter06/Exercise93/Exercise93.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.0.0
# language: julia
# name: julia-1.0
# ---
# ### Let's get this out of the way first: **Julia uses 1-based indexing**
# # "Helper" Functions similar to Python's ```dir()``` and ```help()```
# **```methodswith()```**
methodswith(AbstractString)
# # Vectorized Operation
[1,2,3] .^ 3
# # Control Flow
# ### IF statement
# +
some_var = 5
if some_var > 10
println("some_var is totally bigger than 10.")
elseif some_var < 10 # This elseif clause is optional.
println("some_var is smaller than 10.")
else                 # The else clause is also optional.
println("some_var is indeed 10.")
end
# -
# ### FOR Loops
for animal=["dog", "cat", "mouse"]
println("$animal is a mammal")
# You can use $ to interpolate variables or expression into strings
end
# You can use 'in' instead of '='.
for animal in ["dog", "cat", "mouse"]
println("$animal is a mammal")
end
for i in 1:5
println(i)
end
for i in 1:9:50
println(i)
end
for a in Dict("dog"=>"mammal","cat"=>"mammal","mouse"=>"mammal")
println("$(a[1]) is a $(a[2])")
end
for (k,v) in Dict("dog"=>"mammal","cat"=>"mammal","mouse"=>"mammal")
println("$k is a $v")
end
# ### WHILE Loop
# NOTE: x is not "seen" by the ```while``` loop and so it must be made global
x = 1
while x < 4
println(x)
global x += 1 # Shorthand for x = x + 1
end
x = 1
while x < 4
println(x)
x += 1 # Shorthand for x = x + 1
end
# # Compound Expression
z = begin
x = 1
y = 2
x + y
end
# (```;```) chain syntax
z = (x = 1; y = 2; x + y)
begin x = 1; y = 2; x + y end
(x = 1;
y = 2;
x + y)
# # String Basics
str = "Hello, world.\n"
"""Contains "quote" characters"""
str[1]
str[6]
str[end]
str[end-1]
str[4:9]
# Notice that the expressions str[k] and str[k:k] do not give the same result:
str[6]
str[6:6]
# The former is a Char and the latter is a String of length 1
str = "long string"
substr = SubString(str, 1, 4)
# ### String concatenation with ```string()``` or ```*```
greet = "Hello"
whom = "World"
string(greet, ", ", whom, ".\n")
greet * ", " * whom
# ### String interpolation
"$greet, $whom.\n"
# you can interpolate any expression into a string using parentheses:
"1 + 2 = $(1 + 2)"
# ### Triple-Quoted String Literals
str =
"""Hello,
world.
"""
print(str)
# ### Location Index: ```findfirst()``` / ```isequal()```
findfirst(isequal('x'), "xylophone")
# **How do I find methods that uses a string as one of its arguments? Use ```methodswith()```**
methodswith(AbstractString)
|
jupyter_notebooks/julia/Julia_Basics.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.12 64-bit
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from IPython.core.display import display
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV
from modules.datasets import add_bias_noise, make_time_series_data
from modules.performance import draw_figures
# select n_components of PCA
def pca_n_select(x, ax=None, thr_cum=0.9, thr_exp_var=0.05):
# PCA
pca = PCA()
pca.fit(x)
print("X Shape", x.shape)
cumsum = np.hstack([0, np.cumsum(pca.explained_variance_ratio_)])
d1 = np.argmax(cumsum >= thr_cum)
d2 = np.argmax(pca.explained_variance_ <= thr_exp_var)
print("d1=",d1,"d2=",d2)
n_comp = max(d1,d2)
print("selected n:", n_comp)
if ax:
ax.plot(range(min(x.shape[0],x.shape[1])+1), cumsum)
return pca, n_comp, ax
else:
return pca, n_comp
# Select an algorithm.
algorithm = "OLS" # "OLS", "PCR", "PLS"
# Generate sample data.
n_cv = 5
n_samples = 1000
n_samples_train = 800
n_features = 10
y_bias = 5
y_noise = 0.1
x_noise_test = 0.001
x_bias_test = 0.1
x = make_time_series_data(
n_samples=n_samples,
n_features=n_features,
random_state=0,
start="2021-01-01 00:00:00",
)
random_generator = np.random.RandomState(101)
x.iloc[:n_samples_train, 3] = x.iloc[:n_samples_train, 0] + random_generator.normal(
scale=x_noise_test, size=n_samples_train
)
random_generator = np.random.RandomState(102)
x.iloc[n_samples_train:, 3] = (
x.iloc[n_samples_train:, 0]
+ random_generator.normal(scale=x_noise_test, size=n_samples - n_samples_train)
+ x_bias_test
)
y = pd.DataFrame(
0.5 * x.iloc[:, 0] + 0.4 * x.iloc[:, 1] - 0.3 * x.iloc[:, 2], columns=["target"]
)
y = add_bias_noise(y, bias=y_bias, noise=y_noise, random_state=1)
display(x.head())
display(y.head())
# Split the data into train set and test set.
data = {"x": {}, "y": {}, "y_pred": {}}
data["x"]["train"] = x.iloc[:n_samples_train, :]
data["x"]["test"] = x.iloc[n_samples_train:, :]
data["y"]["train"] = y.iloc[:n_samples_train, :]
data["y"]["test"] = y.iloc[n_samples_train:, :]
# Training a model.
if algorithm == "OLS":
reg = LinearRegression()
reg.fit(data["x"]["train"], data["y"]["train"].values.reshape(-1))
coef = reg.coef_.copy()
intercept = reg.intercept_.copy()
elif algorithm == "PCR":
# get number of components
fig = plt.figure()
fig.suptitle("Explained Variance Ratio")
ax = fig.add_subplot(1,1,1)
_, n_comp, ax = pca_n_select(data["x"]["train"], ax=ax, thr_cum=0.95, thr_exp_var=0.5)
plt.show()
pca = PCA(n_components=n_comp)
t = pca.fit_transform(data["x"]["train"])
reg = LinearRegression()
reg.fit(t, data["y"]["train"].values.reshape(-1))
coef = np.dot(reg.coef_.copy(), pca.components_)
intercept = reg.intercept_.copy()
elif algorithm == "PLS":
reg = PLSRegression()
parameters = {"n_components": [i + 1 for i in range(n_features)]}
search_params = GridSearchCV(
reg,
parameters,
scoring="neg_root_mean_squared_error",
n_jobs=-1,
cv=n_cv,
refit=True,
)
search_params.fit(data["x"]["train"], data["y"]["train"])
print("Best Parameters")
print(search_params.best_params_)
reg = search_params.best_estimator_
coef = reg.coef_.reshape(-1)
intercept = reg._y_mean.copy()
else:
    raise ValueError("algorithm must be 'OLS', 'PCR' or 'PLS'.")
display(
pd.DataFrame([coef], index=["Coefficient"], columns=data["x"]["train"].columns)
)
display(
pd.DataFrame(
[intercept], index=["Intercept"], columns=data["y"]["train"].columns
)
)
# Predict y from x with the created model.
for type_data in ["train", "test"]:
if algorithm == "PCR":
data["y_pred"][type_data] = reg.predict(pca.transform(data["x"][type_data])).reshape(-1, 1)
else:
data["y_pred"][type_data] = reg.predict(data["x"][type_data]).reshape(-1, 1)
# Visualize performance
data_list = [
{
"name": "Training",
"actual": data["y"]["train"].values,
"estimated": data["y_pred"]["train"],
},
{
"name": "Test",
"actual": data["y"]["test"].values,
"estimated": data["y_pred"]["test"],
},
]
fig, axes = draw_figures(data_list, "Training - Test")
# -
|
notebook/ols_pcr_pls.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:udacity] *
# language: python
# name: conda-env-udacity-py
# ---
# # Project 1: Finding Lane Lines On the Road
#
# ## Reflection
#
# ### Pipeline
#
# My pipeline consisted of 7 steps:
#
# #### Step 1: Grayscaling
#
# I did not modify the grayscale conversion in any way.
#
# #### Step 2: Canny edge detection
#
# I used the opencv Canny algo with the following parameters:
#
# Parameters:
# low_threshold = 100
# high_threshold = 200
#
# #### Step 3: Region masking
#
# I applied a quadrilateral region mask to the frames with the following vertices:
#
# top_left = (int(xsize/2 - 50), int(ysize/1.72))
# top_right = (int(xsize/2 + 50), int(ysize/1.72))
# bottom_left = (0, ysize)
# bottom_right = (xsize, ysize)
#
# #### Step 4: Hough line detection
#
# I used the opencv HoughLinesP algo with the following parameters:
#
# rho = 1
# theta = np.pi/180
# threshold = 50
# max_line_gap = 200
# min_line_length = 100
#
# #### Step 5: Min/max filter
#
# I applied a min/max filter on each line's slope to reject unwanted detections.
#
# #### Step 6: Line averaging
#
# I split the lines into two camps (left and right) based on their slope. To calculate the average line, I simply added their total __dx__ / __dy__'s and divided the two to calculate __m__. I used the average top point as __y1__ and __x1__, and used the resultant system of equations to calculate __y2__, __x2__, and __b__.
#
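A sketch of that averaging step follows; the segment tuple format and the `y_bottom` reference height are assumptions for illustration, not the exact pipeline code:

```python
def average_line(segments, y_bottom):
    """Average several (x1, y1, x2, y2) segments into one line.

    Slope m = sum(dy) / sum(dx); the mean top endpoint anchors the
    line, and b = y - m*x then extrapolates it down to y_bottom.
    """
    dx = sum(x2 - x1 for x1, _, x2, _ in segments)
    dy = sum(y2 - y1 for _, y1, _, y2 in segments)
    m = dy / dx
    # anchor at the average of the upper endpoints
    x_top = sum(s[0] for s in segments) / len(segments)
    y_top = sum(s[1] for s in segments) / len(segments)
    b = y_top - m * x_top
    x_bottom = (y_bottom - b) / m
    return (x_top, y_top, x_bottom, y_bottom)
```

Summing the per-segment dx and dy before dividing weights longer segments more heavily, which tends to favor the stronger detections.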
# #### Step 7: Smoothing filter
#
# The lines were quite jumpy, so I created a 5-tap box filter to smooth them out.
#
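The 5-tap box filter amounts to a moving average over the last five frames' line parameters; a minimal sketch (the tap count and scalar input are assumptions):

```python
from collections import deque

class BoxFilter:
    """Moving average over the last `taps` values."""

    def __init__(self, taps=5):
        # deque with maxlen silently drops the oldest value
        self.window = deque(maxlen=taps)

    def update(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)
```

One filter instance per line parameter (slope and intercept, for each side) is enough to damp the frame-to-frame jitter.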
# ### Shortcomings
#
# I am extremely unsatisfied with the filtering methodology I had to come up with to prevent my pipeline from detecting a couple of difficult erroneous lines in solidYellowLeft.mp4 that were perpendicular to the lane lines.
#
# If I upped the threshold on the Canny, the results were too sparse to reliably detect the lanes. If I made the Hough detection more stringent, there would be frames where it would not detect the lanes at all.
#
# In the end, I used a simple min/max filter on the slope of the detected lines to filter out the bad data. It was only for roughly four frames in the entire video, but it's a bad solution because it only works in this specific application.
#
# ### Possible improvements
#
# Box filters have a sinc frequency response, so I probably should have used a gaussian to do the lowpassing instead, but the result was good enough for now and I was focused more on finding a more elegant alternative to the min/max filter. I was not successful!
|
Writeup_Calvin_Giddens.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import re
from pprint import pprint
import pickle
import gensim
from gensim.models.ldamulticore import LdaMulticore
from gensim import corpora, models
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
import pyLDAvis
import pyLDAvis.gensim_models
import string
import warnings
warnings.simplefilter('ignore')
from itertools import chain
from sklearn.feature_extraction.text import CountVectorizer
from tqdm import tqdm
from tqdm.auto import tqdm
import datetime as dt
import os
from dateutil.relativedelta import *
# -
# ## Define Functions
# +
def clean(text):
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
exclude = set(string.punctuation)
stop = set(stopwords.words('english'))
lemma = WordNetLemmatizer()
text = re.sub('[^A-z,.-]+', ' ', text)
stop_free = ' '.join([word for word in text.lower().split() if word not in stop])
punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
normalized = ' '.join([lemma.lemmatize(word) for word in punc_free.split()])
return normalized.split()
def bigrams(words, bi_min=15, tri_min=10):
bigram = gensim.models.Phrases(words, min_count = bi_min)
bigram_mod = gensim.models.phrases.Phraser(bigram)
return bigram_mod
def get_corpus(text_clean):
bigram_mod = bigrams(text_clean)
bigram = [bigram_mod[review] for review in text_clean]
id2word = gensim.corpora.Dictionary(bigram)
id2word.filter_extremes(no_below=10, no_above=0.35)
id2word.compactify()
corpus = [id2word.doc2bow(text) for text in bigram]
return corpus, id2word, bigram
def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=3):
from gensim.models.ldamodel import LdaModel
"""
# From https://datascienceplus.com/evaluation-of-topic-modeling-topic-coherence/
Compute c_v coherence for various number of topics
Parameters:
----------
dictionary : Gensim dictionary
corpus : Gensim corpus
texts : List of input texts
limit : Max num of topics
Returns:
-------
model_list : List of LDA topic models
coherence_values : Coherence values corresponding to the LDA model with respective number of topics
"""
coherence_values = []
model_list = []
for num_topics in range(start, limit, step):
model=LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics, passes = 4)
model_list.append(model)
coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
coherence_values.append(coherencemodel.get_coherence())
return model_list, coherence_values
# -
# ## Load Preprocessed Data
# +
# Load nontext data
file = open('../data/preprocessed/nontext_data.pickle', 'rb')
nontext_data = pd.read_pickle(file)
file.close()
print(nontext_data.shape)
nontext_data.head()
# +
# Load text data
file = open('../data/preprocessed/text_no_split.pickle', 'rb')
text_data = pd.read_pickle(file)
file.close()
print(text_data.shape)
text_data.head()
# -
# ## Review text types
# Frequency of text-data type
text_data.type.value_counts()
# +
# show timing of types -- drop meeting transcripts due to delay in release, press conference transcripts as only began in 2011
plt.hist(text_data[text_data.type == 'minutes'].date, alpha = 0.5, label = 'meeting minutes')
plt.hist(text_data[text_data.type == 'speech'].date, alpha = 0.5, label = 'speech')
plt.hist(text_data[text_data.type == 'statement'].date, alpha = 0.5, label = 'statement')
plt.hist(text_data[text_data.type == 'testimony'].date, alpha = 0.5, label = 'testimony')
plt.legend()
# -
# ## Keep only Meeting Minutes, Speech, Statement, Testimony
text_filtered = text_data[text_data.type.isin(['minutes', 'speech', 'statement', 'testimony'])]
text_filtered.shape
# ## Clean text
# +
text_filtered['text_clean'] = text_filtered['text'].apply(clean)
text_filtered.text_clean[0]
# -
# ## Split into Bigrams
corpus, id2word, bigram_text = get_corpus(text_filtered.text_clean)
# ## Apply LDA
# ### Choose number of topics
model_list, coherence_values = compute_coherence_values(dictionary=id2word, corpus=corpus, texts=bigram_text, start=5, limit=30, step=2)
# Coherence score plot
limit=30; start=5; step=2;
x = range(start, limit, step)
plt.plot(x, coherence_values)
plt.xlabel("Num Topics")
plt.ylabel("Coherence score")
plt.legend(["coherence values"], loc='best')  # note: ("coherence_values") is a plain string, not a tuple
plt.show()
# Choose 8 topics -- highest coherence score
# ### Model
# Number of Topics
num_topics = 8
# Build LDA model
lda_model_8 = gensim.models.LdaMulticore(corpus=corpus,
id2word=id2word,
num_topics=num_topics,
passes=50)
# ### Words in Topics
# +
from wordcloud import WordCloud, STOPWORDS
import matplotlib.colors as mcolors
cols = [color for name, color in mcolors.TABLEAU_COLORS.items()] # more colors: 'mcolors.XKCD_COLORS'
cloud = WordCloud(stopwords=stop_words,
background_color='white',
width=2500,
height=1800,
max_words=10,
colormap='tab10',
color_func=lambda *args, **kwargs: cols[i],
prefer_horizontal=1.0)
topics = lda_model_8.show_topics(formatted=False)
fig, axes = plt.subplots(2, 4, figsize=(20,10), sharex=True, sharey=True)  # one panel per topic (8 topics)
for i, ax in enumerate(axes.flatten()):
fig.add_subplot(ax)
topic_words = dict(topics[i][1])
cloud.generate_from_frequencies(topic_words, max_font_size=300)
plt.gca().imshow(cloud)
plt.gca().set_title('Topic ' + str(i), fontdict=dict(size=16))
plt.gca().axis('off')
plt.subplots_adjust(wspace=0, hspace=0)
plt.axis('off')
plt.margins(x=0, y=0)
plt.tight_layout()
plt.show()
# -
# ### Keywords for each topic
# Print the Keyword in the topics
pprint(lda_model_8.print_topics())
doc_lda = lda_model_8[corpus]
# ## Assess topic coherence
# +
# Compute Perplexity
print('\nPerplexity: ', lda_model_8.log_perplexity(corpus)) # Measure of how good the model is. Lower is better.
# Compute Coherence Score
from gensim.models import CoherenceModel
coherence_model_lda = CoherenceModel(model=lda_model_8, texts=bigram_text, dictionary=id2word, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score: ', coherence_lda) # Want above .5
# +
# VISUALIZE LDA OUTPUT AS HTML-PAGE
import pyLDAvis.gensim_models
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim_models.prepare(lda_model_8, corpus, dictionary=lda_model_8.id2word)
vis
# output results
pyLDAvis.save_html(vis, '../img/bigram.lda8-50.html')
# -
# ## Add LDA Topics to dataframe
# +
# ADD LDA TOPICS TO DATAFRAME
# REFERENCE CODE: https://towardsdatascience.com/end-to-end-topic-modeling-in-python-latent-dirichlet-allocation-lda-35ce4ed6b3e0
data = text_filtered['text_clean']
def format_topics_sentences(ldamodel=lda_model_8, corpus=corpus, texts=data):
# Init output
sent_topics_df = pd.DataFrame()
# Get main topic in each document
for i, row in enumerate(ldamodel[corpus]):
row = sorted(row, key=lambda x: (x[1]), reverse=True)
# Get the Dominant topic, Perc Contribution and Keywords for each document
for j, (topic_num, prop_topic) in enumerate(row):
if j == 0: # => dominant topic
wp = ldamodel.show_topic(topic_num)
topic_keywords = ", ".join([word for word, prop in wp])
sent_topics_df = sent_topics_df.append(pd.Series([int(topic_num), round(prop_topic,4), topic_keywords]), ignore_index=True)
else:
break
sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
# Add original text to the end of the output
contents = pd.Series(texts)
sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
return(sent_topics_df)
# Run function to create new df
df_topic_sents_keywords = format_topics_sentences(ldamodel=lda_model_8, corpus=corpus, texts=bigram_text)
# Format
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
df_dominant_topic.head()
# -
# +
# Get the per-document topic distribution: a list of (topic, probability) pairs for each document
doc_topics = [lda_model_8.get_document_topics(item) for item in corpus]
# Create a list of lists (list of topics inside each doc)
topic_distribution = list()
for num in range(len(text_filtered['text_clean'])):
props = [0] * num_topics # one probability slot per topic
for item in lda_model_8[corpus[num]]:
topic = item[0]
props[topic] = item[1]
topic_distribution.append(props)
# Turn list of lists into dataframe
topic_dist = pd.DataFrame(topic_distribution)
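# The loop above densifies gensim's sparse `(topic, probability)` pairs into a fixed-length vector per document. A small stand-alone helper (hypothetical name, same idea) makes the conversion explicit:

```python
def dense_topic_vector(sparse_topics, num_topics):
    """Convert gensim-style (topic_id, probability) pairs into a dense list."""
    props = [0.0] * num_topics  # one slot per topic; stays zero when gensim omits a topic
    for topic_id, prob in sparse_topics:
        props[topic_id] = prob
    return props

# e.g. a document loading mostly on topic 2 out of 8
print(dense_topic_vector([(2, 0.9), (5, 0.1)], num_topics=8))
```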
# +
# combine topics across transcript, minutes, etc. for each meeting date
final_text = text_filtered.join(topic_dist)
final_text = final_text.groupby('next_meeting').mean()
print(final_text.shape)
final_text.head()
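# The groupby-mean above collapses the (possibly several) documents released ahead of each meeting into one averaged topic vector per `next_meeting` date. A toy illustration with made-up values:

```python
import pandas as pd

# hypothetical toy frame: two documents ahead of the same meeting get averaged
toy = pd.DataFrame({
    "next_meeting": ["2000-02-02", "2000-02-02", "2000-03-21"],
    0: [0.2, 0.4, 0.9],  # topic-0 shares
})
toy_grouped = toy.groupby("next_meeting").mean()
print(toy_grouped)
```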
# +
# find na's
na_rows = final_text[final_text.isna().any(axis=1)]
print(na_rows.shape)
display(na_rows)
# -
final_text = final_text.fillna(0)
# +
# join LDA and nontext data
joined_data = final_text.join(nontext_data)
print(joined_data.shape)
joined_data.tail()
# -
plt.figure(figsize=(10,6))
plt.plot(joined_data.index, joined_data.next_decision, 'o', label = 'Next Decision')
#plt.plot(joined_data.index, joined_data[0], 'o', label = 'Topic 0')
#plt.plot(joined_data.index, joined_data[1], 'o', label = 'Topic 1')
#plt.plot(joined_data.index, joined_data[2], 'o', label = 'Topic 2')
#plt.plot(joined_data.index, joined_data[3], 'o', label = 'Topic 3')
#plt.plot(joined_data.index, joined_data[4], 'o', label = 'Topic 4')
plt.plot(joined_data.index, joined_data[5], 'o', label = 'Topic 5')
#plt.plot(joined_data.index, joined_data[6], 'o', label = 'Topic 6')
#plt.plot(joined_data.index, joined_data[7], 'o', label = 'Topic 7')
plt.legend()
# ## Modify rate decision
# +
# map the rate decision from {-1, 0, 1} to {1, 2, 3}
def convert_class(x):
if x == 1:
return 3
elif x == 0:
return 2
elif x == -1:
return 1
joined_data['RateDecision'] = joined_data.RateDecision.apply(convert_class)
joined_data['next_decision'] = joined_data['RateDecision'].shift(1)
# drop NA caused by shifting
joined_data = joined_data.dropna(subset = ['next_decision'])
# -
# ## Write data
def save_data(df, file_name, dir_name='../data/'):
if not os.path.exists(dir_name):
os.mkdir(dir_name)
# Save results to a pickle file
file = open(dir_name + file_name + '.pickle', 'wb')
pickle.dump(df, file)
file.close()
# Save results to a csv file
df.to_csv(dir_name + file_name + '.csv', index=True)
# +
# Final Text Data
save_data(final_text, 'text_w_lda')
# Final joined data
save_data(joined_data, 'final_fed_data')
src/5_Fed_Funds_LDA.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 1 - Sequence to Sequence Learning with Neural Networks
#
# In this series we'll be building a machine learning model to go from one sequence to another, using PyTorch and TorchText. This will be done on German to English translation, but the models can be applied to any problem that involves going from one sequence to another, such as summarization.
#
# In this first notebook, we'll start simply, building an understanding of the general concepts by implementing the model from [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215).
#
# ### Introduction
#
# The most common sequence-to-sequence (seq2seq) models are *encoder-decoder* models, which commonly use a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single vector. In this notebook, we'll refer to this single vector as a *context vector*. We can think of the context vector as being an abstract representation of the entire input sentence. This vector is then *decoded* by a second RNN which learns to output the target (output) sentence by generating it one word at a time.
#
# 
#
# The above image shows an example translation. The input/source sentence, "guten morgen", is passed through the embedding layer (yellow) and then input into the encoder (green). We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of sentence, respectively. At each time-step, the input to the encoder RNN is both the embedding, $e$, of the current word, $e(x_t)$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder RNN outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. The RNN can be represented as a function of both of $e(x_t)$ and $h_{t-1}$:
#
# $$h_t = \text{EncoderRNN}(e(x_t), h_{t-1})$$
#
# We're using the term RNN generally here; it could be any recurrent architecture, such as an *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit).
#
# Here, we have $X = \{x_1, x_2, ..., x_T\}$, where $x_1 = \text{<sos>}, x_2 = \text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.
#
# Once the final word, $x_T$, has been passed into the RNN via the embedding layer, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.
#
# Now we have our context vector, $z$, we can start decoding it to get the output/target sentence, "good morning". Again, we append start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder RNN (blue) is the embedding, $d$, of current word, $d(y_t)$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. Thus, similar to the encoder, we can represent the decoder as:
#
# $$s_t = \text{DecoderRNN}(d(y_t), s_{t-1})$$
#
# Although the input/source embedding layer, $e$, and the output/target embedding layer, $d$, are both shown in yellow in the diagram they are two different embedding layers with their own parameters.
#
# In the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\hat{y}_t$.
#
# $$\hat{y}_t = f(s_t)$$
#
# The words in the decoder are always generated one after another, with one per time-step. We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$ and sometimes use the word predicted by our decoder, $\hat{y}_{t-1}$. This is called *teacher forcing*, see a bit more info about it [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/).
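# As a minimal sketch (not the model code below), the teacher-forcing decision at each time-step is just a biased coin flip:

```python
import random

def next_decoder_input(ground_truth_token, predicted_token, teacher_forcing_ratio):
    # with probability teacher_forcing_ratio, feed the ground-truth next token;
    # otherwise feed the model's own previous prediction
    if random.random() < teacher_forcing_ratio:
        return ground_truth_token
    return predicted_token
```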
#
# When training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference it is common to keep generating words until the model outputs an `<eos>` token or after a certain amount of words have been generated.
#
# Once we have our predicted target sentence, $\hat{Y} = \{ \hat{y}_1, \hat{y}_2, ..., \hat{y}_T \}$, we compare it against our actual target sentence, $Y = \{ y_1, y_2, ..., y_T \}$, to calculate our loss. We then use this loss to update all of the parameters in our model.
#
# ## Preparing Data
#
# We'll be coding up the models in PyTorch and using TorchText to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data.
# +
import torch
import torch.nn as nn
import torch.optim as optim
import spacy
import numpy as np
import random
import math
import time
from torchtext.datasets import Multi30k
from torchtext.data import Field, BucketIterator
# -
# We'll set the random seeds for deterministic results.
# +
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
# -
# Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. "good morning!" becomes ["good", "morning", "!"]. We'll start talking about the sentences being a sequence of tokens from now, instead of saying they're a sequence of words. What's the difference? Well, "good" and "morning" are both words and tokens, but "!" is a token, not a word.
#
# spaCy has a model for each language ("de" for German and "en" for English) which needs to be loaded so we can access the tokenizer of each model.
#
# **Note**: the models must first be downloaded using the following on the command line:
# ```
# python -m spacy download en
# python -m spacy download de
# ```
#
# We load the models as such:
# !python -m spacy download en
# !python -m spacy download de
spacy_de = spacy.load('de')
spacy_en = spacy.load('en')
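# To make the word/token distinction above concrete, here is a crude regex stand-in (this is *not* spaCy's tokenizer, just an illustration of the idea):

```python
import re

def rough_tokenize(text):
    # runs of word characters, and each punctuation mark, become separate tokens
    return re.findall(r"\w+|[^\w\s]", text)

print(rough_tokenize("good morning!"))  # ['good', 'morning', '!']
```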
# Next, we create the tokenizer functions. These can be passed to TorchText and will take in the sentence as a string and return the sentence as a list of tokens.
#
# In the paper we are implementing, they find it beneficial to reverse the order of the input which they believe "introduces many short term dependencies in the data that make the optimization problem much easier". We copy this by reversing the German sentence after it has been transformed into a list of tokens.
# +
def tokenize_de(text):
"""
Tokenizes German text from a string into a list of strings (tokens) and reverses it
"""
return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
"""
Tokenizes English text from a string into a list of strings (tokens)
"""
return [tok.text for tok in spacy_en.tokenizer(text)]
# -
# TorchText's `Field`s handle how data should be processed. All of the possible arguments are detailed [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61).
#
# We set the `tokenize` argument to the correct tokenization function for each, with German being the `SRC` (source) field and English being the `TRG` (target) field. The field also appends the "start of sequence" and "end of sequence" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase.
# +
SRC = Field(tokenize = tokenize_de,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize = tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
# -
# Next, we download and load the train, validation and test data.
#
# The dataset we'll be using is the [Multi30k dataset](https://github.com/multi30k/dataset). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence.
#
# `exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target.
train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
fields = (SRC, TRG))
# We can double check that we've loaded the right number of examples:
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
# We can also print out an example, making sure the source sentence is reversed:
print(vars(train_data.examples[0]))
# The period is at the beginning of the German (src) sentence, so it looks like the sentence has been correctly reversed.
#
# Next, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer). The vocabularies of the source and target languages are distinct.
#
# Using the `min_freq` argument, we only allow tokens that appear at least 2 times to appear in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token.
#
# It is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents "information leakage" into our model, giving us artificially inflated validation/test scores.
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
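# Conceptually, `build_vocab` counts token frequencies over the training set and keeps only tokens at or above `min_freq`; a minimal stand-in (not TorchText's actual implementation) looks like:

```python
from collections import Counter

def build_vocab_sketch(token_lists, min_freq=2, specials=("<unk>", "<pad>", "<sos>", "<eos>")):
    counts = Counter(tok for tokens in token_lists for tok in tokens)
    # special tokens first, then tokens meeting the frequency threshold;
    # everything else would map to <unk> at lookup time
    itos = list(specials) + sorted(t for t, c in counts.items() if c >= min_freq)
    stoi = {tok: idx for idx, tok in enumerate(itos)}
    return stoi, itos

stoi, itos = build_vocab_sketch([["guten", "morgen"], ["guten", "tag"]])
print(stoi["guten"], "morgen" in stoi)
```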
# The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary.
#
# We also need to define a `torch.device`. This is used to tell TorchText to put the tensors on the GPU or not. We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator.
#
# When we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, TorchText iterators handle this for us!
#
# We use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# +
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
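# The padding the iterator performs is simple in principle: every sentence in a batch is padded to the length of the longest one. A quick sketch of that step on its own:

```python
def pad_batch(token_batch, pad_token="<pad>"):
    # pad every sequence in the batch to the length of the longest one
    max_len = max(len(seq) for seq in token_batch)
    return [seq + [pad_token] * (max_len - len(seq)) for seq in token_batch]

print(pad_batch([["<sos>", "hi", "<eos>"], ["<sos>", "good", "morning", "<eos>"]]))
```

# Bucketing goes one step further: by batching sentences of similar length together, `BucketIterator` keeps `max_len - len(seq)` small, so less computation is wasted on padding.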
# -
# ## Building the Seq2Seq Model
#
# We'll be building our model in three parts. The encoder, the decoder and a seq2seq model that encapsulates the encoder and decoder and will provide a way to interface with each.
#
# ### Encoder
#
# First, the encoder, a 2 layer LSTM. The paper we are implementing uses a 4-layer LSTM, but in the interest of training time we cut this down to 2-layers. The concept of multi-layer RNNs is easy to expand from 2 to 4 layers.
#
# For a multi-layer RNN, the input sentence, $X$, after being embedded goes into the first (bottom) layer of the RNN and hidden states, $H=\{h_1, h_2, ..., h_T\}$, output by this layer are used as inputs to the RNN in the layer above. Thus, representing each layer with a superscript, the hidden states in the first layer are given by:
#
# $$h_t^1 = \text{EncoderRNN}^1(e(x_t), h_{t-1}^1)$$
#
# The hidden states in the second layer are given by:
#
# $$h_t^2 = \text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$$
#
# Using a multi-layer RNN also means we'll also need an initial hidden state as input per layer, $h_0^l$, and we will also output a context vector per layer, $z^l$.
#
# Without going into too much detail about LSTMs (see [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) blog post to learn more about them), all we need to know is that they're a type of RNN which instead of just taking in a hidden state and returning a new hidden state per time-step, also take in and return a *cell state*, $c_t$, per time-step.
#
# $$\begin{align*}
# h_t &= \text{RNN}(e(x_t), h_{t-1})\\
# (h_t, c_t) &= \text{LSTM}(e(x_t), h_{t-1}, c_{t-1})
# \end{align*}$$
#
# We can just think of $c_t$ as another type of hidden state. Similar to $h_0^l$, $c_0^l$ will be initialized to a tensor of all zeros. Also, our context vector will now be both the final hidden state and the final cell state, i.e. $z^l = (h_T^l, c_T^l)$.
#
# Extending our multi-layer equations to LSTMs, we get:
#
# $$\begin{align*}
# (h_t^1, c_t^1) &= \text{EncoderLSTM}^1(e(x_t), (h_{t-1}^1, c_{t-1}^1))\\
# (h_t^2, c_t^2) &= \text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2))
# \end{align*}$$
#
# Note how only our hidden state from the first layer is passed as input to the second layer, and not the cell state.
#
# So our encoder looks something like this:
#
# 
#
# We create this in code by making an `Encoder` module, which requires we inherit from `torch.nn.Module` and use the `super().__init__()` as some boilerplate code. The encoder takes the following arguments:
# - `input_dim` is the size/dimensionality of the one-hot vectors that will be input to the encoder. This is equal to the input (source) vocabulary size.
# - `emb_dim` is the dimensionality of the embedding layer. This layer converts the one-hot vectors into dense vectors with `emb_dim` dimensions.
# - `hid_dim` is the dimensionality of the hidden and cell states.
# - `n_layers` is the number of layers in the RNN.
# - `dropout` is the amount of dropout to use. This is a regularization parameter to prevent overfitting. Check out [this](https://www.coursera.org/lecture/deep-neural-network/understanding-dropout-YaGbR) for more details about dropout.
#
# We aren't going to discuss the embedding layer in detail during these tutorials. All we need to know is that there is a step before the words - technically, the indexes of the words - are passed into the RNN, where the words are transformed into vectors. To read more about word embeddings, check these articles: [1](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/), [2](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html), [3](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), [4](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/).
#
# The embedding layer is created using `nn.Embedding`, the LSTM with `nn.LSTM` and a dropout layer with `nn.Dropout`. Check the PyTorch [documentation](https://pytorch.org/docs/stable/nn.html) for more about these.
#
# One thing to note is that the `dropout` argument to the LSTM is how much dropout to apply between the layers of a multi-layer RNN, i.e. between the hidden states output from layer $l$ and those same hidden states being used for the input of layer $l+1$.
#
# In the `forward` method, we pass in the source sentence, $X$, which is converted into dense vectors using the `embedding` layer, and then dropout is applied. These embeddings are then passed into the RNN. As we pass a whole sequence to the RNN, it will automatically do the recurrent calculation of the hidden states over the whole sequence for us! Notice that we do not pass an initial hidden or cell state to the RNN. This is because, as noted in the [documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM), that if no hidden/cell state is passed to the RNN, it will automatically create an initial hidden/cell state as a tensor of all zeros.
#
# The RNN returns: `outputs` (the top-layer hidden state for each time-step), `hidden` (the final hidden state for each layer, $h_T$, stacked on top of each other) and `cell` (the final cell state for each layer, $c_T$, stacked on top of each other).
#
# As we only need the final hidden and cell states (to make our context vector), `forward` only returns `hidden` and `cell`.
#
# The sizes of each of the tensors is left as comments in the code. In this implementation `n_directions` will always be 1, however note that bidirectional RNNs (covered in tutorial 3) will have `n_directions` as 2.
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src len, batch size, emb dim]
outputs, (hidden, cell) = self.rnn(embedded)
#outputs = [src len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
return hidden, cell
# ### Decoder
#
# Next, we'll build our decoder, which will also be a 2-layer (4 in the paper) LSTM.
#
# 
#
# The `Decoder` class does a single step of decoding, i.e. it outputs a single token per time-step. The first layer will receive a hidden and cell state from the previous time-step, $(s_{t-1}^1, c_{t-1}^1)$, and feeds it through the LSTM with the current embedded token, $y_t$, to produce a new hidden and cell state, $(s_t^1, c_t^1)$. The subsequent layers will use the hidden state from the layer below, $s_t^{l-1}$, and the previous hidden and cell states from their layer, $(s_{t-1}^l, c_{t-1}^l)$. This provides equations very similar to those in the encoder.
#
# $$\begin{align*}
# (s_t^1, c_t^1) = \text{DecoderLSTM}^1(d(y_t), (s_{t-1}^1, c_{t-1}^1))\\
# (s_t^2, c_t^2) = \text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))
# \end{align*}$$
#
# Remember that the initial hidden and cell states to our decoder are our context vectors, which are the final hidden and cell states of our encoder from the same layer, i.e. $(s_0^l,c_0^l)=z^l=(h_T^l,c_T^l)$.
#
# We then pass the hidden state from the top layer of the RNN, $s_t^L$, through a linear layer, $f$, to make a prediction of what the next token in the target (output) sequence should be, $\hat{y}_{t+1}$.
#
# $$\hat{y}_{t+1} = f(s_t^L)$$
#
# The arguments and initialization are similar to the `Encoder` class, except we now have an `output_dim` which is the size of the vocabulary for the output/target. There is also the addition of the `Linear` layer, used to make the predictions from the top layer hidden state.
#
# Within the `forward` method, we accept a batch of input tokens, previous hidden states and previous cell states. As we are only decoding one token at a time, the input tokens will always have a sequence length of 1. We `unsqueeze` the input tokens to add a sentence length dimension of 1. Then, similar to the encoder, we pass through an embedding layer and apply dropout. This batch of embedded tokens is then passed into the RNN with the previous hidden and cell states. This produces an `output` (hidden state from the top layer of the RNN), a new `hidden` state (one for each layer, stacked on top of each other) and a new `cell` state (also one per layer, stacked on top of each other). We then pass the `output` (after getting rid of the sentence length dimension) through the linear layer to receive our `prediction`. We then return the `prediction`, the new `hidden` state and the new `cell` state.
#
# **Note**: as we always have a sequence length of 1, we could use `nn.LSTMCell`, instead of `nn.LSTM`, as it is designed to handle a batch of inputs that aren't necessarily in a sequence. `nn.LSTMCell` is just a single cell and `nn.LSTM` is a wrapper around potentially multiple cells. Using the `nn.LSTMCell` in this case would mean we don't have to `unsqueeze` to add a fake sequence length dimension, but we would need one `nn.LSTMCell` per layer in the decoder and to ensure each `nn.LSTMCell` receives the correct initial hidden state from the encoder. All of this makes the code less concise - hence the decision to stick with the regular `nn.LSTM`.
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.output_dim = output_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#n directions in the decoder will both always be 1, therefore:
#hidden = [n layers, batch size, hid dim]
#context = [n layers, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
#output = [seq len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#seq len and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [n layers, batch size, hid dim]
#cell = [n layers, batch size, hid dim]
prediction = self.fc_out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
# ### Seq2Seq
#
# For the final part of the implementation, we'll implement the seq2seq model. This will handle:
# - receiving the input/source sentence
# - using the encoder to produce the context vectors
# - using the decoder to produce the predicted output/target sentence
#
# Our full model will look like this:
#
# 
#
# The `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists).
#
# For this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case; we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we did something like having a different number of layers then we would need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the encoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? Etc.
#
# Our `forward` method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded, $\hat{y}_{t+1}=f(s_t^L)$. With probability equal to the teaching forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence.
#
# The first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\hat{Y}$.
#
# We then feed the input/source sentence, `src`, into the encoder and receive our final hidden and cell states.
#
# The first input to the decoder is the start of sequence (`<sos>`) token. As our `trg` tensor already has the `<sos>` token appended (all the way back when we defined the `init_token` in our `TRG` field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`max_len`), so we loop that many times. The last token input into the decoder is the one **before** the `<eos>` token - the `<eos>` token is never input into the decoder.
#
# During each iteration of the loop, we:
# - pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder
# - receive a prediction, next hidden state and next cell state ($\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder
# - place our prediction, $\hat{y}_{t+1}$/`output` in our tensor of predictions, $\hat{Y}$/`outputs`
# - decide if we are going to "teacher force" or not
# - if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]`
# - if we don't, the next `input` is the predicted next token in the sequence, $\hat{y}_{t+1}$/`top1`, which we get by doing an `argmax` over the output tensor
#
# Once we've made all of our predictions, we return our tensor full of predictions, $\hat{Y}$/`outputs`.
#
# **Note**: our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:
#
# $$\begin{align*}
# \text{trg} &= [\text{<sos>}, y_1, y_2, y_3, \text{<eos>}]\\
# \text{outputs} &= [0, \hat{y}_1, \hat{y}_2, \hat{y}_3, \text{<eos>}]
# \end{align*}$$
#
# Later on when we calculate the loss, we cut off the first element of each tensor to get:
#
# $$\begin{align*}
# \text{trg} &= [y_1, y_2, y_3, \text{<eos>}]\\
# \text{outputs} &= [\hat{y}_1, \hat{y}_2, \hat{y}_3, \text{<eos>}]
# \end{align*}$$
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
#insert input token embedding, previous hidden and previous cell states
#receive output tensor (predictions) and new hidden and cell states
output, hidden, cell = self.decoder(input, hidden, cell)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
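# The teacher-forcing decision inside the loop is just a uniform draw against the ratio. A standalone sketch (with a hypothetical seed and ratio):

```python
import random

random.seed(0)  # hypothetical seed, for reproducibility only
teacher_forcing_ratio = 0.75

# Simulate many decoding steps; about 75% should use the ground-truth token.
decisions = [random.random() < teacher_forcing_ratio for _ in range(10_000)]
frac = sum(decisions) / len(decisions)
assert 0.70 < frac < 0.80
```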
# # Training the Seq2Seq Model
#
# Now we have our model implemented, we can begin training it.
#
# First, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. The embedding dimensions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same.
#
# We then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`.
# +
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = Seq2Seq(enc, dec, device).to(device)
# -
# Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\mathcal{U}(-0.08, 0.08)$.
#
# We initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`.
# +
def init_weights(m):
for name, param in m.named_parameters():
nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
# -
# We also define a function that will calculate the number of trainable parameters in the model.
# +
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
# -
# We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam.
optimizer = optim.Adam(model.parameters())
# Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions.
#
# Our loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token.
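# A minimal sketch of what `ignore_index` accomplishes, using hypothetical per-token losses rather than real model outputs:

```python
PAD_IDX = 1  # hypothetical padding index
targets = [5, 7, 2, PAD_IDX, PAD_IDX]
token_losses = [1.2, 0.8, 0.4, 9.9, 9.9]  # losses at pad positions are discarded

kept = [loss for tgt, loss in zip(targets, token_losses) if tgt != PAD_IDX]
avg_loss = sum(kept) / len(kept)  # average over non-pad tokens only
assert abs(avg_loss - 0.8) < 1e-9
```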
# +
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
# -
# Next, we'll define our training loop.
#
# First, we'll set the model into "training mode" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator.
#
# As stated before, our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:
#
# $$\begin{align*}
# \text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\
# \text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
# \end{align*}$$
#
# Here, when we calculate the loss, we cut off the first element of each tensor to get:
#
# $$\begin{align*}
# \text{trg} = [&y_1, y_2, y_3, <eos>]\\
# \text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
# \end{align*}$$
#
# At each iteration:
# - get the source and target sentences from the batch, $X$ and $Y$
# - zero the gradients calculated from the last batch
# - feed the source and target into the model to get the output, $\hat{Y}$
# - as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view`
# - we slice off the first column of the output and target tensors as mentioned above
# - calculate the gradients with `loss.backward()`
# - clip the gradients to prevent them from exploding (a common issue in RNNs)
# - update the parameters of our model by doing an optimizer step
# - sum the loss value to a running total
#
# Finally, we return the loss that is averaged over all batches.
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
# Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.
#
# We must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used).
#
# We use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up.
#
# The iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use its own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment.
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
# Next, we'll create a function that we'll use to tell us how long an epoch takes.
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
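# For example, an interval of 125 seconds splits into 2 minutes and 5 seconds (the function is restated here so the sketch runs standalone):

```python
def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

assert epoch_time(0, 125) == (2, 5)
```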
# We can finally start training our model!
#
# At each epoch, we'll be checking if our model has achieved the best validation loss so far. If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters used to achieve the best validation loss.
#
# We'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.
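# Perplexity is simply the exponential of the cross-entropy loss, which is why small changes in loss show up as larger, easier-to-read numbers:

```python
import math

train_loss = 2.0  # hypothetical cross-entropy loss in nats
ppl = math.exp(train_loss)
assert abs(ppl - 7.38905609893065) < 1e-9
```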
# +
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
# -
# We load the parameters (`state_dict`) that gave our model the best validation loss and evaluate it on the test set.
# +
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
# -
# In the next notebook we'll implement a model that achieves improved test perplexity while using only a single layer in the encoder and decoder.
|
AI 이노베이션 스퀘어 음성지능 과정/20201101/1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import load_img
# The model is Xception-based (see the 'xception' preprocessor below), so keep
# only the Xception preprocess_input; the duplicate inception_v3 import shadowed it.
from tensorflow.keras.applications.xception import preprocess_input
from tensorflow.keras.models import load_model
# -
img = load_img('cupcakes.jpg', target_size=(200, 200))
img
# +
# convert the image into an array and add a batch dimension
x = np.array(img)
X = np.array([x])
X = preprocess_input(X)
# -
X.shape
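# A sketch of the shape bookkeeping with a dummy array standing in for the real image (assumes a 200x200 RGB input):

```python
import numpy as np

# Dummy stand-in for the loaded 200x200 RGB image.
x = np.zeros((200, 200, 3), dtype=np.uint8)
X = np.array([x])  # wrapping in a list adds the batch dimension
assert x.shape == (200, 200, 3)
assert X.shape == (1, 200, 200, 3)
```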
model = keras.models.load_model('best_model_101class.hdf5')
preds = model.predict(X)
preds
classes = ['apple_pie',
'baby_back_ribs',
'baklava',
'beef_carpaccio',
'beef_tartare',
'beet_salad',
'beignets',
'bibimbap',
'bread_pudding',
'breakfast_burrito',
'bruschetta',
'caesar_salad',
'cannoli',
'caprese_salad',
'carrot_cake',
'ceviche',
'cheese_plate',
'cheesecake',
'chicken_curry',
'chicken_quesadilla',
'chicken_wings',
'chocolate_cake',
'chocolate_mousse',
'churros',
'clam_chowder',
'club_sandwich',
'crab_cakes',
'creme_brulee',
'croque_madame',
'cup_cakes','deviled_eggs',
'donuts',
'dumplings',
'edamame',
'eggs_benedict',
'escargots',
'falafel',
'filet_mignon',
'fish_and_chips',
'foie_gras',
'french_fries',
'french_onion_soup',
'french_toast',
'fried_calamari',
'fried_rice',
'frozen_yogurt',
'garlic_bread',
'gnocchi',
'greek_salad',
'grilled_cheese_sandwich',
'grilled_salmon',
'guacamole',
'gyoza',
'hamburger',
'hot_and_sour_soup',
'hot_dog','huevos_rancheros',
'hummus',
'ice_cream',
'lasagna',
'lobster_bisque',
'lobster_roll_sandwich',
'macaroni_and_cheese',
'macarons',
'miso_soup',
'mussels',
'nachos',
'omelette',
'onion_rings',
'oysters',
'pad_thai',
'paella',
'pancakes',
'panna_cotta',
'peking_duck',
'pho',
'pizza',
'pork_chop',
'poutine',
'prime_rib',
'pulled_pork_sandwich',
'ramen',
'ravioli',
'red_velvet_cake',
'risotto',
'samosa',
'sashimi',
'scallops',
'seaweed_salad',
'shrimp_and_grits',
'spaghetti_bolognese',
'spaghetti_carbonara',
'spring_rolls',
'steak',
'strawberry_shortcake',
'sushi',
'tacos',
'takoyaki',
'tiramisu',
'tuna_tartare',
'waffles']
# combine labels with the actual predictions
dict(zip(classes, preds[0]))
# +
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
#save the model
with open('Best_restaurant_to_serve.tflite', 'wb') as f_out:
f_out.write(tflite_model)
# -
# # Restart Kernel
# !pip install keras-image-helper
# !pip install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime
#import tensorflow.lite as tflite
import tflite_runtime.interpreter as tflite
from keras_image_helper import create_preprocessor
# +
interpreter = tflite.Interpreter(model_path='Best_restaurant_to_serve.tflite')
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']
# -
preprocessor = create_preprocessor('xception', target_size=(200, 200))
url = 'https://upload.wikimedia.org/wikipedia/commons/a/a5/Cars_Cupcakes_%284725761659%29.jpg'
url
X = preprocessor.from_url(url)
interpreter.set_tensor(input_index, X)  # load the input into the interpreter
interpreter.invoke()  # run inference
preds = interpreter.get_tensor(output_index)  # fetch the output
preds
# +
classes = ['apple_pie',
'baby_back_ribs',
'baklava',
'beef_carpaccio',
'beef_tartare',
'beet_salad',
'beignets',
'bibimbap',
'bread_pudding',
'breakfast_burrito',
'bruschetta',
'caesar_salad',
'cannoli',
'caprese_salad',
'carrot_cake',
'ceviche',
'cheese_plate',
'cheesecake',
'chicken_curry',
'chicken_quesadilla',
'chicken_wings',
'chocolate_cake',
'chocolate_mousse',
'churros',
'clam_chowder',
'club_sandwich',
'crab_cakes',
'creme_brulee',
'croque_madame',
'cup_cakes','deviled_eggs',
'donuts',
'dumplings',
'edamame',
'eggs_benedict',
'escargots',
'falafel',
'filet_mignon',
'fish_and_chips',
'foie_gras',
'french_fries',
'french_onion_soup',
'french_toast',
'fried_calamari',
'fried_rice',
'frozen_yogurt',
'garlic_bread',
'gnocchi',
'greek_salad',
'grilled_cheese_sandwich',
'grilled_salmon',
'guacamole',
'gyoza',
'hamburger',
'hot_and_sour_soup',
'hot_dog','huevos_rancheros',
'hummus',
'ice_cream',
'lasagna',
'lobster_bisque',
'lobster_roll_sandwich',
'macaroni_and_cheese',
'macarons',
'miso_soup',
'mussels',
'nachos',
'omelette',
'onion_rings',
'oysters',
'pad_thai',
'paella',
'pancakes',
'panna_cotta',
'peking_duck',
'pho',
'pizza',
'pork_chop',
'poutine',
'prime_rib',
'pulled_pork_sandwich',
'ramen',
'ravioli',
'red_velvet_cake',
'risotto',
'samosa',
'sashimi',
'scallops',
'seaweed_salad',
'shrimp_and_grits',
'spaghetti_bolognese',
'spaghetti_carbonara',
'spring_rolls',
'steak',
'strawberry_shortcake',
'sushi',
'tacos',
'takoyaki',
'tiramisu',
'tuna_tartare',
'waffles'
]
dict(zip(classes, preds[0]))
# -
|
Capstone-Project-main/Models Training/Train_Final_Model.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import keras
# +
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
# +
import numpy as np
import pandas as pd
from collections import defaultdict
import re
from bs4 import BeautifulSoup
import sys
import os
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils.np_utils import to_categorical
from keras.layers import Embedding
from keras.layers import Dense, Input, Flatten
from keras.layers import Conv1D, MaxPooling1D, Embedding, Merge, Dropout
from keras.models import Model
MAX_SEQUENCE_LENGTH = 1000
MAX_NB_WORDS = 200000
EMBEDDING_DIM = 100
VALIDATION_SPLIT = 0.2
# -
def clean_str(string):
"""
    Remove backslashes and quote characters, then strip whitespace and lowercase.
"""
string = re.sub(r"\\", "", string)
string = re.sub(r"\'", "", string)
string = re.sub(r"\"", "", string)
return string.strip().lower()
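# For example (restating the function so the sketch runs standalone):

```python
import re

def clean_str(string):
    """Remove backslashes and quote characters, then strip whitespace and lowercase."""
    string = re.sub(r"\\", "", string)
    string = re.sub(r"\'", "", string)
    string = re.sub(r"\"", "", string)
    return string.strip().lower()

assert clean_str("It's a \"Test\" ") == "its a test"
```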
data_train = pd.read_csv('data/train_Mixed.csv')
data_train.text[1]
# +
# Input Data preprocessing
data_train = pd.read_csv('data/train_Mixed.csv')
#data_train['label'] = data_train['label'].replace('FAKE',1)
#data_train['label'] = data_train['label'].replace('REAL',0)
print(data_train.columns)
print('What the raw input data looks like:')
print(data_train[0:5])
texts = []
labels = []
for i in range(data_train.text.shape[0]):
text1 = data_train.title[i]
text2 = data_train.text[i]
    text = str(text1) + " " + str(text2)  # join with a space so the last title token and first body token don't fuse
texts.append(text)
labels.append(data_train.label[i])
tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
# -
# Pad input sequences
data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
labels = to_categorical(np.asarray(labels),num_classes = 2)
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)
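# What `to_categorical` does can be sketched with a plain NumPy one-hot encoding (hypothetical binary labels):

```python
import numpy as np

labels = np.array([0, 1, 1, 0])  # hypothetical binary labels
one_hot = np.eye(2)[labels]      # row i of the identity matrix = one-hot vector for class i
assert one_hot.shape == (4, 2)
assert one_hot[1].tolist() == [0.0, 1.0]
```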
# +
# Train test validation Split
from sklearn.model_selection import train_test_split
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
x_train, x_test, y_train, y_test = train_test_split(data, labels, test_size=0.20, random_state=42)
x_test, x_val, y_test, y_val = train_test_split(x_test, y_test, test_size=0.50, random_state=42)
print('Size of train, validation, test:', len(y_train), len(y_val), len(y_test))
print('real & fake news in train, val, test:')
print(y_train.sum(axis=0))
print(y_val.sum(axis=0))
print(y_test.sum(axis=0))
# -
# +
#Using Pre-trained word embeddings
GLOVE_DIR = "data"
embeddings_index = {}
f = open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt'), encoding="utf8")
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Total %s word vectors in Glove.' % len(embeddings_index))
embedding_matrix = np.random.random((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
embedding_layer = Embedding(len(word_index) + 1,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH)
# +
# Simple CNN model
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
l_cov1= Conv1D(128, 5, activation='relu')(embedded_sequences)
l_pool1 = MaxPooling1D(5)(l_cov1)
l_cov2 = Conv1D(128, 5, activation='relu')(l_pool1)
l_pool2 = MaxPooling1D(5)(l_cov2)
l_cov3 = Conv1D(128, 5, activation='relu')(l_pool2)
l_pool3 = MaxPooling1D(35)(l_cov3) # global max pooling
l_flat = Flatten()(l_pool3)
l_dense = Dense(128, activation='relu')(l_flat)
preds = Dense(2, activation='softmax')(l_dense)
model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['acc'])
print("Fitting the simple convolutional neural network model")
model.summary()
history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
epochs=3, batch_size=128)
# -
import matplotlib.pyplot as plt
# %matplotlib inline
# list all data in history
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# +
#convolutional approach
convs = []
filter_sizes = [3,4,5]
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
for fsz in filter_sizes:
    l_conv = Conv1D(filters=128, kernel_size=fsz, activation='relu')(embedded_sequences)
    l_pool = MaxPooling1D(5)(l_conv)
    convs.append(l_pool)
from keras.layers import concatenate  # Merge was removed in Keras 2; concatenate replaces mode='concat'
l_merge = concatenate(convs, axis=1)
l_cov1 = Conv1D(filters=128, kernel_size=5, activation='relu')(l_merge)
l_pool1 = MaxPooling1D(5)(l_cov1)
l_cov2 = Conv1D(filters=128, kernel_size=5, activation='relu')(l_pool1)
l_pool2 = MaxPooling1D(30)(l_cov2)
l_flat = Flatten()(l_pool2)
l_dense = Dense(128, activation='relu')(l_flat)
preds = Dense(2, activation='softmax')(l_dense)
model2 = Model(sequence_input, preds)
model2.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['acc'])
print("Fitting a more complex convolutional neural network model")
model2.summary()
history2 = model2.fit(x_train, y_train, validation_data=(x_val, y_val),
epochs=3, batch_size=50)
model2.save('model.h5')
# -
# list all data in history
print(history2.history.keys())
import matplotlib.pyplot as plt
# %matplotlib inline
# summarize history for accuracy
plt.plot(history2.history['acc'])
plt.plot(history2.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history2.history['loss'])
plt.plot(history2.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# +
# Test model 1
test_preds = model.predict(x_test)
test_preds = np.round(test_preds)
correct_predictions = float(sum(test_preds == y_test)[0])
print("Correct predictions:", correct_predictions)
print("Total number of test examples:", len(y_test))
print("Accuracy of model1: ", correct_predictions/float(len(y_test)))
# Creating the Confusion Matrix
from sklearn.metrics import confusion_matrix
x_pred = model.predict(x_test)
x_pred = np.round(x_pred)
x_pred = x_pred.argmax(1)
y_test_s = y_test.argmax(1)
cm = confusion_matrix(y_test_s, x_pred)
plt.matshow(cm, cmap=plt.cm.binary, interpolation='nearest')
plt.title('Confusion matrix - model1')
plt.colorbar()
plt.ylabel('expected label')
plt.xlabel('predicted label')
# plt.show()
#Test model 2
test_preds2 = model2.predict(x_test)
test_preds2 = np.round(test_preds2)
correct_predictions = float(sum(test_preds2 == y_test)[0])
print("Correct predictions:", correct_predictions)
print("Total number of test examples:", len(y_test))
print("Accuracy of model2: ", correct_predictions/float(len(y_test)))
# Creating the Confusion Matrix
x_pred = model2.predict(x_test)
x_pred = np.round(x_pred)
x_pred = x_pred.argmax(1)
y_test_s = y_test.argmax(1)
cm = confusion_matrix(y_test_s, x_pred)
plt.matshow(cm, cmap=plt.cm.binary, interpolation='nearest',)
plt.title('Confusion matrix - model2')
plt.colorbar()
plt.ylabel('expected label')
plt.xlabel('predicted label')
plt.show()
# -
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
score = model2.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
|
CNN.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: U4-S1-NLP (Python3)
# language: python
# name: u4-s1-nlp
# ---
import pandas as pd
df = pd.read_csv('heart_data.csv')
df.head()
heart_disease = df.copy()
heart_disease
state_dic = {'AL': 'Alabama', 'AK': 'Alaska', 'AZ': 'Arizona', 'AR': 'Arkansas',
'CA': 'California', 'CO': 'Colorado', 'CT': 'Connecticut', 'DE': 'Delaware',
'DC': 'District of Columbia', 'FL': 'Florida', 'GA': 'Georgia',
'HI': 'Hawaii', 'ID': 'Idaho', 'IL': 'Illinois', 'IN': 'Indiana', 'IA': 'Iowa',
'KS': 'Kansas', 'KY': 'Kentucky', 'LA': 'Louisiana', 'ME': 'Maine', 'MD': 'Maryland',
'MA': 'Massachusetts', 'MI': 'Michigan', 'MN': 'Minnesota', 'MS': 'Mississippi',
'MO': 'Missouri', 'MT': 'Montana', 'NE': 'Nebraska', 'NV': 'Nevada', 'NH': 'New Hampshire',
'NJ': 'New Jersey', 'NM': 'New Mexico', 'NY': 'New York', 'NC': 'North Carolina',
'ND': 'North Dakota', 'OH': 'Ohio', 'OK': 'Oklahoma', 'OR': 'Oregon', 'PA': 'Pennsylvania',
'RI': 'Rhode Island', 'SC': 'South Carolina', 'SD': 'South Dakota', 'TN': 'Tennessee',
'TX': 'Texas', 'UT': 'Utah', 'VT': 'Vermont', 'VA': 'Virginia', 'WA': 'Washington',
'WV': 'West Virginia', 'WI': 'Wisconsin', 'WY': 'Wyoming', 'PR': 'Puerto Rico'}
heart_disease['state'].replace(state_dic, inplace=True)
heart_disease.head()
heart_disease = heart_disease.drop('Unnamed: 0', axis='columns')
heart_disease1 = heart_disease.copy()
heart_disease1
heart_disease1 = heart_disease1.drop_duplicates(subset = ["county"])
heart_disease1 = heart_disease1.drop('state', axis='columns')
heart_disease1[heart_disease1['county'] == 'Los Angeles']
couties_cities = pd.read_csv('zip_code_database.csv')
couties_cities.head()
couties_cities[couties_cities['primary_city'] == 'Los Angeles']
couties_cities1 = couties_cities[['primary_city', 'county']]
couties_cities1.head(20)
couties_cities1 = couties_cities1.dropna()
print(couties_cities1.isna().any())
couties_cities1[couties_cities1['primary_city'] == 'Los Angeles']
couties_cities1 = couties_cities1.drop_duplicates(subset = ["primary_city"])
couties_cities1[couties_cities1['primary_city'] == 'Los Angeles']
couties_cities2 = couties_cities1.copy()
couties_cities2[couties_cities2['primary_city'] == 'Los Angeles']
couties_cities2['county'].unique().tolist()
subset = couties_cities2[['county']]
# +
def rm_county(county):
return county.strip('"')[:-7]
rm_county(couties_cities2['county'][0])
# -
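# The slice simply drops the 7-character " County" suffix; restated standalone:

```python
def rm_county(county):
    # Strip surrounding quotes, then drop the trailing " County" (7 characters).
    return county.strip('"')[:-7]

assert rm_county('Los Angeles County') == 'Los Angeles'
assert rm_county('"Cook County"') == 'Cook'
```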
couties_cities2['county'] = couties_cities2['county'].apply(rm_county)
couties_cities2.head()
couties_cities2[couties_cities2['county'] == 'Los Angeles']
heart_clean = pd.merge(heart_disease1,couties_cities2, on='county', how='right')
print(heart_clean.shape)
heart_clean.head(10)
heart_clean[heart_clean['primary_city'] == 'Los Angeles']
heart_clean = heart_clean.dropna()
heart_clean[heart_clean['primary_city'] == 'Los Angeles']
heart_clean.rename(columns={'primary_city': 'City_Name'}, inplace=True)
larger_dataset = pd.read_csv('completeddataset.csv')
larger_dataset = larger_dataset.drop(columns='city')
pd.set_option('display.max_columns', None)
larger_dataset.head()
completed_data = pd.merge(larger_dataset,heart_clean, how='outer')
completed_data
completed_data[completed_data['City_Name'] == 'Los Angeles']
covid = pd.read_csv('covid_may2020.csv')
covid
covid.drop(columns=['Unnamed: 0', 'state'], inplace=True)
covid.head()
data = pd.merge(completed_data, covid, how='outer')
data
data = data.dropna()
data
data[data['City_Name'] == 'Los Angeles']
|
defunct_notebooks/defunct_heart_diseaseP.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # scikit-learn code with cross-validation
# +
import numpy as np
import pandas as pd
from pprint import pprint
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate, StratifiedKFold, train_test_split
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
# -
# # Load the dataset
# +
# Load the data
iris=load_iris()
X,y=iris.data,iris.target
# Check the number of rows and columns
print(X.shape)
print(y.shape)
# -
# # Train a model on each fold with cross-validation
# +
# Data splitters
kf = KFold(n_splits=5, shuffle=True)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores=[]
#scoring = {"acc":"accuracy", "prc": "precision_macro","rec": "recall_macro", "f1":"f1_macro", "auc":"roc_auc"}
scoring = {"acc":"accuracy", "prc": "precision_macro","rec": "recall_macro", "f1":"f1_macro"}
# KFold yields the indices of each train/test split
for train_id, test_id in kf.split(X):
train_x = X[train_id]
train_y = y[train_id]
test_x = X[test_id]
test_y = y[test_id]
print("==="*40)
    # Configure the model
    clf = DecisionTreeClassifier(max_depth=3, random_state=94)
    # Train the model
    clf.fit(train_x, train_y)
    # Inference
    pred_y = clf.predict(test_x)
    # Collect accuracy metrics
#scores_cv = cross_validate(clf, iris.data, iris.target, cv=skf, scoring=scoring)
scores_cv = cross_validate(clf, X=train_x, y=train_y, cv=kf, scoring=scoring)
pprint(scores_cv)
score=accuracy_score(test_y, pred_y)
scores.append(score)
#
print("==="*40)
print("\n")
scores = np.array(scores)
print(scores.mean())
# -
# # Combining with PyCaret
# +
# #!pip install pycaret
# +
# Load packages
import pandas as pd
from pycaret.classification import *
#from pycaret.regression import *
from pycaret.datasets import get_data
#boston = get_data('boston')
#exp1 = setup(boston_data, target = 'medv')
# -
# # Load the data to use
# from pycaret.datasets import get_data
# data = get_data('employee')
# # Split 95% into training data and 5% into test data (called "unseen data")
# employee_data = data.sample(frac =0.95, random_state = 786).reset_index(drop=True)
# employee_data_unseen = data.drop(employee_data.index).reset_index(drop=True)
# print('Data for Modeling: ' + str(employee_data.shape))
# print('Unseen Data For Predictions: ' + str(employee_data_unseen.shape))
df_x = pd.DataFrame(X)
df_ = pd.concat([df_x, pd.DataFrame(y)], axis=1)  # concatenate column-wise, not row-wise
df_.head()
iris = load_iris()
X_a = pd.DataFrame(iris.data, columns=iris.feature_names)
y_a = pd.DataFrame(iris.target, columns=["target"])
df_a = pd.concat([X_a,y_a], axis=1)
df_a.head()
# Start PyCaret
exp1 = setup(df_a, target = 'target')
#exp1 = setup(df_x, target=df_y, ignore_features = None)
# Start PyCaret (when overriding data types)
#exp1 = setup(employee_data, target = 'left', ignore_features = None, numeric_features = ['time_spend_company'])
# Compare models
compare_models()
|
sk_kfold_caret.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Tutorials:
#
# + [Python Notes: A Complete Matplotlib Introductory Tutorial](http://www.inimei.cn/archives/593.html): first seven chapters
# + [NumPy Matplotlib | Runoob Tutorial](https://www.runoob.com/numpy/numpy-matplotlib.html)
#
import matplotlib.pyplot as plt
plt.plot([1,2,3],[5,7,4])
plt.show()
# +
x = [1,2,3]
y = [5,7,4]
x2 = [1,2,3]
y2 = [10,14,12]
# -
plt.plot(x, y, label='First Line')
plt.plot(x2, y2, label='Second Line')
plt.xlabel('Plot Number')
plt.ylabel('Important var')
plt.title('Interesting Graph\nCheck it out')
plt.legend()
plt.show()
# +
plt.bar([1,3,5,7,9],[5,2,7,8,2], label="Example one")
plt.bar([2,4,6,8,10],[8,6,2,5,6], label="Example two", color='g')
plt.legend()
plt.xlabel('bar number')
plt.ylabel('bar height')
plt.title('Epic Graph\nAnother Line! Whoa')
plt.show()
# +
population_ages = [22,55,62,45,21,22,34,42,42,4,99,102,110,120,121,122,130,111,115,112,80,75,65,54,44,43,42,48]
bins = [0,10,20,30,40,50,60,70,80,90,100,110,120,130]
plt.hist(population_ages, bins, histtype='bar', rwidth=0.5)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interesting Graph\nCheck it out')
plt.show()
# +
x = [1,2,3,4,5,6,7,8]
y = [5,2,4,2,1,4,5,2]
plt.scatter(x,y, label='skitscat', color='k', s=25, marker="o")
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interesting Graph\nCheck it out')
plt.legend()
plt.show()
# +
days = [1,2,3,4,5]
sleeping = [7,8,6,11,7]
eating = [2,3,4,3,2]
working = [7,8,7,2,2]
playing = [8,5,7,8,13]
plt.stackplot(days, sleeping,eating,working,playing, colors=['m','c','r','k'])
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interesting Graph\nCheck it out')
plt.show()
# +
days = [1,2,3,4,5]
sleeping = [7,8,6,11,7]
eating = [2,3,4,3,2]
working = [7,8,7,2,2]
playing = [8,5,7,8,13]
plt.plot([],[],color='m', label='Sleeping', linewidth=5)
plt.plot([],[],color='c', label='Eating', linewidth=5)
plt.plot([],[],color='r', label='Working', linewidth=5)
plt.plot([],[],color='k', label='Playing', linewidth=5)
plt.stackplot(days, sleeping,eating,working,playing, colors=['m','c','r','k'])
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interesting Graph\nCheck it out')
plt.legend()
plt.show()
# +
slices = [7,2,2,13]
activities = ['sleeping','eating','working','playing']
cols = ['c','m','r','b']
plt.pie(slices,
labels=activities,
colors=cols,
startangle=90,
shadow= True,
explode=(0,0.1,0,0),
autopct='%1.1f%%')
plt.title('Interesting Graph\nCheck it out')
plt.show()
# +
import csv
x = []
y = []
with open('example.txt','r') as csvfile:
plots = csv.reader(csvfile, delimiter=',')
for row in plots:
x.append(int(row[0]))
y.append(int(row[1]))
plt.plot(x,y, label='Loaded from file!')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interesting Graph\nCheck it out')
plt.legend()
plt.show()
# +
import numpy as np
x, y = np.loadtxt('example.txt', delimiter=',', unpack=True)
plt.plot(x,y, label='Loaded from file!')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interesting Graph\nCheck it out')
plt.legend()
plt.show()
# -
preparing/.ipynb_checkpoints/MatplotlibDemo-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="klGNgWREsvQv"
# ##### Copyright 2018 The TF-Agents Authors.
# + cellView="form" colab={} colab_type="code" id="nQnmcm0oI1Q-"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="HNtBC6Bbb1YU"
# # REINFORCE agent
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/agents/tutorials/6_reinforce_tutorial">
# <img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
# View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/6_reinforce_tutorial.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
# Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/6_reinforce_tutorial.ipynb">
# <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
# View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/6_reinforce_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="ZOUOQOrFs3zn"
# ## Introduction
# + [markdown] colab_type="text" id="cKOCZlhUgXVK"
# This example shows how to train a [REINFORCE](http://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf) agent on the Cartpole environment using the TF-Agents library, similar to the [DQN tutorial](1_dqn_tutorial.ipynb).
#
# 
#
# We will walk you through all the components in a Reinforcement Learning (RL) pipeline for training, evaluation and data collection.
#
# + [markdown] colab_type="text" id="1u9QVVsShC9X"
# ## Setup
# + [markdown] colab_type="text" id="I5PNmEzIb9t4"
# If you haven't installed the following dependencies, run:
# + colab={} colab_type="code" id="KEHR2Ui-lo8O"
# !sudo apt-get install -y xvfb ffmpeg
# !pip install 'gym==0.10.11'
# !pip install 'imageio==2.4.0'
# !pip install PILLOW
# !pip install 'pyglet==1.3.2'
# !pip install pyvirtualdisplay
# !pip install tf-agents
# + colab={} colab_type="code" id="sMitx5qSgJk1"
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.reinforce import reinforce_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import actor_distribution_network
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
tf.compat.v1.enable_v2_behavior()
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
# + [markdown] colab_type="text" id="LmC0NDhdLIKY"
# ## Hyperparameters
# + colab={} colab_type="code" id="HC1kNrOsLSIZ"
env_name = "CartPole-v0" # @param {type:"string"}
num_iterations = 250 # @param {type:"integer"}
collect_episodes_per_iteration = 2 # @param {type:"integer"}
replay_buffer_capacity = 2000 # @param {type:"integer"}
fc_layer_params = (100,)
learning_rate = 1e-3 # @param {type:"number"}
log_interval = 25 # @param {type:"integer"}
num_eval_episodes = 10 # @param {type:"integer"}
eval_interval = 50 # @param {type:"integer"}
# + [markdown] colab_type="text" id="VMsJC3DEgI0x"
# ## Environment
#
# Environments in RL represent the task or problem that we are trying to solve. Standard environments can be easily created in TF-Agents using `suites`. We have different `suites` for loading environments from sources such as the OpenAI Gym, Atari, DM Control, etc., given a string environment name.
#
# Now let us load the CartPole environment from the OpenAI Gym suite.
# + colab={} colab_type="code" id="pYEz-S9gEv2-"
env = suite_gym.load(env_name)
# + [markdown] colab_type="text" id="IIHYVBkuvPNw"
# We can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up.
# + colab={} colab_type="code" id="RlO7WIQHu_7D"
#@test {"skip": true}
env.reset()
PIL.Image.fromarray(env.render())
# + [markdown] colab_type="text" id="B9_lskPOey18"
# The `time_step = environment.step(action)` statement takes `action` in the environment. The `TimeStep` tuple returned contains the environment's next observation and reward for that action. The `time_step_spec()` and `action_spec()` methods in the environment return the specifications (types, shapes, bounds) of the `time_step` and `action` respectively.
# + colab={} colab_type="code" id="exDv57iHfwQV"
print('Observation Spec:')
print(env.time_step_spec().observation)
print('Action Spec:')
print(env.action_spec())
# + [markdown] colab_type="text" id="eJCgJnx3g0yY"
# So, we see that observation is an array of 4 floats: the position and velocity of the cart, and the angular position and velocity of the pole. Since only two actions are possible (move left or move right), the `action_spec` is a scalar where 0 means "move left" and 1 means "move right."
# + colab={} colab_type="code" id="V2UGR5t_iZX-"
time_step = env.reset()
print('Time step:')
print(time_step)
action = np.array(1, dtype=np.int32)
next_time_step = env.step(action)
print('Next time step:')
print(next_time_step)
# + [markdown] colab_type="text" id="zuUqXAVmecTU"
# Usually we create two environments: one for training and one for evaluation. Most environments are written in pure python, but they can be easily converted to TensorFlow using the `TFPyEnvironment` wrapper. The original environment's API uses numpy arrays; the `TFPyEnvironment` converts these to/from `Tensors` so you can more easily interact with TensorFlow policies and agents.
#
# + colab={} colab_type="code" id="Xp-Y4mD6eDhF"
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
# + [markdown] colab_type="text" id="E9lW_OZYFR8A"
# ## Agent
#
# The algorithm that we use to solve an RL problem is represented as an `Agent`. In addition to the REINFORCE agent, TF-Agents provides standard implementations of a variety of `Agents` such as [DQN](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf), [DDPG](https://arxiv.org/pdf/1509.02971.pdf), [TD3](https://arxiv.org/pdf/1802.09477.pdf), [PPO](https://arxiv.org/abs/1707.06347) and [SAC](https://arxiv.org/abs/1801.01290).
#
# To create a REINFORCE Agent, we first need an `Actor Network` that can learn to predict the action given an observation from the environment.
#
# We can easily create an `Actor Network` using the specs of the observations and actions. The hidden layers are specified via the `fc_layer_params` argument, a tuple of `ints` giving the size of each hidden layer (see the Hyperparameters section above).
#
# + colab={} colab_type="code" id="TgkdEPg_muzV"
actor_net = actor_distribution_network.ActorDistributionNetwork(
train_env.observation_spec(),
train_env.action_spec(),
fc_layer_params=fc_layer_params)
# + [markdown] colab_type="text" id="z62u55hSmviJ"
# We also need an `optimizer` to train the network we just created, and a `train_step_counter` variable to keep track of how many times the network was updated.
#
# + colab={} colab_type="code" id="jbY4yrjTEyc9"
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.compat.v2.Variable(0)
tf_agent = reinforce_agent.ReinforceAgent(
train_env.time_step_spec(),
train_env.action_spec(),
actor_network=actor_net,
optimizer=optimizer,
normalize_returns=True,
train_step_counter=train_step_counter)
tf_agent.initialize()
# + [markdown] colab_type="text" id="I0KLrEPwkn5x"
# ## Policies
#
# In TF-Agents, policies represent the standard notion of policies in RL: given a `time_step` produce an action or a distribution over actions. The main method is `policy_step = policy.step(time_step)` where `policy_step` is a named tuple `PolicyStep(action, state, info)`. The `policy_step.action` is the `action` to be applied to the environment, `state` represents the state for stateful (RNN) policies and `info` may contain auxiliary information such as log probabilities of the actions.
#
# Agents contain two policies: the main policy that is used for evaluation/deployment (agent.policy) and another policy that is used for data collection (agent.collect_policy).
# + colab={} colab_type="code" id="BwY7StuMkuV4"
eval_policy = tf_agent.policy
collect_policy = tf_agent.collect_policy
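To make the `policy_step = policy.step(time_step)` interface shape concrete, here is a minimal pure-Python sketch. The `PolicyStep` and `TimeStep` tuples below are toy stand-ins (so the snippet runs without `tf_agents`), and `random_policy_step` is a hypothetical policy that ignores its observation:

```python
from collections import namedtuple
import random

# Toy stand-ins for TF-Agents' PolicyStep and TimeStep, just to show the
# shape of the policy interface without requiring tf_agents.
PolicyStep = namedtuple('PolicyStep', ['action', 'state', 'info'])
TimeStep = namedtuple('TimeStep', ['observation', 'reward', 'is_last'])

def random_policy_step(time_step):
    """A toy policy: pick action 0 or 1 uniformly, ignoring the observation."""
    action = random.randint(0, 1)
    # state would carry RNN state for stateful policies; info could hold
    # auxiliary data such as log-probabilities of the sampled action.
    return PolicyStep(action=action, state=(), info={})

step = random_policy_step(
    TimeStep(observation=[0.0, 0.0, 0.0, 0.0], reward=0.0, is_last=False))
print(step.action in (0, 1))  # the action to apply to the environment
```

A real agent policy replaces `random_policy_step` with a network forward pass, but the returned tuple has the same three fields.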
# + [markdown] colab_type="text" id="94rCXQtbUbXv"
# ## Metrics and Evaluation
#
# The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows.
#
# + colab={} colab_type="code" id="bitzHo5_UbXy"
#@test {"skip": true}
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# Please also see the metrics module for standard implementations of different
# metrics.
# + [markdown] colab_type="text" id="NLva6g2jdWgr"
# ## Replay Buffer
#
# In order to keep track of the data collected from the environment, we will use the TFUniformReplayBuffer. This replay buffer is constructed using specs describing the tensors that are to be stored, which can be obtained from the agent using `tf_agent.collect_data_spec`.
# + colab={} colab_type="code" id="vX2zGUWJGWAl"
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=tf_agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_capacity)
# + [markdown] colab_type="text" id="ZGNTDJpZs4NN"
# For most agents, the `collect_data_spec` is a `Trajectory` named tuple containing the observation, action, reward etc.
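As a rough sketch of what one stored transition looks like, the toy tuple below mimics the fields of `tf_agents.trajectories.Trajectory` (the field names follow that class, but the values here are made-up CartPole-style numbers, not real collected data):

```python
from collections import namedtuple

# Illustrative stand-in for tf_agents.trajectories.Trajectory.
Trajectory = namedtuple(
    'Trajectory',
    ['step_type', 'observation', 'action', 'policy_info',
     'next_step_type', 'reward', 'discount'])

traj = Trajectory(
    step_type=0,                            # first step of an episode
    observation=[0.02, -0.01, 0.03, 0.0],   # cart position/velocity, pole angle/velocity
    action=1,                               # "move right"
    policy_info=(),
    next_step_type=1,                       # mid-episode step
    reward=1.0,                             # +1 for each step the pole stays up
    discount=1.0)
print(traj.reward)
```

The replay buffer stores batches of tuples shaped like this, one per environment step.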
# + [markdown] colab_type="text" id="rVD5nQ9ZGo8_"
# ## Data Collection
#
# As REINFORCE learns from whole episodes, we define a function to collect an episode using the given data collection policy and save the data (observations, actions, rewards etc.) as trajectories in the replay buffer.
# + colab={} colab_type="code" id="wr1KSAEGG4h9"
#@test {"skip": true}
def collect_episode(environment, policy, num_episodes):
episode_counter = 0
environment.reset()
while episode_counter < num_episodes:
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
replay_buffer.add_batch(traj)
if traj.is_boundary():
episode_counter += 1
# This loop is so common in RL that we provide standard implementations of
# these. For more details see the drivers module.
# + [markdown] colab_type="text" id="hBc9lj9VWWtZ"
# ## Training the agent
#
# The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.
#
# The following will take ~3 minutes to run.
# + colab={} colab_type="code" id="0pTbJ3PeyF-u"
#@test {"skip": true}
# %%time
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
tf_agent.train = common.function(tf_agent.train)
# Reset the train step
tf_agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few episodes using collect_policy and save to the replay buffer.
collect_episode(
train_env, tf_agent.collect_policy, collect_episodes_per_iteration)
# Use data from the buffer and update the agent's network.
experience = replay_buffer.gather_all()
train_loss = tf_agent.train(experience)
replay_buffer.clear()
step = tf_agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss.loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
# + [markdown] colab_type="text" id="68jNcA_TiJDq"
# ## Visualization
#
# + [markdown] colab_type="text" id="aO-LWCdbbOIC"
# ### Plots
#
# We can plot return versus global step to see the performance of our agent. In `CartPole-v0`, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 200, the maximum possible return is also 200.
# + colab={} colab_type="code" id="NxtL1mbOYCVO"
#@test {"skip": true}
steps = range(0, num_iterations + 1, eval_interval)
plt.plot(steps, returns)
plt.ylabel('Average Return')
plt.xlabel('Step')
plt.ylim(top=250)
# + [markdown] colab_type="text" id="M7-XpPP99Cy7"
# ### Videos
# + [markdown] colab_type="text" id="9pGfGxSH32gn"
# It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.
# + colab={} colab_type="code" id="ULaGr8pvOKbl"
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
# + [markdown] colab_type="text" id="9c_PH-pX4Pr5"
# The following code visualizes the agent's policy for a few episodes:
# + colab={} colab_type="code" id="owOVWB158NlF"
num_episodes = 3
video_filename = 'imageio.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = tf_agent.policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
embed_mp4(video_filename)
site/en-snapshot/agents/tutorials/6_reinforce_tutorial.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
x = np.array([[1, 2], [3, 4], [1, 2], [3, 4],[3,4],[4,3]])
print(x)
y = np.array([0, 0, 1, 1,0,0])
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_index, test_index = next(ss.split(x, y))
print(train_index,test_index)
# +
train_i, val_i = next(ss.split(x, y))
print(train_i, val_i)
halfval_i = len(val_i) // 2
print(halfval_i)
val_i, test_i = val_i[:halfval_i], val_i[halfval_i:]
print(val_i, test_i)
train_x, train_y = x[train_i], y[train_i]
val_x, val_y = x[val_i], y[val_i]
test_x, test_y = x[test_i], y[test_i]
# -
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
print(train_y, type(train_y))
print(np.array([3, 3, 3]))
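The point of `StratifiedShuffleSplit` is that each split preserves the class proportions of `y`. A quick self-contained check (assuming scikit-learn is installed, as it is imported above): with 4 zeros and 2 ones and `test_size=0.5`, each half must contain 2 zeros and 1 one.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

y = np.array([0, 0, 1, 1, 0, 0])          # 4 zeros, 2 ones (2:1 ratio)
x = np.arange(len(y)).reshape(-1, 1)      # dummy features

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_i, test_i = next(ss.split(x, y))

# Each half keeps the original 2:1 class ratio: 2 zeros and 1 one.
print(np.bincount(y[train_i]))  # [2 1]
print(np.bincount(y[test_i]))   # [2 1]
```

A plain `ShuffleSplit` gives no such guarantee, which matters for small or imbalanced datasets.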
transfer-learning/Untitled1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# <a href="https://colab.research.google.com/github/DingLi23/s2search/blob/pipelining/pipelining/exp-csma/exp-csma_csma_shapley_value.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# ### Experiment Description
#
#
#
# > This notebook is for experiment \<exp-csma\> and data sample \<csma\>.
# ### Initialization
# +
# %reload_ext autoreload
# %autoreload 2
import numpy as np, sys, os
sys.path.insert(1, '../../')
from shapley_value import compute_shapley_value, feature_key_list
sv = compute_shapley_value('exp-csma', 'csma')
# -
# ### Plotting
# +
import matplotlib.pyplot as plt
import numpy as np
from s2search_score_pdp import pdp_based_importance
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12, 5), dpi=200)
# collect the per-feature shapley values for plotting
all_data = []
average_sv = []
sv_global_imp = []
for player_sv in [f'{player}_sv' for player in feature_key_list]:
all_data.append(sv[player_sv])
average_sv.append(pdp_based_importance(sv[player_sv]))
sv_global_imp.append(np.mean(np.abs(list(sv[player_sv]))))
# average_sv.append(np.std(sv[player_sv]))
# print(np.max(sv[player_sv]))
# plot violin plot
axs[0].violinplot(all_data,
showmeans=False,
showmedians=True)
axs[0].set_title('Violin plot')
# plot box plot
axs[1].boxplot(all_data,
showfliers=False,
showmeans=True,
)
axs[1].set_title('Box plot')
# adding horizontal grid lines
for ax in axs:
ax.yaxis.grid(True)
ax.set_xticks([y + 1 for y in range(len(all_data))],
labels=['title', 'abstract', 'venue', 'authors', 'year', 'n_citations'])
ax.set_xlabel('Features')
ax.set_ylabel('Shapley Value')
plt.show()
# +
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(12, 4), dpi=200)
# Example data
feature_names = ('title', 'abstract', 'venue', 'authors', 'year', 'n_citations')
y_pos = np.arange(len(feature_names))
# error = np.random.rand(len(feature_names))
# ax.xaxis.grid(True)
ax.barh(y_pos, average_sv, align='center', color='#008bfb')
ax.set_yticks(y_pos, labels=feature_names)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('PDP-based Feature Importance on Shapley Value')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
_, xmax = plt.xlim()
plt.xlim(0, xmax + 1)
for i, v in enumerate(average_sv):
margin = 0.05
ax.text(v + margin if v > 0 else margin, i, str(round(v, 4)), color='black', ha='left', va='center')
plt.show()
# +
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(12, 4), dpi=200)
# Example data
feature_names = ('title', 'abstract', 'venue', 'authors', 'year', 'n_citations')
y_pos = np.arange(len(feature_names))
# error = np.random.rand(len(feature_names))
# ax.xaxis.grid(True)
ax.barh(y_pos, sv_global_imp, align='center', color='#008bfb')
ax.set_yticks(y_pos, labels=feature_names)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('SHAP Feature Importance')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
_, xmax = plt.xlim()
plt.xlim(0, xmax + 1)
for i, v in enumerate(sv_global_imp):
margin = 0.05
ax.text(v + margin if v > 0 else margin, i, str(round(v, 4)), color='black', ha='left', va='center')
plt.show()
pipelining/exp-csma/exp-csma_csma_shapley_value.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "-"}
# # GDSII import
#
# See on [github](https://github.com/flexcompute/tidy3d-notebooks/blob/main/GDS_import.ipynb), run on [colab](https://colab.research.google.com/github/flexcompute/tidy3d-notebooks/blob/main/GDS_import.ipynb), or just follow along with the output below.
#
# <img src="img/splitter.png" alt="diagram" width="400"/>
#
# In Tidy3D, complex structures can be defined or imported from GDSII files via the third-party [gdspy](https://gdspy.readthedocs.io/en/stable/index.html) package. In this tutorial, we will first illustrate how to use the package to define a structure, then we will save this to file, and then we will read that file and import the structures in a simulation.
# +
# get the most recent version of tidy3d
# !pip install -q --upgrade tidy3d
# gdspy is also needed for gds import
# !pip install -q gdspy
# make sure notebook plots inline
# %matplotlib inline
# standard python imports
import numpy as np
import matplotlib.pyplot as plt
import gdspy
import os
# tidy3D import
import tidy3d as td
from tidy3d import web
# -
# ## Creating a beam splitter with gdspy
#
# First, we will construct an integrated beam splitter as in the title image in this notebook using `gdspy`. If you are only interested in importing an already existing GDSII file, see the next section.
#
# We first define some structural parameters. The two arms of the device start at a distance `wg_spacing_in` apart, then come together at a coupling distance `wg_spacing_coup` for a certain length `coup_length`, and then split again into separate ports. In the coupling region, the field overlap results in energy exchange between the two waveguides. Here, we will only see how to define, export, and import such a device using `gdspy`, while in a later example we will simulate the device and study the frequency dependence of the transmission into each of the ports.
# +
### Length scale in micron.
# Waveguide width
wg_width = 0.45
# Waveguide separation in the beginning/end
wg_spacing_in = 8
# Length of the coupling region
coup_length = 10
# Length of the bend region
bend_length = 16
# Waveguide separation in the coupling region
wg_spacing_coup = 0.10
# Total device length along propagation direction
device_length = 100
# -
# To create the device, we will define each waveguide as a GDSII path object with the given waveguide width. To do that, we just need to define a series of points along the center of each waveguide that follows the curvature we desire. First, we define a convenience function to create the central points along one of the waveguides, using a hyperbolic tangent curvature between the input and coupling regions. The second waveguide is just a reflected version of the first one.
# +
def bend_pts(bend_length, width, npts=10):
""" Set of points describing a tanh bend from (0, 0) to (length, width)"""
x = np.linspace(0, bend_length, npts)
y = width*(1 + np.tanh(6*(x/bend_length - 0.5)))/2
return np.stack((x, y), axis=1)
def arm_pts(length, width, coup_length, bend_length, npts_bend=30):
""" Set of points defining one arm of an integrated coupler """
### Make the right half of the coupler arm first
# Make bend and offset by coup_length/2
bend = bend_pts(bend_length, width, npts_bend)
bend[:, 0] += coup_length / 2
# Add starting point as (0, 0)
right_half = np.concatenate(([[0, 0]], bend))
# Add an extra point to make sure waveguide is straight past the bend
right_half = np.concatenate((right_half, [[right_half[-1, 0] + 0.1, width]]))
# Add end point as (length/2, width)
right_half = np.concatenate((right_half, [[length/2, width]]))
### Make the left half by reflecting and omitting the (0, 0) point
left_half = np.copy(right_half)[1:, :]
left_half[:, 0] = -left_half[::-1, 0]
left_half[:, 1] = left_half[::-1, 1]
return np.concatenate((left_half, right_half), axis=0)
# Plot the upper arm for the current configuration
arm_center_coords = arm_pts(
device_length,
wg_spacing_in/2,
coup_length,
bend_length)
fig, ax = plt.subplots(1, figsize=(8, 3))
ax.plot(arm_center_coords[:, 0], arm_center_coords[:, 1], lw=4)
ax.set_xlim([-30, 30])
ax.set_xlabel("x (um)")
ax.set_ylabel("y (um)")
ax.set_title("Upper beam splitter arm")
ax.axes.set_aspect('equal')
plt.show()
# -
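A quick numeric sanity check on the bend shape: because `tanh(±3) ≈ ±0.995`, the curve starts within about 1% of `y = 0` and ends within about 1% of `y = width`, so the path joins the straight sections smoothly. The snippet below restates `bend_pts` (duplicated here only so the check is self-contained):

```python
import numpy as np

def bend_pts(bend_length, width, npts=10):
    """Same tanh bend as above: points from (0, 0) to (bend_length, width)."""
    x = np.linspace(0, bend_length, npts)
    y = width * (1 + np.tanh(6 * (x / bend_length - 0.5))) / 2
    return np.stack((x, y), axis=1)

pts = bend_pts(16, 4.0, npts=101)
# Endpoints sit within ~1% of 0 and width, since tanh(±3) ≈ ±0.995.
print(abs(pts[0, 1]) < 0.05 * 4.0)          # True: starts near y = 0
print(abs(pts[-1, 1] - 4.0) < 0.05 * 4.0)   # True: ends near y = width
```

The factor 6 inside the `tanh` controls how sharply the bend transitions; a larger factor gives flatter ends but a steeper middle.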
# Next, we construct the splitter and write it to a GDS cell. We add a rectangle for the substrate to layer 0, and in layer 1 we add two paths, one for the upper and one for the lower splitter arms, and set the path width to be the waveguide width defined above.
# +
# Reset the gdspy library.
# This could be useful if re-running the notebook without restarting the kernel.
gdspy.current_library = gdspy.GdsLibrary()
lib = gdspy.GdsLibrary()
# Geometry must be placed in GDS cells to import into Tidy3D
coup_cell = lib.new_cell('Coupler')
substrate = gdspy.Rectangle(
(-device_length/2, -wg_spacing_in/2-10),
(device_length/2, wg_spacing_in/2+10),
layer=0)
coup_cell.add(substrate)
def make_coupler(
length,
wg_spacing_in,
wg_width,
wg_spacing_coup,
coup_length,
bend_length,
npts_bend=30):
""" Make an integrated coupler using the gdspy FlexPath object. """
# Compute one arm of the coupler
arm_width = (wg_spacing_in - wg_width - wg_spacing_coup)/2
arm = arm_pts(length, arm_width, coup_length, bend_length, npts_bend)
# Reflect and offset bottom arm
coup_bot = np.copy(arm)
coup_bot[:, 1] = -coup_bot[::-1, 1] - wg_width/2 - wg_spacing_coup/2
# Offset top arm
coup_top = np.copy(arm)
coup_top[:, 1] += wg_width/2 + wg_spacing_coup/2
# Create waveguides as GDS paths
path_bot = gdspy.FlexPath(coup_bot, wg_width, layer=1, datatype=0)
path_top = gdspy.FlexPath(coup_top, wg_width, layer=1, datatype=1)
return [path_bot, path_top]
# Add the coupler to a gdspy cell
gds_coup = make_coupler(
device_length,
wg_spacing_in,
wg_width,
wg_spacing_coup,
coup_length,
bend_length)
coup_cell.add(gds_coup);
# Uncomment to display the cell using the internal gdspy viewer
# gdspy.LayoutViewer(lib)
# -
# Finally, we can save what we have built to a GDSII file.
os.makedirs('data', exist_ok=True)
lib.write_gds('data/coupler.gds')
# ## Importing a GDSII file to simulation
#
# We can now add this device to a Tidy3D simulation and use our in-built plotting tools to see what we have created. First, we use ``gdspy`` to load the file that we just created, and examine its contents.
lib_loaded = gdspy.GdsLibrary(infile='data/coupler.gds')
print(lib_loaded.cells)
coup_cell = lib_loaded.cells['Coupler']
print("Layers in cell: ", coup_cell.get_layers())
print("Layer and datatype of each polygon in cell: ")
for (ip, poly) in enumerate(coup_cell.polygons):
print(f" Polygon {ip}: ({poly.layers[0]}, {poly.datatypes[0]})")
# As we know from creating the file above, Polygon 0 defines the substrate, which is made of oxide, while Polygons 1 and 2 define the waveguides, which are made of silicon and hence sit in a different layer. Below, we add these structures to a Tidy3D simulation, setting a height of 220nm for the waveguides. We will not do a full run here, so we will not add sources and monitors.
# +
# Waveguide height
wg_height = 0.22
# Permittivity of waveguide and substrate
wg_n = 3.48
sub_n = 1.45
mat_wg = td.Medium(n=wg_n)
mat_sub = td.Medium(n=sub_n)
# Substrate
oxide = td.GdsSlab(
material=mat_sub,
gds_cell=coup_cell,
gds_layer=0,
z_min=-10,
z_max=0)
# Waveguides (import all datatypes if gds_dtype not specified)
coupler = td.GdsSlab(
material=mat_wg,
gds_cell=coup_cell,
gds_layer=1,
z_cent=wg_height/2,
z_size=wg_height,)
# Simulation size along propagation direction
sim_length = 2 + 2*bend_length + coup_length
# Spacing between waveguides and PML
pml_spacing = 1
sim_size = [
sim_length,
wg_spacing_in + wg_width + 2*pml_spacing,
wg_height + 2*pml_spacing]
# Mesh step in all directions
mesh_step = 0.020
### Initialize and visualize simulation ###
sim = td.Simulation(
size=sim_size,
mesh_step=mesh_step,
structures=[oxide, coupler],
run_time=2e-12,
pml_layers=[12, 12, 12])
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))
sim.viz_mat_2D(normal='z', position=wg_height/2, ax=ax1);
sim.viz_mat_2D(normal='x', ax=ax2, source_alpha=1);
ax2.set_xlim([-3, 3])
plt.show()
GDS_import.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # Lab Environment for BIA Pipeline
#
# This notebook instance will act as the lab environment for setting up and triggering changes to our pipeline. This is being used to provide a consistent environment, gain some familiarity with Amazon SageMaker Notebook Instances, and to avoid any issues with debugging individual laptop configurations during the workshop.
#
# PLEASE review the sample notebook [xgboost_customer_churn](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_applying_machine_learning/xgboost_customer_churn) for detailed documentation on the model being built
#
# ---
# ## Step 1: View the Data
#
# In this step we are going to upload the data that was processed using the same processing detailed in the example notebook, [xgboost_customer_churn](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_applying_machine_learning/xgboost_customer_churn).
#
# The following sample shows the label and a subset of the features included in the training dataset. The label is in the first column, churn, which is the value we are trying to predict to determine whether a customer will churn.
#
# ![Sample with Header](images/training_data_sample.png)
#
#
# +
import pandas as pd

pd.set_option('display.max_columns', 500)  # Make sure we can see all of the columns
pd.set_option('display.max_rows', 5)       # Keep the output on one page

train_data = pd.read_csv('./data/train.csv', sep=',')
print('\nTraining Data\n', train_data)

smoketest_data = pd.read_csv('./data/smoketest.csv', sep=',')
print('\nSmoke Test Data\n', smoketest_data)

validation_data = pd.read_csv('./data/validation.csv', sep=',')
print('\nValidation Data\n', validation_data)
# -
# ---
# ## Step 2: Upload Data to S3
#
# We will utilize this notebook to perform some of the setup that will be required to trigger the first execution of our pipeline. In this step, we are going to simulate what would typically be the last step in an Analytics pipeline of creating training and validation datasets.
#
# To accomplish this, we will actually be uploading data from our local notebook instance (data can be found under /data/*) to S3. In a typical scenario, this would be done through your analytics pipeline. We will use the S3 bucket that was created through the CloudFormation template we launched at the beginning of the lab. You can validate the S3 bucket exists by:
# 1. Going to the [S3 Service](https://s3.console.aws.amazon.com/s3/) inside the AWS Console
# 2. Find the name of the S3 data bucket created by the CloudFormation template: mlops-bia-data-*yourinitials*-*randomid*
# 3. Update the bucket variable in the cell below
#
# ### UPDATE THE BUCKET NAME BELOW BEFORE EXECUTING THE CELL
# +
import os
import boto3
import re
import time
# UPDATE THE NAME OF THE BUCKET TO MATCH THE ONE WE CREATED THROUGH THE CLOUDFORMATION TEMPLATE
# Example: mlops-bia-data-jdd-df4d4850
#bucket = 'mlops-bia-data-<yourinitials>-<generated id>'
bucket = 'mlops-bia-data-yourinitials-uniqueid'
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
trainfilename = 'train/train.csv'
smoketestfilename = 'smoketest/smoketest.csv'
validationfilename = 'validation/validation.csv'
s3 = boto3.resource('s3')
s3.meta.client.upload_file('./data/train.csv', bucket, trainfilename)
s3.meta.client.upload_file('./data/smoketest.csv', bucket, smoketestfilename)
s3.meta.client.upload_file('./data/validation.csv', bucket, validationfilename)
# -
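# Before (or instead of) hard-coding the three uploads, a small pre-flight helper can derive the (local path, S3 key) pairs in one place and fail early if a file is missing. This is a sketch: `plan_uploads` and its layout are my own naming, mirroring the `train/train.csv`-style keys used above.

```python
import os

def plan_uploads(local_dir, names):
    """Map each dataset name to (local_path, s3_key), checking the file exists.

    The key layout mirrors the lab's convention: '<name>/<name>.csv'.
    """
    plan = []
    for name in names:
        local_path = os.path.join(local_dir, name + '.csv')
        if not os.path.exists(local_path):
            raise FileNotFoundError(local_path)
        plan.append((local_path, '{0}/{0}.csv'.format(name)))
    return plan

# Usage (mirrors the uploads above):
# for local_path, key in plan_uploads('./data', ['train', 'smoketest', 'validation']):
#     s3.meta.client.upload_file(local_path, bucket, key)
```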
# ----
#
# ## Step 3: Monitor CodePipeline Execution
#
# The code above will trigger the execution of your CodePipeline. You can monitor the progress of the pipeline execution in the [CodePipeline dashboard](https://console.aws.amazon.com/codesuite/codepipeline/pipelines). Within the pipeline, explore the stages while the pipeline is executing to understand what is being performed and what user parameters are passed as input to each stage. For example, StartTraining takes a set of parameters detailing the training environment as well as the hyperparameters for training:
#
# {"Algorithm": "xgboost:1", "traincompute": "ml.c4.2xlarge" , "traininstancevolumesize": 10, "traininstancecount": 1, "MaxDepth": "5", "eta": "0.2", "gamma": "4", "MinChildWeight": "6", "SubSample": 0.8, "Silent": 0, "Objective": "binary:logistic", "NumRound": "100"}
#
# As the pipeline executes, information is logged to [CloudWatch logs](https://console.aws.amazon.com/cloudwatch/logs). Explore the logs for your Lambda functions (/aws/lambda/MLOps-BIA*) as well as the output logs from SageMaker (/aws/sagemaker/*). Also, since this is a built-in algorithm, SageMaker automatically emits training metrics so you can understand how well your model is learning and whether it will generalize to unseen data. Those metrics are logged to /aws/sagemaker/TrainingJobs in CloudWatch.
#
#
# Note: It will take a while to execute all the way through the pipeline. Please don't proceed to the next step until the last stage shows **'Succeeded'**
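# Rather than watching the console, the "all stages succeeded" check can be scripted. boto3's CodePipeline client exposes `get_pipeline_state`, whose response contains a `stageStates` list where each entry carries `latestExecution.status`. The helper below only parses that response shape (a sketch; the pipeline name in the usage comment is a placeholder, not necessarily the one created by the CloudFormation template).

```python
def pipeline_finished(state):
    """Return True once every stage in a get_pipeline_state response reports 'Succeeded'."""
    statuses = [stage.get('latestExecution', {}).get('status')
                for stage in state.get('stageStates', [])]
    return bool(statuses) and all(s == 'Succeeded' for s in statuses)

# Usage (requires AWS credentials):
# import boto3
# cp = boto3.client('codepipeline')
# done = pipeline_finished(cp.get_pipeline_state(name='<your-pipeline-name>'))
```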
# ---
#
# ## Step 4: Additional Clean-Up
#
# Return to the [README.md](https://github.com/aws-samples/amazon-sagemaker-devops-with-ml/1-Built-In-Algorithm/README.md) to complete the environment cleanup instructions.
# # CONGRATULATIONS!
#
# You've built a basic pipeline for the use case of utilizing a built-in SageMaker algorithm. This pipeline can act as a starting point for building in additional capabilities such as more quality gates, more dynamic logic for capturing hyperparameter changes, and various deployment strategies (e.g., A/B testing). Another common extension is creating or updating an API that serves predictions through API Gateway.
# File: 1-Built-In-Algorithm/03.MLOps-BIA-LabNotebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Load essential libraries
import csv
import numpy as np
import matplotlib.pyplot as plt
import statistics
from scipy.signal import butter, lfilter, freqz
from IPython.display import Image
from datetime import datetime
# -
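# `butter`, `lfilter`, and `freqz` are imported above but never used below; if the intent was to smooth the noisy sonde traces, a minimal Butterworth low-pass pass would look like this. The cutoff and order here are arbitrary illustrative choices, not values from the original analysis.

```python
import numpy as np
from scipy.signal import butter, lfilter

def lowpass(x, cutoff_hz, fs_hz, order=4):
    """Apply a Butterworth low-pass filter to a 1-D signal sampled at fs_hz."""
    b, a = butter(order, cutoff_hz / (0.5 * fs_hz))  # normalized cutoff; low-pass by default
    return lfilter(b, a, np.asarray(x, dtype=float))

# Usage: smoothed_temp = lowpass(temp, cutoff_hz=0.05, fs_hz=1.0)
```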
# File loading from relative path
file = '../../../Data/20200915-china.csv'
# +
# Figure initialization
fig = plt.figure()
# Time and robot egomotion
time = []
standardized_time = []
compass_heading = []
speed = []
# sonde data
temp = []
PH = []
cond = [] # ms
chlorophyll = []
ODO = [] # mg/L
sonar = []
angular_z = []
# +
initial_time = None
with open(file, 'r') as csvfile:
    csvreader = csv.reader(csvfile, delimiter=',')
    header = next(csvreader)
    for row in csvreader:
        # robot data
        if initial_time is None:
            initial_time = float(row[0])
        current_time = float(row[0])
        if 700 <= current_time - initial_time < 1000:
        #if current_time - initial_time <= 4000:
            time.append(float(row[0]))
            compass_heading.append(float(row[4]))
            speed.append(float(row[10]))
            angular_z.append(float(row[18]))

            # sonde data
            temp.append(float(row[23]))
            PH.append(float(row[26]))
            cond.append(float(row[25]))
            chlorophyll.append(float(row[29]))
            ODO.append(float(row[31]))
            sonar.append(float(row[8]))

minimum_time = min(time)
for time_stamp in time:
    standardized_time.append(time_stamp - minimum_time)
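# The row-by-row parsing loop above can also be written with pandas, where the 700-1000 s window becomes a boolean mask. This is a sketch: `window` is my own helper, and it assumes the timestamp is the first column, as in the loop above.

```python
import pandas as pd

def window(df, start_s, end_s, time_col=0):
    """Keep rows whose elapsed time (relative to the first row) lies in [start_s, end_s),
    and add a 'standardized_time' column starting at zero within the window."""
    elapsed = df.iloc[:, time_col] - df.iloc[0, time_col]
    mask = (elapsed >= start_s) & (elapsed < end_s)
    out = df[mask].copy()
    out['standardized_time'] = elapsed[mask] - elapsed[mask].min()
    return out

# Usage: segment = window(pd.read_csv(file), 700, 1000)
```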
# +
# collision time around 790
# -
# ### Compass heading
plt.plot(standardized_time, compass_heading, label='compass heading')
plt.xlabel('Time [sec]', fontsize=16)
plt.ylabel('Heading [degree]', fontsize=16)
plt.legend()
plt.show()
plt.plot(standardized_time, speed, label='ground_speed_x', color='m')
plt.xlabel('Time [sec]', fontsize=16)
plt.ylabel('ground_speed_x [m/s]', fontsize=16)
plt.legend()
#plt.show()
plt.plot(standardized_time, angular_z, label='angular_z', color='r')
plt.xlabel('Time [sec]', fontsize=16)
plt.ylabel('angular_z [rad/s]', fontsize=16)
plt.legend()
#plt.show()
# ### Temperature
plt.plot(standardized_time, temp, label='temp', color='k')
plt.xlabel('Time [sec]', fontsize=16)
plt.ylabel('Temperature [degree]', fontsize=16)
plt.legend()
plt.show()
# ### PH
plt.plot(standardized_time, PH, label='PH', color='r')
plt.xlabel('Time [sec]', fontsize=16)
plt.ylabel('PH', fontsize=16)
plt.legend()
plt.show()
# ### Conductivity
# * around time 1000, catabot hit another boat
plt.plot(standardized_time, cond, label='conductivity', color='b')
plt.xlabel('Time [sec]', fontsize=16)
plt.ylabel('Conductivity [ms]', fontsize=16)
plt.legend()
plt.show()
# ### Chlorophyll
# * around time 1000, catabot hit another boat
plt.plot(standardized_time, chlorophyll, label='chlorophyll', color='g')
plt.xlabel('Time [sec]', fontsize=16)
plt.ylabel('chlorophyll [RFU]', fontsize=16)
plt.legend()
plt.show()
# ### ODO
plt.plot(standardized_time, ODO, label='ODO', color='m')
plt.xlabel('Time [sec]', fontsize=16)
plt.ylabel('ODO [mg/L]', fontsize=16)
plt.legend()
plt.show()
# ### Sonar depth
plt.plot(standardized_time, sonar, label='sonar', color='c')
plt.xlabel('Time [sec]', fontsize=16)
plt.ylabel('sonar [m]', fontsize=16)
plt.legend()
plt.show()
# File: Jupyter_notebook/ISER2021/Collision/20200915-China-collision.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/yohanesnuwara/pyreservoir/blob/master/notebooks/buckley_leverett_1d_notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="6t7dTDp6Iwst"
# # Buckley-Leverett 1D Simulation using Finite Difference (Forward-Backward Scheme)
# + colab={"base_uri": "https://localhost:8080/"} id="Iwri8UT2Fmau" outputId="f5163ae1-8242-4990-9b8d-ee3e4e56ab00"
# !git clone https://github.com/yohanesnuwara/pyreservoir
# + id="x5uREYoJcfFq"
import numpy
from matplotlib import pyplot
import pandas as pd
from scipy import interpolate
import sys
sys.path.append("/content/pyreservoir/fluid_flow")
from relperm import interpolate_relperm
from twophase import buckley_leverett1d
pyplot.style.use('seaborn')
# + [markdown] id="UYO7nN_LcfFs"
# Relative permeability data
# + colab={"base_uri": "https://localhost:8080/", "height": 620} id="eq5bX_j4cfFs" outputId="d50c06b6-eaf3-48d9-a279-b3ca30c9d46c"
Sw = numpy.arange(0.2, 0.9, 0.05)
krw = numpy.array([0, .002, .02, .04, .07, .11, .15, .22, .3, .4, .5, .6, .7, .8])
kro = numpy.array([.6, .5, .4, .3, .23, .17, .12, .08, .05, .03, .02, .01, .005, 0])
df = pd.DataFrame({"Sw": Sw, "krw": krw, "kro": kro})
print(df)
pyplot.plot(Sw, krw, '.-', label='krw')
pyplot.plot(Sw, kro, '.-', label='kro')
pyplot.xlim(0.2, 0.85)
pyplot.xlabel('Sw'); pyplot.ylabel('Relative Permeability')
pyplot.legend()
pyplot.show()
# + [markdown] id="ZjKuTkgzcfFt"
# Interpolate relative permeability data
# + colab={"base_uri": "https://localhost:8080/"} id="tGoa5kUBcfFu" outputId="17c46b1a-ecb8-406b-99c5-610cc3d701c6"
# Test intepolated relperm for Sw=0.575
Sw_new = .575
krw_new, kro_new = interpolate_relperm(Sw, krw, kro, Sw_new)
print('At Sw={}, krw={} and kro={}'.format(Sw_new, krw_new, kro_new))
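# If `interpolate_relperm` performs plain linear interpolation between table entries (an assumption; the imported module is not shown in this notebook), the same numbers fall out of `numpy.interp` directly:

```python
import numpy as np

Sw_tab = np.arange(0.2, 0.9, 0.05)
krw_tab = np.array([0, .002, .02, .04, .07, .11, .15, .22, .3, .4, .5, .6, .7, .8])
kro_tab = np.array([.6, .5, .4, .3, .23, .17, .12, .08, .05, .03, .02, .01, .005, 0])

# Sw=0.575 sits halfway between the Sw=0.55 and Sw=0.60 table rows
krw_lin = np.interp(0.575, Sw_tab, krw_tab)
kro_lin = np.interp(0.575, Sw_tab, kro_tab)
print('krw =', krw_lin, ', kro =', kro_lin)
```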
# + [markdown] id="0FkD9mmXGS1z"
# Initial condition
# + colab={"base_uri": "https://localhost:8080/", "height": 382} id="HQhjO5Y-cfFv" outputId="1788dea6-8486-4f4e-a253-b1a105cc9f23"
# set parameters for initial condition
L = 4
nx = 41
x = numpy.linspace(0.0, L, nx)
Sw0 = numpy.full(nx, 0.2)
Sw0[:15] = 1
pyplot.plot(x, Sw0)
pyplot.xlim(min(x), max(x))
pyplot.xlabel('x'); pyplot.ylabel('Sw')
pyplot.title('Initial Condition', size=20)
pyplot.show()
# + [markdown] id="FCZb5n30Ie2B"
# Run simulation
# + colab={"base_uri": "https://localhost:8080/", "height": 555} id="h33gOIgUcfFw" outputId="775b750a-81e6-4483-b338-16c8482bcadd"
# Set parameters for simulation (nt is defined per-panel below)
L = 4
sigma = 0.1
bc_value = Sw0[0]
u_max = 1
muw = 0.5E-3
muo = 1E-3
q = 200 # m3/hr
A = 30 # m2
poro = 0.24
# Simulation
nt = [10, 50, 70, 90]
pyplot.figure(figsize=(16,9))
for i in range(4):
    pyplot.subplot(2, 2, i + 1)
    buckley_leverett1d(nt[i], Sw0, L, nx, sigma, bc_value, muw, muo, q, A, poro, Sw, krw, kro)
# File: notebooks/buckley_leverett_1d_notebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/nmningmei/Deep_learning_fMRI_EEG/blob/master/8_2_extract_representations_from_words.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="PP3ZI4ab0SG-" colab_type="code" colab={}
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once per notebook.
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# + [markdown] id="hIA2UjKkpuCC" colab_type="text"
# # Download the Fast Text Model from its official website: [click me](https://fasttext.cc/docs/en/crawl-vectors.html). This will take up to 10 minutes
# + id="3EGQBiZPo7Ve" colab_type="code" outputId="d5bbe082-1221-4ea4-8986-7b2bc4afee01" colab={"base_uri": "https://localhost:8080/", "height": 211}
# !wget https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.es.300.vec.gz
# + id="svCOtD3IpkYA" colab_type="code" outputId="5b59f54a-d620-4c5d-9a12-1b0a02f2e22e" colab={"base_uri": "https://localhost:8080/", "height": 35}
# !ls
# + [markdown] id="WrZ4ExgCxhHm" colab_type="text"
# # Load the structural pretrained model
#
# ```
# For .bin use: load_fasttext_format() (this typically contains the full model with parameters, ngrams, etc.).
#
# For .vec use: load_word2vec_format() (this contains ONLY word-vectors -> no ngrams, and you can't update the model).
# ```
#
# Here we use `gensim` to load the model into a quasi-dictionary object <-- learn this concept from the python course on Thursdays
# + id="F1ap7YZYrPRE" colab_type="code" colab={}
from gensim.models.keyedvectors import KeyedVectors # for loading word2vec models
# + id="fwJ3X1QurTRH" colab_type="code" outputId="5e9d73f4-8340-4867-aabb-7bd4c68ac817" colab={"base_uri": "https://localhost:8080/", "height": 90}
print('loading model, and it is going to take some time...')
model_word2vec = KeyedVectors.load_word2vec_format('cc.es.300.vec.gz')
# + [markdown] id="mEuePGP2zXhD" colab_type="text"
# # Get the representation of a word (the code below uses 'y')
# + id="89plvcW6rdn5" colab_type="code" outputId="1eb133b7-fade-4a44-e778-4c9778a8e6be" colab={"base_uri": "https://localhost:8080/", "height": 791}
model_word2vec.get_vector('y')
# + id="3hZGSQT_xVH3" colab_type="code" outputId="4c13fc75-9010-4914-c0e7-864a27c370f7" colab={"base_uri": "https://localhost:8080/", "height": 424}
model_word2vec.most_similar(positive = 'y',topn = 20)
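# `most_similar` ranks candidates by cosine similarity between vectors; the same score can be reproduced with plain numpy, which is handy for sanity-checking the distance computations later in this notebook (a sketch; the second token in the usage comment is just an illustrative choice):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Usage:
# cosine_similarity(model_word2vec.get_vector('y'), model_word2vec.get_vector('o'))
```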
# + id="4VjK_1IP1MsU" colab_type="code" colab={}
# Let's look our 36 living -- nonliving words
# + id="nq5pQZEq1MHL" colab_type="code" colab={}
words_id = 'https://drive.google.com/open?id=18nfVy-o0GWX-QKEWrKK0EKLLAltpFy4U'.split('id=')[-1]
downloaded = drive.CreateFile({'id':words_id})
downloaded.GetContentFile(f'words.npy')
# + id="Cb9Y7G9o1fsM" colab_type="code" colab={}
import numpy as np
import pandas as pd
from scipy.spatial import distance
from matplotlib import pyplot as plt
# %matplotlib inline
import seaborn as sns
sns.set_style('white')
sns.set_context('poster')
# + id="cfru4WnG1tc0" colab_type="code" colab={}
words = np.load('words.npy')
# + id="sw51N5Ph1wGD" colab_type="code" outputId="2c49bc75-976f-4536-a933-edf5490d3fef" colab={"base_uri": "https://localhost:8080/", "height": 140}
words
# + [markdown] id="CMWnWcIi6CD_" colab_type="text"
# # Let's plot the dissimilarity among the words
# + id="Uf59e5N01w2c" colab_type="code" colab={}
# get the word categories
word_type = {word:(ii<len(words)/2) for ii,word in enumerate(words)}
# map on to the semantic categories
word_type_map = {True:'animal',False:'tool'}
# make classification labels
labelize_map = {'animal':0,'tool':1}
# get the features extracted by the model
data,labels = [],[]
label2word = np.array(words)
for word in words:
    temp_vec = model_word2vec.get_vector(
        word.decode('UTF-8')  # the words were stored as bytes, so convert each back to a string
    )
    data.append(temp_vec)
    labels.append(labelize_map[word_type_map[word_type[word]]])
data = np.array(data)
labels = np.array(labels)
# + id="OcBTgA2MYOKA" colab_type="code" outputId="95dbca82-9f31-4388-e9ec-8515ae32a252" colab={"base_uri": "https://localhost:8080/", "height": 35}
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters = 2,random_state = 12345)
kmeans.fit(data)
kmeans.cluster_centers_.shape
# + id="ZGqggLWhY12p" colab_type="code" outputId="d2a5e01e-9b09-4f28-db87-98278ce4ff47" colab={"base_uri": "https://localhost:8080/", "height": 87}
kmeans.labels_,labels
# + [markdown] id="U5a33-0T6HLJ" colab_type="text"
# ## Keep in mind that we have to perform some kind of normalization of the word-vectors to account for multiple comparisons
# + id="UXs-rKc_6WBl" colab_type="code" outputId="5e6c8dcc-abc9-41bd-961e-e099f81c1042" colab={"base_uri": "https://localhost:8080/", "height": 976}
dissimilarity = distance.squareform(distance.pdist(
data - data.mean(1).reshape(-1,1), # normalize the representation for each of the word
metric='cosine'))
# if you want to use seaborn.clustermap, don't run the next line
# np.fill_diagonal(dissimilarity,np.nan)
dissimilarity = pd.DataFrame(dissimilarity,columns=words)
dissimilarity.index = words
g = sns.clustermap(dissimilarity,
xticklabels = True,
yticklabels = True,
figsize = (14,14),
cmap = plt.cm.coolwarm)
g.fig.axes[2].axhline(36 / 2,linestyle = '--', color = 'black', alpha = 1.)
g.fig.axes[2].axvline(36 / 2,linestyle = '--', color = 'black', alpha = 1.)
# + id="WZd_nMeD2R0c" colab_type="code" colab={}
dissimilarity = distance.squareform(distance.pdist(
data - data.mean(1).reshape(-1,1), # normalize the representation for each of the word
metric='cosine'))
# if you want to use seaborn.clustermap, don't run the next line
np.fill_diagonal(dissimilarity,np.nan)
dissimilarity = pd.DataFrame(dissimilarity,columns=words)
dissimilarity.index = words
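# The two cells above repeat the same recipe: subtract each row's mean, then take pairwise cosine distances. As a reusable function over any (n_items, n_features) matrix (a sketch mirroring the code above; `cosine_dissimilarity` is my own name):

```python
import numpy as np
from scipy.spatial import distance

def cosine_dissimilarity(features):
    """Pairwise cosine distance after removing each row's mean."""
    features = np.asarray(features, dtype=float)
    centered = features - features.mean(axis=1, keepdims=True)
    return distance.squareform(distance.pdist(centered, metric='cosine'))
```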
# + id="w-gJNCmT2Yzk" colab_type="code" outputId="e444dfce-9738-4097-ee01-fd4c1e4cfc55" colab={"base_uri": "https://localhost:8080/", "height": 966}
fig,ax = plt.subplots(figsize = (14,14))
ax = sns.heatmap(dissimilarity,
xticklabels = True,
yticklabels = True,
ax = ax,
cmap = plt.cm.coolwarm,)
_ = ax.set(title = 'Red = dissimilar, Blue = similar')
ax.axhline(36 / 2,linestyle = '--', color = 'black', alpha = 1.)
ax.axvline(36 / 2,linestyle = '--', color = 'black', alpha = 1.)
# + [markdown] id="ajZwT7DE8LSk" colab_type="text"
# # Use machine learning to demonstrate the robustness of the clustering
# + id="U1ZjgfQ58mUB" colab_type="code" colab={}
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeavePOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils import shuffle
# + id="PET51Rft8VEu" colab_type="code" colab={}
features = data.copy()
labels = np.array([labelize_map[word_type_map[word_type[word]]] for word in words])
groups = words.copy()
cv = LeavePOut(p = 2)
results = dict(
fold = [],
score = [],
test_word1 = [],
test_word2 = [],
)
for fold, (idx_train, idx_test) in enumerate(cv.split(features, labels, groups=groups)):
    X_train, y_train = features[idx_train], labels[idx_train]
    X_test, y_test = features[idx_test], labels[idx_test]
    X_train, y_train = shuffle(X_train, y_train)
    test_pairs = groups[idx_test]
    clf = make_pipeline(StandardScaler(), LogisticRegression(solver='liblinear', random_state=12345))
    clf.fit(X_train, y_train)
    preds = clf.predict_proba(X_test)[:, -1]
    score = np.abs(preds[0] - preds[1])
    results['fold'].append(fold + 1)
    results['score'].append(score)
    results['test_word1'].append(test_pairs[0].decode('UTF-8'))
    results['test_word2'].append(test_pairs[1].decode('UTF-8'))
results_to_save = pd.DataFrame(results)
# + id="sgNDOiFt8aaR" colab_type="code" outputId="21c4a95e-85e5-4d40-b4c1-298c158ccb32" colab={"base_uri": "https://localhost:8080/", "height": 424}
results_to_save
# + id="jygFiIRo9Cn9" colab_type="code" colab={}
idx_map = {word.decode('UTF-8'):idx for idx,word in enumerate(words)}
# + id="CBAPmbAP9yZj" colab_type="code" colab={}
decode_distance = np.zeros((36, 36))
for ii, row in results_to_save.iterrows():
    decode_distance[idx_map[row['test_word1']],
                    idx_map[row['test_word2']]] = row['score']
    decode_distance[idx_map[row['test_word2']],
                    idx_map[row['test_word1']]] = row['score']
np.fill_diagonal(decode_distance, np.nan)
# + id="EH-IDc4hCUJ3" colab_type="code" outputId="c9594c78-f560-474a-b7fc-06d76b3645e9" colab={"base_uri": "https://localhost:8080/", "height": 966}
decode_distance = pd.DataFrame(decode_distance,index = words,columns=words)
fig,ax = plt.subplots(figsize = (14,14))
ax = sns.heatmap(decode_distance,
xticklabels = True,
yticklabels = True,
ax = ax,
cmap = plt.cm.coolwarm,)
_ = ax.set(title = 'Red = dissimilar, Blue = similar')
ax.axhline(36 / 2,linestyle = '--', color = 'black', alpha = 1.)
ax.axvline(36 / 2,linestyle = '--', color = 'black', alpha = 1.)
# + [markdown] id="2nYWfxnOqd-C" colab_type="text"
# # Fit an encoding model to predict the BOLD signals given the words
# + id="wV-1bGNzC5Te" colab_type="code" colab={}
BOLD_id = 'https://drive.google.com/open?id=1d4y-6myFog7h7V_Z-3-cepM-v0gmbTQL'.split('id=')[-1]
downloaded = drive.CreateFile({'id':BOLD_id})
downloaded.GetContentFile(f'lh_fusif.npy')
event_id = 'https://drive.google.com/open?id=1MuwdvHX20OtStLqhDO1eIlHpUMA-oYgX'.split('id=')[-1]
downloaded = drive.CreateFile({'id':event_id})
downloaded.GetContentFile(f'lh_fusif.csv')
# + id="7eRxGScntTQ9" colab_type="code" colab={}
from sklearn import linear_model,metrics
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GroupShuffleSplit,cross_validate
from collections import defaultdict
# + id="aM_mdF7vq3-0" colab_type="code" colab={}
fmri_data_ = np.load("lh_fusif.npy")
df_data_ = pd.read_csv('lh_fusif.csv')
word2vec_vec = pd.DataFrame(data.T,columns = words)
# + id="oTgTntYIrij0" colab_type="code" colab={}
def add_track(df_sub):
    n_rows = df_sub.shape[0]
    temp = '+'.join(str(item + 10) for item in df_sub['index'].values)
    df_sub = df_sub.iloc[1, :].to_frame().T
    df_sub['n_volume'] = n_rows
    df_sub['time_indices'] = temp
    return df_sub

def groupby_average(fmri, df, groupby=['trials']):
    BOLD_average = np.array([np.mean(fmri[df_sub.index], 0) for _, df_sub in df.groupby(groupby)])
    df_average = pd.concat([add_track(df_sub) for ii, df_sub in df.groupby(groupby)])
    return BOLD_average, df_average
# + id="UmWRWz49rBbE" colab_type="code" outputId="0302e25a-ef51-40dc-ba88-015b08afc415" colab={"base_uri": "https://localhost:8080/", "height": 52}
label_map = dict(animal=[1, 0],
                 tool=[0, 1])
for condition in ['read', 'reenact']:
    # pick condition
    idx_pick = df_data_['context'] == condition
    fmri_data = fmri_data_[idx_pick]
    df_data = df_data_[idx_pick]
    fmri_data, df_data = groupby_average(fmri_data,
                                         df_data.reset_index(),
                                         groupby=['id'])
    df_data = df_data.reset_index()
    # something we need for defining the cross validation method
    BOLD = fmri_data.copy()
    targets = np.array([label_map[item] for item in df_data['targets'].values])
    groups = df_data['words'].values
    # to remove the low variant voxels and standardize the BOLD signal
    variance_threshold = VarianceThreshold()
    BOLD = variance_threshold.fit_transform(BOLD)
    scaler = StandardScaler()
    BOLD = scaler.fit_transform(BOLD)
    embedding_features = np.array([word2vec_vec[word.lower().encode()] for word in df_data['words']])
    # define the cross validation strategy
    cv = GroupShuffleSplit(n_splits=100,
                           test_size=0.2,
                           random_state=12345)
    idxs_train, idxs_test = [], []
    for idx_train, idx_test in cv.split(BOLD, targets, groups=groups):
        idxs_train.append(idx_train)
        idxs_test.append(idx_test)
    # define the encoding model
    encoding_model = linear_model.Ridge(
        alpha=100,  # L2 penalty; higher values shrink the weights toward (but not exactly to) zero
        normalize=True,  # normalize the features before fitting
        random_state=12345,  # random seeding
    )
    # black box cross validation
    res = cross_validate(
        encoding_model,
        embedding_features,
        BOLD,
        groups=groups,
        cv=zip(idxs_train, idxs_test),
        n_jobs=8,
        return_estimator=True,
    )
    n_coef = embedding_features.shape[1]
    n_obs = int(embedding_features.shape[0] * 0.8)
    preds = np.array([model.predict(embedding_features[idx_test]) for model, idx_test in zip(res['estimator'], idxs_test)])
    scores = np.array([metrics.r2_score(BOLD[idx_test], y_pred, multioutput='raw_values') for idx_test, y_pred in zip(idxs_test, preds)])
    mean_variance = np.array([np.mean(temp[temp >= 0]) for temp in scores])
    positive_voxels = np.array([np.sum(temp >= 0) for temp in scores])
    corr = [np.mean([np.corrcoef(a, b).flatten()[1] for a, b in zip(BOLD[idx_test], pred)]) for idx_test, pred in zip(idxs_test, preds)]
    # saving the results
    results = {}
    results['condition'] = [condition] * 100
    results['fold'] = np.arange(100) + 1
    results['positive voxels'] = positive_voxels
    results['mean_variance'] = mean_variance
    results['corr'] = corr
    results_to_save = pd.DataFrame(results)
    results_to_save.to_csv(f'{condition}.csv', index=False)
    print('fast text --> BOLD @ left-fusiform', f'{condition:8s}, mean variance explained = {mean_variance.mean():.4f} with {n_obs} instances of {n_coef} features that explains {positive_voxels.mean():.2f} positive voxels')
# + id="-xm1iQDSv4ww" colab_type="code" colab={}
# File: 8_2_extract_representations_from_words.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: CovHosEnv
# language: python
# name: covhosenv
# ---
# # 1. Clean and Explore
# This notebook contains the process used to import, examine, and clean the dataset containing COVID hospital cases by county in the contiguous USA.
#
# **Goal: Plot the hospital cases on a map of the contiguous USA and size these points based on number of patients.**
# # 1.0.1 Import Packages
# +
import numpy as np #linear algebra
import pandas as pd #data processing, CSV file I/O
import matplotlib.pyplot as plt
import seaborn as sns
import os
# Import libraries for reading geographic data
import geopandas as gpd
from shapely.geometry import Point, shape
import io
import addfips
# %matplotlib inline
# -
os.getcwd()
# +
# List files in the raw data directory
for dirname, _, filenames in os.walk('../data/raw/'):
    for filename in filenames:
        print(filename)
# -
# # 1.1.1 Import and Examine COVID Hospital Data
# Import dataset as a pandas DataFrame. Examine metadata and univariate descriptive statistics.
# <br><br>Data includes:
# - County Name
# - Facility_Name
# - Full_Address
# - State
# - Total (Covid patients)
# - latitudes
# - longitudes
# Merged_Final.csv --> contains USA hospitals with COVID by county
# Put into dataframe
hosp = pd.read_csv('../data/raw/Merged_Final.csv')
# View data
hosp.head()
# Get rows and columns
hosp.shape
# List data types and counts
hosp.info()
# Summary statistics for numerical columns
hosp.describe(include=[np.number])
# Summary statistics for object and categorical columns
hosp.describe(include=[object, pd.Categorical])
# ## 1.1.2 Clean Hospital Data
# Drop unnecessary rows and remove duplicates. Reset index for a clean dataframe.
# Check out the number of states listed
hosp['State'].nunique()
# Notice it includes Puerto Rico
hosp['State'].unique()
# Since we are only concerned with the 50 US states, we'll drop rows containing PR
hosp = hosp[hosp.State != 'PR']
hosp.shape
# Check for duplicate hospitals
hosp['Full_Address'].value_counts()
# Examine hospitals with more than one listing
hosp[hosp['Full_Address'] == '47 SOUTH FOURTH ST,ROLLING FORK,SHARKEY,MS']
hosp[hosp['Full_Address'] == '101 CIRCLE DRIVE,HILLSBORO,HILL,TX']
# Review areas around index to determine if there is an obvious order to hospital input
hosp.iloc[2155:2163]
# No obvious pattern found
# Drop 2nd occurrence of each duplicate
hosp.drop([2212,3947], inplace=True)
# Check rows have been dropped
hosp.loc[2211:2213]
hosp.loc[3945:3948]
# Notice that the index and iloc do not match anymore so we had to use loc
# and now have to reset index
# +
# Reset index
hosp.reset_index(drop=True, inplace=True)
# Check index has been reset
hosp.iloc[3945:3948]
# -
# ## 1.1.3 Add County and State FIPS
# This allows for common identifiers between dataframes and geodataframes, eventually allowing merges when necessary.
# +
hosp['STATEFP'] = ""
hosp['COUNTYFP'] = ""

af = addfips.AddFIPS()
# Use .loc for assignment to avoid pandas chained-assignment warnings
for i in range(len(hosp)):
    hosp.loc[i, 'STATEFP'] = af.get_state_fips(hosp.loc[i, 'State'])
    hosp.loc[i, 'COUNTYFP'] = af.get_county_fips(hosp.loc[i, 'County Name'], state=hosp.loc[i, 'State'])
# -
# Check that all rows have state and county fp entered
hosp[hosp.COUNTYFP.isnull()]
# +
# Look up FIPS for each county / state and manually enter (using .loc to avoid chained assignment)
illinois = [1143, 1171, 1190]
for i in illinois:
    hosp.loc[i, 'COUNTYFP'] = 17099

louisiana = [1697, 1718]
for l in louisiana:
    hosp.loc[l, 'COUNTYFP'] = 22059

hosp.loc[3773, 'COUNTYFP'] = 48123
# -
hosp.isnull().sum()
# ## 1.1.3 Reduce Memory Usage of Hospital Data
# Change data types of columns
# Inspect the data types of each column
hosp.dtypes
# Find memory usage of each column
original_mem = hosp.memory_usage(deep=True)
original_mem
# We know from our examination above that State only has 50 unique values
# It can be changed to a categorical variable
# Can anything else? Find the percentage of unique values in each category
hosp.select_dtypes(include=['object']).nunique()/hosp.select_dtypes(include=['object']).count()
# We see that County Name has about 29% unique values,
# Facility has 97%, Full_Address has 100%, and State has 1%
# The best candidates for categorical are the State and STATEFP columns
hosp['State'] = hosp['State'].astype('category')
hosp['STATEFP'] = hosp['STATEFP'].astype('category')
hosp['COUNTYFP'] = pd.to_numeric(hosp['COUNTYFP'])
hosp.dtypes
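# The unique-value-ratio heuristic used above can be packaged as a helper that nominates object columns for conversion to 'category' (a sketch; the 5% default threshold is an arbitrary choice, not from the original analysis):

```python
import pandas as pd

def categorical_candidates(df, max_unique_ratio=0.05):
    """Object columns whose unique-value ratio is low enough to benefit from 'category'."""
    obj = df.select_dtypes(include=['object'])
    ratios = obj.nunique() / obj.count()
    return list(ratios[ratios <= max_unique_ratio].index)

# Usage: categorical_candidates(hosp)
```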
# compute new memory usage
new_mem = hosp.memory_usage(deep=True)
new_mem
# Compare original with updated memory usage
new_mem/original_mem
# State has shrunk to 3.8% of its original size.<br>
# STATEFP has shrunk to 4% of its original size.<br>
# COUNTYFP has shrunk to 13% of its original size.
# ## 1.1.4 Visualize Hospital Data Statistics
# Now that the dataframe is clean, let's visualize the statistics to see if anything else needs to be done
# Histogram of Hospital Frequency
_ = plt.figure(figsize=(14,8))
_ = plt.hist(hosp['State'], bins=hosp['State'].nunique(), align='left')
_ = plt.xlabel('States')
_ = plt.ylabel('Counts of Hospitals')
_ = plt.title('Hospital Counts by State with Covid Cases')
plt.show()
# Boxplot of hospital total cases
plt.figure(figsize=(14,8))
sns.boxplot(x=hosp['State'], y= hosp['Total'], data=hosp)
plt.show()
# Swarmplot to see how often values occur
plt.figure(figsize=(20,8))
sns.swarmplot(x=hosp['State'], y= hosp['Total'], data=hosp)
plt.show()
# ## 1.1.5 Convert Hospital DataFrame to GeoDataFrame for Plotting
# convert hosp dataframe into geodataframe to allow for plotting of hospitals in a later step
# geometry = [Point(xy) for xy in zip(hosp.longitudes, hosp.latitudes)]
gdf_hosp = gpd.GeoDataFrame(hosp, crs='EPSG:4326', geometry=gpd.points_from_xy(hosp.longitudes, hosp.latitudes))
gdf_hosp.head()
# ## 1.1.6 Save Hospital Data as Geodataframe
# Save geodf of USA hospitals (hosp) --> ../data/interim
# 1_1_usa_hospitals_data.gpkg
gdf_hosp.to_file('../data/interim/1_1_usa_hospitals_data.gpkg', driver='GPKG', encoding='utf-8')
# # 1.2.1 Import and Examine USA County Population Data
# Import dataset as a pandas DataFrame. Examine metadata and univariate descriptive statistics.<br>
# Data obtained from Census.gov<br>
# https://www2.census.gov/programs-surveys/popest/datasets/2010-2019/counties/totals/
# co-est2019-alldata.csv --> contains US population estimates by county
# Put into dataframe
pop = pd.read_csv('../data/raw/co-est2019-alldata.csv', sep=',', encoding='latin-1')
pop.head()
# ### 1.2.2 Simplify Population Data & Reduce Memory
# Only SUMLEV, Region, Division, State, County, State Name, City Name, and the 2019 population estimate are needed
pop = pop[['SUMLEV','REGION','DIVISION','STATE','COUNTY','STNAME','CTYNAME','POPESTIMATE2019']]
pop.tail()
pop.shape
pop.info()
# Appears to be no null values, double check by counting
pop.isnull().sum().sum()
# Get summary statistics of numerical columns
pop.describe(include=[np.number]).T
# Get summary statistics of object columns
pop.describe(include=[object]).T
# State name can be categorical
pop['STNAME'] = pop['STNAME'].astype('category')
pop.dtypes
# +
# List states to determine why there are 51
states = pop.STNAME.unique()
for i in states:
    print(i)
# states includes the District of Columbia
# -
# Count number of unique for all columns
pop.nunique()
# ### 1.2.3 Visualize Population Data
_ = plt.figure(figsize=(14,8))
_ = plt.hist(pop['STNAME'], bins=50, align='left')
_ = plt.xticks(rotation=90)
_ = plt.xlabel('States')
_ = plt.ylabel('Count of Counties')
plt.show()
plt.figure(figsize=(14,8))
sns.boxplot(x=pop['STNAME'], y=pop['POPESTIMATE2019'], data=pop)
plt.xticks(rotation=90)
plt.show()
# ### 1.3.1 Import and Examine USA County Shapefile
# Class Codes and Definitions For Reference<br>
# https://www.census.gov/library/reference/code-lists/class-codes.html#:~:text=The%20class%20(CLASSFP)%20code%20defines,gazetteer%20files%2C%20and%20other%20products.
# +
# Read_file method of importing data when it's saved to raw folder
# Refer to list of files in raw data directory listed in section 1.0.1
raw_path = '../data/raw/'
dbf = 'tl_2017_us_county.dbf'
prj = 'tl_2017_us_county.prj'
shp = 'tl_2017_us_county.shp'
shx = 'tl_2017_us_county.shx'
usa_counties = gpd.read_file(raw_path + shp)
print('Shape of dataframe: {}'.format(usa_counties.shape))
print('Projection of dataframe: {}'.format(usa_counties.crs))
usa_counties.tail()
# -
# Examine information about dataframe
usa_counties.info()
# Get summary statistics of objects
usa_counties.describe(include=[object]).T  # np.object was removed in NumPy 1.24
# ### 1.3.2 Reduce Memory Usage of County Data
# We notice there are quite a few columns that can be typecasted as category or numeric.
orig_mem = usa_counties.memory_usage(deep=True)
orig_mem
# +
# Change data type to categorical
usa_counties['STATEFP'] = usa_counties['STATEFP'].astype('category')
usa_counties['COUNTYFP'] = usa_counties['COUNTYFP'].astype('category')
usa_counties['LSAD'] = usa_counties['LSAD'].astype('category')
usa_counties['CLASSFP'] = usa_counties['CLASSFP'].astype('category')
usa_counties['MTFCC'] = usa_counties['MTFCC'].astype('category')
usa_counties['FUNCSTAT'] = usa_counties['FUNCSTAT'].astype('category')
# Change data type to numeric
usa_counties['COUNTYNS'] = pd.to_numeric(usa_counties['COUNTYNS'])
usa_counties['GEOID'] = pd.to_numeric(usa_counties['GEOID'])
usa_counties['CSAFP'] = pd.to_numeric(usa_counties['CSAFP'])
usa_counties['CBSAFP'] = pd.to_numeric(usa_counties['CBSAFP'])
usa_counties['METDIVFP'] = pd.to_numeric(usa_counties['METDIVFP'])
usa_counties.dtypes
# -
new_mem = usa_counties.memory_usage(deep=True)
new_mem/orig_mem
# ### 1.3.3 Consolidate County Data
# GeoDataFrame to include the 50 states and D.C.
# Check out FIPS Codes in dataset to see number of unique States
# https://www.nrcs.usda.gov/wps/portal/nrcs/detail/?cid=nrcs143_013696
usa_counties.STATEFP.nunique()
# +
# 56 tells us that we are including more than just the 50 states
# We want a dataframe that is just the contiguous USA
# and one that includes the 50 states (plus DC)
# American Samoa (60), Guam (66), Northern Mariana Islands (69)
# Puerto Rico (72), Virgin Islands (78), Alaska (02), Hawaii (15)
# USA States
usa_state_counties = usa_counties[~usa_counties.STATEFP.isin(['60','66','69','72','78'])]
print(usa_state_counties.shape)
# Contiguous USA
usa_cont_counties = usa_state_counties[~usa_state_counties.STATEFP.isin(['02','15'])]
print(usa_cont_counties.shape)
# -
# Compare to original
usa_counties.shape
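# The `~Series.isin` filter used above is worth seeing in isolation. A toy sketch (made-up rows, same FIPS codes as the comments above):

```python
import pandas as pd

# Toy stand-in for the county frame, mixing states and territories
df = pd.DataFrame({'STATEFP': ['01', '02', '15', '60', '72'],
                   'NAME': ['Alabama', 'Alaska', 'Hawaii', 'Am. Samoa', 'Puerto Rico']})

territories = ['60', '66', '69', '72', '78']
states_only = df[~df.STATEFP.isin(territories)]                    # drop territories
contiguous = states_only[~states_only.STATEFP.isin(['02', '15'])]  # drop AK, HI

print(states_only.NAME.tolist())  # ['Alabama', 'Alaska', 'Hawaii']
print(contiguous.NAME.tolist())   # ['Alabama']
```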
# ### 1.4.1 Import and Examine USA State Shapefile
# +
# Import the shapefile with geopandas read_file, using the copy saved in the raw data folder
# Source: https://www.census.gov/cgi-bin/geo/shapefiles/index.php?year=2019&layergroup=States+%28and+equivalent%29
# Refer to list of files in raw data directory listed in section 1.0.1
raw_path = '../data/raw/'
dbf = 'tl_2019_us_state.dbf'
prj = 'tl_2019_us_state.prj'
shp = 'tl_2019_us_state.shp'
shx = 'tl_2019_us_state.shx'
usa_states = gpd.read_file(raw_path + shp)
print('Shape of dataframe: {}'.format(usa_states.shape))
print('Projection of dataframe: {}'.format(usa_states.crs))
usa_states.tail()
# -
usa_states.info()
# Review summary statistics for objects
usa_states.describe(include=[object]).T  # np.object was removed in NumPy 1.24
# Determine which columns to drop
usa_states.nunique()
# Drop columns with single value & duplicate columns
columns = ['LSAD','MTFCC','FUNCSTAT','GEOID']
usa_states.drop(columns, axis=1, inplace=True)
usa_states.head()
# ### 1.4.2 Reduce Memory Usage of State Data
usa_states.dtypes
# +
# Convert to numeric
num_cols = ['REGION','DIVISION','STATEFP','STATENS','ALAND','AWATER']
for col in num_cols:
usa_states[col] = pd.to_numeric(usa_states[col])
usa_states.dtypes
# -
# ### 1.4.3 Consolidate State Data
# GeoDataFrame to include the 50 states and D.C.
usa_states['STATEFP'].nunique()
# +
# USA States
usa_states = usa_states[~usa_states.STATEFP.isin(['60','66','69','72','78'])]
print(usa_states.shape)
# Contiguous USA
usa_cont_states = usa_states[~usa_states.STATEFP.isin(['02','15'])]
print(usa_cont_states.shape)
# -
# ### 1.5.1 Combine Visuals
gdf_hosp.head()
# +
# Plot all counties with hospital data
fig, ax = plt.subplots(figsize=(14,12))
ax.set(title='Hospitals with Covid Cases in the USA')
# plot states
usa_states.plot(ax=ax, edgecolor='black', facecolor='none')
# plot counties
usa_state_counties.plot(ax=ax)
# plot hospitals
gdf_hosp.plot(markersize=0.2, color='red', ax=ax)
xlim = ([usa_states.total_bounds[0]-1, usa_states.total_bounds[2]+1])
ylim = ([usa_states.total_bounds[1]-1, usa_states.total_bounds[3]+1])
ax.set_xlim(xlim)
ax.set_ylim(ylim)
# +
# Overlay maps to show hospitals with Covid cases in the contiguous USA
fig, ax = plt.subplots(figsize=(14,12))
ax.set_title('Hospitals with Covid Cases in the Contiguous USA',fontsize=20)
# Plot counties
usa_cont_counties.plot(ax=ax)
# Plot states
usa_cont_states.plot(ax=ax, edgecolor='black', facecolor='none')
# Plot hospitals
gdf_hosp.plot(markersize=gdf_hosp['Total']/1000, color='red', ax=ax)
# Limit to the contiguous USA
xlim = ([usa_cont_states.total_bounds[0]-1, usa_cont_states.total_bounds[2]+1])
ylim = ([usa_cont_states.total_bounds[1]-1, usa_cont_states.total_bounds[3]+1])
ax.set_xlim(xlim)
ax.set_ylim(ylim)
# -
# ### 1.5.2 Relationship Between DataFrames
# ***Hospital Data and GeoData***
# The <i>gdf_hosp</i> GeoDataFrame includes the total number of cases per hospital with Covid patients, along with county name and state name.
#
# ***Population Data***
# The <i>pop</i> dataframe includes state and county FP identifiers, along with county name and state name.
#
# ***County GeoData***
# The <i>usa_state_counties</i> and <i>usa_cont_counties</i> GeoDataFrames include state FP, county FP, and county name.
#
# ***State GeoData***
# The <i>usa_states</i> and <i>usa_cont_states</i> GeoDataFrames include state FP and state name.
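# A hedged sketch of how these frames could be joined in later steps (toy frames with made-up rows; the real join keys are the FIPS columns described above). The census population frame carries numeric STATE/COUNTY codes while the shapefile carries zero-padded strings, so one side needs padding before a merge:

```python
import pandas as pd

# Toy stand-ins for pop and usa_state_counties (column names follow the datasets above)
pop_toy = pd.DataFrame({'STATE': [1, 1], 'COUNTY': [1, 3],
                        'POPESTIMATE2019': [55869, 223234]})
cty_toy = pd.DataFrame({'STATEFP': ['01', '01'], 'COUNTYFP': ['001', '003'],
                        'NAME': ['Autauga', 'Baldwin']})

# Census FIPS convention: 2-digit state + 3-digit county; zero-pad the numeric side
pop_toy['GEOID'] = (pop_toy['STATE'].astype(str).str.zfill(2)
                    + pop_toy['COUNTY'].astype(str).str.zfill(3))
cty_toy['GEOID'] = cty_toy['STATEFP'] + cty_toy['COUNTYFP']

merged = cty_toy.merge(pop_toy[['GEOID', 'POPESTIMATE2019']], on='GEOID', how='left')
print(merged)
```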
# ### 1.6.1 Save Datasets
'''# Save geodf of USA hospitals (hosp) --> ../data/interim
# 1_1_usa_hospitals_data.gpkg
gdf_hosp.to_file('../data/interim/1_1_usa_hospitals_data.gpkg', layer='hospitals', driver='GPKG')
# Save df of US Census by county (pop) --> ../data/interim
# 1_1_usa_population_county.csv
pop.to_csv('../data/interim/1_1_usa_population_county.csv')
# Save geodf of all USA state counties (usa_state_counties) --> ../data/interim
# 1_1_usa_counties_geo.gpkg
usa_state_counties.to_file('../data/interim/1_1_usa_counties_geo.gpkg', layer='USA counties', driver='GPKG')
# Save geodf of contiguous USA counties (usa_cont_counties) --> ../data/interim
# 1_1_usa_contiguous_counties_geo.shp
usa_cont_counties.to_file('../data/interim/1_1_usa_contiguous_counties_geo.gpkg', layer='USA counties contiguous', driver='GPKG')
# Save geodf of USA states (usa_states) --> ../data/interim
# 1_1_usa_state_borders_geo
usa_states.to_file('../data/interim/1_1_usa_state_borders_geo.gpkg')
# Save geodf of USA cont states (usa_cont_states)
# 1_1_usa_cont_state_borders_geo
usa_cont_states.to_file('../data/interim/1_1_usa_cont_state_borders_geo.gpkg')
# Using geopackage format due to faster rendering and contained in single file
'''
|
notebooks/Clean_and_Plot_Contiguous_USA_Covid_Cases.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Historical Shape Indicator (HSI), Visa Market
import pandas as pd
from pandas import DatetimeIndex
import datetime
import os
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook
import scipy
from scipy import stats as scs
import statsmodels
from statsmodels import stats
from statsmodels.stats import weightstats
from statsmodels.stats.power import TTestIndPower
import sys
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
import seaborn as sb
sb.set()
from alpha_vantage.timeseries import TimeSeries
from datetime import datetime, timedelta
# %matplotlib inline
# #### Raw Data
def get_raw(sym='V'):
'''
download data and return data dictionary
'''
# download historical prices
ts = TimeSeries(key='enter your access key')
# Get json object with the intraday data and another with the call's metadata
data, meta_data = ts.get_daily_adjusted(sym, outputsize='full')
return data
# #### Format Raw Data
def format_raw(raw_dict):
'''
import raw dictionary
format column names and sort date ascending
return dataframe
'''
# reformat
data = raw_dict.copy()
df_raw = pd.DataFrame.from_dict(data).T
df_raw.reset_index(level=0, inplace=True)
df_raw = df_raw.rename(index=str, columns={'index':'date',
'1. open': 'open',
'2. high': 'high',
'3. low': 'low',
'4. close':'close',
'5. adjusted close':'adj_close',
'6. volume':'volume',
'7. dividend amount':'dividend',
'8. split coefficient':'split',
})
df_raw = df_raw.sort_values(by='date', ascending=True)
df_raw = df_raw.reset_index(drop=True)
df_raw.date = pd.to_datetime(df_raw.date)
return df_raw
def scale_adjusted(df_raw):
'''
import raw dataframe
scale open,high,low, close to adjusted close
return updated dataframe
'''
df = df_raw.copy()
df_scale = pd.DataFrame()
close = df.close.to_numpy().astype(float)
adj = df.adj_close.to_numpy().astype(float)
scale = adj / close
df_scale['date'] = df['date'].copy()
df_scale['open']=df.open.to_numpy().astype(float)*scale
df_scale['high']=df.high.to_numpy().astype(float)*scale
df_scale['low']=df.low.to_numpy().astype(float)*scale
df_scale['close']=df.close.to_numpy().astype(float)*scale
return df_scale
# #### Preprocess Data
def compute_log_returns(prices):
'''
compute log returns
'''
return np.log(prices) - np.log(prices.shift(1))
def shift_returns(returns, shift_n):
'''
compute shift returns for trade assessment
'''
return returns.shift(shift_n)
def compute_proj(prices, lookahead_days):
'''
compute projected future lookahead returns
lookahead_days is the number of days ahead we want to predict
'''
return (prices.shift(-lookahead_days) - prices)/prices
def compute_day_shape(prices, sigmas, dayspan):
'''
compute one day shape
'''
abs_deltas = (prices) - (prices.shift(dayspan))
s_ratios = abs_deltas / sigmas
ups = 3*(s_ratios>1)
downs = 1*(s_ratios<-1)
neuts = 2*((s_ratios>=-1)&(s_ratios<=1))
return (ups+downs+neuts)
def compute_shape(dayshape, dayspan):
'''
compute 5 day shape ordinals
'''
ago5s = 10000*(dayshape.shift(4*dayspan))
ago4s = 1000*(dayshape.shift(3*dayspan))
ago3s = 100*(dayshape.shift(2*dayspan))
ago2s = 10*(dayshape.shift(1*dayspan))
return (ago5s+ago4s+ago3s+ago2s+dayshape)
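# A worked example of the shape encoding, with the helpers re-stated minimally so this cell runs on its own (toy prices, unit sigmas): each day is coded up=3, neutral=2, down=1, and five daily codes are packed into one base-10 ordinal with the most recent day last.

```python
import pandas as pd

# Minimal re-statement of compute_day_shape above
def day_shape(prices, sigmas, span):
    r = (prices - prices.shift(span)) / sigmas
    return 3 * (r > 1) + 1 * (r < -1) + 2 * ((r >= -1) & (r <= 1))

prices = pd.Series([100.0, 102.0, 101.0, 105.0, 104.0, 104.5])
sigmas = pd.Series([1.0] * 6)

ds = day_shape(prices, sigmas, 1)
print(ds.tolist())  # [0, 3, 2, 3, 2, 2] (0 = undefined first day)

# Five daily codes packed into one ordinal, as in compute_shape above
shape = (10000 * ds.shift(4) + 1000 * ds.shift(3)
         + 100 * ds.shift(2) + 10 * ds.shift(1) + ds)
print(int(shape.iloc[-1]))  # 32322
```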
def preprocess(df):
'''
compute statistics
add return parameters
add lookahead projections of 7 days
use day shape spans of 1, 3 and 5 days
build shape ordinals
'''
df_for = df.copy()
# raw data overlaps
shifts = [['o1','h1','l1','c1'],
['o2','h2','l2','c2'],
['o3','h3','l3','c3'],
['o4','h4','l4','c4'],
]
# format df to calculate price estimates and standard deviations
for j, shift in zip(range(1,6),shifts):
df_for[shift[0]] = df_for.open.shift(-j)
df_for[shift[1]] = df_for.high.shift(-j)
df_for[shift[2]] = df_for.low.shift(-j)
df_for[shift[3]] = df_for.close.shift(-j)
# define price estimate columns for 1,3,5 day spans
p1_col = df_for.loc[:,"open":"close"].astype(float)
p3_col = df_for.loc[:,"open":"c2"].astype(float)
p5_col = df_for.loc[:,"open":"c4"].astype(float)
p_cols = [p1_col, p3_col, p5_col]
# compute price estimates and standard deviations for spans
stats = [['pe1','sd1'],['pe3','sd3'],['pe5','sd5']]
for stat, p_col in zip(stats, p_cols):
df_for[stat[0]] = p_col.mean(axis=1)
df_for[stat[1]] = p_col.std(axis=1)
# keep date but leave raw data behind
df_prep = df_for[['date','pe1','sd1','pe3','sd3','pe5','sd5']].copy()
# add daily returns to df based on 1 day price estimates
daily_returns = compute_log_returns(df_prep['pe1'])
df_prep['log_ret'] = daily_returns
# compute shift returns
shift_1dlog = shift_returns(df_prep['log_ret'],-1)
df_prep['shift_ret'] = shift_1dlog
# add projections to df
lookahead_days = 7
aheads = compute_proj(df_prep['pe1'], lookahead_days)
df_prep['proj'] = aheads
# add day shapes to df
dayshapes = ['ds1','ds3','ds5']
dayspans = [1,3,5]
for shape, stat, span in zip(dayshapes, stats, dayspans):
df_prep[shape] = compute_day_shape(df_prep[stat[0]], df_prep[stat[1]], span)
# add shapes to df
shapes = ['shp1','shp3','shp5']
for shape, dayshape, span in zip(shapes, dayshapes, dayspans):
df_prep[shape] = compute_shape(df_prep[dayshape], span)
#trim the head then format
df_trim = df_prep[25:].copy()
df_trim[['shp1','shp3','shp5']] = df_trim[['shp1','shp3','shp5']].astype(int)
return df_trim
def test_train_split(df_mkt, test_year):
'''
split preprocessed data into train and test dataframes
train data comes from years prior to test year
data in years beyond the test year is not used
'''
df = df_mkt.copy()
years = df.date.map(lambda x: x.strftime('%Y')).astype(int)
    # train on the two calendar years immediately before the test year
train = ((test_year-3 < years) & (years < test_year))
test = np.isin(years, test_year)
df_train = df[train].copy()
df_test = df[test].copy()
return df_train, df_test
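# A quick standalone check of the year-window masks used above (toy dates; the real function applies the same comparisons to the market frame):

```python
import pandas as pd

# Toy check of the 2-year-back training window in test_train_split
dates = pd.to_datetime(['2015-06-01', '2016-06-01', '2017-06-01',
                        '2018-06-01', '2019-06-01'])
years = dates.strftime('%Y').astype(int)
test_year = 2018

train = (test_year - 3 < years) & (years < test_year)
test = years == test_year

print(years[train].tolist())  # [2016, 2017]
print(years[test].tolist())   # [2018]
```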
# #### Shape Ranks
def compute_shaperank(df_train, shapename):
'''
enter preprocessed train data and shapename string
return HSI dataframe for that shapename
'''
shapes = df_train[shapename]
projs = df_train['proj']
s_list = list(set(shapes))
p_avgs = []
p_stds = []
for shape in s_list:
p_avgs.append((projs*(shapes==shape)).mean())
p_stds.append((projs*(shapes==shape)).std())
# initiate dataframe build
df_shape = pd.DataFrame()
df_shape['shape'] = s_list
df_shape['p_avg'] = p_avgs
df_shape['p_std'] = p_stds
# shape ratio as a mini sharpe
df_shape['p_srs'] = df_shape['p_avg']/df_shape['p_std']
df_shape = df_shape.sort_values(by=['p_srs'])
df_shape = df_shape.reset_index(drop=True)
# normalize shape ratios into indicator
short_range = df_shape['p_srs'].max() - df_shape['p_srs'].min()
short_min = df_shape['p_srs'].min()
df_shape['HSI'] = (df_shape['p_srs'] - short_min)/short_range
return df_shape
def build_hsi(df_train):
'''
import train dataframe
return completed shape dataframe
'''
df1 = compute_shaperank(df_train, 'shp1')
df3 = compute_shaperank(df_train, 'shp3')
df5 = compute_shaperank(df_train, 'shp5')
df_hsi = pd.concat({'shp1':df1, 'shp3':df3, 'shp5':df5}, axis=1)
return df_hsi
def assign_hsi(df, df_shape):
'''
for daily market data
lookup the HSI figures given shape ordinals
return updated dataframe with daily HSC assignment
'''
df_mkt = df.copy()
# HSI lookups
shapenames = ['shp1','shp3','shp5']
hsi_names = ['hsi1','hsi3','hsi5']
for sname, hsi_name in zip(shapenames, hsi_names):
lookups = []
s_list = df_shape[sname]['shape'].tolist()
for i,nrows in df_mkt.iterrows():
shp = nrows[sname]
# assign 0.5's for unknown shapes
if shp in s_list:
                lookups.append(df_shape[sname][df_shape[sname]['shape'] == shp]['HSI'].values.item())  # np.asscalar was removed from NumPy; .item() is the replacement
else:
lookups.append(0.5)
df_mkt[hsi_name] = lookups
# compile three into the average of the two closest
nearest_two = []
for i,nrows in df_mkt.iterrows():
v1 = nrows['hsi1']
v2 = nrows['hsi3']
v3 = nrows['hsi5']
diffs = np.abs([v1-v2, v2-v3, v1-v3])
sums = [v1+v2, v2+v3, v1+v3]
nearest_two.append(np.max((diffs==np.amin(diffs))*sums)/2)
df_mkt['HSC'] = nearest_two
return df_mkt
# #### Trade Rules
def compute_trades(indicator, highT, lowT):
'''
compare HSC to thresholds
return binaries of in/out days
'''
trades = []
inout = 0
for ind in indicator:
# from out to enter
if inout == 0:
if ind > highT:
trades.append(1)
inout = 1
else:
trades.append(0)
# from in to exit
else:
if ind < lowT:
trades.append(0)
inout = 0
else:
trades.append(1)
return trades
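# The enter/exit rule above is a hysteresis band: enter when the indicator crosses above the high threshold, and stay in until it drops below the low threshold. A self-contained check with the logic re-stated and a toy indicator series:

```python
# Minimal re-statement of the compute_trades hysteresis rule
def trades_from(indicator, highT, lowT):
    out, inout = [], 0
    for ind in indicator:
        if inout == 0 and ind > highT:    # from out to enter
            inout = 1
        elif inout == 1 and ind < lowT:   # from in to exit
            inout = 0
        out.append(inout)
    return out

# Enter above 0.75; stay in until the indicator drops below 0.25
print(trades_from([0.2, 0.8, 0.6, 0.4, 0.2, 0.9], 0.75, 0.25))  # [0, 1, 1, 1, 0, 1]
```

Note that 0.6 and 0.4 keep the position open even though they are below the entry threshold; that is the point of the two-threshold band.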
def opt_tresh(seedLow, seedHigh, step_range, df):
'''
successive approximation applied to optimizing thresholds
'''
df_mkt = df.copy()
bestL = 0
bestH = 0
bestR = 0
for i in range(20):
t_low = seedLow + step_range*i/20
for j in range(20):
t_high = seedHigh + step_range*j/20
trade = compute_trades(df_mkt['HSC'], t_high, t_low)
returns = df_mkt['shift_ret']*trade
expret = (np.exp(returns[1:].T.sum())-1)*100
if expret > bestR:
bestL = t_low
bestH = t_high
bestR = expret
return bestL, bestH
def thresholds(df_train):
'''
determine trade rule thresholds
'''
    # drop the trailing rows, whose 7-day lookahead projections are NaN
df = df_train.iloc[:-7].copy()
low = 0.25
high = 0.75
res = 0
r_values = [0.5,0.25,0.125]
for r in r_values:
low, high = opt_tresh((low-(r/2)),(high-(r/2)),r,df)
return low, high
# #### Analysis Functions
def compute_trade_returns(df):
'''
compute trade returns
'''
return df['shift_ret']*df['trade']
def statistical_test(df):
'''
    Welch's t-test (unequal variance) on equal-size samples.
    This is a two-sided test of the null hypothesis that the
    2 independent samples have identical average (expected) values.
    With a small p_value, the null hypothesis is rejected.
'''
all_ins = df[df['trade']==1]['shift_ret'].dropna()
all_outs = df[df['trade']==0]['shift_ret'].dropna()
if len(all_ins)<len(all_outs):
all_outs = np.asarray(np.random.choice(all_outs, len(all_ins)))
else:
all_ins = np.asarray(np.random.choice(all_ins, len(all_outs)))
results = statsmodels.stats.weightstats.ttest_ind(all_ins, all_outs,
alternative="two-sided",
usevar="unequal")
t_value = results[0]
p_value = results[1]
return t_value, p_value
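# The statsmodels call above with `usevar="unequal"` is Welch's t-test; scipy exposes the same test as `ttest_ind(..., equal_var=False)`. A quick cross-check on synthetic samples (made-up return distributions, not real trade data):

```python
import numpy as np
from scipy import stats as scs

rng = np.random.default_rng(0)
in_days = rng.normal(0.001, 0.01, 200)   # synthetic "in the market" day returns
out_days = rng.normal(0.000, 0.02, 200)  # synthetic "out" day returns, wider spread

# Welch's t-test: two-sided, no equal-variance assumption
t_val, p_val = scs.ttest_ind(in_days, out_days, equal_var=False)
print(t_val, p_val)
```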
def get_expected_return(returns):
'''
compute integrated return in percentage
'''
return (np.exp(returns[1:].T.sum())-1)*100
def get_volatility(returns):
'''
compute annualized volatility
'''
return np.std(returns)*np.sqrt(252)
def get_years(df_mkt):
'''
compute years for sharpe
'''
df = df_mkt.copy()
df = df.reset_index(drop=True)
    return (df['date'].iloc[-1] - df['date'].iloc[0]) / timedelta(days=365)  # np.asscalar was removed from NumPy; the division already yields a float
def get_sharpe(returns, years, vol_year):
'''
compute sharpe ratio assuming 3.5% risk free interest rate
'''
ret_year = (np.exp(returns[1:].T.sum())-1)/years
risk_free = 0.035
return (ret_year - risk_free) / vol_year
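# Plugging representative numbers into the Sharpe formula above (the 12% return and 17% volatility are assumed for illustration; the 3.5% risk-free rate matches get_sharpe):

```python
# Worked example of the Sharpe calculation in get_sharpe
ret_year = 0.12    # annualized return (assumed)
risk_free = 0.035  # risk-free rate, as in get_sharpe
vol_year = 0.17    # annualized volatility (assumed)

sharpe = (ret_year - risk_free) / vol_year
print(round(sharpe, 2))  # 0.5
```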
def get_benchmark(df_mkt, exp_return):
'''
compute beat the market percentage
calculates S&P500 returns using same trade days
converts log returns to simple percentage
returns difference in percentage returns
'''
df_spy = pd.read_csv('spy_index_092419.csv')
df_spy['date'] = pd.to_datetime(df_spy['date'])
df_bench = pd.merge(df_spy[['date', 'shift_ret']], df_mkt[['date','trade']], on='date', how='inner')
bench_returns = df_bench['shift_ret']*df_bench['trade']
bench_return = (np.exp(bench_returns[1:].T.sum())-1)*100
beat_percent = exp_return - bench_return
return beat_percent
# #### Processing Pipeline
def run_etl(ticker, equity):
'''
run ETL pipeline
'''
    print('Running ETL for ' + ticker)
dict_raw = get_raw(ticker)
print('formatting')
df_for = format_raw(dict_raw)
df_scale = scale_adjusted(df_for)
print('preprocessing')
df_pre = preprocess(df_scale)
df_pre['symbol'] = ticker
    print('begin test iterations')
years = df_pre.date.map(lambda x: x.strftime('%Y')).astype(int).unique().tolist()
df_res = pd.DataFrame()
for test_year in years[3:]:
print('starting test year {}'.format(test_year))
results = [ticker, equity, test_year]
print('test-train split')
df_train, df_test = test_train_split(df_pre[:-7], test_year)
        est_price = df_test['pe1'].iloc[-1]  # np.asscalar was removed from NumPy; iloc already returns a scalar
results.append(est_price)
print('training shapes')
df_shape = build_hsi(df_train)
df_train = assign_hsi(df_train, df_shape)
df_test = assign_hsi(df_test, df_shape)
print('optimizing trade thresholds')
lowT, highT = thresholds(df_train)
results.append(lowT)
results.append(highT)
print('computing trades')
trades = compute_trades(df_test['HSC'], highT, lowT)
df_test['trade'] = trades
num_trades = ((np.diff(trades))==-1).sum() + trades[-1]
results.append(num_trades)
print('evaluating performance')
returns = compute_trade_returns(df_test)
results.append(np.count_nonzero(returns))
tval, pval = statistical_test(df_test)
results.append(tval)
results.append(pval)
print('t-value, p-value = ', tval, pval)
exp_ret = get_expected_return(returns)
results.append(exp_ret)
print('expected return = ', exp_ret)
vol = get_volatility(returns)
results.append(vol)
print('volatility = ', vol)
years = get_years(df_test)
results.append(years)
print('years = ', years)
sharpe = get_sharpe(returns, years, vol)
results.append(sharpe)
print('sharpe ratio = ', sharpe)
beat_percent = get_benchmark(df_test, exp_ret)
results.append(beat_percent)
print('beat percent = ', beat_percent)
print('saving result')
        df_res = pd.concat([df_res, pd.DataFrame([results])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
print('formatting summary')
cols = ['symbol','equity','test_year','price$','lowT','highT','#trades','in_days',
't-val','p-val','exp_ret%','volatility','years','sharpe','beat%']
df_res.columns = cols
df_res.test_year = df_res.test_year.astype(int)
df_res.in_days = df_res.in_days.astype(int)
return df_res, df_test, df_shape
# Run ETL for one Market
ticker = 'V'
equity = 'Visa'
df_results, df_test, df_shape = run_etl(ticker, equity)
df_results.to_csv('visa_shape_102819.csv', index=None)
df_results.head()
# View results
df_results
# #### Result Discussion
# > The expected returns are all positive.
# > The Sharpe ratio exceeds 1 in two-thirds of the cases, which is mostly good.
# > The beat-the-market percentages are encouraging.
# #### Assess Results for 2019
# returns histogram
returns = df_test['shift_ret']*df_test['trade']
plt.xlabel("Log Returns")
plt.ylabel("Frequency")
plt.title("Shape Factor Day Returns")
plt.hist(returns.dropna(), bins=20, range=(-0.05, 0.05), color=(.1, .1, .95, .3));
# We want more positive days than negative and we're getting them.
# show day returns statistics
returns.describe()
# t test definition
def analyze_alpha(expected_portfolio_returns_by_date, sigma):
    t_test = scs.ttest_1samp(expected_portfolio_returns_by_date, sigma)
    t_value = t_test[0]
    p_value = t_test[1] / 2
return t_value,p_value
# t test result
sigma = returns.std()
analyze_alpha(returns.dropna(), sigma)
# In this test, the null hypothesis is zero mean day returns.
# The trending market makes this impossible, so it may not mean much.
#expected returns over range
exp_ret = (np.exp(returns[1:].T.sum())-1)*100
exp_ret
# We like to see trading profit here.
#annualized volatility
vol_year = np.std(returns)*np.sqrt(252)
vol_year
# Less than 20% seems risk worthy.
# compute years for sharpe
df_test = df_test.reset_index(drop=True)
years = (df_test['date'].iloc[-1] - df_test['date'][0]) / timedelta(days=365)  # np.asscalar was removed from NumPy; the division already yields a float
years
# The year 2019 is not over yet.
# sharpe ratio on this trade strategy
ret_year = (np.exp(returns[1:].T.sum())-1)/years
risk_free = 0.035
sharpe = (ret_year - risk_free) / vol_year
sharpe
# Greater than 1 is good, 2 is very good.
# show trades over the test period (the -360 slice covers the whole test year of ~250 trading days)
df_test[-360:].plot(x='date',y='trade')
# This reveals week to month trading more than daytrading.
# #### Conclusion - indicator appears feasible.
df_test.to_csv('visa_index_102719.csv', index=None)
df_shape.to_csv('visa_hsi_lookup.csv', index=None)
# ### Disclaimer: this notebook is intended for educational purposes only and not recommended for real trading.
|
Initial Stage/Visa_shape_r7.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Freya-LR/Leetcode/blob/main/26.%20Remove%20Duplicates%20from%20Sorted%20Array.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="PJp5ZXsd3oKY"
# Array
# hint: time complexity is O(N), space complexity is O(1). This two-pointer method only works when the input array is sorted.
# A fast index scans the array while a slow index marks where the next unique value goes:
# when nums[i] equals the previous value only the fast index advances; otherwise the value is copied to the slow index and both advance.
from typing import List  # needed for the List[int] annotation

class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        # constraints: 0 <= nums.length <= 3 * 10^4, -100 <= nums[i] <= 100
        if len(nums) <= 1:
            return len(nums)
        index = 1
        for i in range(1, len(nums)):
            if nums[i] != nums[i - 1]:
                nums[index] = nums[i]  # overwrite in place with the next unique value
                index += 1
        return index
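# A quick check of the two-pointer logic, re-stated as a standalone function so the cell runs on its own:

```python
# Standalone version of the removeDuplicates logic above
def remove_duplicates(nums):
    if len(nums) <= 1:
        return len(nums)
    index = 1
    for i in range(1, len(nums)):
        if nums[i] != nums[i - 1]:
            nums[index] = nums[i]
            index += 1
    return index

nums = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4]
k = remove_duplicates(nums)
print(k, nums[:k])  # 5 [0, 1, 2, 3, 4]
```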
|
26. Remove Duplicates from Sorted Array.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _uuid="bc1fc0c4871172130a51e7c2d5058fdd34afccd8"
# <a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
#
# <h1 align=center><font size = 5>Sets and Dictionaries</font></h1>
# + [markdown] _uuid="4bf534873e3e8e549e4ed8925d7880625f8dd699"
#
# ## Table of Contents
#
#
# <div class="alert alert-block alert-info" style="margin-top: 20px">
#
# <li><a href="#ref2">Dictionaries</a></li>
# <br>
# <p></p>
# Estimated Time Needed: <strong>20 min</strong>
# </div>
#
# <hr>
# + [markdown] _uuid="87b95c781c76279fb1fd6fc34658f4347cb43b07"
# <a id="ref2"></a>
# <h2 align=center> Dictionaries in Python </h2>
#
# + [markdown] _uuid="66453af18b5d67c801d6ab810f32a2bf2ba69be1"
# A dictionary consists of keys and values. It is helpful to compare a dictionary to a list: instead of the numerical indexes used by a list, dictionaries have keys. These keys are labels used to access values within the dictionary.
# + [markdown] _uuid="812ec8caa35527959d98020e564f5d33ed302066"
# <a ><img src = "https://ibm.box.com/shared/static/6tyznuwydogmtuv73o8l5g7xsb8o92h2.png" width = 650, align = "center"></a>
# <h4 align=center> A Comparison of a Dictionary to a list: Instead of the numerical indexes like a list, dictionaries have keys.
# </h4>
#
# + [markdown] _uuid="79ea517337ef6f950b6f542fe08d7f736cffc3f1"
# An example of a Dictionary **Dict**:
# + _uuid="7ee0b4c6dce1a25100e82d3744bf50de7f2a39d0"
Dict={"key1":1,"key2":"2","key3":[3,3,3],"key4":(4,4,4),('key5'):5,(0,1):6}
Dict
# + [markdown] _uuid="7f0693028d877930ec990621c21c02fd1c032ea8"
# The keys can be strings:
# + _uuid="8c817b9a8375a7f2f0ba1e02627c7de3c2b54bf6"
Dict["key1"]
# + [markdown] _uuid="284cd0938f5b124add3edcc5bc522f66932ecc2f"
# Keys can also be any immutable object such as a tuple:
# + _uuid="f89f5c4ac85bfc768a37bd333c714a9449a33c06"
Dict[(0,1)]
# + [markdown] _uuid="50099b53ffd4a1a4393c171736f8b2e7b2dc3f8b"
# Each key is separated from its value by a colon "**:**". Commas separate the items, and the whole dictionary is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this "**{}**".
# + _uuid="0e8309375e812ed6ee47a4578cda94c9a543b351"
release_year_dict = {"Thriller":"1982", "Back in Black":"1980", \
"The Dark Side of the Moon":"1973", "The Bodyguard":"1992", \
"Bat Out of Hell":"1977", "Their Greatest Hits (1971-1975)":"1976", \
"Saturday Night Fever":"1977", "Rumours":"1977"}
release_year_dict
# + [markdown] _uuid="4188058f45cfe8f3177d81038ee1b6a4e41604e3"
# In summary, like a list, a dictionary holds a sequence of elements. Each element is represented by a key and its corresponding value. Dictionaries are created with two curly braces containing keys and values separated by a colon. For every key, there can only be a single value, however, multiple keys can hold the same value. Keys can only be strings, numbers, or tuples, but values can be any data type.
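# A quick check of the rules just stated (toy dictionary): keys must be hashable (immutable), and re-assigning an existing key overwrites its value rather than creating a duplicate.

```python
# Tuples work as keys; re-assigning a key keeps keys unique
d = {('a', 1): 'tuple keys work', 'a': 1}
d['a'] = 2
print(d['a'])  # 2

# A mutable key such as a list is rejected
try:
    d[['a']] = 'lists cannot be keys'
except TypeError as e:
    print('TypeError:', e)
```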
# + [markdown] _uuid="868d515c4f1b1fdb3f02b1f108740ff97f64e4e1"
# It is helpful to visualize the dictionary as a table, as in figure 9. The first column represents the keys, the second column represents the values.
#
# + [markdown] _uuid="08819a111825fca4b1da1dbf62ef5712b0c72990"
# <a ><img src = "https://ibm.box.com/shared/static/i45fppou18c3t0fuf2ikks48tod7chbl.png" width = 650, align = "center"></a>
# <h4 align=center> Figure 9: Table representing a Dictionary
#
# </h4>
#
# + [markdown] _uuid="81eaed7e6f20b1335d3ecd8d4b88c11198c5416a"
# You will need this dictionary for the next two questions :
# + _uuid="debf075c3a90198e96a016b71a61f61d72e9de4c"
soundtrack_dic = { "The Bodyguard":"1992", "Saturday Night Fever":"1977"}
soundtrack_dic
# + [markdown] _uuid="6c25373ab365f5a776c934851cc2a01f2642ba81"
# #### In the dictionary "soundtrack_dic" what are the keys ?
# + _uuid="6636338ad80655d2089b52b2dcf694715ac58f41"
# + [markdown] _uuid="897eed2c00cf25eb16280308b0df8ff7ba11432e"
# <div align="right">
# <a href="#Dict1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
#
# </div>
# <div id="Dict1" class="collapse">
# ```
# The Keys "The Bodyguard" and "Saturday Night Fever"
# ```
# </div>
# + [markdown] _uuid="f6015856be3834642b89c815ebb0542860b2795a"
# #### In the dictionary "soundtrack_dic" what are the values ?
# + [markdown] _uuid="32073a4d6950c4ee91a992014c241bb30c57849e"
# <div align="right">
# <a href="#Dict2" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
#
# </div>
# <div id="Dict2" class="collapse">
# ```
# The values are "1992" and "1977"
# ```
# </div>
# + [markdown] _uuid="96a1fe27bb3cae357495cfdb31952ab41f830a19"
# You can retrieve the values based on the names:
# + _uuid="375e1ad5845ddf13d37ea686fe8a2e81e1a43e4b"
release_year_dict['Thriller']
# + [markdown] _uuid="fb1a9d47d6024171dc47f3cc42bc8b5ce1c093af"
# This corresponds to:
#
# + [markdown] _uuid="752e08b0b51fadd6d578b9323775696a6bedb3f8"
# <a ><img src = "https://ibm.box.com/shared/static/glbwz23cgjjxqi7rjxn7me5i16gan7h7.png" width = 500, align = "center"></a>
# <h4 align=center>
# Table used to represent accessing the value for "Thriller"
#
# </h4>
#
# + [markdown] _uuid="94e3680fdbfdc7c3a8b5c2d11d138d348e26a622"
# Similarly for The Bodyguard
#
# + _uuid="2388d32b20c06ac263e19e8e3cfccf47be4ed139"
release_year_dict['The Bodyguard']
# + [markdown] _uuid="150d82c8158f4051cfee4f6337774cdde1572020"
# <a ><img src = "https://ibm.box.com/shared/static/6t7bu8jusckaskukwq1k0a3im5ltcpsn.png " width = 500, align = "center"></a>
# <h4 align=center>
# Accessing the value for the "The Bodyguard"
#
# </h4>
#
#
#
# + [markdown] _uuid="17e87214d0b813866b7d5e039145843601fca7a2"
# Now let us retrieve the keys of the dictionary using the method **`keys()`**:
# + _uuid="78ac3788c9c2d8d28e88dc7b82b3dce6e187e2c4"
release_year_dict.keys()
# + [markdown] _uuid="907fa0259c527b9f2dc6062887e692fceea6192d"
# You can retrieve the values using the method **`values()`**:
# + _uuid="8826bdd91d9d75673a7ca032d3d7e54ae18d567f"
release_year_dict.values()
# + [markdown] _uuid="930d2dc428be0594206ebcb10eba358c9347ff49"
# We can add an entry:
# + _uuid="ee7fd5256c29c70555dc7d59e4ec1f6d759ab675"
release_year_dict['Graduation']='2007'
release_year_dict
# + [markdown] _uuid="5645679b7fcefb5e324d4053443a59f2e702fd2a"
# We can delete an entry:
# + _uuid="01b460731d475dd029e256e59d77a8ac8bd98d06"
del(release_year_dict['Thriller'])
del(release_year_dict['Graduation'])
release_year_dict
# + [markdown] _uuid="869126881b9c3b6e48991000d289c39c766094e1"
# We can verify if an element is in the dictionary:
# + _uuid="fce98931682246d7723ed9a58c4e4cd8c69e14f9"
'The Bodyguard' in release_year_dict
# + [markdown] _uuid="bb918f98e48cd900512442c8eddae19090db2ca3"
# #### The Albums 'Back in Black', 'The Bodyguard' and 'Thriller' have the following music recording sales in millions 50, 50 and 65 respectively:
# + [markdown] _uuid="fcdfec0040bbf994f3d3c98a4ed6cb4dc2d469e0"
# ##### a) Create a dictionary “album_sales_dict” where the keys are the album name and the sales in millions are the values.
# + _uuid="a27ea075d488e91824931f403f5cecd0867f1351"
# + [markdown] _uuid="d88ba6b693367d20e46d15c0817cc86b061c3459"
# <div align="right">
# <a href="#q9" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
#
# </div>
# <div id="q9" class="collapse">
# ```
# album_sales_dict= { "The Bodyguard":50, "Back in Black":50,"Thriller":65}
# ```
# </div>
# + [markdown] _uuid="1cb74d9ea4299a5421f992a13f1e6b5dc55ebc20"
# #### b) Use the dictionary to find the total sales of "Thriller":
# + _uuid="3a64e3ca314ff564ddf4c85a00a43fc93487636d"
# + [markdown] _uuid="03567687d0a9515c6e1ebb49eb4ff339112a8ab3"
# <div align="right">
# <a href="#q10b" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
#
# </div>
# <div id="q10b" class="collapse">
# ```
# album_sales_dict["Thriller"]
# ```
# </div>
# + [markdown] _uuid="8111b79673ca69cd24370aa11d8cbd944c85536d"
# #### c) Find the names of the albums from the dictionary using the method "keys":
# + _uuid="e6f4c580e9877783477723591aba2121d167539a"
# + [markdown] _uuid="8b305d5d6b12f72166e23a5e837a2e47ad3236fe"
# <div align="right">
# <a href="#q10c" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
#
# </div>
# <div id="q10c" class="collapse">
# ```
# album_sales_dict.keys()
# ```
# </div>
# + [markdown] _uuid="d28a620354f9035142ae622f1b39aef400c7f9aa"
# #### d) Find the names of the recording sales from the dictionary using the method "values":
# + _uuid="51b52836e1da0627ceeaf31ec5e266fe0145fe9d"
# + [markdown] _uuid="bf1daea1d0de1e73ce3904e7806806f694fe4405"
# <div align="right">
# <a href="#q10d" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
#
# </div>
# <div id="q10d" class="collapse">
# ```
# album_sales_dict.values()
# ```
# </div>
# + [markdown] _uuid="0bfa4ed3f9ab8dcb9a12c8e643d4ca09d51fa611"
#
#
# # About the Authors:
#
# [<NAME>]( https://www.linkedin.com/in/joseph-s-50398b136/) has a PhD in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
#
#
# + [markdown] _uuid="4584c2fcffd7ad0aab9df50411aa018c10db17f5"
# <hr>
# Copyright © 2017 [cognitiveclass.ai](cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
|
Jupyter notebook/IBM/python for data science/dictionaries.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
#
# Here, you'll use different types of SQL **JOINs** to answer questions about the [Stack Overflow](https://www.kaggle.com/stackoverflow/stackoverflow) dataset.
#
# Before you get started, run the following cell to set everything up.
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql_advanced.ex1 import *
print("Setup Complete")
# The code cell below fetches the `posts_questions` table from the `stackoverflow` dataset. We also preview the first five rows of the table.
# +
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "stackoverflow" dataset
dataset_ref = client.dataset("stackoverflow", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "posts_questions" table
table_ref = dataset_ref.table("posts_questions")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
# -
# We also take a look at the `posts_answers` table.
# +
# Construct a reference to the "posts_answers" table
table_ref = dataset_ref.table("posts_answers")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
# -
# You will work with both of these tables to answer the questions below.
#
# # Exercises
#
# ### 1) How long does it take for questions to receive answers?
#
# You're interested in exploring the data to have a better understanding of how long it generally takes for questions to receive answers. Armed with this knowledge, you plan to use this information to better design the order in which questions are presented to Stack Overflow users.
#
# With this goal in mind, you write the query below, which focuses on questions asked in January 2018. It returns a table with two columns:
# - `q_id` - the ID of the question
# - `time_to_answer` - how long it took (in seconds) for the question to receive an answer
#
# Run the query below (without changes), and take a look at the output.
# +
first_query = """
SELECT q.id AS q_id,
MIN(TIMESTAMP_DIFF(a.creation_date, q.creation_date, SECOND)) as time_to_answer
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
INNER JOIN `bigquery-public-data.stackoverflow.posts_answers` AS a
ON q.id = a.parent_id
WHERE q.creation_date >= '2018-01-01' and q.creation_date < '2018-02-01'
GROUP BY q_id
ORDER BY time_to_answer
"""
first_result = client.query(first_query).result().to_dataframe()
print("Percentage of answered questions: %s%%" % \
(sum(first_result["time_to_answer"].notnull()) / len(first_result) * 100))
print("Number of questions:", len(first_result))
first_result.head()
# -
# You're surprised at the results and strongly suspect that something is wrong with your query. In particular,
# - According to the query, 100% of the questions from January 2018 received an answer. But, you know that ~80% of the questions on the site usually receive an answer.
# - The total number of questions is surprisingly low. You expected to see at least 150,000 questions represented in the table.
#
# Given these observations, you think that the type of **JOIN** you have chosen has inadvertently excluded unanswered questions. Using the code cell below, can you figure out what type of **JOIN** to use to fix the problem so that the table includes unanswered questions?
#
# **Note**: You need only amend the type of **JOIN** (i.e., **INNER**, **LEFT**, **RIGHT**, or **FULL**) to answer the question successfully.
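Why the JOIN type matters can be seen on a toy example. The sketch below uses Python's built-in `sqlite3` (not BigQuery) with hypothetical mini-tables: an `INNER JOIN` drops unmatched questions, while a `LEFT JOIN` keeps them.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE questions (id INTEGER);
CREATE TABLE answers   (parent_id INTEGER);
INSERT INTO questions VALUES (1), (2), (3);
INSERT INTO answers   VALUES (1);           -- only question 1 has an answer
""")

inner = con.execute(
    "SELECT COUNT(*) FROM questions q INNER JOIN answers a ON q.id = a.parent_id"
).fetchone()[0]
left = con.execute(
    "SELECT COUNT(*) FROM questions q LEFT JOIN answers a ON q.id = a.parent_id"
).fetchone()[0]
print(inner, left)  # 1 3 -- INNER keeps only answered questions, LEFT keeps all
```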
# +
# Your code here
correct_query = """
"""
# Check your answer
q_1.check()
# Run the query, and return a pandas DataFrame
correct_result = client.query(correct_query).result().to_dataframe()
print("Percentage of answered questions: %s%%" % \
(sum(correct_result["time_to_answer"].notnull()) / len(correct_result) * 100))
print("Number of questions:", len(correct_result))
# -
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
# +
# #%%RM_IF(PROD)%%
correct_query = """
SELECT q.id AS q_id,
MIN(TIMESTAMP_DIFF(a.creation_date, q.creation_date, SECOND)) as time_to_answer
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
LEFT JOIN `bigquery-public-data.stackoverflow.posts_answers` AS a
ON q.id = a.parent_id
WHERE q.creation_date >= '2018-01-01' and q.creation_date < '2018-02-01'
GROUP BY q_id
ORDER BY time_to_answer
"""
q_1.check()
# -
# ### 2) Initial questions and answers, Part 1
#
# You're interested in understanding the initial experiences that users typically have with the Stack Overflow website. Is it more common for users to first ask questions or provide answers? After signing up, how long does it take for users to first interact with the website? To explore this further, you draft the (partial) query in the code cell below.
#
# The query returns a table with three columns:
# - `owner_user_id` - the user ID
# - `q_creation_date` - the first time the user asked a question
# - `a_creation_date` - the first time the user contributed an answer
#
# You want to keep track of users who have asked questions, but have yet to provide answers. And, your table should also include users who have answered questions, but have yet to pose their own questions.
#
# With this in mind, please fill in the appropriate **JOIN** (i.e., **INNER**, **LEFT**, **RIGHT**, or **FULL**) to return the correct information.
#
# **Note**: You need only fill in the appropriate **JOIN**. All other parts of the query should be left as-is. (You also don't need to write any additional code to run the query, since the `check()` method will take care of this for you.)
#
# To avoid returning too much data, we'll restrict our attention to questions and answers posed in January 2019. We'll amend the timeframe in Part 2 of this question to be more realistic!
# +
# Your code here
q_and_a_query = """
SELECT q.owner_user_id AS owner_user_id,
MIN(q.creation_date) AS q_creation_date,
MIN(a.creation_date) AS a_creation_date
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
____ `bigquery-public-data.stackoverflow.posts_answers` AS a
ON q.owner_user_id = a.owner_user_id
WHERE q.creation_date >= '2019-01-01' AND q.creation_date < '2019-02-01'
AND a.creation_date >= '2019-01-01' AND a.creation_date < '2019-02-01'
GROUP BY owner_user_id
"""
# Check your answer
q_2.check()
# -
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
# +
# #%%RM_IF(PROD)%%
q_and_a_query = """
SELECT q.owner_user_id AS owner_user_id,
MIN(q.creation_date) AS q_creation_date,
MIN(a.creation_date) AS a_creation_date
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
FULL JOIN `bigquery-public-data.stackoverflow.posts_answers` AS a
ON q.owner_user_id = a.owner_user_id
WHERE q.creation_date >= '2019-01-01' AND q.creation_date < '2019-02-01'
AND a.creation_date >= '2019-01-01' AND a.creation_date < '2019-02-01'
GROUP BY owner_user_id
"""
q_2.check()
# -
# ### 3) Initial questions and answers, Part 2
#
# Now you'll address a more realistic (and complex!) scenario. To answer this question, you'll need to pull information from *three* different tables! The syntax is very similar to the case where we join only two tables. For instance, consider the three tables below.
#
# 
#
# We can use two different **JOINs** to link together information from all three tables, in a single query.
#
# 
#
# With this in mind, say you're interested in understanding users who joined the site in January 2019. You want to track their activity on the site: when did they post their first questions and answers, if ever?
#
# Write a query that returns the following columns:
# - `id` - the IDs of all users who created Stack Overflow accounts in January 2019 (January 1, 2019, to January 31, 2019, inclusive)
# - `q_creation_date` - the first time the user posted a question on the site; if the user has never posted a question, the value should be null
# - `a_creation_date` - the first time the user posted an answer on the site; if the user has never provided an answer, the value should be null
#
# Note that questions and answers posted after January 31, 2019, should still be included in the results. And, all users who joined the site in January 2019 should be included (even if they have never posted a question or provided an answer).
#
# The query from the previous question should be a nice starting point to answering this question! You'll need to use the `posts_answers` and `posts_questions` tables. You'll also need to use the `users` table from the Stack Overflow dataset. The relevant columns from the `users` table are `id` (the ID of each user) and `creation_date` (when the user joined the Stack Overflow site, in DATETIME format).
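As a rough sketch of the three-table pattern, the toy example below uses Python's built-in `sqlite3` with hypothetical mini-tables. It chains two `LEFT JOIN`s from the `users` table (older SQLite lacks `RIGHT`/`FULL` joins, so the join direction differs from what a BigQuery solution might use), keeping users who never posted anything:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users     (id INTEGER);
CREATE TABLE questions (owner_user_id INTEGER, creation_date TEXT);
CREATE TABLE answers   (owner_user_id INTEGER, creation_date TEXT);
INSERT INTO users VALUES (1), (2), (3);            -- user 3 never posted
INSERT INTO questions VALUES (1, '2019-01-05');
INSERT INTO answers   VALUES (2, '2019-02-10');
""")

rows = con.execute("""
    SELECT u.id, MIN(q.creation_date) AS q_creation_date,
                 MIN(a.creation_date) AS a_creation_date
    FROM users u
    LEFT JOIN questions q ON u.id = q.owner_user_id
    LEFT JOIN answers   a ON u.id = a.owner_user_id
    GROUP BY u.id
""").fetchall()
for row in rows:
    print(row)  # user 3 still appears, with NULL (None) for both dates
```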
# +
# Your code here
three_tables_query = """
"""
# Check your answer
q_3.check()
# -
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
# +
# #%%RM_IF(PROD)%%
three_tables_query = """
SELECT u.id AS id,
MIN(q.creation_date) AS q_creation_date,
MIN(a.creation_date) AS a_creation_date
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
FULL JOIN `bigquery-public-data.stackoverflow.posts_answers` AS a
ON q.owner_user_id = a.owner_user_id
RIGHT JOIN `bigquery-public-data.stackoverflow.users` AS u
ON q.owner_user_id = u.id
WHERE u.creation_date >= '2019-01-01' and u.creation_date < '2019-02-01'
GROUP BY id
"""
q_3.check()
# -
# ### 4) How many distinct users posted on January 1, 2019?
#
# In the code cell below, write a query that returns a table with a single column:
# - `owner_user_id` - the IDs of all users who posted at least one question or answer on January 1, 2019. Each user ID should appear at most once.
#
# In the `posts_questions` (and `posts_answers`) tables, you can get the ID of the original poster from the `owner_user_id` column. Likewise, the date of the original posting can be found in the `creation_date` column.
#
# In order for your answer to be marked correct, your query must use a **UNION**.
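The key property to keep in mind is that `UNION DISTINCT` deduplicates rows, while `UNION ALL` keeps every copy. A toy sketch with Python's built-in `sqlite3` and hypothetical mini-tables, where a bare `UNION` behaves like BigQuery's `UNION DISTINCT`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE q_posters (owner_user_id INTEGER);
CREATE TABLE a_posters (owner_user_id INTEGER);
INSERT INTO q_posters VALUES (10), (20);
INSERT INTO a_posters VALUES (20), (30);    -- user 20 posted both
""")

rows = con.execute("""
    SELECT owner_user_id FROM q_posters
    UNION    -- deduplicates, like BigQuery's UNION DISTINCT
    SELECT owner_user_id FROM a_posters
""").fetchall()
print(sorted(r[0] for r in rows))  # [10, 20, 30] -- user 20 appears once
```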
# +
# Your code here
all_users_query = """
"""
# Check your answer
q_4.check()
# -
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_4.hint()
#_COMMENT_IF(PROD)_
q_4.solution()
# +
# #%%RM_IF(PROD)%%
all_users_query = """
SELECT q.owner_user_id
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
WHERE EXTRACT(DATE FROM q.creation_date) = '2019-01-01'
UNION DISTINCT
SELECT a.owner_user_id
FROM `bigquery-public-data.stackoverflow.posts_answers` AS a
WHERE EXTRACT(DATE FROM a.creation_date) = '2019-01-01'
"""
q_4.check()
# +
# #%%RM_IF(PROD)%%
all_users_query = """
SELECT q.owner_user_id
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
WHERE q.creation_date >= '2019-01-01' AND q.creation_date < '2019-01-02'
UNION DISTINCT
SELECT a.owner_user_id
FROM `bigquery-public-data.stackoverflow.posts_answers` AS a
WHERE a.creation_date >= '2019-01-01' AND a.creation_date < '2019-01-02'
"""
q_4.check()
# +
# #%%RM_IF(PROD)%%
all_users_query = """
SELECT a.owner_user_id
FROM `bigquery-public-data.stackoverflow.posts_answers` AS a
WHERE EXTRACT(DATE FROM a.creation_date) = '2019-01-01'
UNION DISTINCT
SELECT q.owner_user_id
FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
WHERE EXTRACT(DATE FROM q.creation_date) = '2019-01-01'
"""
q_4.check()
# -
# # Keep going
#
# Learn how to use **[analytic functions](#$NEXT_NOTEBOOK_URL$)** to perform complex calculations with minimal SQL code.
|
notebooks/sql_advanced/raw/ex1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:fisi2028]
# language: python
# name: conda-env-fisi2028-py
# ---
import numpy as np
import pandas as pd
import scipy as sp
import sklearn as sl
import seaborn as sns; sns.set()
import matplotlib as mpl
import matplotlib.pyplot as plt
# embed matplotlib output in the interactive notebook
# %matplotlib inline
from scipy.fft import fft
# Trapezoid method
def Integral(f, x_i, x_f):
    n = 100000
    x, h = np.linspace(x_i, x_f, n + 1, retstep = True)  # uniform grid
    # endpoints are weighted h/2, interior points are weighted h
    return (0.5)*h*(f(x[0]) + f(x[-1])) + h*np.sum(f(x[1:-1]))
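As a quick sanity check, the composite trapezoid rule can be verified against an integral with a known closed form. A self-contained sketch (redefining the rule so it runs on its own):

```python
import numpy as np

def trapezoid_integral(f, x_i, x_f, n=100000):
    # composite trapezoid rule on a uniform grid of n subintervals
    x, h = np.linspace(x_i, x_f, n + 1, retstep=True)
    return 0.5 * h * (f(x[0]) + f(x[-1])) + h * np.sum(f(x[1:-1]))

# exact value of the integral of x^2 on [0, 1] is 1/3
approx = trapezoid_integral(lambda x: x**2, 0.0, 1.0)
print(abs(approx - 1.0 / 3.0) < 1e-8)  # True
```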
# # Assignment 4
# #### Using the methods covered in class, solve the following two questions
#
# #### (A) Integrals
# $\int_{0}^{1}x^{-1/2}\,\text{d}x$
#
def f(x):
return x**(-0.5)
# $\int_{0}^{\infty}e^{-x}\ln{x}\,\text{d}x$
def g(x):
return (np.e**(-x))*(np.log(x))
#(0.001, 1000)
# $\int_{0}^{\infty}\frac{\sin{x}}{x}\,\text{d}x$
def h(x):
return np.sin(x)/x
#(0, 100)
# ## Results:
# the integrands of f and h are singular at x = 0, so start slightly above it
print("The integral of f is approximately:", Integral(f, 1e-6, 1))
print("The integral of g is approximately:", Integral(g, 0.001, 1000))
print("The integral of h is approximately:", Integral(h, 1e-6, 100))
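As a cross-check for the improper integrals, SciPy's adaptive quadrature handles the integrable endpoint singularity of $x^{-1/2}$ directly; the exact value of the first integral is $2$:

```python
from scipy.integrate import quad

# quad copes with the integrable singularity of x**-0.5 at x = 0
value, abserr = quad(lambda x: x**-0.5, 0, 1)
print(value)  # close to the exact value 2
```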
# ## (B) Fourier
#
# Compute the fast Fourier transform of the function from Assignment 3 (D) on the interval $[0,4]$ (maximum $k$ of $2\pi n/L$ for $n=25$).
df = pd.read_pickle(r"ex1.gz")
# ## Let f(x) be the function
# $$f(x)=\frac{0.94519587}{\left[(x-1.43859817)^2+(0.7390972)\right]^\gamma}$$
#
# where $\gamma = 1.12724243$
#
# +
X = df["x"]
y = df["y"]
def f(x):
return (0.94519587)/((x-1.43859817)**2 + 0.7390972)**(1.12724243)
x = f(X)
# +
Nf = 25
a = np.min(x)
b = np.max(x)
def a_j(j):
global a, b, x, y
L = b - a
k_j = 2*np.pi*j/4
new_y = y*np.cos(k_j*x)/L
if j > 0:
new_y = new_y*2
return sp.integrate.simpson(new_y, x)
def b_j(j):
global a, b, x, y
L = b - a
k_j = 2*np.pi*j/4
new_y = y*np.sin(k_j*x)/L
if j > 0:
new_y = new_y*2
return sp.integrate.simpson(new_y, x)
A_j = np.array([a_j(j) for j in range(Nf)])
B_j = np.array([b_j(j) for j in range(Nf)])
K_j = np.array([2*np.pi*j/4 for j in range(Nf)])
# -
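The coefficient formulas above can be sanity-checked on a signal whose Fourier series is known: for a pure cosine mode on $[0,4]$, the $j=1$ cosine coefficient should come out as 1 and all other coefficients as 0. A self-contained sketch of that check (with a renamed helper to avoid shadowing the notebook's `a_j`):

```python
import numpy as np
from scipy.integrate import simpson

L = 4.0
x = np.linspace(0.0, L, 2001)
y = np.cos(2 * np.pi * x / L)   # a pure j = 1 cosine mode

def a_coeff(j):
    # Fourier cosine coefficient on [0, L] via Simpson's rule
    k_j = 2 * np.pi * j / L
    integrand = y * np.cos(k_j * x) / L
    if j > 0:
        integrand = 2 * integrand
    return simpson(integrand, x=x)

print(a_coeff(1), a_coeff(2))  # close to 1 and 0
```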
# Transform (partial-sum reconstruction)
x_tilde = np.linspace(0, 4, 100000)
y_tilde = np.sum([(A_j[j]*np.cos(K_j[j]*x_tilde) + B_j[j]*np.sin(K_j[j]*x_tilde)) for j in range(Nf) ], axis=0)
plt.plot(x_tilde, y_tilde, label = "data")
plt.legend(loc="upper right")
plt.title("Fourier transform of the function f(x)")
plt.ylabel('F(f(x))')
plt.xlabel('x')
plt.show()
# Fit the Fourier transform for the data from Assignment 3 using the exact regression method of Assignment 3 (C) and compare with the previous result. For both exercises, interpolate and plot to compare.
X = np.array(x_tilde).reshape(-1, 1)
Y = np.array(y_tilde).reshape(-1, 1)
# +
P = np.array([np.ones([len(x_tilde), 1]), X, X**2, X**3, X**4, X**5]).reshape(6, len(x_tilde)).T
coeffs = np.linalg.inv(P.T @ P)@ P.T @ Y
b, c1, c2, c3, c4, c5 = coeffs
# -
Ajuste = b + (c1*X) + (c2*X**2) + (c3*X**3) + (c4*X**4) + (c5*X**5)
plt.figure()
plt.plot(x_tilde, y_tilde, label = "data")
plt.plot(X, Ajuste, c ='k', label = "fit")
plt.legend(loc="upper right")
plt.title("Exact polynomial regression")
plt.ylabel('F(f(x))')
plt.xlabel('x')
plt.show()
# From the fit we can see that the transform keeps the fitted curve very close to zero for most values.
|
soluciones/ke.solano/tarea4/solucion.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root] *
# language: python
# name: conda-root-py
# ---
# # TFX Components Walk-through
# ## Learning Objectives
#
# 1. Develop a high level understanding of TFX pipeline components.
# 2. Learn how to use a TFX Interactive Context for prototype development of TFX pipelines.
# 3. Work with the Tensorflow Data Validation (TFDV) library to check and analyze input data.
# 4. Utilize the Tensorflow Transform (TFT) library for scalable data preprocessing and feature transformations.
# 5. Employ the Tensorflow Model Analysis (TFMA) library for model evaluation.
#
# In this lab, you will work with the [Covertype Data Set](https://github.com/jarokaz/mlops-labs/blob/master/datasets/covertype/README.md) and use TFX to analyze, understand, and pre-process the dataset and train, analyze, validate, and deploy a multi-class classification model to predict the type of forest cover from cartographic features.
#
# You will utilize **TFX Interactive Context** to work with the TFX components interactively in a Jupyter notebook environment. Working in an interactive notebook is useful when doing initial data exploration, experimenting with models, and designing ML pipelines. You should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts. In a production deployment of TFX on GCP, you will use an orchestrator such as Kubeflow Pipelines, or Cloud Composer. In an interactive mode, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells. In a production deployment, ML Metadata will be managed in a scalable database like MySQL, and artifacts in a persistent store such as Google Cloud Storage. In an interactive mode, both properties and payloads are stored in a local file system of the Jupyter host.
#
# **Setup Note:**
# Currently, TFMA visualizations do not render properly in JupyterLab. It is recommended to run this notebook in Jupyter Classic Notebook. To switch to Classic Notebook select *Launch Classic Notebook* from the *Help* menu.
# +
import os
import time
from pprint import pprint
import absl
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
import tfx
from tensorflow_metadata.proto.v0 import schema_pb2
from tfx.components import (
CsvExampleGen,
Evaluator,
ExampleValidator,
InfraValidator,
Pusher,
SchemaGen,
StatisticsGen,
Trainer,
Transform,
)
from tfx.components.trainer import executor as trainer_executor
from tfx.dsl.components.base import executor_spec
from tfx.dsl.components.common.importer import Importer
from tfx.dsl.components.common.resolver import Resolver
from tfx.dsl.input_resolution.strategies.latest_blessed_model_strategy import (
LatestBlessedModelStrategy,
)
from tfx.orchestration import metadata, pipeline
from tfx.orchestration.experimental.interactive.interactive_context import (
InteractiveContext,
)
from tfx.proto import (
example_gen_pb2,
infra_validator_pb2,
pusher_pb2,
trainer_pb2,
)
from tfx.types import Channel
from tfx.types.standard_artifacts import Model, ModelBlessing
# -
print(tf.__version__)
print(tfx.__version__)
print(tfdv.__version__)
print(tfma.__version__)
# **Note**: this lab was developed and tested with the following TF ecosystem package versions:
#
# `Tensorflow Version: 2.6.2`
# `TFX Version: 1.4.0`
# `TFDV Version: 1.4.0`
# `TFMA Version: 0.35.0`
#
# If you encounter errors with the above imports (e.g. TFX component not found), check your package versions in the cell below.
# +
print("Tensorflow Version:", tf.__version__)
print("TFX Version:", tfx.__version__)
print("TFDV Version:", tfdv.__version__)
print("TFMA Version:", tfma.VERSION_STRING)
absl.logging.set_verbosity(absl.logging.INFO)
# -
# If the versions above do not match, update your packages in the current Jupyter kernel below. The default `%pip` package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab.
os.environ["PATH"] += os.pathsep + "/home/jupyter/.local/bin"
# ## Configure lab settings
#
# Set constants, location paths and other environment settings.
ARTIFACT_STORE = os.path.join(os.sep, "home", "jupyter", "artifact-store")
SERVING_MODEL_DIR = os.path.join(os.sep, "home", "jupyter", "serving_model")
DATA_ROOT = "../../data"
pwd
# ## Creating Interactive Context
#
# TFX Interactive Context allows you to create and run TFX Components in an interactive mode. It is designed to support experimentation and development in a Jupyter Notebook environment. It is an experimental feature and major changes to interface and functionality are expected. When creating the interactive context you can specify the following parameters:
# - `pipeline_name` - Optional name of the pipeline for ML Metadata tracking purposes. If not specified, a name will be generated for you.
# - `pipeline_root` - Optional path to the root of the pipeline's outputs. If not specified, an ephemeral temporary directory will be created and used.
# - `metadata_connection_config` - Optional `metadata_store_pb2.ConnectionConfig` instance used to configure connection to a ML Metadata connection. If not specified, an ephemeral SQLite MLMD connection contained in the pipeline_root directory with file name "metadata.sqlite" will be used.
#
PIPELINE_NAME = "tfx-covertype-classifier"
PIPELINE_ROOT = os.path.join(
ARTIFACT_STORE, PIPELINE_NAME, time.strftime("%Y%m%d_%H%M%S")
)
PIPELINE_NAME, PIPELINE_ROOT
os.makedirs(PIPELINE_ROOT, exist_ok=True)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=None,
)
# ## Ingesting data using ExampleGen
#
# In any ML development process the first step is to ingest the training and test datasets. The `ExampleGen` component ingests data into a TFX pipeline. It consumes external files/services to generate a set of files in the `TFRecord` format, which will be used by other TFX components. It can also shuffle the data and split it into an arbitrary number of partitions.
#
# <img src=../../images/ExampleGen.png width="300">
# ### Configure and run CsvExampleGen
#
# In this exercise, you use the `CsvExampleGen` specialization of `ExampleGen` to ingest CSV files from a GCS location and emit them as `tf.Example` records for consumption by downstream TFX pipeline components. Your task is to configure the component to create 80-20 `train` and `eval` splits. *Hint: review the [ExampleGen proto](https://github.com/tensorflow/tfx/blob/master/tfx/proto/example_gen.proto#L78) definition to split your data with hash buckets.*
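Hash-bucket splitting assigns each example to a bucket by hashing, so the split is deterministic and roughly proportional to the bucket counts. A rough stdlib illustration of the idea (this is *not* TFX's actual hashing; with 4 train buckets out of 5 total you get an approximate 80-20 split):

```python
import hashlib

def assign_split(example_id: str, train_buckets: int = 4, total_buckets: int = 5) -> str:
    # Hash the example into one of `total_buckets` buckets; the first
    # `train_buckets` buckets form the train split (4 + 1 gives ~80/20).
    digest = hashlib.md5(example_id.encode()).hexdigest()
    bucket = int(digest, 16) % total_buckets
    return "train" if bucket < train_buckets else "eval"

splits = [assign_split(f"example-{i}") for i in range(1000)]
print(splits.count("train") / len(splits))  # roughly 0.8
```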
# +
output_config = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(
splits=[
# TODO: Your code to configure train data split
# TODO: Your code to configure eval data split
example_gen_pb2.SplitConfig.Split(name="train", hash_buckets=4),
example_gen_pb2.SplitConfig.Split(name="eval", hash_buckets=1),
]
)
)
example_gen = tfx.components.CsvExampleGen(
input_base=DATA_ROOT, output_config=output_config
).with_id("CsvExampleGen")
# -
context.run(example_gen)
# ### Examine the ingested data
examples_uri = example_gen.outputs["examples"].get()[-1].uri
tfrecord_filenames = [
os.path.join(examples_uri, "Split-train", name)
for name in os.listdir(os.path.join(examples_uri, "Split-train"))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(2):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField("bytes_list"):
value = feature.bytes_list.value
if feature.HasField("float_list"):
value = feature.float_list.value
if feature.HasField("int64_list"):
value = feature.int64_list.value
print(f"{name}: {value}")
print("******")
# +
# tf.data.TFRecordDataset?
# -
# ## Generating statistics using StatisticsGen
#
# The `StatisticsGen` component generates data statistics that can be used by other TFX components. StatisticsGen uses [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started). `StatisticsGen` generates statistics for each split in the `ExampleGen` component's output. In our case there are two splits: `train` and `eval`.
#
# <img src=../../images/StatisticsGen.png width="200">
# ### Configure and run the `StatisticsGen` component
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs["examples"]
).with_id("StatisticsGen")
context.run(statistics_gen)
# ### Visualize statistics
#
# The generated statistics can be visualized using the `tfdv.visualize_statistics()` function from the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library or using a utility method of the `InteractiveContext` object. In fact, most of the artifacts generated by the TFX components can be visualized using `InteractiveContext`.
context.show(statistics_gen.outputs["statistics"])
# ## Infering data schema using SchemaGen
#
# Some TFX components use a description of the input data called a schema. The schema is an instance of `schema.proto`. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. `SchemaGen` automatically generates the schema by inferring types, categories, and ranges from data statistics. The auto-generated schema is best-effort and only tries to infer basic properties of the data. It is expected that developers review and modify it as needed. `SchemaGen` uses [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).
#
# The `SchemaGen` component generates the schema using the statistics for the `train` split. The statistics for other splits are ignored.
#
# <img src=../../images/SchemaGen.png width="200">
# ### Configure and run the `SchemaGen` components
schema_gen = SchemaGen(
statistics=statistics_gen.outputs["statistics"], infer_feature_shape=False
).with_id("SchemaGen")
context.run(schema_gen)
# ### Visualize the inferred schema
context.show(schema_gen.outputs["schema"])
# ## Updating the auto-generated schema
#
# In most cases the auto-generated schemas must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. For example, you know that in the `covertype` dataset there are seven types of forest cover (coded using 1-7 range) and that the value of the `Slope` feature should be in the 0-90 range. You can manually add these constraints to the auto-generated schema by setting the feature domain.
# ### Load the auto-generated schema proto file
schema_proto_path = "{}/{}".format(
schema_gen.outputs["schema"].get()[0].uri, "schema.pbtxt"
)
print(schema_proto_path)
schema = tfdv.load_schema_text(schema_proto_path)
display(schema)
# ### Modify the schema
#
# You can use the protocol buffer APIs to modify the schema.
#
# **Hint**: Review the [TFDV library API documentation](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/set_domain) on setting a feature's domain. You can use the protocol buffer APIs to modify the schema. Review the [Tensorflow Metadata proto definition](https://github.com/tensorflow/metadata/blob/master/tensorflow_metadata/proto/v0/schema.proto#L405) for configuration options.
# +
# TODO: Your code to restrict the categorical feature Cover_Type between the values of 0 and 6.
# TODO: Your code to restrict the numeric feature Slope between 0 and 90.
tfdv.set_domain(
schema,
"Cover_Type",
schema_pb2.IntDomain(name="Cover_Type", min=0, max=6, is_categorical=True),
)
tfdv.set_domain(
schema, "Slope", schema_pb2.IntDomain(name="Slope", min=0, max=90)
)
tfdv.display_schema(schema=schema)
# -
# #### Save the updated schema
display(ARTIFACT_STORE)
# +
schema_dir = os.path.join(ARTIFACT_STORE, "schema")
tf.io.gfile.makedirs(schema_dir)
schema_file = os.path.join(schema_dir, "schema.pbtxt")
tfdv.write_schema_text(schema, schema_file)
print(schema_dir, schema_file)
# !cat {schema_file}
# -
# ## Importing the updated schema using Importer
#
# The `Importer` component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow.
#
# ### Configure and run the `Importer` component
schema_importer = Importer(
source_uri=schema_dir, artifact_type=tfx.types.standard_artifacts.Schema
).with_id("SchemaImporter")
context.run(schema_importer)
# ### Visualize the imported schema
context.show(schema_importer.outputs["result"])
# ## Validating data with ExampleValidator
#
# The `ExampleValidator` component identifies anomalies in data. It identifies anomalies by comparing data statistics computed by the `StatisticsGen` component against a schema generated by `SchemaGen` or imported by `Importer`.
#
# `ExampleValidator` can detect different classes of anomalies. For example it can:
#
# - perform validity checks by comparing data statistics against a schema
# - detect training-serving skew by comparing training and serving data.
# - detect data drift by looking at a series of data.
#
#
# The `ExampleValidator` component validates the data in the `eval` split only. Other splits are ignored.
#
# <img src=../../images/ExampleValidator.png width="350">
# ### Configure and run the `ExampleValidator` component
#
# +
# TODO: Complete ExampleValidator
# Hint: review the visual above and review the documentation on ExampleValidator's inputs and outputs:
# https://www.tensorflow.org/tfx/guide/exampleval
# Make sure you use the output of the schema_importer component created above.
example_validator = ExampleValidator(
statistics=statistics_gen.outputs["statistics"],
schema=schema_importer.outputs["result"],
).with_id("ExampleValidator")
# -
context.run(example_validator)
# ### Visualize validation results
#
# The file `anomalies.pbtxt` can be visualized using `context.show`.
context.show(example_validator.outputs["anomalies"])
# In our case no anomalies were detected in the `eval` split.
#
# For a detailed deep dive into data validation and schema generation refer to the `lab-31-tfdv-structured-data` lab.
# ## Preprocessing data with Transform
#
# The `Transform` component performs data transformation and feature engineering. The `Transform` component consumes `tf.Examples` emitted from the `ExampleGen` component and emits the transformed feature data and the `SavedModel` graph that was used to process the data. The emitted `SavedModel` can then be used by serving components to make sure that the same data pre-processing logic is applied at training and serving.
#
# The `Transform` component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.
#
# <img src=../../images/Transform.png width="400">
#
# ### Define the pre-processing module
#
# To configure `Transform`, you need to encapsulate your pre-processing code in a Python function named `preprocessing_fn` and save it to a Python module that is then provided to the `Transform` component as an input. This module will be loaded by `Transform`, and the `preprocessing_fn` function will be called when the `Transform` component runs.
#
# In most cases, your implementation of the `preprocessing_fn` makes extensive use of [TensorFlow Transform](https://www.tensorflow.org/tfx/guide/tft) for performing feature engineering on your dataset.
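# A typical `preprocessing_fn` leans on full-pass analyzers such as
# `tft.scale_to_z_score`, which standardizes a feature using statistics computed
# over the entire dataset. The arithmetic that op performs is ordinary
# standardization; a plain-Python sketch of the computation (not the TFT
# implementation, which runs as a Beam analyze pass over TensorFlow ops):

```python
def scale_to_z_score(values):
    # Standardize using the mean and (population) standard deviation
    # computed over the full dataset.
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

scaled = scale_to_z_score([2, 4, 4, 4, 5, 5, 7, 9])  # mean 5, std 2
```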
TRANSFORM_MODULE = "preprocessing.py"
# !cat {TRANSFORM_MODULE}
# ### Configure and run the `Transform` component.
transform = Transform(
examples=example_gen.outputs["examples"],
schema=schema_importer.outputs["result"],
module_file=TRANSFORM_MODULE,
).with_id("Transform")
context.run(transform)
# ### Examine the `Transform` component's outputs
#
# The Transform component has 2 outputs:
#
# - `transform_graph` - contains the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
# - `transformed_examples` - contains the preprocessed training and evaluation data.
#
# Take a peek at the `transform_graph` artifact: it points to a directory containing 3 subdirectories:
print(transform.outputs["transform_graph"].get()[0].uri)
print(transform.outputs["transformed_examples"].get()[0].uri)
os.listdir(transform.outputs["transform_graph"].get()[0].uri)
# And the `transformed_examples` artifact:
os.listdir(transform.outputs["transformed_examples"].get()[0].uri)
transform_uri = transform.outputs["transformed_examples"].get()[0].uri
tfrecord_filenames = [
os.path.join(transform_uri, "Split-train", name)
for name in os.listdir(os.path.join(transform_uri, "Split-train"))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(2):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField("bytes_list"):
value = feature.bytes_list.value
if feature.HasField("float_list"):
value = feature.float_list.value
if feature.HasField("int64_list"):
value = feature.int64_list.value
print(f"{name}: {value}")
print("******")
# ## Train your TensorFlow model with the `Trainer` component
#
# The `Trainer` component trains a model using TensorFlow.
#
# `Trainer` takes:
#
# - tf.Examples used for training and eval.
# - A user provided module file that defines the trainer logic.
# - A data schema created by `SchemaGen` or imported by `Importer`.
# - A proto definition of train args and eval args.
# - An optional transform graph produced by upstream Transform component.
# - Optional base models used for scenarios such as warm-starting training.
#
# <img src=../../images/Trainer.png width="400">
#
#
# ### Define the trainer module
#
# To configure `Trainer`, you need to encapsulate your training code in a Python module that is then provided to the `Trainer` as an input.
#
TRAINER_MODULE_FILE = "model.py"
# !cat {TRAINER_MODULE_FILE}
# ### Create and run the Trainer component
#
# Note that the `Trainer` component supports passing the field `num_steps` through the `train_args` and `eval_args` arguments.
trainer = Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(
trainer_executor.GenericExecutor
),
module_file=TRAINER_MODULE_FILE,
transformed_examples=transform.outputs["transformed_examples"],
schema=schema_importer.outputs["result"],
transform_graph=transform.outputs["transform_graph"],
train_args=trainer_pb2.TrainArgs(splits=["train"], num_steps=2),
eval_args=trainer_pb2.EvalArgs(splits=["eval"], num_steps=1),
).with_id("Trainer")
context.run(trainer)
# ### Analyzing training runs with TensorBoard
#
# In this step you will analyze the training run with [TensorBoard.dev](https://blog.tensorflow.org/2019/12/introducing-tensorboarddev-new-way-to.html). `TensorBoard.dev` is a managed service that enables you to easily host, track and share your ML experiments.
# #### Retrieve the location of TensorBoard logs
#
# Each model run's train and eval metric logs are written to the `model_run` directory by the Tensorboard callback defined in `model.py`.
logs_path = trainer.outputs["model_run"].get()[0].uri
print(logs_path)
# #### Upload the logs and start TensorBoard.dev
#
# 1. Open a new JupyterLab terminal window
#
# 2. From the terminal window, execute the following command
# ```
# tensorboard dev upload --logdir [YOUR_LOGDIR]
# ```
#
# Where `[YOUR_LOGDIR]` is the URI retrieved by the previous cell.
#
# You will be asked to authorize `TensorBoard.dev` using your Google account. If you don't have a Google account or you don't want to authorize `TensorBoard.dev` you can skip this exercise.
#
# After the authorization process completes, follow the link provided to view your experiment.
# ## Evaluating trained models with Evaluator
# The `Evaluator` component analyzes model performance using the [TensorFlow Model Analysis library](https://www.tensorflow.org/tfx/model_analysis/get_started). It runs inference requests on particular subsets of the test dataset, based on which slices are defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain.
#
# The `Evaluator` can also optionally validate a newly trained model against a previous model. In this lab, you only train one model, so the `Evaluator` will automatically label it as "blessed".
#
#
# <img src=../../images/Evaluator.png width="400">
# ### Configure and run the Evaluator component
#
# Use the `Resolver` to pick the previous model to compare against. The model resolver is only required if performing model validation in addition to evaluation. In this case we validate against the latest blessed model. If no model has been blessed before (as in this case), the evaluator will make our candidate the first blessed model.
model_resolver = Resolver(
strategy_class=LatestBlessedModelStrategy,
model=Channel(type=tfx.types.standard_artifacts.Model),
model_blessing=Channel(type=tfx.types.standard_artifacts.ModelBlessing),
).with_id("LatestBlessedModelResolver")
context.run(model_resolver)
# Configure evaluation metrics and slices.
# +
# TODO: Your code here to create a tfma.MetricThreshold.
# Review the API documentation here: https://www.tensorflow.org/tfx/model_analysis/api_docs/python/tfma/MetricThreshold
# Hint: Review the API documentation for tfma.GenericValueThreshold to constrain accuracy between 50% and 99%.
accuracy_threshold = tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
        lower_bound={"value": 0.5}, upper_bound={"value": 0.99}
)
)
metrics_specs = tfma.MetricsSpec(
metrics=[
tfma.MetricConfig(
class_name="SparseCategoricalAccuracy", threshold=accuracy_threshold
),
tfma.MetricConfig(class_name="ExampleCount"),
]
)
eval_config = tfma.EvalConfig(
model_specs=[tfma.ModelSpec(label_key="Cover_Type")],
metrics_specs=[metrics_specs],
slicing_specs=[
tfma.SlicingSpec(),
tfma.SlicingSpec(feature_keys=["Wilderness_Area"]),
],
)
eval_config
# -
model_analyzer = Evaluator(
examples=example_gen.outputs["examples"],
model=trainer.outputs["model"],
baseline_model=model_resolver.outputs["model"],
eval_config=eval_config,
).with_id("ModelEvaluator")
context.run(model_analyzer, enable_cache=False)
# ### Check the model performance validation status
model_blessing_uri = model_analyzer.outputs["blessing"].get()[0].uri
# !ls -l {model_blessing_uri}
# ### Visualize evaluation results
# You can visualize the evaluation results using the `tfma.view.render_slicing_metrics()` function from TensorFlow Model Analysis library.
#
# **Setup Note:** *Currently, TFMA visualizations don't render in JupyterLab. Make sure that you run this notebook in Classic Notebook.*
evaluation_uri = model_analyzer.outputs["evaluation"].get()[0].uri
evaluation_uri
# !ls {evaluation_uri}
eval_result = tfma.load_eval_result(evaluation_uri)
eval_result
tfma.view.render_slicing_metrics(eval_result)
tfma.view.render_slicing_metrics(eval_result, slicing_column="Wilderness_Area")
# ## InfraValidator
#
# The `InfraValidator` component acts as an additional early warning layer by validating a candidate model in a sandbox version of its serving infrastructure to prevent an unservable model from being pushed to production. Compared to the `Evaluator` component above which validates a model's performance, the `InfraValidator` component is validating that a model is able to generate predictions from served examples in an environment configured to match production. The config below takes a model and examples, launches the model in a sand-boxed [TensorflowServing](https://www.tensorflow.org/tfx/guide/serving) model server from the latest image in a local docker engine, and optionally checks that the model binary can be loaded and queried before "blessing" it for production.
#
# <img src=../../images/InfraValidator.png width="400">
infra_validator = InfraValidator(
model=trainer.outputs["model"],
examples=example_gen.outputs["examples"],
serving_spec=infra_validator_pb2.ServingSpec(
tensorflow_serving=infra_validator_pb2.TensorFlowServing(
tags=["latest"]
),
local_docker=infra_validator_pb2.LocalDockerConfig(),
),
validation_spec=infra_validator_pb2.ValidationSpec(
max_loading_time_seconds=60,
num_tries=5,
),
request_spec=infra_validator_pb2.RequestSpec(
tensorflow_serving=infra_validator_pb2.TensorFlowServingRequestSpec(),
num_examples=5,
),
).with_id("ModelInfraValidator")
context.run(infra_validator, enable_cache=False)
# ### Check the model infrastructure validation status
infra_blessing_uri = infra_validator.outputs["blessing"].get()[0].uri
# !ls -l {infra_blessing_uri}
# ## Deploying models with Pusher
#
# The `Pusher` component checks whether a model has been "blessed", and if so, deploys it by pushing the model to a well known file destination.
#
# <img src=../../images/Pusher.png width="400">
#
#
# ### Configure and run the `Pusher` component
trainer.outputs["model"]
pusher = Pusher(
model=trainer.outputs["model"],
model_blessing=model_analyzer.outputs["blessing"],
infra_blessing=infra_validator.outputs["blessing"],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=SERVING_MODEL_DIR
)
),
).with_id("ModelPusher")
context.run(pusher)
# ### Examine the output of `Pusher`
pusher.outputs
latest_pushed_model = os.path.join(
SERVING_MODEL_DIR, max(os.listdir(SERVING_MODEL_DIR))
)
# !ls $latest_pushed_model
# ## Next steps
#
# This concludes your introductory walkthrough of TFX pipeline components. In this lab, you used TFX to analyze, understand, and pre-process the dataset and to train, analyze, validate, and deploy a multi-class classification model that predicts the type of forest cover from cartographic features. You utilized a TFX Interactive Context for prototype development of a TFX pipeline directly in a Jupyter notebook. Next, you worked with the TFDV library to modify your dataset schema, adding feature constraints to catch data anomalies that can negatively impact your model's performance. You utilized the TFT library for feature preprocessing to get consistent feature transformations for your model at training and serving time. Lastly, using the TFMA library, you added model performance constraints to ensure you only push models that are more accurate than previous runs to production.
# ## License
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
notebooks/tfx_pipelines/walkthrough/labs/tfx_walkthrough_vertex.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# Load the dataset
df = pd.read_csv('2-fft-malicious-n-0-3-m-9.csv')
df
df.dtypes
df.shape
df.describe()
sns.distplot(df['router'], kde = False, bins=30, color='blue')
sns.distplot(df['src_router'], kde = False, bins=30, color='blue')
sns.distplot(df['dst_router'], kde = False, bins=30, color='red')
sns.distplot(df['inport'], kde = False, bins=30, color='green')
sns.distplot(df['outport'], kde = False, bins=30, color='green')
sns.distplot(df['packet_type'], kde = False, bins=30, color='red')
direction = {'Local': 0,'North': 1, 'East': 2, 'South':3,'West':4}
df = df.replace({'inport': direction, 'outport': direction})
data = {'GETS': 1,'GETX': 2,'GUX': 3,'DATA': 4, 'PUTX': 5,'PUTS': 6,'WB_ACK':7}
df = df.replace({'packet_type': data})
df['flit_id'] = df['flit_id']+1
df['flit_type'] = df['flit_type']+1
df['vnet'] = df['vnet']+1
df['vc'] = df['vc']+1
df.dtypes
hoparr = {"0to0":0,"0to1":1,"0to2":2,"0to3":3,"0to4":1,"0to5":2,"0to6":3,"0to7":4,"0to8":2,"0to9":3,"0to10":4,"0to11":5,"0to12":3,"0to13":4,"0to14":5,"0to15":6,
"1to1":0,"1to2":1,"1to3":2,"1to4":2,"1to5":1,"1to6":2,"1to7":3,"1to8":3,"1to9":2,"1to10":3,"1to11":4,"1to12":5,"1to13":3,"1to14":4,"1to15":5,
"2to2":0,"2to3":1,"2to4":3,"2to5":2,"2to6":1,"2to7":2,"2to8":4,"2to9":3,"2to10":2,"2to11":3,"2to12":5,"2to13":4,"2to14":3,"2to15":4,
"3to3":0,"3to4":4,"3to5":3,"3to6":2,"3to7":1,"3to8":5,"3to9":4,"3to10":3,"3to11":2,"3to12":6,"3to13":5,"3to14":4,"3to15":3,
"4to4":0,"4to5":1,"4to6":2,"4to7":3,"4to8":1,"4to9":2,"4to10":3,"4to11":4,"4to12":2,"4to13":3,"4to14":4,"4to15":5,
"5to5":0,"5to6":1,"5to7":2,"5to8":2,"5to9":1,"5to10":2,"5to11":3,"5to12":3,"5to13":2,"5to14":3,"5to15":4,
"6to6":0,"6to7":1,"6to8":3,"6to9":2,"6to10":1,"6to11":2,"6to12":4,"6to13":3,"6to14":2,"6to15":3,
"7to7":0,"7to8":4,"7to9":3,"7to10":2,"7to11":1,"7to12":5,"7to13":4,"7to14":3,"7to15":2,
"8to8":0,"8to9":1,"8to10":2,"8to11":3,"8to12":1,"8to13":2,"8to14":3,"8to15":4,
"9to9":0,"9to10":1,"9to11":2,"9to12":2,"9to13":1,"9to14":2,"9to15":3,
"10to10":0,"10to11":1,"10to12":3,"10to13":2,"10to14":1,"10to15":2,
"11to11":0,"11to12":4,"11to13":3,"11to14":2,"11to15":1,
"12to12":0,"12to13":1,"12to14":2,"12to15":3,
"13to13":0,"13to14":1,"13to15":2,
"14to14":0,"14to15":1,
"15to15":0}
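# The hand-written table above encodes hop (Manhattan) distances between router
# pairs, keyed as "<low>to<high>". Assuming the 16 routers are laid out
# row-major on a 4x4 mesh, the same table can be generated programmatically — a
# sketch that makes the structure of the table explicit:

```python
def mesh_hops(a, b, width=4):
    # Manhattan distance between routers a and b on a row-major mesh.
    ax, ay = a % width, a // width
    bx, by = b % width, b // width
    return abs(ax - bx) + abs(ay - by)

# Same "<low>to<high>" keys as the hand-written dictionary above.
generated_hoparr = {
    "%dto%d" % (a, b): mesh_hops(a, b)
    for a in range(16) for b in range(a, 16)
}
```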
packarr = {}
packtime = {}
packchunk = []
hopcurrentarr = []
hoptotarr = []
hoppercentarr =[]
waitingarr = []
interval = 500
count = 0
for index, row in df.iterrows():
    # Waiting time: how long the flit has been queued since it was enqueued.
    current_time = row["time"]
    enqueue_time = row["enq_time"]
    waiting_time = current_time - enqueue_time
    waitingarr.append(waiting_time)
current_router = row["router"]
src_router = row["src_router"]
dst_router = row["dst_router"]
    # hoparr stores each router pair only once with the lower index first,
    # so order the endpoints before building the lookup keys.
    src_router_temp = src_router
    if src_router_temp>dst_router:
        temph = src_router_temp
        src_router_temp = dst_router
        dst_router = temph
    hop_count_string = str(src_router_temp)+"to"+str(dst_router)
    src_router_temp = src_router
    hop_count = hoparr.get(hop_count_string)
    # Same normalization for the (source, current) pair.
    if src_router_temp>current_router:
        tempc = src_router_temp
        src_router_temp = current_router
        current_router = tempc
    current_hop_string = str(src_router_temp)+"to"+str(current_router)
    current_hop = hoparr.get(current_hop_string)
    # Fraction of the route already traversed (guard against 0-hop routes).
    if(current_hop == 0 and hop_count ==0):
        hop_percent = 0
    else:
        hop_percent = current_hop/hop_count
hoptotarr.append(hop_count)
hopcurrentarr.append(current_hop)
hoppercentarr.append(hop_percent)
    # Assign each packet a "traversal id": the first time a packet address is
    # seen it gets a fresh id; if the same address reappears after more than
    # `interval` cycles it is treated as a new traversal and re-keyed.
    if row["packet_address"] not in packarr:
        packarr[row["packet_address"]] = count
        packtime[row["packet_address"]] = row["time"]
        packchunk.append(packarr.get(row["packet_address"]))
        count+=1
    else:
        current_time = row["time"]
        position = packarr.get(row["packet_address"])
        pkt_time = packtime.get(row["packet_address"])
        current_max = max(packarr.values())
        if (current_time-pkt_time)<interval:
            # Same traversal: reuse the existing id.
            packchunk.append(packarr.get(row["packet_address"]))
        else:
            # Stale entry: re-register the address under the next free id.
            del packarr[row["packet_address"]]
            del packtime[row["packet_address"]]
            packarr[row["packet_address"]] = current_max+1
            packtime[row["packet_address"]] = row["time"]
            packchunk.append(packarr.get(row["packet_address"]))
            # Keep `count` ahead of the largest id handed out so far.
            if (current_max)==count:
                count+=2
            elif (current_max+1)==count:
                count+=1
df['packet_address'].nunique()
print(len(packarr))
print(len(packchunk))
df = df.assign(traversal_id=packchunk)
df = df.assign(hop_count=hoptotarr)
df = df.assign(current_hop=hopcurrentarr)
df = df.assign(hop_percentage=hoppercentarr)
df = df.assign(enqueue_time=waitingarr)
df.rename(columns={'packet_type': 'cache_coherence_type', 'time': 'timestamp'}, inplace=True)
df = df.drop(columns=['packet_address','enq_time'])
df.isnull().sum()
df.dtypes
df.head(10)
df.to_csv('2-fft-malicious-n-0-3-m-9-data.csv',index=False)
[03 - Results]/dos results ver 1/dataset fetch/2-fft-malicious-n-0-3-m-9.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stepfinder, find steps in data with low SNR
# + active=""
# Stepfinder, find steps in data with low SNR
# Copyright 2016,2017,2018,2019 <NAME>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -
# ## Import stepfinder and other packages ...
# +
# Import necessary modules and functions
import matplotlib
matplotlib.use('module://ipympl.backend_nbagg')
import numpy as np
import os
from matplotlib import pyplot as plt
# Import stepfinder software
import sys
sys.path.append('..')
import stepfinder as sf
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
# -
# ## Show stepfinder usage information ...
# +
# sf.filter_find_analyse_steps?
# -
# ## Either, simulate data with steps ...
# +
# Set parameters to simulate the steps
duration = 1.0 # s
resolution = 40000.0 # Hz
dwell_time = 0.050 # s
SNR = 0.5
simulated_steps = sf.simulate_steps(duration=duration, resolution=resolution,
dwell_time=dwell_time, SNR=SNR)
data = simulated_steps.data + simulated_steps.noise
# Set parameters for filtering the data and finding steps
filter_min_t = 0.001 # None or s
filter_max_t = 0.020 # None or s
expected_min_step_size = 8.0 # in values of data
# -
# ## Or, read in measured data ...
# +
# Set the file to be loaded and its resolution
filename = os.path.join('.', 'data.txt')
resolution = 1000 # in Hz
# Load the data
data = np.loadtxt(filename, skiprows=1)[:,1]
print('Loaded file: {}\n Duration: {:.3f} s\n Datapoints: {}'.format(filename, len(data) / resolution, len(data)))
# Set parameters for filtering the data and finding steps
filter_min_t = 0.005 # None or s
filter_max_t = 0.050 # None or s
expected_min_step_size = 2000.0 # in values of data
# -
# ## Filter the data, find the steps, and plot the result ...
# +
# Set additional parameters for filtering the data
filter_time = None # None or s
filter_number = 40 # None or number
edginess = 1 # float
# Set additional parameters for finding the steps
expected_min_dwell_t = None # None or s
step_size_threshold = None # None (equal to 'adapt'), 'constant', or in values of data
step_finder_result, fig1 \
= sf.filter_find_analyse_steps(data, resolution, filter_time, filter_min_t, filter_max_t,
filter_number, edginess,
expected_min_step_size, expected_min_dwell_t,
step_size_threshold, pad_data=True,
verbose=True, plot=True)
# Plot the data and step finder result
fig2, fig3 = sf.plot_result(step_finder_result)#, simulated_steps)
fig1.show()
fig2.show()
fig3.show()
notebooks/stepfinder.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Labeled Faces in the Wild
#
# http://vis-www.cs.umass.edu/lfw/
#
# ## Purpose of using this database:
#
# Rather than a database of images captured under controlled conditions, this is a database used for face detection and face matching on image sets captured in a wide variety of environments. In recent years it has increasingly been used to evaluate image recognition techniques.
#
# ## Notes on evaluation:
#
# - East Asian faces are under-represented.
# - The face images are already normalized; the eye positions are already aligned.
# - Roll should be evaluated separately by rotating the images and measuring the detection rate.
# - Since these are images of celebrities, the same people may already have been used as training subjects for detection and related tasks.
# - Since the photos appear to come from press coverage, a high proportion of them are well focused and well exposed.
# This notebook evaluates how robust face detection is to in-plane rotation.
#
# Some databases already normalize the eye positions, so detection performance in real environments can only be assessed after evaluating data with in-plane rotation added.
#
# This script therefore creates images with in-plane rotation added to the data and evaluates the detection rate.
#
# %matplotlib inline
import pandas as pd
import glob
dataset = "lfw"
names = glob.glob("lfw/lfw/*/*.jpg")
names.sort()
scales = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
# +
# names = names[:10]
# -
import HaarFrontal as faceDetector
for scale in scales:
faceDetector.processDatabase(dataset, names, scale=scale)
# # Analyzing the data after detection
dfs={}
deg = 0
for scale in scales:
dfs[scale] = pd.read_csv("log_%s_%d_%f.csv" % (dataset, deg, scale))
print scale, dfs[scale]["truePositives"].mean()
rates = [dfs[scale]["truePositives"].mean() for scale in scales]
falseRates = [dfs[scale]["falsePositives"].mean() for scale in scales]
data = {"scales":scales, "rates":rates, "falseRates":falseRates}
df = pd.DataFrame(data, columns=["scales", "rates", "falseRates"])
df.plot(x="scales", y="rates", grid=True)
# The figure above shows how the detection rate changes with image scale.
df.plot(x="scales", y="falseRates", grid=True)
# # TODO: handle means over values that contain NaN
# import numpy as np
# rates = [dfs[scale]["truePositives"].mean() for scale in scales]
# falseRates = [dfs[scale]["falsePositives"].mean() for scale in scales]
# scale =1.0
# dfs[scale]["meanSize"]
#
#
# # TODO: handle means over values that contain NaN (np.nanmean skips NaNs)
# np.nanmean(dfs[scale]["meanSize"])
# meanSize= [np.nanmean(dfs[scale]["meanSize"]) for scale in scales]
# data = {"scale":scales, "rates":rates, "falseRates":falseRates, "meanSize":meanSize}
# df = pd.DataFrame(data, columns=["scale", "rates", "falseRate", "meanSize"])
HaarProfile/haarCascade_lfw_size.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from awesome_panel_extensions.awesome_panel.notebook import Header
Header(notebook="WebComponent.ipynb", folder="examples/reference/panes")
# # WebComponent - Reference Guide
#
# You can think of the `WebComponent` as a `HTML` pane that supports bidirectional communication and large data transfer.
#
# You can use the `WebComponent` to quickly **plug in web component or JavaScript libraries**.
#
# So if you are not satisfied with the look and feel of the existing Panel widgets, use the `WebComponent` to plug in your favourite set of widgets. Or if the `DataFrame` pane or widget is not enough for your use case, plug in an alternative data table.
#
# For an introduction to *web components* see [Web Components: the secret ingredient helping Power the web](https://www.youtube.com/watch?v=YBwgkr_Sbx0).
#
# <a href="https://www.youtube.com/watch?v=YBwgkr_Sbx0" target="blank_"><img src="https://i.ytimg.com/vi/YBwgkr_Sbx0/hqdefault.jpg"></img></a>
#
# Please note that picking and using a web component library can be challenging as the **web component tools, frameworks and standard is rapidly evolving**. The newest and best implemented web components will be easiest to use.
#
# The `WebComponent` is also on the **roadmap for Panel**, so we need your help to identify bugs and improvements or suggestions for improving the API. You can contribute your comments and suggestions via [Github PR 1252](https://github.com/holoviz/panel/pull/1252).
#
# Parameters
# ----------
#
# You can use the `WebComponent` by instantiating an instance or inheriting from it.
#
# - `html`: The web component html tag.
# - For example `<mwc-button></mwc-button>`. But can be more complex.
# - `attributes_to_watch`: A dictionary of (`html_attribute`, `python_parameter`) names
# - The value of `python_parameter` will be used to set the `html_attribute` on construction.
# - The value of `python_parameter` and `html_attribute` will be kept in sync (*two way binding*).
# - The value of `html_attribute` may not be None
# - The value of `python_parameter` can be None.
# - `properties_to_watch`: A dictionary of (`js_property`, `python_parameter`) names
# - The value of `python_parameter` will be used to set the `js_property` on construction.
# - The value of `python_parameter` and `js_property` will be kept in sync (*two way binding*).
# - The value of `js_property` may not be None
# - The value of `python_parameter` can be None.
# - You can specify a nested `js_property` like `textInput.value` as key.
# - `parameters_to_watch`: Can be used to make the `html` parameter value dynamic. The list of `parameters_to_watch` will be watched and when changed the `html` will be updated. You need to implement the `_get_html_from_parameters_to_watch` to return the updated `html` value.
# - `events_to_watch`: A dictionary of (`js_event`, `python_parameter_to_inc`) names.
#     - The `js_event` will be watched on the JavaScript side. When fired, the JavaScript code will
#         - increment the `python_parameter_to_inc` by `+1` if specified, and
#         - check whether any `js_property` key in `properties_to_watch` has changed and, if so, sync it to the associated `python_parameter` value.
# - The `column_data_source` can be used to efficiently transfer columnar data to the javascript side.
# - The `column_data_source_orient` is used to specify how the data should be input to the `column_data_source_load_function` below.
# - For now `dict` and `records` are supported.
# - See the `orient` parameter of the [pandas.to_dict](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html) function for more info.
# - The `column_data_source_load_function` specifies the name of the js function or property that will load the data.
# - If the JavaScript WebComponent contains an `after_layout` function this is used to
# resize the JS WebComponent. See the `ECharts` web component in the Gallery for an example.
#
# You will probably have to experiment a bit in order to determine which JavaScript files to import and which combination of attributes, properties, events and/or parameters to watch.
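# The `parameters_to_watch` mechanism ultimately re-renders the `html` string
# from the watched parameter values via your `_get_html_from_parameters_to_watch`
# override. Stripped of the framework, that hook is just string templating over
# the watched parameters — a framework-free sketch of the idea (the function and
# variable names here are illustrative, not part of the API):

```python
from string import Template

def render_html(template, **params):
    # Re-render an html template from the current watched parameter values,
    # the way _get_html_from_parameters_to_watch typically does.
    return Template(template).safe_substitute(**params)

html = render_html("<mwc-button label='$count'></mwc-button>", count=3)
```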
# ___
# +
import param
import pandas as pd
import panel as pn
pn.extension()
from awesome_panel_extensions.web_component import WebComponent
# -
# In order for WebComponents to function we need to load the corresponding Javascript modules. Below we load the [MWC button](https://github.com/material-components/material-components-web-components/tree/master/packages/button) and [MWC slider](https://github.com/material-components/material-components-web-components/tree/master/packages/slider) components along with Material Icons CSS. We also load the [lit-datatable](https://github.com/DoubleTrade/lit-datatable) and [spinjs](https://spin.js.org/) libraries.
#
# Note that the webcomponent-loader.js is only required for older browsers which do not yet support Webcomponents natively.
# +
js_urls = {
'webcomponent_loader': 'https://cdnjs.cloudflare.com/ajax/libs/webcomponentsjs/2.4.3/webcomponents-loader.js',
"spinjs": "https://www.unpkg.com/spin@0.0.1/spin.js",
}
js_module_urls = {
'button': 'https://www.unpkg.com/@material/mwc-button?module',
'slider': 'https://www.unpkg.com/@material/mwc-slider?module',
'lit_table': 'https://unpkg.com/@doubletrade/lit-datatable@0.3.5/lit-datatable.js?module',
}
css_urls = [
'https://fonts.googleapis.com/css?family=Roboto:300,400,500',
'https://fonts.googleapis.com/css?family=Material+Icons&display=block'
]
css = """
:root {
--mdc-theme-primary: green;
  --mdc-theme-secondary: purple;
}
"""
pn.extension(
js_files=js_urls,
raw_css=[css]
)
# For now Panel does not support loading .js modules and .css style sheets like the ones we need
# Instead we just include them via an "invisible" Pane
js_module_urls_str = "".join([f"<script type='module' src='{value}'></script>" for value in js_module_urls.values()])
css_urls_str = "".join([f"<link href='{value}' rel='stylesheet'>" for value in css_urls])
extension_pane = pn.pane.HTML(js_module_urls_str+css_urls_str, width=0, height=0, sizing_mode="fixed", margin=0)
extension_pane
# -
# ### `attributes_to_watch`
# Let's create a `MWCButton`.
# +
MWC_ICONS = [None, "accessibility", "code", "favorite"] # For more icons see https://material.io/resources/icons/?style=baseline
class MWCButton(WebComponent):
    html = param.String("<mwc-button></mwc-button>")
attributes_to_watch = param.Dict({"label": "name", "icon": "icon", "raised":"raised"})
raised=param.Boolean(default=True)
icon=param.ObjectSelector(default="favorite", objects=MWC_ICONS)
height = param.Integer(default=30)
mwc_button = MWCButton(name="Panel")
mwc_button
# -
# Try changing some of the parameters below.
pn.Param(
mwc_button, parameters=["name", "icon", "raised", "height"]
)
# For an example that use the `attributes_to_watch` bi-directionally take a look at the `perspective-viewer` example in the Gallery.
# ### `properties_to_watch`
#
# Let's create a `SliderBase`.
# +
mwc_slider_html = """
<mwc-slider
step="5"
pin
markers
max="50"
value="10">
</mwc-slider>
"""
class SliderBase(WebComponent):
html = param.String(mwc_slider_html)
properties_to_watch = param.Dict({"value": "value"})
value = param.Integer(default=10, bounds=(0,50), step=5)
height= param.Integer(default=50)
mwc_slider = SliderBase(margin=(20,10,0,10))
mwc_slider
# -
# Try changing some of the parameters below.
pn.Param(mwc_slider, parameters=["value"])
# ### `events_to_watch`
#
# Let's add a `clicks` count to the `MWCButton`.
# +
MWC_ICONS = [None, "accessibility", "code", "favorite"] # For more icons see https://material.io/resources/icons/?style=baseline
class MWCButton(WebComponent):
    html = param.String("<mwc-button></mwc-button>")
attributes_to_watch = param.Dict({"label": "name", "icon": "icon", "raised":"raised"})
raised=param.Boolean(default=True)
icon=param.ObjectSelector(default="favorite", objects=MWC_ICONS, allow_None=True)
height = param.Integer(default=30)
# NEW IN THIS EXAMPLE
events_to_watch = param.Dict({"click": "clicks"})
clicks = param.Integer()
mwc_button = MWCButton(name="Panel")
mwc_button
# -
# Try changing some of the parameters below.
pn.Param(
mwc_button, parameters=["name", "icon", "raised", "height", "clicks"]
)
# ## Lit-DataTable Example
# In this example we will use the [lit-datatable](https://github.com/DoubleTrade/lit-datatable) which is a material design implementation of a data table.
# ### `properties_to_watch`
# Above we used the `html` attributes `data` and `conf`, so we could build a version of `LitDataTable` based on `attributes_to_watch`.
#
# But the `data` and `conf` `html` attributes also correspond to `data` and `conf` properties on the `js` object. Note, though, that such a 1-1 correspondence between attributes and properties does not always exist for web components.
#
# Let's build the `LitDataTable` based on `properties_to_watch` to illustrate this.
class LitDataTable1(WebComponent):
html = param.String("<lit-datatable></lit-datatable>")
properties_to_watch = param.Dict({"data": "data", "conf": "conf"})
data = param.List()
conf = param.List()
# +
data = [
{ "fruit": "apple", "color": "green", "weight": "100gr" },
{ "fruit": "banana", "color": "yellow", "weight": "140gr" }
]
conf = [
{ "property": "fruit", "header": "Fruit", "hidden": False },
{ "property": "color", "header": "Color", "hidden": False },
{ "property": "weight", "header": "Weight", "hidden": False }
]
lit_data_table1 = LitDataTable1(conf=conf, data=data, height=150)
lit_data_table1
# -
# ### `column_data_source`
# If we wanted to transfer a `DataFrame` and/or large amounts of data to the `lit-datatable`, we would create a version of `LitDataTable` using `column_data_source`. Let's do that.
class LitDataTable(WebComponent):
html = param.String("<lit-datatable></lit-datatable>")
properties_to_watch = param.Dict({"conf": "conf"})
conf = param.List()
column_data_source_orient = param.String("records")
column_data_source_load_function = param.String("data")
# +
import pandas as pd
from bokeh.models import ColumnDataSource
dataframe = pd.DataFrame(data)
column_data_source = ColumnDataSource(dataframe)
lit_data_table = LitDataTable(conf=conf, column_data_source=column_data_source, height=150)
lit_data_table
# -
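# As an aside: the `records` orient configured above matches what pandas itself produces with `DataFrame.to_dict(orient="records")` — a list with one dict per row, which is the shape `lit-datatable` expects for its `data` property. A quick, framework-free sketch with toy data:

```python
import pandas as pd

# toy data, mirroring the fruit table used above
df = pd.DataFrame([
    {"fruit": "apple", "color": "green", "weight": "100gr"},
    {"fruit": "banana", "color": "yellow", "weight": "140gr"},
])

records = df.to_dict(orient="records")  # list of one dict per row
print(records[0])  # {'fruit': 'apple', 'color': 'green', 'weight': '100gr'}
```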
# Let's replace the data and see that the table updates
# +
new_data = [
{ "fruit": "appleX", "color": "green", "weight": "100gr" },
{ "fruit": "banana", "color": "yellowY", "weight": "140gr" },
{ "fruit": "pineapple", "color": "yellow", "weight": "1000grZ" },
]
new_conf = [
{ "property": "fruit", "header": "Fruit (name)", "hidden": False },
{ "property": "color", "header": "Color", "hidden": False },
{ "property": "weight", "header": "Weight (g)", "hidden": False }
]
new_dataframe = pd.DataFrame(new_data)
lit_data_table.column_data_source = ColumnDataSource(new_dataframe)
lit_data_table.height = 200
lit_data_table.conf = new_conf
# -
# ## You can use .js libraries as well!
#
# Think of the `WebComponent` simply as an `HTML` pane that supports bidirectional communication.
#
# This means you can use the `WebComponent` with most .js libraries as well.
#
# Let's try it with [spin.js](https://spin.js.org/)
# +
spinjs_html = """
<div class="spinnerContainer" style="width: 100px; height:100px;"></div>
<script type="text/javascript">
target = document.currentScript.parentElement.children[0];
Object.defineProperty(target, 'lines', {
get() {
return this.spinner.lines;
},
set(value) {
opts = {lines: value};
this.innerHTML="";
target.spinner=new Spinner(opts).spin(this);
}
});
target.lines=13
</script>
"""
class Spinjs(WebComponent):
html = param.String(spinjs_html)
properties_to_watch = param.Dict({"lines": "lines"})
lines = param.Integer(default=13, bounds=(1,20))
height= param.Integer(default=100)
spinjs = Spinjs(width=100, height=100)
pn.Column(
spinjs,
spinjs.param.lines,
sizing_mode="stretch_width",
)
# Note: since the Spinner object does not support setting a new lines value, for example via target.spinner.lines=value,
# I had to implement the lines property on the target. It's a useful trick to learn.
# Note the implementation of WebComponent sets up bidirectional communication with the first element in the html string.
# I.e. the `div` element and not the second `script` element.
# Note I'm using https://www.unpkg.com/spin@0.0.1/spin.js as I could not get it working with
# https://spin.js.org/spin.js
# -
# ## Web Component Libraries
#
# You can find web components at
#
# - [Awesome Lit Element components](https://github.com/web-padawan/awesome-lit-html#components)
# - [webcomponents.org](https://www.webcomponents.org/)
# - [npm](https://www.npmjs.com/)
#
# Some example web component libraries are
#
# - [Amber](https://amber.bitrock.it/components/overview/)
# - [Material MWC](https://github.com/material-components/material-components-web-components#readme), [Demo](https://mwc-demos.glitch.me/demos/)
# - [Microsoft Graph Toolkit](https://github.com/microsoftgraph/microsoft-graph-toolkit)
# - [SAP UI5](https://github.com/SAP/ui5-webcomponents), [Demo](https://sap.github.io/ui5-webcomponents/playground)
# - [Smart Elements](https://www.webcomponents.org/element/@smarthtmlelements/smart-bootstrap)
#
# Some example .js libraries are
#
# - [material-ui](https://material-ui.com/)
# - [vuetifyjs](https://vuetifyjs.com/en/)
# - [base web](https://baseweb.design/)
# - [awesome-grid](https://github.com/FancyGrid/awesome-grid)
# ### Tips & Tricks
#
# #### Prebuilt libraries from unpkg.com
#
# If you find a web component library on [npm](https://npmjs.com) you can find the corresponding precompiled library on [unpkg](https://unpkg.com) for use with Panel.
#
# For example, if you find the `wired-button` at [https://www.npmjs.com/package/wired-button](https://www.npmjs.com/package/wired-button) then you can browse the precompiled files at [https://www.unpkg.com/browse/wired-button/](https://www.unpkg.com/browse/wired-button/) to locate the relevant precompiled file at [https://www.unpkg.com/wired-button@2.0.0/lib/wired-button.js](https://www.unpkg.com/wired-button@2.0.0/lib/wired-button.js).
#
# ### Resources
#
# - [Build an app with WebComponents in 9 minutes](https://www.youtube.com/watch?v=mTNdTcwK3MM)
# - [How to use Web Components in a JavaScript project](https://www.youtube.com/watch?v=88Sa-SlHRxk&t=63s)
#
# ## Share
#
# If you think the `WebComponent` is awesome please share it on <a href="https://twitter.com/intent/tweet?url=https%3A%2F%2Fnbviewer.jupyter.org%2Fgithub%2FMarcSkovMadsen%2Fawesome-panel-extensions%2Fblob%2Fmaster%2Fexamples%2Freference%2Fpanes%2FWebComponent.ipynb&text=Checkout%20the%20awesome%20WebComponent%20extension%20for%20%40Panel_org.%0A%0APanel%20is%20a%20framework%20for%20creating%20powerful%2C%20reactive%20analytics%20apps%20in%20Python%20using%20the%20tools%20you%20know%20and%20love.%20%F0%9F%92%AA%F0%9F%90%8D%E2%9D%A4%EF%B8%8F%0A%0A">Twitter</a>.
examples/reference/panes/WebComponent.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: weldx
# language: python
# name: weldx
# ---
# # Welding Experiment Design
# + nbsphinx="hidden"
# enable interactive plots on Jupyterlab with ipympl and jupyterlab-matplotlib installed
# # %matplotlib widget
# -
# This tutorial will show a slightly more advanced use case example of how to combine different possibilities and do some calculations and analysis with the tools provided by `weldx`.
#
# The tasks covered are the following:
#
# - create a linear weld seam
# - add a simple (linear) welding trace along the weld seam
# - create two different groove profiles that vary along the weld seam
# - generate interpolated pointcloud data from the resulting workpiece
# - add two more complex TCP movements:
# - a horizontal weaving motion
# - a vertical weaving motion
# - calculate and plot the minimal distance between the workpiece geometry and the different TCP movements along the entire weld seam
# ## Python imports
# +
import matplotlib.pyplot as plt
# some python imports that will be used throughout the tutorial
import numpy as np
import pandas as pd
import pint
import xarray as xr
from mpl_toolkits.mplot3d import Axes3D
# -
# importing the weldx package with prevalent default abbreviations
import weldx
import weldx.geometry as geo
from weldx import Q_
from weldx import LocalCoordinateSystem as LCS
from weldx import get_groove
from weldx.welding.util import lcs_coords_from_ts, sine
# ## helper function
# We define a small helper function that will calculate the minimal (Euclidean) distance between a point in space (our current TCP position) and a 3d pointcloud that describes our workpiece geometry.
#
# Keep in mind that this is a very simple function that works only on generated point data without any meshing or interpolation. Results will be more accurate for denser point clouds with many points.
def distance(ptc, lcs_name, time):
"""Calculate minimal distance between geometry rasterization pointcloud and 3D trace"""
lcs_interp = csm.get_cs(
coordinate_system_name=lcs_name, reference_system_name="workpiece", time=time
)
trace = lcs_interp.coordinates.data
ptc = ptc.T
ptc = np.expand_dims(ptc, 1)
trace = np.expand_dims(trace, 0)
return np.min(np.sqrt(np.sum((ptc - trace) ** 2, axis=-1)), axis=0)
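# The broadcasting pattern in `distance` can be sanity-checked in isolation. Below is a self-contained sketch (toy coordinates, no CSM involved — all values hypothetical) of the same expand-dims trick: for every trace point we take the minimum over all pointcloud points.

```python
import numpy as np

def min_distance(pointcloud, trace):
    """Minimal distance of each trace point to a (3, N) pointcloud."""
    ptc = np.expand_dims(pointcloud.T, 1)  # (N, 1, 3)
    trc = np.expand_dims(trace, 0)         # (1, T, 3)
    return np.min(np.sqrt(np.sum((ptc - trc) ** 2, axis=-1)), axis=0)

# two cloud points on the x-axis at x=1 and x=3 (columns), toy trace of two timesteps
ptc = np.array([[1.0, 3.0], [0.0, 0.0], [0.0, 0.0]])
trace = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
print(min_distance(ptc, trace))  # [1. 0.]
```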
# ## Trace and CSM setup
# Let's define the timestamps that we will use to visualize our experiment:
time = pd.timedelta_range("0s", "10s", freq="50ms")
# For this example we will be working with a linear weld seam of 100 mm.
# +
# define the weld seam length in mm
seam_length = Q_(100, "mm")
trace_segment = geo.LinearHorizontalTraceSegment(seam_length)
trace = geo.Trace(trace_segment)
# -
# We set up the following coordinate systems:
# - the default `base` system
# - a `workpiece` system that corresponds to the baseline of our workpiece
# - a `tcp_wire` system that represents the very simple case of moving along the weld seam 2 mm above the `workpiece` system with a weld speed of 10 mm/s
# create a new coordinate system manager with default base coordinate system
csm = weldx.transformations.CoordinateSystemManager("base")
# add the workpiece coordinate system
csm.add_cs(
coordinate_system_name="workpiece",
reference_system_name="base",
lcs=trace.coordinate_system,
)
# +
tcp_start_point = Q_([0.0, 0.0, 2.0], "mm")
tcp_end_point = np.append(seam_length, Q_([0, 2.0], "mm"))
v_weld = Q_(10, "mm/s")
s_weld = (tcp_end_point - tcp_start_point)[0] # length of the weld
t_weld = s_weld / v_weld
t_start = pd.Timedelta("0s")
t_end = pd.Timedelta(str(t_weld.to_base_units()))
coords = [tcp_start_point.magnitude, tcp_end_point.magnitude]
tcp_wire = LCS(coordinates=coords, time=[t_start, t_end])
# add the tcp_wire coordinate system
csm.add_cs(
coordinate_system_name="tcp_wire",
reference_system_name="workpiece",
lcs=tcp_wire,
)
# -
# ### add weaving motions
# We now want to define and add two different weaving motions: one in y-direction (towards the groove sidewalls) and another one in z-direction (up and down).
# #### add y weaving
sine_y = sine(f=Q_(1, "Hz"), amp=Q_([[0, 1, 0]], "mm"))
coords = lcs_coords_from_ts(sine_y, time)
csm.add_cs(
coordinate_system_name="tcp_sine_y",
reference_system_name="tcp_wire",
lcs=LCS(coordinates=coords),
)
# #### add z weaving
sine_z = sine(f=Q_(1, "Hz"), amp=Q_([[0, 0, 2]], "mm"), bias=Q_([0, 0, 0], "mm"))
coords = lcs_coords_from_ts(sine_z, time)
csm.add_cs(
coordinate_system_name="tcp_sine_z",
reference_system_name="tcp_wire",
lcs=LCS(coordinates=coords),
)
# Let's visualize the different coordinate systems.
csm
# ## generate I-Groove pointcloud
# Let's finish the workpiece creation by adding groove profiles to the start end end of the welding trace.
#
# For the first example we will use I-Groove profiles and have the root gap open up along the weld seam from 2 mm to 6 mm.
# +
groove_1 = get_groove(
groove_type="IGroove",
workpiece_thickness=Q_(5, "mm"),
root_gap=Q_(2, "mm"),
)
groove_2 = get_groove(
groove_type="IGroove",
workpiece_thickness=Q_(5, "mm"),
root_gap=Q_(6, "mm"),
)
display(groove_1)
display(groove_2)
# +
v_profile = geo.VariableProfile(
[groove_1.to_profile(), groove_2.to_profile()],
Q_([0, 100], "mm"),
[geo.linear_profile_interpolation_sbs],
)
# create 3d workpiece geometry from the groove profile and trace objects
geometry = geo.Geometry(profile=v_profile, trace_or_length=trace)
pointcloud_I = geometry.rasterize(
profile_raster_width=Q_(1, "mm"), trace_raster_width=Q_(0.5, "mm")
)
# -
# ## Calculate distance (simple trace in I-Groove center)
# The first analysis example is very simple and is used to check that our tools work as expected.
#
# If the wire TCP moves on a straight line along the groove center, the distance to the workpiece geometry is equivalent to the distance of each sidewall to the groove center. Since the root gap of the groove changes linearly from 2 mm to 6 mm, we expect the distance to change from 1 mm to 3 mm accordingly.
d = distance(pointcloud_I.m, "tcp_wire", time=time)
plt.plot(d)
# ## Calculate distance (y-weaving in I-Groove)
# Now let's analyze the distance when weaving along the y-axis (towards the sidewalls).
# We can see the expected superposition of the weaving motion with the opening of the root gap.
# Note the doubling of the observed frequency with regards to the weaving frequency of 1 Hz because we calculate the minimum distance instead of the distance to each sidewall separately.
d = distance(pointcloud_I.m, "tcp_sine_y", time=time)
plt.plot(d)
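# The frequency doubling can be reproduced with a one-dimensional toy model (all numbers below are made up for illustration): a point weaving at 1 Hz between two sidewalls sees its minimal wall distance oscillate at 2 Hz, because each wall is approached once per weaving period.

```python
import numpy as np

t = np.linspace(0, 2, 400)             # 2 s of samples
y = 1.0 * np.sin(2 * np.pi * 1.0 * t)  # 1 Hz weaving, 1 mm amplitude
d_min = np.minimum(2.0 - y, 2.0 + y)   # distance to the nearest of two walls at +/- 2 mm

def count_local_minima(x):
    """Count strict local minima of a sampled signal."""
    return int(np.sum((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:])))

# the 1 Hz weaving has 2 minima in 2 s, the wall distance has 4
print(count_local_minima(y), count_local_minima(d_min))  # 2 4
```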
# If we analyze the weaving motion in z-direction alongside the I-Groove, we see no change in distance depending on the z-Position as expected.
# The small disturbances are due to the coarse rasterisation of the workpiece geometry.
d = distance(pointcloud_I.m, "tcp_sine_z", time=time)
plt.plot(d)
# ## Generate V-Groove geometry
# Let's look at a more complex interaction of groove shapes and TCP motion.
#
# For this example we define two V-Groove profiles. The groove begins at an angle of 60 degrees with a root face and root gap of 1 mm each. At the end of the weld seam the groove shape changes to a steep 20 degree angle with a root gap and root face of 3 mm.
# +
groove_1 = get_groove(
groove_type="VGroove",
workpiece_thickness=Q_(5, "mm"),
groove_angle=Q_(60, "deg"),
root_face=Q_(1, "mm"),
root_gap=Q_(1, "mm"),
)
groove_2 = get_groove(
groove_type="VGroove",
workpiece_thickness=Q_(5, "mm"),
groove_angle=Q_(20, "deg"),
root_face=Q_(3, "mm"),
root_gap=Q_(3, "mm"),
)
display(groove_1)
display(groove_2)
# +
v_profile = geo.VariableProfile(
[groove_1.to_profile(), groove_2.to_profile()],
Q_([0, 100], "mm"),
[geo.linear_profile_interpolation_sbs],
)
# create 3d workpiece geometry from the groove profile and trace objects
geometry = geo.Geometry(profile=v_profile, trace_or_length=trace)
pointcloud_V = geometry.rasterize(
profile_raster_width=Q_(0.25, "mm"), trace_raster_width=Q_(0.5, "mm")
)
# -
# Now let's look at the CTWD (contact tip to work distance) with our previously defined weaving motion in z-direction.
d = distance(pointcloud_V.m, "tcp_sine_z", time=time)
plt.plot(d)
# There are a few things to note:
#
# - as expected, we once again see the linear trend where the root gap opens as a baseline.
# - in addition, the distance now consists of two distinct phases:
# - one phase where the TCP is "submerged" into the root face part of the weld. In this case, the distance is not impacted by the z weaving motion
# - the second phase, where the TCP is above the root face and the distance of the V shaped groove section is mimicking the z weaving motion.
# - we can also see that the weaving motion reflection is more distinctly discernible if the groove opening angle is large.
tutorials/experiment_design_01.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Combining R and Python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import rpy2.robjects as ro
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
codigo_r = """
saludar <- function(cadena){
return(paste("Hola, ", cadena))
}
"""
ro.r(codigo_r)
saludar_py = ro.globalenv["saludar"] # fetch the function named saludar that we created in R
res = saludar_py("<NAME>")
res[0]
type(res)
print(saludar_py.r_repr()) # to access the R source code
var_from_python = ro.FloatVector(np.arange(1,5,0.1))
var_from_python
print(var_from_python.r_repr()) # the R representation, c(...)
ro.globalenv["var_to_r"] = var_from_python # create an R variable var_to_r from var_from_python
ro.r("var_to_r") # access the R variable
ro.r("sum(var_to_r)")
ro.r("mean(var_to_r)")
ro.r("sd(var_to_r)")
np.sum(var_from_python)
np.mean(var_from_python)
ro.r("summary(var_to_r)")
ro.r("hist(var_to_r, breaks = 4)")
# # Working jointly with R and Python
from rpy2.robjects.packages import importr
ro.r("install.packages('extRemes')") # if the installation fails, answer 'n' when prompted
extremes = importr("extRemes") # library(extRemes)
fevd = extremes.fevd # use fevd (maximum likelihood fitting of extreme value distributions) as a test of calling R packages
print(fevd.__doc__) # consult the documentation of fevd
data = pd.read_csv("../datasets/time/time_series.txt",
sep = "\s+", skiprows = 1, parse_dates = [[0,1]], # parse_dates merges columns 0 and 1 (date and time)
names = ["date", "time", "wind_speed"],
index_col = 0)
data.head(5)
data.shape
max_ws = data.wind_speed.groupby(pd.Grouper(freq="A")).max() # group by year ('A' = annual) and keep the maximum value of each year
max_ws
max_ws.plot(kind="bar", figsize=(16,9))
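# The annual-maximum grouping can be checked on a small toy series (hypothetical wind speeds, not the dataset above):

```python
import pandas as pd

idx = pd.to_datetime(["2000-01-05", "2000-07-01", "2001-03-02", "2001-11-30"])
ws = pd.Series([5.0, 9.0, 7.0, 4.0], index=idx)

# one bin per year, keep the maximum observation of each year
annual_max = ws.groupby(pd.Grouper(freq="A")).max()
print(list(annual_max.values))  # [9.0, 7.0]
```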
result = fevd(max_ws.values, type="GEV", method = "GMLE")
print(type(result))
result.r_repr()
print(result.names)
res = result.rx("results")
print(res[0])
loc, scale, shape = res[0].rx("par")[0]
loc
scale
shape
# # Magic functions for R
# %load_ext rpy2.ipython
help(rpy2.ipython.rmagic.RMagics.R)
# %R X=c(1,4,5,7); sd(X); mean(X)
# + language="R"
# Y = c(2,4,3,9)
# lm = lm(Y~X)
# summary(lm)
# -
# %R -i result plot.fevd(result)
# %R -i var_from_python hist(var_from_python)
ro.globalenv["result"] = result
ro.r("plot.fevd(result)") # may raise an error and return an rpy2.rinterface.NULL object
# # A more complex example with R, Python and Rmagic
metodos = ["MLE", "GMLE", "Bayesian", "Lmoments"]
tipos = ["GEV", "Gumbel"]
for t in tipos:
for m in metodos:
print("Fit type: ", t)
print("Fit method: ", m)
result = fevd(max_ws.values, method = m, type = t)
print(result.rx("results")[0])
# %R -i result plot.fevd(result)
notebooks/T11 - 1 - R y Python.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Finding Similar Movies
# + [markdown] deletable=true editable=true
# We'll start by loading up the MovieLens dataset. Using Pandas, we can very quickly load the rows of the u.data and u.item files that we care about, and merge them together so we can work with movie names instead of ID's. (In a real production job, you'd stick with ID's and worry about the names at the display layer to make things more efficient. But this lets us understand what's going on better for now.)
# + deletable=true editable=true
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('C:/Users/Lucian-PC/Desktop/DataScience/DataScience-Python3/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3), encoding="ISO-8859-1")
m_cols = ['movie_id', 'title']
movies = pd.read_csv('C:/Users/Lucian-PC/Desktop/DataScience/DataScience-Python3/ml-100k/u.item', sep='|', names=m_cols, usecols=range(2), encoding="ISO-8859-1")
ratings = pd.merge(movies, ratings)
# + deletable=true editable=true
ratings.head()
# + [markdown] deletable=true editable=true
# Now the amazing pivot_table function on a DataFrame will construct a user / movie rating matrix. Note how NaN indicates missing data - movies that specific users didn't rate.
# + deletable=true editable=true
movieRatings = ratings.pivot_table(index=['user_id'],columns=['title'],values='rating')
movieRatings.head()
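# The same pivot can be verified on a tiny hand-made frame (toy data, not the MovieLens set). User 2 never rated movie B, so that cell comes out as NaN:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'user_id': [1, 1, 2],
    'title':   ['A', 'B', 'A'],
    'rating':  [5, 3, 4],
})
matrix = toy.pivot_table(index='user_id', columns='title', values='rating')
print(matrix)
```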
# + [markdown] deletable=true editable=true
# Let's extract a Series of users who rated Star Wars:
# + deletable=true editable=true
starWarsRatings = movieRatings['Star Wars (1977)']
starWarsRatings.head()
# + [markdown] deletable=true editable=true
# Pandas' corrwith function makes it really easy to compute the pairwise correlation of Star Wars' vector of user rating with every other movie! After that, we'll drop any results that have no data, and construct a new DataFrame of movies and their correlation score (similarity) to Star Wars:
# + deletable=true editable=true
similarMovies = movieRatings.corrwith(starWarsRatings)
similarMovies = similarMovies.dropna()
df = pd.DataFrame(similarMovies)
df.head(10)
# + [markdown] deletable=true editable=true
# (That warning is safe to ignore.) Let's sort the results by similarity score, and we should have the movies most similar to Star Wars! Except... we don't. These results make no sense at all! This is why it's important to know your data - clearly we missed something important.
# + deletable=true editable=true
similarMovies.sort_values(ascending=False)
# + [markdown] deletable=true editable=true
# Our results are probably getting messed up by movies that have only been viewed by a handful of people who also happened to like Star Wars. So we need to get rid of movies that were only watched by a few people that are producing spurious results. Let's construct a new DataFrame that counts up how many ratings exist for each movie, and also the average rating while we're at it - that could also come in handy later.
# + deletable=true editable=true
import numpy as np
movieStats = ratings.groupby('title').agg({'rating': [np.size, np.mean]})
movieStats.head()
# + [markdown] deletable=true editable=true
# Let's get rid of any movies rated by fewer than 100 people, and check the top-rated ones that are left:
# + deletable=true editable=true
popularMovies = movieStats['rating']['size'] >= 100
movieStats[popularMovies].sort_values([('rating', 'mean')], ascending=False)[:15]
# + [markdown] deletable=true editable=true
# 100 might still be too low, but these results look pretty good as far as "well rated movies that people have heard of." Let's join this data with our original set of similar movies to Star Wars:
# + deletable=true editable=true
df = movieStats[popularMovies].join(pd.DataFrame(similarMovies, columns=['similarity']))
# + deletable=true editable=true
df.head()
# + [markdown] deletable=true editable=true
# And, sort these new results by similarity score. That's more like it!
# + deletable=true editable=true
df.sort_values(['similarity'], ascending=False)[:15]
# + [markdown] deletable=true editable=true
# Ideally we'd also filter out the movie we started from - of course Star Wars is 100% similar to itself. But otherwise these results aren't bad.
# + [markdown] deletable=true editable=true
# ## Activity
# + [markdown] deletable=true editable=true
# 100 was an arbitrarily chosen cutoff. Try different values - what effect does it have on the end results?
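# One way to explore the cutoff is to sweep it and watch how many movies survive. The sketch below uses hypothetical rating counts in place of the real `movieStats['rating']['size']` series:

```python
import pandas as pd

# hypothetical per-movie rating counts
sizes = pd.Series({'A': 450, 'B': 120, 'C': 80, 'D': 15})

for cutoff in (10, 100, 300):
    kept = sizes[sizes >= cutoff]
    print(cutoff, len(kept), list(kept.index))
```

# A higher cutoff keeps fewer, better-known movies; a lower one admits more titles with noisier similarity scores.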
# + deletable=true editable=true
SimilarMovies.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <small><small><i>
# All the IPython Notebooks in **Python Flow Control Statements** lecture series by Dr. <NAME> are available @ **[GitHub](https://github.com/milaan9/03_Python_Flow_Control)**
# </i></small></small>
# # Loops in Python
#
# Loops in Python work much like loops in C, C++, Java or other languages. Python loops are used to repeatedly execute a block of statements until a given condition becomes **`False`**. In Python, we have **two types of looping statements**, namely:
# <div>
# <img src="img/loop1.png" width="200"/>
# </div>
# # Python `for` Loop
#
# In this class, you'll learn to iterate over a sequence of elements using the different variations of **`for`** loop. We use a **`for`** loop when we want to repeat a code block for a **fixed number of times**.
# ## What is `for` loop in Python?
#
# The for loop in Python is used to iterate over a sequence (**[string](https://github.com/milaan9/02_Python_Datatypes/blob/main/002_Python_String.ipynb)**, **[list](https://github.com/milaan9/02_Python_Datatypes/blob/main/003_Python_List.ipynb)**, **[dictionary](https://github.com/milaan9/02_Python_Datatypes/blob/main/005_Python_Dictionary.ipynb)**, **[set](https://github.com/milaan9/02_Python_Datatypes/blob/main/006_Python_Sets.ipynb)**, or **[tuple](https://github.com/milaan9/02_Python_Datatypes/blob/main/004_Python_Tuple.ipynb)**). Iterating over a sequence is called traversal.
# ## Why use `for` loop?
#
# Let's see the uses of the **`for`** loop in Python.
#
# * **Definite Iteration:** When we know how many times we want to run a loop, we use count-controlled loops such as **`for`** loops. This is also known as definite iteration. For example, to calculate the percentage of 50 students, we know we need to iterate the loop 50 times (1 iteration for each student).
# * **Reduces the code's complexity:** A loop repeats a specific block of code a fixed number of times. It reduces the repetition of lines of code, thus reducing the complexity of the code. Using **`for`** loops and **`while`** loops we can automate and repeat tasks in an efficient manner.
# * **Loop through sequences:** **`for`** loops are used for iterating over lists, strings, tuples, dictionaries, etc., and performing various operations on them, based on the conditions specified by the user.
# ### Syntax :
#
# ```python
# for element in sequence:
# body of for loop
# ```
#
# 1. First, **`element`** is the variable that takes the value of the item inside the sequence on each iteration.
#
# 2. Second, all the **`statements`** in the body of the for loop are executed once for each such value. The body of the for loop is separated from the rest of the code using indentation.
#
# 3. Finally, the loop continues until we reach the last item in the **`sequence`**.
#
# <div>
# <img src="img/for0.png" width="400"/>
# </div>
# +
# Example 1: For loop
words = ['one', 'two', 'three', 'four', 'five']
for i in words:
print(i)
# +
# Example 2: Calculate the average of list of numbers
numbers = [10, 20, 30, 40, 50]
# definite iteration
# run loop 5 times because list contains 5 items
sum = 0
for i in numbers:
sum = sum + i
list_size = len(numbers)
average = sum / list_size
print(average)
# -
# ## `for` loop with `range()` function
#
# The **[range()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/053_Python_range%28%29.ipynb)** function returns a sequence of numbers starting from 0 (by default) if the initial limit is not specified and it increments by 1 (by default) until a final limit is reached.
#
# The **`range()`** function is used with a loop to specify the range (how many times) the code block will be executed. Let us see with an example.
#
# We can generate a sequence of numbers using **`range()`** function. **`range(5)`** will generate numbers from 0 to 4 (5 numbers).
#
# <div>
# <img src="img/forrange.png" width="600"/>
# </div>
#
# The **`range()`** function is "lazy" in a sense because it doesn't generate every number that it "contains" when we create it. However, it is not an iterator since it supports **[len()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/040_Python_len%28%29.ipynb)** and **`__getitem__`** operations.
#
# The **`range()`** function does not store all the values in memory, as that would be inefficient. It only remembers the start, stop and step size, and generates the next number on the go.
#
# We can also define the start, stop and step size as **`range(start, stop,step_size)`**. **`step_size`** defaults to 1 if not provided.
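# The laziness described above is easy to verify: a huge range costs almost no memory, yet still supports `len()`, indexing and membership tests:

```python
import sys

big = range(10**12)        # a trillion numbers, never materialized
print(sys.getsizeof(big))  # a few dozen bytes, independent of the length
print(len(big))            # 1000000000000
print(big[999])            # 999 -- constant-time indexing
print(123_456_789 in big)  # True -- computed arithmetically, no scan
```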
# +
# Example 1: How range works in Python?
# empty range
print(list(range(0)))
# using range(stop)
print(list(range(10)))
# using range(start, stop)
print(list(range(1, 10)))
# +
# Example 2:
for num in range(10):
print(num)
# +
# Example 3:
for i in range(1, 11):
print(i)
# +
# Example 4:
for i in range (2, 12, 2): # start at 2, step by 2 and stop before 12
print (i)
# +
# Example 5:
num=2
for a in range (1,6): # range (1,6) means numbers from 1 to 5, i.e., (1,2,3,4,5)
print (num * a)
# +
# Example 6: Find Sum of 10 Numbers
sum=0
for n in range(1,11): # range (1,11) means numbers from 1 to 10, i.e., (1,2,3,...,10)
sum+=n
print (sum)
'''
0+1 = 1
1+2 = 3
3+3 = 6
6+4 = 10
10+5 = 15
15+6 = 21
21+7 = 28
28+8 = 36
36+9 = 45
45+10 = 55
'''
# +
# Example 7: printing a series of numbers using for and range
print("Case 1:")
for i in range(5): # Print numbers from 0 to 4
print (i)
print("Case 2:")
for i in range(5, 10): # Print numbers from 5 to 9
print (i)
print("Case 3:")
for i in range(5, 10, 2): # Print numbers from 5 with a distance of 2 and stop before 10
print (i)
# -
# ## `for` loop with `if-else`
#
# A **`for`** loop can have an optional **[if-else](https://github.com/milaan9/03_Python_Flow_Control/blob/main/002_Python_if_else_statement.ipynb)** block. The **`if-else`** checks the condition and if the condition is **`True`** it executes the block of code present inside the **`if`** block and if the condition is **`False`**, it will execute the block of code present inside the **`else`** block.
# +
# Example 1: Print all even and odd numbers
for i in range(1, 11):
if i % 2 == 0:
print('Even Number:', i)
else:
print('Odd Number:', i)
# -
# ## `for` loop with `else`
#
# A **`for`** loop can have an optional **`else`** block as well. The **`else`** part is executed once the sequence used in the for loop is exhausted.
#
# The **`else`** block will be skipped/ignored when:
#
# * the **`for`** loop terminates abruptly
# * the **[break statement](https://github.com/milaan9/03_Python_Flow_Control/blob/main/007_Python_break_continue_pass_statements.ipynb)** is used to break out of the **`for`** loop.
# +
# Example 1:
digits = [0, 1, 5]
for i in digits:
print(i)
else:
print("No items left.")
# -
# **Explanation:**
#
# Here, the for loop prints the items of the list until the loop is exhausted. When the for loop is exhausted, it executes the block of code in the **`else`** and prints **`No items left.`**
# +
# Example 2:
for number in range(11):
print(number) # prints 0 to 10, not including 11
else:
print('The loop stops at', number)
# +
# Example 3: Else block in for loop
for i in range(1, 6):
print(i)
else:
print("Done")
# -
# This **`for-else`** statement can be used with the **`break`** keyword to run the **`else`** block only when the **`break`** keyword was not executed. Let's take an example:
# +
# Example 4:
student_name = 'Arthur'
marks = {'Alan': 99, 'Bill': 55, 'Cory': 77}
for student in marks:
if student == student_name:
print(marks[student])
break
else:
print('No entry with that name found.')
# +
# Example 5:
count = 0
for i in range(1, 6):
count = count + 1
if count > 2:
break
else:
print(i)
else:
print("Done")
# -
# ## Using Control Statement in `for` loops in Python
#
# **[Control statements](https://github.com/milaan9/03_Python_Flow_Control/blob/main/007_Python_break_continue_pass_statements.ipynb)** in Python like **`break`**, **`continue`**, etc can be used to control the execution flow of **`for`** loop in Python. Let us now understand how this can be done.
#
# They are used when you want to exit a loop or skip a part of the loop based on a given condition. They are also known as **transfer statements**.
# ### a) `break` in `for` loop
#
# Using the **`break`** statement, we can exit from the **`for`** loop before it has looped through all the elements in the sequence as shown below. As soon as it breaks out of the **`for`** loop, the control shifts to the immediate next line of code. For example,
# +
# Example 1:
numbers = (0,1,2,3,4,5)
for number in numbers:
print(number)
if number == 3:
break
# -
# **Explanation:**
#
# In the above example, the loop prints each number and then stops once it reaches 3.
# +
# Example 2:
color = ['Green', 'Pink', 'Blue']
for i in color:
if(i == 'Pink'):
break
print (i)
# -
# **Explanation:**
#
# Here, in the second iteration, the **`if`** condition becomes **`True`**, so the loop breaks out of the **`for`** loop. The immediately following line of code, i.e. **`print (i)`**, is then executed, and as a result **`Pink`** is printed.
# +
# Example 3:
numbers = [1, 4, 7, 8, 15, 20, 35, 45, 55]
for i in numbers:
if i > 15:
# break the loop
break
else:
print(i)
# +
# Example 4:
for i in range(5):
for j in range(5):
if j == i:
break
print(i, j)
# -
# **Explanation:**
#
# We have two loops. The outer **`for`** loop iterates over the first five numbers (0 to 4) using the **[range()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/053_Python_range%28%29.ipynb)** function, and the inner **`for`** loop does the same. If the inner loop's current number equals the outer number, the inner (nested) loop breaks.
# ### b) `continue` in `for` loop
#
# The **`continue`** statement is used to stop/skip the block of code in the loop for the current iteration only and continue with the next iteration. For example,
# +
# Example 1:
color = ['Green', 'Pink', 'Blue']
for i in color:
if(i == 'Pink'):
continue
print (i)
# -
# **Explanation:**
#
# Here, in the second iteration, the condition becomes **`True`**. Hence the interpreter skips the **`print (i)`** statement and immediately executes the next iteration.
# +
# Example 2:
numbers = (0,1,2,3,4,5)
for number in numbers:
print(number)
if number == 3:
continue
print('Next number should be ', number + 1) if number != 5 else print("loop's end") # for short hand conditions need both if and else statements
print('outside the loop')
# -
# **Explanation:**
#
# In the example above, if the number equals 3, the step **after** the condition (but inside the loop) is skipped and the execution of the loop continues if there are any iterations left.
# +
# Example 3:
first = [3, 6, 9]
second = [3, 6, 9]
for i in first:
for j in second:
if i == j:
continue
print(i, '*', j, '= ', i * j)
# -
# **Explanation:**
#
# We have two loops. The outer for loop iterates over the first list, and the inner loop iterates over the second list of numbers. If the outer number and the inner loop’s current number are the same, the inner loop skips to its next iteration.
#
# As you can see in the output, no number is ever multiplied by itself.
# +
# Example 4:
name = "<NAME>"
count = 0
for char in name:
if char != 'm':
continue
else:
count = count + 1
print('Total number of m is:', count)
# -
# ### c) `pass` in `for` loop
#
# The **`pass`** statement is a null statement, i.e., nothing happens when the statement is executed. Primarily it is used in empty functions or classes. When the interpreter finds a pass statement in the program, it returns no operation.
# +
# Example 1:
for number in range(6):
pass
# +
# Example 2:
num = [1, 4, 5, 3, 7, 8]
for i in num:
# calculate multiplication in future if required
pass
# -
# ## Reverse for loop
#
# Till now, we have learned about forward looping in **`for`** loop with various examples. Now we will learn about the backward iteration of a loop.
#
# Sometimes we require to do reverse looping, which is quite useful. For example, to reverse a list.
#
# There are two ways to iterate a **`for`** loop backward:
#
# * Reverse **`for`** loop using **[range()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/053_Python_range%28%29.ipynb)** function
# * Reverse **`for`** loop using the **[reversed()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/055_Python_reversed%28%29.ipynb)** function
# ### Backward Iteration using the `reversed()` function
#
# We can use the built-in function **[reversed()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/055_Python_reversed%28%29.ipynb)** with **`for`** loop to change the order of elements, and this is the simplest way to perform a reverse looping.
# +
# Example 1: Reversed numbers using `reversed()` function
list1 = [10, 20, 30, 40]
for num in reversed(list1):
print(num)
# -
# ### Reverse for loop using `range()`
#
# We can use the built-in **[range()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/053_Python_range%28%29.ipynb)** function with the **`for`** loop to reverse the element order. **`range()`** generates integers from the given start integer up to, but not including, the stop integer.
# +
# Example 1:
print("Reverse numbers using for loop")
num = 5
# start = 5
# stop = -1
# step = -1
for num in (range(num, -1, -1)):
print(num)
# -
# There are many helper functions that make **`for`** loops even more powerful and easy to use. For example **[enumerate()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/018_Python_enumerate%28%29.ipynb)**, **[zip()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/066_Python_zip%28%29.ipynb)**, **[sorted()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/060_Python_sorted%28%29.ipynb)**, **[reversed()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/055_Python_reversed%28%29.ipynb)**
# +
# Example 2:
print("reversed: \t",end="")
for ch in reversed("abc"):
print(ch,end=";")
print("\nenumerated:\t",end="")
for i,ch in enumerate("abc"):
print(i,"=",ch,end="; ")
print("\nzip'ed: ")
for a,x in zip("abc","xyz"):
print(a,":",x)
# -
# ## Nested `for` loops
#
# A **nested `for` loop** is a **`for`** loop inside another **`for`** loop.
#
# A nested loop has one loop inside of another. In Python, you can use any loop inside any other loop: for instance, a **`for`** loop inside a **[while loop](https://github.com/milaan9/03_Python_Flow_Control/blob/main/006_Python_while_Loop.ipynb)**, a **`while`** loop inside a **`for`** loop, and so on. Nested loops are mainly used with two-dimensional data.
#
# In nested loops, the inner loop finishes all of its iterations for each iteration of the outer loop, i.e., for each iteration of the outer loop the inner loop restarts and completes all its iterations; then the next iteration of the outer loop begins.
#
# **Syntax:**
#
# ```python
# # outer for loop
# for element in sequence:
#     # inner for loop
#     for element in sequence:
#         body of inner for loop
#     body of outer for loop
# other statements
# ```
# ### `for` loop inside `for` loop
# #### Example: Nested `for` loop
#
# In this example, we are using a **`for`** loop inside a **`for`** loop. In this example, we are printing a multiplication table of the first ten numbers.
#
# <div>
# <img src="img/nforloop1.png" width="600"/>
# </div>
#
# 1. The outer **`for`** loop uses the **[range()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/053_Python_range%28%29.ipynb)** function to iterate over the first ten numbers
# 2. The inner **`for`** loop will execute ten times for each outer number
# 3. In the body of the inner loop, we will print the multiplication of the outer number and current number
# 4. The inner loop is nothing but a body of an outer loop.
# +
# Example 1: printing a multiplication table of the first ten numbers
# outer loop
for i in range(1, 11):
# nested loop
for j in range(1, 11): # to iterate from 1 to 10
print(i * j, end=' ') # print multiplication
print()
# -
# **Explanation:**
#
# * In this program, the outer **`for`** loop iterates over the numbers 1 to 10. **`range(1, 11)`** returns 10 numbers, so the outer loop runs 10 times.
# * In the first iteration of the nested loop, the number is 1; in the next, it is 2; and so on up to 10.
# * The inner loop also executes ten times because we are printing a multiplication table up to ten. For each iteration of the outer loop, the inner loop executes ten times.
# * In each iteration of the inner loop, we calculate the product of the two numbers.
# +
# Example 1:
person = {
'first_name':'Milaan',
'last_name':'Parmar',
'age':96,
'country':'Finland',
'is_marred':True,
'skills':['Python', 'Matlab', 'R', 'C', 'C++'],
'address':{
'street':'Space street',
'zipcode':'02210'
}
}
for key in person:
if key == 'skills':
for skill in person['skills']:
print(skill)
# +
# Example 2: Add all the prime numbers between 17 and 53 using nested for loops
# 17, 19, 23, 29, 31, 37, 41, 43, 47, 53
sum=0
for i in range(17,54):
for j in range(2,i):
if i%j ==0:
break
else:
sum=sum+i
print(i)
print(sum)
# + code_folding=[]
# Example 3: iterating through nested for loops
color = ['Red', 'Pink']
element = ['flower', 'watch']
for i in color:
for j in element:
print(i, j)
# +
# Example 4: A use case of a nested for loop in `list_of_lists` case would be
list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
total=0
for list1 in list_of_lists:
for i in list1:
total = total+i
print(total)
# +
# Example 5:
numbers = [[1, 2, 3], [4, 5, 6]]
cnt = 0
for i in numbers:
for j in i:
print('iteration', cnt, end=': ')
print(j)
cnt = cnt + 1
# -
# #### Example: Nested `for` loop to print pattern
#
# ```python
# *
# * *
# * * *
# * * * *
# * * * * *
# ```
#
# ```python
# >>> rows = 5
# >>> # outer loop
# >>> for i in range(1, rows + 1):
# >>>     # inner loop
# >>>     for j in range(1, i + 1):
# >>>         print("*", end=" ")
# >>>     print('')
# ```
#
# <div>
# <img src="img/nforloop2.png" width="600"/>
# </div>
#
# **Explanation:**
#
# * In this program, the outer loop controls the number of rows printed.
# * The number of rows is five, so the outer loop executes five times.
# * The inner loop controls the number of columns in each row.
# * For each iteration of the outer loop, the column count increases by 1.
# * In the first iteration of the outer loop, the column count is 1; in the next, it is 2; and so on.
# * The inner loop's iteration count equals the column count.
# * In each iteration of the inner loop, we print a star.
# +
# Example 1: Method 1
rows = 5
for i in range(1, rows + 1): # outer loop
for j in range(1, i + 1): # inner loop
print("*", end=" ")
print('')
# +
# Example 1: Method 2 - star triangle with user input
rows = int(input("How many rows? "))
for i in range(1, rows+1):
    for j in range(1, i+1):
        print("*", end=" ")
    print()
# +
# Example 2: Method 1
for i in range(1,6): # numbers 1,2,3,4,5
for j in range(1, i+1):
print(i, end=" ") # print number
print(" ") # line after each row to display pattern correctly
# +
# Example 2: Method 2
rows = 6
for num in range(rows): # from 0,1,2,3,4,5
for i in range(num):
print(num,end=" ") # print the number
print(" ") # line after each row to print
# +
# Example 3: Method 3
n=5 # giving number of rows i want
x = 0
for i in range(0 , n): # from 0 to 4
x += 1 # equivalent to x=x+1
    for j in range(0, i + 1): # from 0 to i
print(x , end=" ")
print(" ")
# +
# Example 4: Method 1
rows = 5
for row in range(1, rows+1): # from 1
for column in range(1, row+1): # from 1,2,3,4,5
print(column, end=" ")
print(" ")
# +
# Example 4: Method 2
for i in range (1, 6): # rows from 1 to 5
for j in range(i): # column range(i)
print (j + 1, end = ' ')
print ()
# +
# Example 5: Method 1
for i in range (1,6):
for j in range (i,0,-1):
print(j, end=" ")
print(" ")
"""
i = 1 2 3 4 5
# loop 1
for i = 1, range (1,0,-1): j=1
i = 1, print: 1
# loop 2
for i =2, range (2,0,-1): j = 2,1
i = 2, print: 2,1
"""
# +
# Example 5: Method 2
for i in range(0,5):
for j in range(i+1,0,-1):
print(j, end=" ")
print()
# +
# Example 6: Example 1 reverse pyramid
for i in range (1,6): # rows from 1 to 5
for j in range (5,i-1,-1): # column range(5,0,-1) = 54321
print ("*", end=" "),
print (" ")
# +
# Example 7: Example 2 reverse pyramid Method 1
rows = 5
for i in range(rows,0,-1): # rows from 5 down to 1
num = i
for j in range(0,i):
print(num, end=" ")
print(" ")
# +
# Example 7: Example 2 reverse pyramid Method 2
for i in range(5,0,-1): # range from 5 4 3 2 1
for j in range(0,i): # range(0,5)=0 1 2 3 4
print(i, end=" ")
print(" ")
"""
i = 5 4 3 2 1
# loop 1
for i = 5, range (0,5): j=5 4 3 2 1
i = 5, print: 5 5 5 5 5
# loop 2
for i = 4, range (0,4): j=4 3 2 1
i = 4, print: 4 4 4 4
"""
# +
# Example 8: Method 1
for i in range(5,0,-1): # rows range = 5 4 3 2 1
for j in range(1,i+1): # column range
print(j, end =" ")
print()
# +
# Example 8: Method 2
for i in range(1,6):
for j in range(6-i):
print(j+1, end=" ")
print(" ")
"""
i = 1 2 3 4 5
# loop 1
for i = 1, range (5): j=0 1 2 3 4
i = 1, print: 1 2 3 4 5
# loop 2
for i = 2, range (4): j=0 1 2 3
i = 2, print: 1 2 3 4
# loop 3
for i = 3, range (3): j=0 1 2
i = 3, print: 1 2 3
# loop 4
for i = 4, range (2): j=0 1
i = 4, print: 1 2
# loop 5
for i = 5, range (1): j=0
i = 5, print: 1
"""
# -
# Example 9: Print the following rectangle of stars: 6 rows and 3 columns
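# A minimal sketch for Example 9, using the same nested-loop pattern as the examples above:

```python
# Example 9: rectangle of stars with 6 rows and 3 columns
rows, columns = 6, 3
lines = []
for i in range(rows):                 # one iteration per row
    line = ""
    for j in range(columns):          # three stars per row
        line += "* "
    lines.append(line.rstrip())
for line in lines:
    print(line)
```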
# ### `while` loop inside `for` loop
#
# The **[while loop](https://github.com/milaan9/03_Python_Flow_Control/blob/main/006_Python_while_Loop.ipynb)** is an entry-controlled loop, and a **`for`** loop is a count-controlled loop. We can also use a **`while`** loop under the for loop statement. For example,
# +
# Example 1: Print Multiplication table of a first 5 numbers using `for` loop and `while` loop
# outer loop
for i in range(1, 6):
print('Multiplication table of:', i)
count = 1
# inner loop to print multiplication table of current number
while count < 11:
print(i * count, end=' ')
count = count + 1
print('\n')
# -
# **Explanation:**
#
# * In this program, we iterate the first five numbers one by one using the outer loop and range function
# * Next, in each iteration of the outer loop, we will use the inner while loop to print the multiplication table of the current number
# +
# Example 2:
names = ['Amy', 'Bella', 'Cathy']
for name in names: # outer loop
count = 0 # inner while loop
while count < 5:
print(name, end=' ')
# increment counter
count = count + 1
print()
# -
# ## `for` loop in one line
#
# We can also formulate the **`for`** loop statement in one line to reduce the number of lines of code. For example:
# +
# Example 1: regular `for` loop code
first = [3, 6, 9]
second = [30, 60, 90]
final = []
for i in first:
for j in second:
final.append(i+j)
print(final)
# +
# Example 1: single line `for` loop code
first = [3, 6, 9]
second = [30, 60, 90]
final = [i+j for i in first for j in second]
print(final)
# +
# Example 2: Print the even numbers by adding 1 to the odd numbers in the list
odd = [1, 5, 7, 9]
even = [i + 1 for i in odd if i % 2 == 1]
print(even)
# +
# Example 3:
final = [[x, y] for x in [30, 60, 90] for y in [60, 30, 90] if x != y]
print(final)
# -
# ## Accessing the index in for loop
#
# The **[enumerate()](https://github.com/milaan9/04_Python_Functions/blob/main/002_Python_Functions_Built_in/018_Python_enumerate%28%29.ipynb)** function is useful when we want to access both a value and its index number in a sequence such as a list or string. For example, a list is an ordered data structure that stores each item with an index number. Using the item’s index number, we can access or modify its value.
#
# Using the enumerate function with a loop, we can access the list’s items together with their index numbers. **`enumerate()`** adds a counter to the iteration and returns it in the form of an enumerate object.
#
# There are two ways to access the index in a **`for`** loop; let’s see each one by one.
# +
# Example 1: Print elements of the list with its index number using the `enumerate()` function
# In this program, the for loop iterates through the list and displays the
# elements along with their index numbers.
numbers = [4, 2, 5, 7, 8]
for i, v in enumerate(numbers):
print('Numbers[', i, '] =', v)
# +
# Example 2: Printing the elements of the list with its index number using the `range()` function
numbers = [1, 2, 4, 6, 8]
size = len(numbers)
for i in range(size):
print('Index:', i, " ", 'Value:', numbers[i])
# -
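# **`enumerate()`** also accepts an optional `start` argument for when the counter should not begin at 0 — a small sketch:

```python
# Example 3: start the counter at 1 instead of 0
numbers = [4, 2, 5, 7, 8]
pairs = []
for position, value in enumerate(numbers, start=1):
    pairs.append((position, value))
    print('Item', position, 'is', value)
```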
# ## Iterate String using `for` loop
#
# By looping through the **[string](https://github.com/milaan9/02_Python_Datatypes/blob/main/002_Python_String.ipynb)** using **`for`** loop, we can do lots of string operations. Let’s see how to perform various string operations using a **`for`** loop.
# +
# Example 1: For loop with string
# Method 1:
language = 'Python'
for letter in language:
print(letter)
# Method 2: using range() function
for i in range(len(language)):
print(language[i])
# +
# Example 2: Printing the elements of a string using for loop
for i in 'Hello World':
print(i)
# +
# Example 3: Access all characters of a string
name = "Alan"
for i in name:
print(i, end=' ')
# +
# Example 4: Iterate string in reverse order
name = "Alan"
for i in name[::-1]:
print(i, end=' ')
# +
# Example 5: Iterate over a particular set of characters in string
name = "<NAME>"
for char in name[2:7:1]:
print(char, end=' ')
# +
# Example 6: Iterate over words in a sentence using the `split()` function.
dialogue = "Remember, Red, hope is a good thing, maybe the best of things, and no good thing ever dies"
# split on whitespace
for word in dialogue.split():
print(word)
# +
# Example 7:
for ch in 'abc':
print(ch)
total = 0
for i in range(5): # from 0 to 4
total += i # total = 0+1+2+3+4 = 10
for i,j in [(1,2),(3,1)]: # pairs (i, j)
    total += i**j # total = 10 + 1**2 + 3**1 = 14
print("total =",total)
# -
# ## Iterate List using `for` loop
#
# **[Python list](https://github.com/milaan9/02_Python_Datatypes/blob/main/003_Python_List.ipynb)** is an ordered collection of items of different data types. It means list items are indexed, starting from 0 up to the number of items minus 1. List items are enclosed in square **`[]`** brackets.
#
# Below are the few examples of Python list.
#
# ```python
# >>> numbers = [1,2,4,6,7]
# >>> players = ["Messi", "Ronaldo", "Neymar"]
# ```
# Using a loop, we can perform various operations on the list. There are several ways to iterate through its elements. Here are some examples to help you understand better.
# +
# Example 1: Iterate over a list Method 1
numbers = [1, 2, 3, 6, 9]
for num in numbers:
print(num)
# +
# Example 2: Iterate over a list Method 2 (list comprehension)
numbers = [1, 2, 3, 6, 9]
[print(i) for i in numbers]
# +
# Example 3: Iterate over a list using a for loop and range.
numbers = [1, 2, 3, 6, 9]
size = len(numbers)
for i in range(size):
print(numbers[i])
# +
# Example 4: printing the elements of a list using for loop
even_numbers = [2, 4, 6, 8, 10] # list with 5 elements
for i in even_numbers:
    print(i)
# +
# Example 5: printing the elements of a mixed-type list using for loop
items = [60, "HelloWorld", 90.96] # avoid naming a variable `list`: it shadows the built-in type
for i in items:
    print(i)
# +
# Example 6: Program to find the sum of all numbers stored in a list
# List of numbers
numbers = [6, 5, 3, 8, 4, 2, 5, 6, 11] # list with 9 elements
# variable to store the sum
sum = 0
# iterate over the list
for val in numbers:
sum = sum+val
print("The sum is", sum)
# +
# Example 7: Calculate the square of each number using for loop.
numbers = [1, 2, 3, 4, 5]
# iterate over each element in list num
for i in numbers:
# ** exponent operator
square = i ** 2
print("Square of:", i, "is:", square)
# -
# **Explanation:** **`i`** iterates over 1, 2, 3, 4, 5. Each time it takes a value and executes the body of the loop. It is also possible to iterate over a nested list, as illustrated in Example 10 below.
# +
# Example 8: Calculate the average of list of numbers
numbers = [10, 20, 30, 40, 50]
# definite iteration
# run loop 5 times because list contains 5 items
sum = 0
for i in numbers:
sum = sum + i
list_size = len(numbers)
average = sum / list_size
print(average)
# +
# Example 9: Printing a list using range function
color = ['Green', 'Pink', 'Blue'] # list with total 3 elements
print(len(color)) # print length of color
for i in range(len(color)):
print(color[i])
# +
# Example 10: Printing a nested list using for loop
list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] # list with 3 elements
for list1 in list_of_lists:
print(list1)
# -
# ## Iterate Tuple using `for` loop
#
# A **[Python tuple](https://github.com/milaan9/02_Python_Datatypes/blob/main/004_Python_Tuple.ipynb)** is similar to a list. The only differences are that a list is enclosed in square brackets **`[]`**, a tuple in parentheses **`()`**, and we cannot change the elements of a tuple once it is assigned, i.e., a tuple is **immutable**, whereas a list is **mutable**.
#
# Below are a few examples of Python tuples.
#
# ```python
# >>> numbers = (1,2,4,6,7)
# >>> players = ("Messi", "Ronaldo", "Neymar")
# ```
# Using a loop, we can perform various operations on a tuple. There are several ways to iterate through its elements. Here are some examples to help you understand better.
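# Because tuples are immutable, trying to assign to an element raises a `TypeError`, while the same assignment on a list succeeds:

```python
numbers = (1, 2, 4, 6, 7)
try:
    numbers[0] = 10           # tuples do not support item assignment
except TypeError as error:
    print('tuple error:', error)

items = [1, 2, 4, 6, 7]
items[0] = 10                 # lists do
print(items)
```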
# +
# Example 1: For loop with tuple
numbers = (0, 1, 2, 3, 4, 5)
for number in numbers:
print(number)
# -
# ## Iterate Dictionary using `for` loop
#
# **[Python dictionary](https://github.com/milaan9/02_Python_Datatypes/blob/main/005_Python_Dictionary.ipynb)** is used to store items in the format of **key-value** pairs. It doesn’t allow duplicate keys. It is enclosed in **`{}`**. Here are some examples of dictionaries.
#
# ```python
# >>> dict1 = {1: "Apple", 2: "Ball", 3: "Cat"}
# >>> dict2 = {"Antibiotics": "Penicillin", "Inventor": "Fleming", "Year": 1928}
# ```
#
# There are ways to iterate through **key-value** pairs. Here are some examples to help you understand better.
# +
# Example 1: Access only the keys of the dictionary.
dict1 = {"Antibiotics": "Penicillin", "Inventor": "Fleming", "Year": 1928}
for key in dict1:
print(key)
# +
# Example 2: Iterate keys and values of the dictionary
dict1 = {"Vaccine": "Polio", "Inventor": "Salk", "Year": 1953}
for key in dict1:
print(key, "->", dict1[key])
# +
# Example 3: Iterate only the values the dictionary
dict1 = {"Vaccine": "Smallpox ", "Inventor": "Jenner", "Year": 1796}
for value in dict1.values():
print(value)
# +
# Example 4: For loop with dictionary
# Looping through a dictionary gives you the keys of the dictionary.
person = {
'first_name':'Milaan',
'last_name':'Parmar',
'age':96,
'country':'Finland',
'is_marred':True,
'skills':['Python', 'Matlab', 'R', 'C', 'C++'],
'address':{
'street':'Space street',
'zipcode':'02210'
}
}
for key in person:
print(key)
for key, value in person.items():
print(key, value) # this way we get both keys and values printed out
# -
# ## Iterate Set using `for` loop
#
# **[Python sets](https://github.com/milaan9/02_Python_Datatypes/blob/main/006_Python_Sets.ipynb)** is an unordered collection of items. Every set element is unique (no duplicates) and must be immutable (cannot be changed).
#
# However, a set itself is **mutable**. We can add or remove items from it.
#
# ```python
# >>> my_set = {1, 2, 3}
# >>> my_vaccine = {"Penicillin", "Fleming", "1928"}
# ```
#
# Sets can also be used to perform mathematical set operations like **union**, **intersection**, **symmetric difference**, etc.
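# A quick look at those set operations before iterating:

```python
a = {1, 2, 3}
b = {3, 4, 5}
print(a | b)   # union: all elements from both sets
print(a & b)   # intersection: elements in both sets
print(a ^ b)   # symmetric difference: elements in exactly one set
```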
# +
# Example 6: For loop with set
mix_fruits = {'Banana', 'Apple', 'Mango', 'Orange', 'Guava', 'Kiwi', 'Grape'}
for fruits in mix_fruits:
print(fruits)
# -
# ## 💻 Exercises ➞ <span class='label label-default'>List</span>
#
# ### Exercises ➞ <span class='label label-default'>Level 1</span>
#
# 1. Iterate 0 to 10 using **`for`** loop, do the same using **`while`** loop.
# 2. Iterate 10 to 0 using **`for`** loop, do the same using **`while`** loop.
# 3. Write a loop that makes eight calls to **`print()`**, so we get the following grid on the output:
#
# ```py
# # # # # # # # #
# # # # # # # # #
# # # # # # # # #
# # # # # # # # #
# # # # # # # # #
# # # # # # # # #
# # # # # # # # #
# # # # # # # # #
# ```
#
# 4. Use nested loops to create the following:
#
# ```py
# #
# ###
# #####
# #######
# #########
# ###########
# #############
# ```
#
# 5. Print the following pattern using loops
#
# ```py
# 0 x 0 = 0
# 1 x 1 = 1
# 2 x 2 = 4
# 3 x 3 = 9
# 4 x 4 = 16
# 5 x 5 = 25
# 6 x 6 = 36
# 7 x 7 = 49
# 8 x 8 = 64
# 9 x 9 = 81
# 10 x 10 = 100
# ```
#
# 6. Iterate through the list, ['Python', 'Numpy','Pandas','Scikit', 'Pytorch'] using a **`for`** loop and print out the items.
# 7. Use **`for`** loop to iterate from 0 to 100 and print only even numbers
# 8. Use **`for`** loop to iterate from 0 to 100 and print only odd numbers
#
# ### Exercises ➞ <span class='label label-default'>Level 2</span>
#
# 1. Use **`for`** loop to iterate from 0 to 100 and print the sum of all numbers.
#
# ```py
# The sum of all numbers is 5050.
# ```
#
# 2. Use **`for`** loop to iterate from 0 to 100 and print the sum of all evens and the sum of all odds.
#
# ```py
# The sum of all evens is 2550. And the sum of all odds is 2500.
# ```
#
# ### Exercises ➞ <span class='label label-default'>Level 3</span>
#
# 1. Go to the data folder and use the **[countries.py](https://github.com/Asabeneh/30-Days-Of-Python/blob/master/data/countries.py)** file. Loop through the countries and extract all the countries containing the word **`land`**.
# 2. This is a fruit list, ['banana', 'orange', 'mango', 'lemon']; reverse the order using a loop.
# 3. Go to the data folder and use the **[countries_data.py](https://github.com/milaan9/03_Python_Flow_Control/blob/main/countries_details_data.py)** file.
# 1. What are the total number of languages in the data
# 2. Find the ten most spoken languages from the data
# 3. Find the 10 most populated countries in the world
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Activation, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.applications import ResNet50
from sklearn.metrics import confusion_matrix, classification_report
import matplotlib.pyplot as plt
# -
plt.style.use("ggplot")
TRAIN_DATASET_PATH = "./observations/experiements/dest_folder/train/"
VALIDATION_DATASET_PATH = "./observations/experiements/dest_folder/val/"
TEST_DATASET_PATH = "./observations/experiements/dest_folder/test/"
BATCH_SIZE = 60
EPOCHS = 50
train_datagen = ImageDataGenerator(rescale=1/255)
train_generator = train_datagen.flow_from_directory(
TRAIN_DATASET_PATH,
target_size=(256, 256),
class_mode='binary',
batch_size=BATCH_SIZE)
validation_datagen = ImageDataGenerator(rescale=1/255)
validation_generator = validation_datagen.flow_from_directory(
VALIDATION_DATASET_PATH,
target_size=(256, 256),
class_mode='binary',
batch_size=BATCH_SIZE)
test_datagen = ImageDataGenerator(rescale=1/255)
test_generator = test_datagen.flow_from_directory(
TEST_DATASET_PATH,
target_size=(256, 256),
class_mode='binary',
batch_size=BATCH_SIZE)
# transfer learning
model = Sequential([
ResNet50(include_top=False, pooling='avg', input_shape=(256, 256, 3)),
Dense(1, activation='sigmoid')
])
model.layers[0].trainable = False  # freeze the pretrained ResNet50 base
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.summary()
history = model.fit(train_generator, epochs=EPOCHS, steps_per_epoch=5, validation_data=validation_generator)
N = np.arange(0, EPOCHS)
plt.figure()
plt.plot(N, history.history['accuracy'], label='accuracy')
plt.plot(N, history.history['val_accuracy'], label='val_accuracy')
plt.xlabel("epochs")
plt.ylabel("accuracy/loss")
plt.title("RMS Prop Optimizer")
plt.legend()
plt.show()
plt.figure()
plt.plot(N, history.history['loss'], label='loss')
plt.plot(N, history.history['val_loss'], label='val_loss')
plt.xlabel("epochs")
plt.ylabel("val accuracy/loss")
plt.legend()
plt.show()
model.save("model.h5")
# +
# model = tf.keras.models.load_model("model.h5")
# -
model.evaluate(test_generator)
# NOTE: flow_from_directory shuffles batches by default, so these predictions
# may not line up with train_generator.classes; recreate the generator with
# shuffle=False before computing the confusion matrix.
predictions = model.predict(train_generator)
confusion_matrix(train_generator.classes, np.round(predictions))
print(classification_report(train_generator.classes, np.round(predictions)))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Notebook Object
# #### 1. Finds:
# - longest street
# - shortest street
# - straightest street
# - curviest Street
#
# in Boulder, Colorado, USA
#
# #### 2. Gives a function for above information in any cities (with OSMnx Relations)
#
# #### Author: Boulder - Yu
import osmnx as ox
import shapely
import geopandas as gpd
import pandas as pd
import os
import matplotlib.pyplot as plt
import fiona
# %pylab inline
import numpy as np
from collections import Counter
from geopy.distance import geodesic  # vincenty was removed in geopy 2.0
from shapely.geometry import Point
import pylab as pl
# ## Street Length
# ### 1.1 OSMnx Visualization
Boulder = ox.graph_from_place('Boulder, Colorado, USA',network_type='drive')
Boulder_projected = ox.project_graph(Boulder)
fig, ax = ox.plot_graph(Boulder_projected)
Boulder_stats = ox.basic_stats(Boulder)
Boulder_stats
ox.save_graph_shapefile(Boulder_projected, filename='boulder_osmnx')
streets_osmnx = gpd.read_file('data/boulder_osmnx/edges/edges.shp')
streets_osmnx.head()
# ### 1.2 Combine Same Street Name
street = pd.DataFrame(streets_osmnx)
streets = pd.DataFrame(street.groupby(['name','length','highway','oneway'],as_index=False).size())
streets.reset_index(inplace=True)
streets.columns = ['name','length','highway','oneway','count']
streets.head()
streets.length = streets.length.astype(float)
streets.head()
streets.highway.unique()
# +
#way = streets[(streets.highway !='motorway')|(streets.highway !='trunk')]
#way.head()
# -
# ### 1.3 Find Longest Street
street_length = streets.groupby('name')['length'].sum()
street_length = pd.DataFrame(street_length)
street_length.head()
street_length.loc[street_length['length'].idxmax()]
# +
#street_length.sort_values('length', ascending=False).head()
# -
# ### 1.4 Find Shortest Street
street_length.loc[street_length['length'].idxmin()]
# +
#street_length.sort_values('length', ascending=True).head()
# -
# ## Functions for street length
def howlong(place, ntype='drive'):
"""
Find the shortest and longest roads by name in a given geographical area in OSM.
Args:
1. place: a string of place name
2. ntype: network type (not tested yet)
Return:
1. name and length of the shortest road (length problem unsolved)
2. name and length of the longest road (length problem unsolved)
3. network graph plot of given place
"""
# try different API query results
try:
G = ox.project_graph(ox.graph_from_place(place,network_type='drive', which_result=1))
except:
G = ox.project_graph(ox.graph_from_place(place, network_type='drive',which_result=2))
# convert road segments as df
segs = list(G.edges(data=True))
df = pd.DataFrame([[i[0], i[1], i[2]['highway'], i[2]['length'],
i[2]['name'], i[2]['oneway'], i[2]['osmid']] for i in segs if 'name' in i[2]],
columns=['node_a', 'node_b', 'type', 'length',
'name', 'oneway', 'osmid'])
df['name'] = df['name'].apply(lambda x: str(x))
    # combine road segments: two-way segments appear once per direction,
    # so halve their length before summing per street name
    for j in df[df['oneway'] == False].index:
        df.loc[j,'length'] /= 2
df2 = df.groupby(by='name').sum().reset_index()
#df2.drop(['node_a', 'node_b', 'oneway'], axis=1, inplace=True)
# calculate shortest and longest roads
short = df2.loc[df2['length'].idxmin()]
long = df2.loc[df2['length'].idxmax()]
# output
print('Shortest road: {:s} ({:.2f} meters)'.format(short['name'], short['length']))
print('Longest road: {:s} ({:.2f} meters)'.format(long['name'], long['length']))
ox.plot_graph(G);
howlong('Boulder, Colorado, USA')
Boulder_stats = ox.basic_stats(ox.graph_from_place('Boulder, Colorado, USA'))
Boulder_stats['circuity_avg']
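# OSMnx's `circuity_avg` is the ratio of total edge length to the total great-circle distance between edge endpoints. As a rough sketch of the same idea for a single road (my own illustration, not OSMnx code; the 6371 km mean Earth radius is an approximation):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters (approximate)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def circuity(road_length_m, p1, p2):
    """Ratio of along-road length to straight-line distance; >= 1 for real roads."""
    return road_length_m / haversine_m(p1[0], p1[1], p2[0], p2[1])
```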
place = 'Boulder, Colorado, USA'
gdf = ox.gdf_from_place(place)
area = ox.project_gdf(gdf).unary_union.area
G = ox.graph_from_place(place, network_type='drive_service')
# calculate basic and extended network stats, merge them together, and display
G_stats = ox.basic_stats(G, area=area)
extended_stats = ox.extended_stats(G, ecc=True, bc=True, cc=True)
for key, value in extended_stats.items():
G_stats[key] = value
pd.Series(G_stats)
Universityhill = ox.graph_from_place('University hill,Boulder, Colorado, USA',network_type='drive')
Universityhill_projected = ox.project_graph(Universityhill)
fig, ax = ox.plot_graph(Universityhill_projected)
ox.save_graph_shapefile(Universityhill_projected, filename='Universityhill')
universityhill_osmnx = gpd.read_file('data/Universityhill/edges/edges.shp')
universityhill_osmnx.head()
universityhill_streets = pd.DataFrame(universityhill_osmnx.groupby(['name','highway','from','to'],as_index=False).size())
universityhill_streets.reset_index(inplace=True)
universityhill_streets.columns = ['name','highway','from','to','count']
universityhill_streets.head()
ulist = list(universityhill_streets['from'].values) + list(universityhill_streets['to'].values)
udict = dict(Counter(ulist))
udict
# nodes with a count of 1 are terminal nodes, i.e. the origin and destination
head = pd.DataFrame(list(udict.items()), columns=['node', 'count'])
headtoe = head[head['count']==1]
headtoe
university_nodes = gpd.read_file('data/Universityhill/nodes/nodes.shp')
university_nodes.head()
university_nodes_or = university_nodes[list(map(lambda n: n in list(udict.keys()), list(university_nodes['osmid'])))]
university_nodes_term = university_nodes[list(map(lambda n: n in list(headtoe['node'].values), list(university_nodes['osmid'])))]
university_nodes_term
# We would expect only two terminal nodes (one origin and one destination), but this street has multiple origins and/or destinations.
# 
# This needs further discussion; for now we only consider streets with exactly two terminal nodes
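# The terminal-node bookkeeping above can be sketched without any OSM data: count how often each node appears among a street's segments; nodes that appear exactly once are endpoints. (The segment list below is made up purely for illustration.)

```python
from collections import Counter

def terminal_nodes(segments):
    """Given (from_node, to_node) pairs, return nodes appearing exactly once."""
    counts = Counter(n for seg in segments for n in seg)
    return sorted(n for n, c in counts.items() if c == 1)

# a simple chain A-B-C-D: only A and D are terminal
print(terminal_nodes([('A', 'B'), ('B', 'C'), ('C', 'D')]))  # ['A', 'D']
```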
# ## Function for street curviness
def howcurve(place, ntype='drive'):
"""
Find the straightest and curviest roads by name in a given geographical area in OSM.
Args:
1. place: a string of place name
2. ntype: network type (not tested yet)
Return:
1. name and circuity info of the straightest road
2. name and circuity info of the curviest road
3. network graph plot of given place
"""
# try different API query results
    try:
        G = ox.graph_from_place(place, network_type=ntype, which_result=1)
    except:
        G = ox.graph_from_place(place, network_type=ntype, which_result=2)
# save and read as .shp file
ox.save_graph_shapefile(G, filename='howcurve')
G_edges = gpd.read_file('data/howcurve/edges/edges.shp')
G_nodes = gpd.read_file('data/howcurve/nodes/nodes.shp')
# extract road names
roads = list(G_edges['name'].unique())
    #roads.remove('') ### remove messy segments without names ###
rnames = []
dist_v = []
dist_r = []
circuity = []
# calculate circuity for each road
for i,r in enumerate(roads):
df_road = G_edges[G_edges['name'] == roads[i]]
nor = list(df_road['from'].values) + list(df_road['to'].values)
tdict = dict(Counter(nor))
tdf = pd.DataFrame(list(tdict.items()), columns=['node', 'count'])
tdf_sub = tdf[tdf['count']==1]
if len(tdf_sub) != 2:
continue ### skip roads with more than two terminal nodes for now ###
else:
G_nodes_term = G_nodes[list(map(lambda n: n in list(tdf_sub['node'].values), list(G_nodes['osmid'])))]
coord1 = list(G_nodes_term.iloc[0,:]['geometry'].coords)[0]
coord2 = list(G_nodes_term.iloc[1,:]['geometry'].coords)[0]
p1 = coord1[1], coord1[0]
p2 = coord2[1], coord2[0]
d_v = vincenty(p1, p2).meters
            d_r = pd.to_numeric(df_road['length'], errors='coerce').sum()
circ = d_r / d_v
rnames.append(r)
dist_v.append(d_v)
dist_r.append(d_r)
circuity.append(circ)
# create a dataframe with circuity data
df_circ = pd.DataFrame({'name': rnames,
'dist_v': dist_v,
'dist_r': dist_r,
'circuity': circuity})
# calculate straightest and curviest roads
straight = df_circ.sort_values('circuity', ascending=True).head(1).iloc[0]
curve = df_circ.sort_values('circuity', ascending=False).head(1).iloc[0]
# output
print('Straightest road: {:s}\nroad dist.: {:.2f}\nshortest dist.: {:.2f}\ncircuity: {:.5f}\n'.format(straight['name'], straight['dist_r'], straight['dist_v'], straight['circuity']))
print('Curviest road: {:s}\nroad dist.: {:.2f}\nshortest dist.: {:.2f}\ncircuity: {:.5f}\n'.format(curve['name'], curve['dist_r'], curve['dist_v'], curve['circuity']))
#ox.plot_graph(G);
fig, ax = pl.subplots(figsize=(10,10))
G_edges.plot(ax=ax)
G_edges[G_edges['name'] == straight['name']].plot(color='red', ax=ax, label='straightest')
G_edges[G_edges['name'] == curve['name']].plot(color='orange', ax=ax, label='curviest')
pl.legend(fontsize='medium')
pl.show()
howcurve('Boulder, CO')
# ## Explore Nominatim for Details
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent='street-length-demo')  # recent geopy versions require a user_agent; the name is arbitrary
location = geolocator.geocode("Broadway,Boulder,Colorado")
print(location.address)
print(location.raw)
location = geolocator.geocode("Bixby Lane,Boulder,Colorado")
print(location.address)
print(location.raw)
|
notebooks/sandbox_Boulder.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# Write out the traces and polygons of faults in an RSQsim fault model as shp files.
# + pycharm={"name": "#%%\n"}
#import relevant modules
from rsqsim_api.catalogue.catalogue import RsqSimCatalogue
from rsqsim_api.fault.multifault import RsqSimMultiFault
import fnmatch
import os
# + pycharm={"name": "#%%\n"}
# Tell python where file paths etc. are relative to
script_dir = os.path.abspath('')
fault_dir = "../../../data/shaw_new_catalogue/NewZealand/rundir5382"
catalogue_dir = fault_dir
outdir=os.path.join(catalogue_dir,"by_fault")
if not os.path.exists(outdir):
os.mkdir(outdir)
# + [markdown] pycharm={"name": "#%% md\n"}
# Read in fault model.
# + pycharm={"name": "#%%\n"}
fault_model=RsqSimMultiFault.read_fault_file_bruce(main_fault_file=os.path.join(fault_dir, "zfault_Deepen.in"),
name_file=os.path.join(fault_dir, "znames_Deepen.in"),
transform_from_utm=True)
# + [markdown] pycharm={"name": "#%% md\n"}
# Write fault traces to shp file.
# + pycharm={"name": "#%%\n"}
fault_model.write_fault_traces_to_gis(prefix=os.path.join(outdir,"bruce_faults"),crs="EPSG:2193")
# + [markdown] pycharm={"name": "#%% md\n"}
# Write out fault outlines to shp file.
# + pycharm={"name": "#%%\n"}
fault_model.write_fault_outlines_to_gis(prefix=os.path.join(outdir,"bruce_faults"),crs="EPSG:2193")
# + pycharm={"name": "#%%\n"}
|
examples/rsqsim_api/fault_attributes/fault_polys_traces_to_shp.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Some Python knowledge you may not know but should
# 1. OrderedDict
# 2. magic functions, with \_\_init\_\_ and \_\_setattr\_\_ as representative examples
# 3. weakref
# ## An OrderedDict is simply a dict that preserves insertion order
# +
from collections import OrderedDict
od = OrderedDict()
od['a'] = 1
od['b'] = 2
od['c'] = 3
for i in od:
print(i)
dictionary = dict()
dictionary['a'] = 1
dictionary['b'] = 2
dictionary['c'] = 3
dictionary['d'] = 4
for i in dictionary:
print(i)
# -
# # The five main parts of module.py
# Following this diagram, it is worth spending 5 to 10 minutes skimming the functions in module.py. In what follows I do not go through the five parts in order, but rather through the functions in the order they appear in module.py.
# 
# # \_\_init\_\_
# - self._parameters stores the registered Parameter objects
#
# - self._buffers stores the registered Buffer objects. (In PyTorch, a buffer is a value that does not need to be updated through backpropagation.)
#
# - self._modules stores the registered Module objects.
#
# - self.training is a flag indicating whether the module is in training mode
#
# - the ...hooks dicts store the registered hooks
# ## **\_buffers are the quantities in a model that do not need to be updated through backpropagation**
# The most typical case is the implementation of the BatchNorm layer, which we will cover when analyzing BN in detail; below is a more direct example.
# +
import torch
import torch.nn as nn
from torch.nn import Module
import torch.nn.functional as F
from collections import OrderedDict
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4 * 4 * 50, 500)
self.fc2 = nn.Linear(500, 10)
self.register_buffer('multiply', torch.tensor([1,2]))
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4 * 4 * 50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
x = F.log_softmax(x, dim=1)
        x = self._buffers['multiply'] * x
return x
net = Net()
print('net itself\'s _buffers = ', net._buffers)
print('it\'s {}empty'.format('' if net._buffers == OrderedDict() else 'not '))
print('multiply is {}in net.state_dict()'.format('' if 'multiply' in net.state_dict() else 'not '))
print('#'*20)
for sub_module in net._modules.values():
print(' ',sub_module)
print(' ',sub_module._parameters.keys())
print(' this sub module has {}nodes as buffers'.format('no ' if sub_module._buffers == OrderedDict() else ''))
print(' *****************')
# -
# # forward
# Every concrete Module must override this function
# # register_buffer(self, name, tensor)
def register_buffer(self, name, tensor):
    # 1. name must be a string
    if not isinstance(name, torch._six.string_classes):
        raise TypeError("buffer name should be a string. "
                        "Got {}".format(torch.typename(name)))
    # 2. name must not contain a dot '.'
    elif '.' in name:
        raise KeyError("buffer name can't contain \".\"")
    # 3. name must not be an empty string
    elif name == '':
        raise KeyError("buffer name can't be empty string \"\"")
    # 4. if name is already an attribute of the module but not in _buffers, there is a conflict
    elif hasattr(self, name) and name not in self._buffers:
        raise KeyError("attribute '{}' already exists".format(name))
    # 5. if tensor is not None it must be a real torch.Tensor; a common mistake
    #    is trying to register a numpy array as a buffer
    elif tensor is not None and not isinstance(tensor, torch.Tensor):
        raise TypeError("cannot assign '{}' object to buffer '{}' "
                        "(torch Tensor or None required)"
                        .format(torch.typename(tensor), name))
    else:
        self._buffers[name] = tensor
# 1. name must be a string
# 2. name must not contain a dot .
# 3. name must not be an empty string
# 4. if name is already an attribute of the module but not in _buffers, there is a conflict
# 5. if tensor is not None but is not a real torch.Tensor, it is also rejected; a common mistake here is registering a numpy array as a buffer (the code below raises an error)
import numpy as np
t = np.array([1,2,2])
net.register_buffer('t', t)
# # register_parameter(self, name, param)
def register_parameter(self, name, param):
    r"""Adds a parameter to the module.
    The parameter can be accessed as an attribute using given name.
    Args:
        name (string): name of the parameter. The parameter can be accessed
            from this module using the given name
        param (Parameter): parameter to be added to the module.
    """
    # if _parameters is not yet in self.__dict__, __init__ has not been called
    if '_parameters' not in self.__dict__:
        raise AttributeError(
            "cannot assign parameter before Module.__init__() call")
    # name must be a string
    elif not isinstance(name, torch._six.string_classes):
        raise TypeError("parameter name should be a string. "
                        "Got {}".format(torch.typename(name)))
    # name must not contain a dot '.'
    elif '.' in name:
        raise KeyError("parameter name can't contain \".\"")
    # name should not be empty
    elif name == '':
        raise KeyError("parameter name can't be empty string \"\"")
    # name is already an attribute of the module and not in _parameters: conflict
    elif hasattr(self, name) and name not in self._parameters:
        raise KeyError("attribute '{}' already exists".format(name))
    # if param is None, put it into _parameters
    if param is None:
        self._parameters[name] = None
    # if param is not an instance of Parameter, it is rejected
    elif not isinstance(param, Parameter):
        raise TypeError("cannot assign '{}' object to parameter '{}' "
                        "(torch.nn.Parameter or None required)"
                        .format(torch.typename(param), name))
    # param.grad_fn must be None; a non-None grad_fn means param is not a leaf
    # node, i.e. it was produced by other nodes, and PyTorch does not allow
    # such a param to be placed directly into a model.
    elif param.grad_fn:
        raise ValueError(
            # TODO
            # This says parameters added to a model should be leaf tensors,
            # though at first I was not sure why. On reflection: if the
            # parameter you add already has a grad_fn attribute, it is
            # clearly not a leaf, which makes sense.
            "Cannot assign non-leaf Tensor to parameter '{0}'. Model "
            "parameters must be created explicitly. To express '{0}' "
            "as a function of another Tensor, compute the value in "
            "the forward() method.".format(name))
    else:
        self._parameters[name] = param
# 1. **If \_parameters is not yet in self.\_\_dict__, then \_\_init__ has not been called**
# 2. name must be a string
# 3. name must not contain a dot .
# 4. name should not be empty
# 5. if name is already an attribute of the module and not in _parameters, there is a conflict
# 6. if param is None, put it into _parameters
# 7. if param is not an instance of Parameter, it is rejected
# 8. param.grad_fn must be None. A non-None param.grad_fn means param is not a leaf node, i.e. it was produced by other nodes, and PyTorch does not allow such a param to be placed directly into a model.
# **Why does register_parameter need to consider whether Module.\_\_init__ has already been called, while register_buffer does not?**
# - After some experiments, I believe it should, and I have submitted a PR. The reason follows from the experiment below. First, running the code below gives the error " Linear object has no attribute '\_buffers' "
# +
import math
import torch
from torch.nn.parameter import Parameter
from torch.nn import functional as F
from torch.nn import Module
class Linear(Module):
def __init__(self, in_features, out_features, bias=True):
self.in_features = in_features
self.out_features = out_features
self.register_buffer('test', torch.Tensor(out_features, in_features))
self.weight = Parameter(torch.Tensor(out_features, in_features))
if bias:
self.bias = Parameter(torch.Tensor(out_features))
else:
self.register_parameter('bias', None)
super(Linear, self).__init__()
self.reset_parameters()
def reset_parameters(self):
stdv = 1. / math.sqrt(self.weight.size(1))
self.weight.data.uniform_(-stdv, stdv)
if self.bias is not None:
self.bias.data.uniform_(-stdv, stdv)
def forward(self, input):
return F.linear(input, self.weight, self.bias)
def extra_repr(self):
return 'in_features={}, out_features={}, bias={}'.format(
self.in_features, self.out_features, self.bias is not None
)
linear = Linear(3,4)
# -
# 2. If, before line 106 of the module.py source, I add (you can try modifying it yourself)
# ```
# if '_buffers' not in self.__dict__:
#     raise AttributeError(
#         "cannot assign buffer before Module.__init__() call")
# ```
# and change line 106's if not isinstance(name, torch.\_six.string\_classes) to elif not isinstance(name, torch.\_six.string\_classes):
# then the error for the code above becomes "cannot assign buffer before Module.__init__() call", which makes it much clearer to the user that super(Linear, self).\_\_init\_\_() should be called first
# # add_module
# The logic is much the same as above, so I will not repeat it
# # \_apply(self, fn)
# 1. Methods starting with \_ are private methods, i.e. methods users are advised not to call directly
# 2. fn ultimately acts on the module's params and buffers
# # apply(self, fn)
# I think the difference from \_apply lies in fn: here fn can be any arbitrary operation on the module, whereas \_apply's fn acts on the module's params and buffers
# +
import torch.nn as nn
def init_weights(m):
print(m)
if type(m) == nn.Linear:
m.weight.data.fill_(1.0)
print(m.weight)
net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
net.apply(init_weights)
# -
# # The cuda, cpu, type, float, double, half, to, etc. methods = \_apply(self, a specific fn)
# # hook
# 1. Key conclusions:
# - A hook is only an auxiliary tool; it does not take part in any forward or backward computation.
# - Inside a hook you can define any auxiliary operation; the most common is printing the grad of intermediate nodes --> the reason being that after forward or backward, the grads of intermediate nodes are not kept.
# 2. Why hooks matter
# - As said in 1, they can help print grads
# - Why do we need a dedicated hook for printing, instead of reading the parameter's grad directly? Because during backward, once an intermediate node's children have had their grads computed, that node's grad is freed, which helps save memory.
# 3. [Reference](https://oldpan.me/archives/pytorch-autograd-hook)
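# A minimal illustration of the points above (my own example, not from module.py): after backward, the grad of an intermediate tensor is freed, but tensor.register_hook lets us capture it in flight.

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x * 3                      # intermediate node: its .grad is not retained
captured = {}
y.register_hook(lambda g: captured.setdefault('g', g))  # capture y's grad during backward

y.sum().backward()
print(y.grad)          # None: intermediate grads are freed to save memory
print(captured['g'])   # the hook saw the gradient before it was freed
```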
def register_backward_hook(self, hook):
    handle = hooks.RemovableHandle(self._backward_hooks)  # you can ignore this line for now
self._backward_hooks[handle.id] = hook
return handle
# # TODO the beauty of PyTorch's dynamic graph
# After thoroughly understanding the module and tensor parts, we should be able to appreciate the beauty of PyTorch's dynamic graph. My own appreciation is not yet deep enough, so the tensor part is very likely needed.
# [How to understand PyTorch's dynamic computation graph](https://zhuanlan.zhihu.com/p/33378444)
# # TODO Then, why are static graphs more efficient?? What is the low-level difference between dynamic and static graphs?
# # TODO \_tracing_name and \_slow_forward involve JIT; set them aside for now
# # \_\_call\_\_
def __call__(self, *input, **kwargs):
for hook in self._forward_pre_hooks.values():
hook(self, input)
if torch.jit._tracing:
result = self._slow_forward(*input, **kwargs)
else:
result = self.forward(*input, **kwargs)
for hook in self._forward_hooks.values():
hook_result = hook(self, input, result)
if hook_result is not None:
raise RuntimeError(
"forward hooks should never return any values, but '{}'"
"didn't return None".format(hook))
if len(self._backward_hooks) > 0:
var = result
        ### these few lines probably handle the case where result (e.g. one produced by _slow_forward) is not a Tensor, so that we can get hold of one
while not isinstance(var, torch.Tensor):
if isinstance(var, dict):
var = next((v for v in var.values() if isinstance(v, torch.Tensor)))
else:
var = var[0]
grad_fn = var.grad_fn
        # TODO: not entirely clear; need to understand the difference and relation between grad_fn.register_hook and module.register_hook
if grad_fn is not None:
for hook in self._backward_hooks.values():
wrapper = functools.partial(hook, self)
functools.update_wrapper(wrapper, hook)
grad_fn.register_hook(wrapper)
return result
# # \_\_setstate\_\_ and \_\_getstate\_\_ are both fairly simple
# - Why does \_\_setstate\_\_ need to declare \_forward\_pre_hooks again explicitly?
# # Key point: \_\_setattr\_\_
# 1. \_\_setattr\_\_ controls attribute assignment on an instance. For example, with m = Module(), **m.\_parameters = OrderedDict()** actually jumps into \_\_setattr\_\_(m, \_parameters, OrderedDict()) and enters that function's logic.
# 2. Have you ever wondered why, after we define a network structure with PyTorch, simply calling state_dict lets us see the parameters, as below
# +
import torch
from torch import nn
from torch.nn import functional as F
class My_net(nn.Module):
def __init__(self):
super(My_net, self).__init__()
# Conv layer 1
self.cl1 = nn.Linear(2, 3)
def forward(self, x):
# conv layer 1
x = self.cl1(x)
return x
model = My_net()
model.state_dict()
# -
# 3. The actual logic is this: when we call state_dict(), it first adds the module's own params and buffers to an OrderedDict, and then adds the params and buffers of its child modules. This is easy to see from the source code. But we never explicitly put the variables into param and buffer, did we? So how does PyTorch do it for us?
# 4. Back to the \_\_setattr\_\_ source code: it handles assignment like this — first it checks whether your value is of Parameter type, and if so registers it as a parameter (register_parameter); if not, it checks whether it is a module, then whether it is a buffer; if it is none of these, it finally falls back to the ordinary object.__setattr__(self, name, value) (the last line) to declare it.
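# A stripped-down sketch of that dispatch (my own mock, not PyTorch's actual code; FakeParameter is a stand-in for torch.nn.Parameter): \_\_setattr\_\_ inspects the assigned value's type and routes parameters into \_parameters, while everything else becomes an ordinary attribute.

```python
from collections import OrderedDict

class FakeParameter:
    """Stand-in for torch.nn.Parameter, just for this sketch."""
    def __init__(self, value):
        self.value = value

class MiniModule:
    def __init__(self):
        # set up the registry without going through our custom __setattr__
        object.__setattr__(self, '_parameters', OrderedDict())

    def __setattr__(self, name, value):
        if isinstance(value, FakeParameter):   # the value's type decides: register it
            self._parameters[name] = value
        else:                                  # otherwise, an ordinary attribute
            object.__setattr__(self, name, value)

    def state_dict(self):
        return OrderedDict((k, v.value) for k, v in self._parameters.items())

m = MiniModule()
m.weight = FakeParameter([1.0, 2.0])  # routed into _parameters by __setattr__
m.note = 'just an attribute'          # falls through to object.__setattr__
print(list(m.state_dict().keys()))    # ['weight']
```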
# **_Please ignore the following for now; it may be incorrect._**
#
# The reason for the error is that when execution reaches self.weight = Parameter(torch.Tensor(out_features, in_features)), the Python interpreter runs \_\_setattr\_\_(self, 'weight', Parameter(torch.Tensor(out_features, in_features))). Inside \_\_setattr\_\_, it detects that Parameter(torch.Tensor(out_features, in_features)) is a parameter, so the interpreter tries to register it, i.e. to put Parameter(torch.Tensor(out_features, in_features)) into self.\_parameters — at which point you find there is no self.\_parameters at all, because you have not yet called super(Linear, self).\_\_init\_\_.
#
# Correspondingly, __setattr__ does not use register_buffer; it assigns buffers[name] = value directly. At the same time, a very important point: when you write self.xxx = yyy, \_\_setattr\_\_ cannot tell from the type of yyy that it is a buffer; instead it checks whether xxx is already in the existing self.\_buffers — if it is not, xxx is treated as an ordinary attribute. In other words, self.xxx = yyy cannot implicitly register buffers; to register a buffer you can only call register_buffer explicitly yourself. So:
# 1. if you register before super(Linear, self).\_\_init\_\_, it will find that self has no \_buffers and raise an error automatically.
# # I will not go through the remaining methods — they are simple, but it is best to read through the source code yourself
# **You can skip all the code below; it is my own testing, kept only because I might need it again**
x = torch.randn((2, 2))
y = torch.randn((2, 2))
z = torch.randn((2, 2), requires_grad=True)
a = x + y
b = a + z
print(x.grad_fn, a.grad_fn)
print(x.requires_grad, y.requires_grad, z.requires_grad)
print(a.requires_grad, b.requires_grad)
print(b.is_leaf)
class foo(object):
def __init__(self, x):
self.name = x
def add(self, y):
self.name['a'] = y
def __setattr__(self, name, value):
print('into setattr')
object.__setattr__(self, name, value)
f = foo({'b':2})
f.name
f.add(3)
f.name
# +
import weakref
from collections import OrderedDict
test = OrderedDict({'a':1})
t2 = weakref.ref(test)
print(t2())
test['b'] = 2
print(t2())
# +
import collections
import weakref
class RemovableHandle(object):
"""A handle which provides the capability to remove a hook."""
next_id = 0
def __init__(self, hooks_dict):
self.hooks_dict_ref = weakref.ref(hooks_dict)
print(self.hooks_dict_ref())
self.id = RemovableHandle.next_id
RemovableHandle.next_id += 1
def remove(self):
hooks_dict = self.hooks_dict_ref()
if hooks_dict is not None and self.id in hooks_dict:
del hooks_dict[self.id]
def __getstate__(self):
return (self.hooks_dict_ref(), self.id)
def __setstate__(self, state):
if state[0] is None:
# create a dead reference
self.hooks_dict_ref = weakref.ref(collections.OrderedDict())
else:
self.hooks_dict_ref = weakref.ref(state[0])
self.id = state[1]
RemovableHandle.next_id = max(RemovableHandle.next_id, self.id + 1)
def __enter__(self):
return self
def __exit__(self, type, value, tb):
self.remove()
class foo1:
def __init__(self, x):
self.orderdict = x
    def register_hook(self, hook):
        "instantiate RemovableHandle, printing the weak ref during instantiation"
        handle = RemovableHandle(self.orderdict)
        self.orderdict[handle.id] = hook
        "after the step above, the original orderdict has changed, so the weak ref reflects the change as well"
        print(handle.hooks_dict_ref())
        return handle
# -
d = OrderedDict({'a':1})
f1 = foo1(d)
handle = f1.register_hook(print)
handle.remove()
print(f1.orderdict)
f1.orderdict = None
print(f1.orderdict)
print(handle.hooks_dict_ref())
f1
# +
import torch
import torch.nn as nn
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class MyMul(nn.Module):
def forward(self, input):
out = input * 2
return out
class MyMean(nn.Module):  # custom division module
def forward(self, input):
out = input/4
return out
def tensor_hook(grad):
print('tensor hook')
print('grad:', grad)
return grad
class MyNet(nn.Module):
def __init__(self):
super(MyNet, self).__init__()
self.f1 = nn.Linear(4, 1, bias=True)
self.f2 = MyMean()
self.weight_init()
def forward(self, input):
self.input = input
output = self.f1(input) # 先进行运算1,后进行运算2
output = self.f2(output)
return output
def weight_init(self):
        self.f1.weight.data.fill_(8.0)  # set the Linear weight to 8 here
        self.f1.bias.data.fill_(2.0)  # set the Linear bias to 2 here
    def my_hook(self, module, grad_input, grad_output):
        print('doing my_hook')
        print('original grad:', grad_input)
        print('original outgrad:', grad_output)
        # grad_input = grad_input[0]*self.input  # the manipulation of grad_input inside the hook is commented out here,
        # grad_input = tuple([grad_input])  # the returned grad_input must be a tuple, so we wrap it in one.
        # print('now grad:', grad_input)
        # grad_input_ = (grad_input[0] * 2,)
        # print('grad_input_', grad_input_)
        # return grad_input_
        return grad_input
if __name__ == '__main__':
input = torch.tensor([1, 2, 3, 4], dtype=torch.float32, requires_grad=True).to(device)
net = MyNet()
net.to(device)
    net.register_backward_hook(net.my_hook)  # these two hook registrations must run before result = net(input) executes, because hooks are bound during forward
input.register_hook(tensor_hook)
result = net(input)
print('result =', result)
result.backward()
print('input.grad:', input.grad)
for param in net.parameters():
print('{}:grad->{}'.format(param, param.grad))
print('*'*20)
from collections import OrderedDict
print(net._parameters == OrderedDict())
for name, param in net.named_parameters():
print('{}, {}:grad->{}'.format(name, param, param.grad))
# +
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.cl1 = nn.Linear(25, 60)
self.cl2 = nn.Linear(60, 16)
self.fc1 = nn.Linear(16, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.relu(self.cl1(x))
x = F.relu(self.cl2(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.log_softmax(self.fc3(x), dim=1)
return x
activation = {}
def get_activation(name):
def hook(model, input, output):
print('output = ', output)
activation[name] = output.detach()
return hook
model = MyModel()
model.fc2.register_forward_hook(get_activation('fc2'))
x = torch.randn(1, 25)
output = model(x)
print(activation['fc2'])
|
module.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pickle
import numpy as np
import pandas as pd
import plotnine as p9
from sklearn.datasets import load_digits
from scipy.spatial.distance import pdist
from sklearn.manifold.t_sne import _joint_probabilities
from scipy import linalg
from sklearn.metrics import pairwise_distances
from scipy.spatial.distance import squareform
from sklearn.manifold import TSNE
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(rc={'figure.figsize':(11.7,8.27)})
palette = sns.color_palette("bright", 10)
with open('data/tsne/bert_embed.pickle', 'rb') as f:
    bert_data = pickle.load(f)  # needed: bert_data is plotted below
with open('data/tsne/moco_embed.pickle', 'rb') as f:
moco_data = pickle.load(f)
# +
def plot(data, exclude=[], n_iter=10000, perplexity=50, mean=True):
all_data = [] # negative_data[:]
labels = []
for key, val in data.items():
if key not in exclude:
labels.extend([key] * len(val))
all_data.extend(val)
tsne = TSNE(n_iter=n_iter, perplexity=perplexity)
if mean:
z = [x.mean(0).mean(0) for x in all_data]
else:
z = [x.flatten() for x in all_data]
tsne_results = tsne.fit_transform(z)
df = pd.DataFrame(tsne_results, columns=['x', 'y'])
print(len(df), len(labels))
df['Method Tag'] = labels
return p9.ggplot(p9.aes('x', 'y'), df) + p9.geom_point(p9.aes(color='Method Tag'), alpha=0.8) + p9.theme_classic()
from tqdm.auto import tqdm
n_iter = 10000
for perplexity in [30, 60, 90, 120]:
p = plot(moco_data, mean=False, perplexity=perplexity, n_iter=n_iter)
out_file = f"/work/paras/representjs/data/tsne/transformer_p{perplexity}_n{n_iter}.pdf"
p.save(out_file)
p
# -
plot(bert_data, ['compute', 'sort', 'compress', 'database'], perplexity=20)
plot(moco_data, ['compute', 'sort', 'compress', 'database'], mean=False, perplexity=90)
|
nb/093020_visualize_tsne.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import MySQLdb
import pandas as pd  # needed for pd.read_sql below
data = ["172.16.17.32",
"root",
"dss",
"world"]
world_db = MySQLdb.connect(*data, charset='utf8')
QUERY = """
SELECT Code, Name, GNP, Population
FROM country
"""
df = pd.read_sql(QUERY, world_db)
df.tail(2)
# -
QUERY = """
select code, round(population/surfacearea,2) as pop_per_sur
from country
where population > 10000000
order by pop_per_sur desc
limit 3
"""
df = pd.read_sql(QUERY, world_db)
df
# +
# QUERY2 = """
# create view pop_per_su
# as(
# select code, round(population/surfacearea,2) as pop_per_sur
# from country
# where population > 10000000
# order by pop_per_sur desc
# limit 3)
# use pop_per_su
# """
# df = pd.read_sql(QUERY2, world_db)
# df
# -
QUERY3 = """
select c.code, language, percentage,round(c.population * cl.percentage/100, 1) as language_population
from countrylanguage as cl
join (select *
from country
where population > 50000000) as c
on c.code = cl.countrycode
order by percentage desc
limit 5
"""
df = pd.read_sql(QUERY3, world_db)
df
|
practice/database_quiz_code_joohyunjoon3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Labs
# language: python
# name: myenv
# ---
import pandas as pd
df = pd.read_csv('l191.csv')
df['zipcode'].unique()
pd.set_option('display.max_columns', 500)
df['zipcode'].str.split(' ',expand=True)[0]
df.head(2)
df.loc[df['latitude'] == 0]
|
db/Untitled.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# # Minimum Variance Distortionless Response (MVDR) Beamformer
#
# *This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the masters module Selected Topics in Audio Signal Processing, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [<EMAIL>](mailto:<EMAIL>).*
# -
# ## Design of MVDR Beamformer
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
dx = 0.1 # spatial sampling interval
M = 8 # total number of microphones
om = 2*np.pi * np.linspace(100, 8000, 1000) # evaluated angular frequencies
theta = np.pi/2 # steering direction
c = 343 # speed of sound
# -
# First a function is defined which computes the complex weights of a narrowband MVDR beamformer
def design_mvdr(theta, omega):
d = np.exp(1j*omega * dx/c * np.cos(theta) * np.arange(M)).T
Gd = np.zeros(shape=(M,M))
for i in range(M):
for j in range(M):
Gd[i, j] = np.sinc(omega * (j-i) * dx/c * 1/np.pi)
h = np.dot(np.linalg.inv(Gd), d) / np.dot(np.dot(d.conj().T, np.linalg.inv(Gd)), d)
return h
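# A quick sanity check (added illustration, not part of the original notebook): the weights returned by this formula satisfy the distortionless constraint $d^H h = 1$ for the steering vector $d$.

```python
import numpy as np

dx, M, c = 0.1, 8, 343              # same array geometry as above
theta, omega = np.pi / 2, 2 * np.pi * 1000.0

d = np.exp(1j * omega * dx / c * np.cos(theta) * np.arange(M))
Gd = np.array([[np.sinc(omega * (j - i) * dx / (c * np.pi)) for j in range(M)]
               for i in range(M)])
h = np.linalg.inv(Gd) @ d / (d.conj() @ np.linalg.inv(Gd) @ d)

print(np.vdot(d, h))  # d^H h: should equal 1 (distortionless)
```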
# ## Beampattern
#
# The resulting beampattern of the MVDR beamformer is computed and plotted for a steering angle of $\theta = 90^\text{o}$ (broadside operation)
# +
def compute_beampattern(theta, theta_pw, om):
B = np.zeros(shape=(len(om), len(theta_pw)), dtype=complex)
for n in range(len(om)):
h = design_mvdr(theta, om[n])
for mu in range(M):
B[n, :] += np.exp(-1j * om[n]/c * mu*dx * np.cos(theta_pw) ) * h[mu]
return B
def plot_beampattern(B, theta_pw, om):
plt.figure(figsize=(10,10))
plt.imshow(20*np.log10(np.abs(B)), aspect='auto', vmin=-30, vmax=0, origin='lower', \
extent=[0, 180, om[0]/(2*np.pi), om[-1]/(2*np.pi)], cmap='viridis')
plt.xlabel(r'$\theta_{pw}$ in deg')
plt.ylabel('$f$ in Hz')
plt.title(r'$|\bar{P}(\theta, \theta_{pw}, \omega)|$ in dB')
plt.colorbar()
# +
theta_pw = np.linspace(0, np.pi, 181) # evaluated angles of incident plane waves
B = compute_beampattern(theta, theta_pw, om)
plot_beampattern(B, theta_pw, om)
# -
# ### Directivity Index
#
# The directivity index is a quantitative measure for the spatial selectivity of the beamformer. First, functions are defined for computing the directivity factor (DF) and for plotting its logarithm, known as the directivity index (DI).
# +
def compute_directivity_factor(theta, omega):
d = np.exp(1j*omega * dx/c * np.cos(theta) * np.arange(M)).T
Gd = np.zeros(shape=(M,M))
for i in range(M):
for j in range(M):
Gd[i, j] = np.sinc(omega * (j-i) * dx/c * 1/np.pi)
return np.dot(np.dot(d.conj().T, np.linalg.inv(Gd)), d)
def plot_directivity_index(DF):
plt.figure(figsize=(10,5))
plt.plot(om/(2*np.pi), 10*np.log10(np.abs(DF)))
plt.xlabel(r'$f$ in Hz')
plt.ylabel(r'$\mathrm{DI}(\theta, \omega)$ in dB')
plt.grid()
# -
# The resulting DI of the MVDR beamformer is computed and plotted for a steering angle of $\theta = 90^\text{o}$
# +
DF = np.zeros(len(om), dtype=complex)
for mu in range(len(om)):
DF[mu] = compute_directivity_factor(theta, om[mu])
plot_directivity_index(DF)
# -
# ### White-Noise Gain
#
# The white-noise gain (WNG) characterizes the noise attenuation of a beamformer. First, functions are defined for computation and plotting of the WNG.
# +
def compute_white_noise_gain(theta, omega):
d = np.exp(1j*omega * dx/c * np.cos(theta) * np.arange(M)).T
Gd = np.zeros(shape=(M,M))
for i in range(M):
for j in range(M):
Gd[i, j] = np.sinc(omega * (j-i) * dx/c * 1/np.pi)
WNG = np.dot(np.dot(d.conj().T, np.linalg.inv(Gd)), d)**2 / np.dot(np.dot(d.conj().T, np.linalg.inv(Gd)**2), d)
return WNG
def plot_white_noise_gain(DF):
plt.figure(figsize=(10,5))
plt.plot(om/(2*np.pi), 10*np.log10(np.abs(DF)))
plt.xlabel(r'$f$ in Hz')
plt.ylabel(r'$\mathrm{WNG}(\theta, \omega)$ in dB')
plt.grid()
# -
# The resulting WNG of the MVDR beamformer is computed and plotted for a steering angle of $\theta = 90^\text{o}$
# +
WNG = np.zeros(len(om), dtype=complex)
for mu in range(len(om)):
WNG[mu] = compute_white_noise_gain(theta, om[mu])
plot_white_noise_gain(WNG)
# + [markdown] nbsphinx="hidden"
# **Copyright**
#
# This notebook is provided as [Open Educational Resources](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text/images/data are licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *<NAME>, Selected Topics in Audio Signal Processing - Supplementary Material, 2017*.
|
sound_field_analysis/MVDR_truncated.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
# !pip install fake_useragent
from fake_useragent import UserAgent
ua = UserAgent()
ua.chrome
ua.random
import pandas as pd
testdict = {'0J0': ('1000', '1000', 200),
'2A8': ('1000', '1000', 200),
'KEET': ('3.80', '4.15', 200),
'8A0': ('4.45', '4.10', 200),}
testdict.keys()
airports = pd.read_json("USairports.json",orient='index')
airports[airports['icao'].isin(testdict.keys())]
airports[airports['icao'].str.startswith('K')]
|
Untitled.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Self-Driving Car Engineer Nanodegree
#
# ## Unscented Kalman Filter
#
# ## NIS Visualization Notebook
#
# NIS values were logged into csv files and used to plot the NIS for both sensors.
#
# ---
# ## Load The Data
# +
import csv
import matplotlib.pyplot as plt
Lidar_time = []
Lidar_NIS_values = []
Radar_time = []
Radar_NIS_values = []
NIS_lidar_file = "build/NIS_lidar.csv"
NIS_radar_file = "build/NIS_radar.csv"
with open(NIS_lidar_file, 'r') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
Lidar_time.append(float(row[0]))
Lidar_NIS_values.append(float(row[1]))
with open(NIS_radar_file, 'r') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
Radar_time.append(float(row[0]))
Radar_NIS_values.append(float(row[1]))
# shift the time stamps so that each sensor's log starts at t = 0
first_time_lidar = Lidar_time[0]
Lidar_time = [t - first_time_lidar for t in Lidar_time]
first_time_radar = Radar_time[0]
Radar_time = [t - first_time_radar for t in Radar_time]
# -
# ## Plot NIS for Lidar
plt.plot(Lidar_NIS_values)
plt.title('NIS Lidar')
plt.grid(True)
plt.show()
# ## Plot NIS for Radar
plt.plot(Radar_NIS_values)
plt.title('NIS Radar')
plt.grid(True)
plt.show()
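# A common consistency check (not part of the original notebook) is to compare the NIS against the 95% bound of the chi-square distribution with as many degrees of freedom as the measurement has dimensions — assuming here a 2-D lidar measurement and a 3-D radar measurement. A well-tuned filter should keep roughly 95% of its NIS values below these bounds:

```python
from scipy.stats import chi2

# 95% chi-square bounds (assumption: 2-D lidar and 3-D radar measurements)
lidar_bound = chi2.ppf(0.95, df=2)
radar_bound = chi2.ppf(0.95, df=3)
print(lidar_bound, radar_bound)  # ~5.99 and ~7.81
```

# The bounds can be drawn on the plots above, e.g. with `plt.axhline(lidar_bound, color='r')`.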
|
NIS_visualization.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/dhrg/linreg/blob/master/why_is_gradient_descent_bad_polynomial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] toc=true id="b6Q30JGIVCZ_" colab_type="text"
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Why-is-gradient-descent-so-bad-at-optimizing-polynomial-regression?" data-toc-modified-id="Why-is-gradient-descent-so-bad-at-optimizing-polynomial-regression?-0.1"><span class="toc-item-num">0.1 </span>Why is gradient descent so bad at optimizing polynomial regression?</a></span></li><li><span><a href="#Polynomial-regression" data-toc-modified-id="Polynomial-regression-0.2"><span class="toc-item-num">0.2 </span>Polynomial regression</a></span></li></ul></li><li><span><a href="#:-Comparing-results-for-high-order-polynomial-regression" data-toc-modified-id=":-Comparing-results-for-high-order-polynomial-regression-1"><span class="toc-item-num">1 </span>: Comparing results for high order polynomial regression</a></span></li><li><span><a href="#:-Repeating-the-experiment-with-2-polynomial-variables-and-visualizing-the-results" data-toc-modified-id=":-Repeating-the-experiment-with-2-polynomial-variables-and-visualizing-the-results-2"><span class="toc-item-num">2 </span>: Repeating the experiment with 2 polynomial variables and visualizing the results</a></span></li></ul></div>
# + [markdown] id="ena_VA5wVCaC" colab_type="text"
# ## Why is gradient descent so bad at optimizing polynomial regression?
#
# Question from Stackexchange:
# https://stats.stackexchange.com/questions/350130/why-is-gradient-descent-so-bad-at-optimizing-polynomial-regression
#
#
#
# + [markdown] id="XCD0Ml6cVCaD" colab_type="text"
# ### Linear regression
#
# #### Cost function
# $J(\theta) = \frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)}) - y^{(i)})^2 $
#
# $J(\theta) = \frac{1}{2m}(X\theta - y)^T(X\theta - y) $ (vectorized version)
#
# #### Gradient
# $\frac{\partial J(\theta)}{\partial \theta} = \frac{1}{m}X^T(X\theta - y) $
#
# ##### Hessian
# $\frac{\partial^2 J(\theta)}{\partial \theta^2} = \frac{1}{m}X^T X $
#
# ## Polynomial regression
# The design matrix is of the form:
#
# $ \mathbf{X = [1 , x , x^2 , x^3 , ... , x}^n]$
#
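# The poor behaviour of gradient descent can be traced to the conditioning of this design matrix: the convergence rate is governed by the condition number of $X^TX$, which explodes as the polynomial degree grows. A quick illustrative check (not part of the original question):

```python
import numpy as np

x = np.linspace(0.1, 1, 40)
conds = []
for deg in (1, 3, 7):
    # columns [1, x, x^2, ..., x^deg], matching the design matrix above
    X = np.vander(x, deg + 1, increasing=True)
    conds.append(np.linalg.cond(X.T @ X))
    print(deg, conds[-1])
```

# The condition number grows by many orders of magnitude with the degree, which is why plain gradient descent crawls while second-order and closed-form methods do fine.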
# ### Libraries
# + id="ZwF8glsTVCaE" colab_type="code" colab={} outputId="543623b2-9017-414d-bcaa-65a18d77e876"
import numpy as np
import pandas as pd
import scipy.optimize as opt
from sklearn import linear_model
import statsmodels.api as sm
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib inline
plt.style.use('seaborn-white')
# + [markdown] id="0UUb_8-dVCaK" colab_type="text"
# ### Helper Functions
# + id="bdtydzViVCaL" colab_type="code" colab={}
def costfunction(theta,X,y):
m = np.size(y)
theta = theta.reshape(-1,1)
#Cost function in vectorized form
h = X @ theta
J = float((1./(2*m)) * (h - y).T @ (h - y));
return J;
def gradient_descent(theta,X,y,alpha = 0.0005,num_iters=1000):
m = np.size(y)
J_history = np.empty(num_iters)
count_history = np.empty(num_iters)
theta_1_hist, theta_2_hist = [], []
for i in range(num_iters):
#Grad function in vectorized form
h = X @ theta
theta = theta - alpha * (1/m)* (X.T @ (h-y))
#Tracker values for plotting
J_history[i] = costfunction(theta,X,y)
count_history[i] = i
theta_1_hist.append(theta[0,0])
theta_2_hist.append(theta[1,0])
return theta, J_history,count_history, theta_1_hist, theta_2_hist
def grad(theta,X,y):
#Initializations
theta = theta[:,np.newaxis]
m = len(y)
grad = np.zeros(theta.shape)
#Computations
h = X @ theta
grad = (1./m)*(X.T @ ( h - y))
return (grad.flatten())
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
def hessian(theta,X,y):
m,n = X.shape
X = X.values
return ((1./m)*(X.T @ X))
# + [markdown] id="Z5OFLPaDVCaO" colab_type="text"
# # : Comparing results for high order polynomial regression
# + [markdown] id="z6EZnMUTVCaP" colab_type="text"
# ### Initializing the data
# + id="gCsxXFIcVCaQ" colab_type="code" colab={}
#Create data from sin function with uniform noise
x = np.linspace(0.1,1,40)
noise = np.random.uniform( size = 40)
y = np.sin(x * 1.5 * np.pi )
y_noise = (y + noise).reshape(-1,1)
y_noise = y_noise - y_noise.mean() #Centering the data
degree = 7
X_d = polynomial_features(pd.DataFrame({'X0':1,'X1': x}),degree)
# + [markdown] id="dKCLEGiNVCaS" colab_type="text"
# ### Closed form solution
# + id="c6bh8aG-VCaT" colab_type="code" colab={} outputId="e12b5a6b-4aa2-4c88-cbee-e5d0db0dc31f"
def closed_form_solution(X,y):
return np.linalg.inv(X.T @ X) @ X.T @ y
coefs = closed_form_solution(X_d.values,y_noise)
coefs
# + [markdown] id="abQUWPItVCaX" colab_type="text"
# ### Numpy only fit
# + id="E1SUg29QVCaY" colab_type="code" colab={} outputId="9ee7af1e-5c2f-4af4-99f2-bb24141657a8"
stepsize = .1
theta_result_1,J_history_1, count_history_1, theta_1_hist, theta_2_hist = gradient_descent(np.zeros((len(X_d.T),1)).reshape(-1,1), X_d,y_noise,alpha = stepsize,num_iters=5000)
display(theta_result_1)
# + [markdown] id="6ZlFJj7HVCac" colab_type="text"
# ### Scipy optimize fit using first order derivative only
# #### Comment: BFGS does very well but requires adjustment of options
# In particular, the gradient tolerance must be made smaller because the cost function is very flat near the global minimum
# + id="el05qJjRVCad" colab_type="code" colab={} outputId="8da1342d-9c0d-471c-c388-03bb184ce35a"
import scipy.optimize as opt
theta_init = np.ones((len(X_d.T),1)).reshape(-1,1)
model_t = opt.minimize(fun = costfunction, x0 = theta_init , args = (X_d, y_noise),
method = 'BFGS', jac = grad, options={'maxiter':1000, 'gtol': 1e-10, 'disp' : True})
model_t.x
# + [markdown] id="48DuMFgFVCah" colab_type="text"
# ### Scipy optimize fit using hessian matrix
# #### As expected, 2nd order information allows the optimizer to converge much faster
# + id="4zzQ6TzyVCai" colab_type="code" colab={} outputId="2891901f-e3d7-4ebd-91e0-46698c9396df"
import scipy.optimize as opt
theta_init = np.ones((len(X_d.T),1)).reshape(-1,1)
model_t = opt.minimize(fun = costfunction, x0 = theta_init , args = (X_d, y_noise),
method = 'dogleg', jac = grad, hess= hessian, options={'maxiter':1000, 'disp':True})
model_t.x
# + [markdown] id="I2AeCv-LVCam" colab_type="text"
# ### Sklearn fit
# + id="Ty5H0fbbVCan" colab_type="code" colab={} outputId="f52e0fa2-95c2-4702-d454-fd50bd2b29a2"
from sklearn import linear_model
model_d = linear_model.LinearRegression(fit_intercept=False)
model_d.fit(X_d,y_noise)
model_d.coef_
# + [markdown] id="WmU5-2DJVCar" colab_type="text"
# ### Statsmodel fit
# + id="3jv3fVKgVCas" colab_type="code" colab={} outputId="b765eb17-9ce7-4a2c-d10f-c442840e0798"
import statsmodels.api as sm
model_sm = sm.OLS(y_noise, X_d)
res = model_sm.fit()
print(res.summary())
# + [markdown] id="y7hS4N4cVCaw" colab_type="text"
# # : Repeating the experiment with 2 polynomial variables and visualizing the results
# Here we focus on a 2-D design matrix with $x$ and $x^2$ columns. The $y$ values have been centered, so we can ignore the constant term (y-intercept)
# + [markdown] id="sCQJQoCYVCa2" colab_type="text"
# ### Initializing the data
# + id="xPExOVv-VCa3" colab_type="code" colab={}
#Create data from sin function with uniform noise
x = np.linspace(0.1,1,40) #Adjusting the starting point to reduce numerical instability
noise = np.random.uniform( size = 40)
y = np.sin(x * 1.5 * np.pi )
y_noise = (y + noise).reshape(-1,1)
y_noise = y_noise - y_noise.mean() #Centering the data
#2nd order polynomial only
degree = 2
X_d = polynomial_features(pd.DataFrame({'X1': x}),degree)
#Starting point for gradient descent - see later diagrams
initial_theta = np.array([0,-2]).reshape(-1,1)
# + id="rE19XQzLVCa6" colab_type="code" colab={} outputId="2232fba1-ea63-48ed-810c-360cfb96005c"
X_d = X_d[['X1','X2']]
X_d.head()
# + [markdown] id="QxogL3ibVCbA" colab_type="text"
# ### Closed form solution
# + id="Wz_PCN89VCbC" colab_type="code" colab={} outputId="ab042a44-6b41-4aa0-f797-bf0a650b69a2"
def closed_form_solution(X,y):
return np.linalg.inv(X.T @ X) @ X.T @ y
coefs = closed_form_solution(X_d.values,y_noise)
coefs
# + [markdown] id="peqaeZnOVCbM" colab_type="text"
# ### Numpy only fit
# + id="klz--J-MVCbN" colab_type="code" colab={} outputId="4471e4f2-cc4b-4e56-c257-6671a499ad3f"
stepsize = .3
theta_result,J_history, count_history_1, theta_1, theta_2 = gradient_descent(initial_theta,X_d,y_noise,alpha = stepsize,num_iters=10000)
display(theta_result)
# + [markdown] id="Y7buiryKVCbQ" colab_type="text"
# ### Plotting the gradient descent convergence and resulting fits
# + id="JJ_ocEDWVCbQ" colab_type="code" colab={} outputId="67afa845-bf7c-4d91-97ed-5ba4595871f2"
fig = plt.figure(figsize = (18,8))
#Looping through different stepsizes
for s in [.001,.01,.1,1]:
theta_calc,J_history_1, count_history_1, theta_1, theta_2 = gradient_descent(initial_theta, X_d,y_noise,alpha = s,num_iters=5000)
#Plot gradient descent convergence
ax = fig.add_subplot(1, 2, 1)
ax.plot(count_history_1, J_history_1, label = 'Grad. desc. stepsize: {}'.format(s))
#Plot resulting fits on data
ax = fig.add_subplot(1, 2, 2)
ax.plot(x,X_d@theta_calc, label = 'Grad. desc. stepsize: {}'.format(s))
#Adding plot features
ax = fig.add_subplot(1, 2, 1)
ax.axhline(costfunction(coefs, X_d, y_noise), linestyle=':', label = 'Closed form minimum')
ax.set_xlabel('Count')
ax.set_ylabel('Cost function')
ax.set_title('Plot of convergence: Polynomial regression x, x^2 ={}'.format(degree))
ax.legend(loc = 1)
ax = fig.add_subplot(1, 2, 2)
ax.scatter(x,y_noise, facecolors = 'none', edgecolor = 'darkblue', label = 'f(x) + noise')
ax.plot(x,X_d@coefs, linestyle=':', label = 'Closed form fit')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('Noisy data and gradient descent fits')
ax.legend()
plt.show()
# + [markdown] id="iTZ6wJCnVCbT" colab_type="text"
# ### Plotting the cost function in 3D
# + id="a-r86kJ8VCbU" colab_type="code" colab={} outputId="d80e0c07-e066-4fe0-be90-618fa64871b9"
#Creating the dataset (as previously)
X = X_d.values
#Setup of meshgrid of theta values
T0, T1 = np.meshgrid(np.linspace(0,6,100),np.linspace(0,-8,100))
#Computing the cost function for each theta combination
zs = np.array( [costfunction(np.array([t0,t1]).reshape(-1,1), X, y_noise.reshape(-1,1))
for t0, t1 in zip(np.ravel(T0), np.ravel(T1)) ] )
#Reshaping the cost values
Z = zs.reshape(T0.shape)
#Computing the gradient descent
theta_result,J_history, count_history_1, theta_1, theta_2 = gradient_descent(initial_theta,X,y_noise,alpha = 0.3,num_iters=5000)
#Angles needed for quiver plot
anglesx = np.array(theta_1)[1:] - np.array(theta_1)[:-1]
anglesy = np.array(theta_2)[1:] - np.array(theta_2)[:-1]
# %matplotlib inline
fig = plt.figure(figsize = (16,8))
#Surface plot
ax = fig.add_subplot(1, 2, 1, projection='3d')
ax.plot_surface(T0, T1, Z, rstride = 5, cstride = 5, cmap = 'jet', alpha=0.5)
ax.plot(theta_1,theta_2,J_history, marker = '*',markersize = 4, color = 'r', alpha = .2, label = 'Gradient descent')
ax.plot(coefs[0],coefs[1], marker = '*', color = 'black', markersize = 10)
ax.set_xlabel('theta 1')
ax.set_ylabel('theta 2')
ax.set_zlabel('Cost function')
ax.set_title('Gradient descent: Root at {}'.format(theta_result.flatten().round(2)))
ax.view_init(45, -45)
ax.legend()
#Contour plot
ax = fig.add_subplot(1, 2, 2)
ax.contour(T0, T1, Z, 70, cmap = 'jet')
ax.quiver(theta_1[:-1], theta_2[:-1], anglesx, anglesy, scale_units = 'xy', angles = 'xy', scale = 1, color = 'r', alpha = .9)
ax.plot(coefs[0],coefs[1], marker = '*', color = 'black', markersize = 10)
ax.set_xlabel('theta 1')
ax.set_ylabel('theta 2')
ax.set_title('Gradient descent: Root at {}'.format(theta_result.flatten().round(2)))
plt.show()
# + [markdown] id="ZCkux-XkVCbX" colab_type="text"
# ### Scipy optimize fit
# + id="KgKB3RjKVCbY" colab_type="code" colab={} outputId="010fdf28-95e5-4828-d320-b48a0577600b"
import scipy.optimize as opt
theta_init = np.ones((len(X_d.T),1)).reshape(-1,1)
model_t = opt.minimize(fun = costfunction, x0 = theta_init , args = (X_d, y_noise),
method = 'dogleg', jac = grad, hess= hessian, options={'maxiter':1000})
model_t.x
# + [markdown] id="1Ac6wJI3VCba" colab_type="text"
# ### Sklearn fit
# + id="iZm2TQxcVCbb" colab_type="code" colab={} outputId="96670be3-069e-491f-bc8e-0baa1a0c9fce"
from sklearn import linear_model
model_d = linear_model.LinearRegression(fit_intercept=False)
model_d.fit(X_d,y_noise)
model_d.coef_
# + [markdown] id="OyHSwPZ_VCbf" colab_type="text"
# ### Statsmodel fit
# + id="MJml09sEVCbg" colab_type="code" colab={} outputId="ba4a3528-22ce-4518-d147-3bfc5c18f9cc"
import statsmodels.api as sm
model_sm = sm.OLS(y_noise, X_d)
res = model_sm.fit()
print(res.summary())
# + id="B3sJJNQVVCbj" colab_type="code" colab={}
# + id="wrUNeaZeVCbl" colab_type="code" colab={}
|
why_is_gradient_descent_bad_polynomial.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py37] *
# language: python
# name: conda-env-py37-py
# ---
# !conda install -c conda-forge category_encoders
import category_encoders as ce
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import _encoders
from sklearn.tree import export_graphviz
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.metrics import plot_confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier
df = pd.read_csv('https://raw.githubusercontent.com/mattdmeans/NFLplayData/master/NFLPlaybyPlay2015.csv', low_memory = False)
df['PlayType'].value_counts(normalize = True)
df = df[(df['PlayType'] == 'Run') | (df['PlayType'] == 'Pass') | (df['PlayType'] == 'Sack')]
df.isnull().sum()
features = ['Drive', 'qtr', 'down', 'TimeUnder', 'PlayTimeDiff', 'SideofField', 'yrdln', 'yrdline100', 'GoalToGo', 'posteam', 'DefensiveTeam',
'sp', 'PlayType', 'PassLocation', 'RunLocation', 'RunGap', 'PosTeamScore', 'DefTeamScore', 'ScoreDiff']
target = 'Yards.Gained'
# +
train, test = train_test_split(df, train_size = .8, test_size = .2, random_state = 42)
train.shape, test.shape
# -
w_features = ['Drive', 'qtr', 'down', 'TimeUnder', 'PlayTimeDiff', 'SideofField', 'yrdln', 'yrdline100', 'GoalToGo', 'posteam', 'DefensiveTeam',
'sp', 'PlayType', 'PassLocation', 'RunLocation', 'RunGap', 'PosTeamScore', 'DefTeamScore', 'ScoreDiff', 'Yards.Gained']
def wrangle(X):
    X = X.copy()
    X = X[w_features]
    # drop plays with missing down or play time difference
    X = X.dropna(axis=0, subset=['down', 'PlayTimeDiff'])
    # a missing pass/run location simply means the play had none; encode it as 'None'
    X['PassLocation'] = X['PassLocation'].fillna('None')
    X['RunLocation'] = X['RunLocation'].fillna('None')
    X['RunGap'] = X['RunGap'].fillna('None')
    return X
# inspect the rows where 'down' is missing before wrangling
df[df['down'].isnull()]
train = wrangle(train)
test = wrangle(test)
train.isnull().sum()
train.shape, test.shape
# +
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
# +
# %%time
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names= True),
SimpleImputer(strategy='median'),
RandomForestRegressor(random_state=0, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
print('Validation R^2', pipeline.score(X_test, y_test))
# -
y_pred = pipeline.predict(X_test)
print('r2 score =', r2_score(y_test, y_pred))
print('MAE =', mean_absolute_error(y_test, y_pred))
# +
from xgboost import XGBRegressor
# the target (yards gained) is continuous, so use the regressor rather than the classifier
xgb = make_pipeline(
    ce.OneHotEncoder(),
    SimpleImputer(strategy='median'),
    XGBRegressor(random_state=0, n_jobs=-1)
)
xgb.fit(X_train, y_train)
y_pred = xgb.predict(X_test)
print('Test MAE', mean_absolute_error(y_test, y_pred))
# +
# Get feature importances
n1 = len(X_train.columns)
rf = pipeline.named_steps['randomforestregressor']
importances = pd.Series(rf.feature_importances_[0:n1], X_train.columns)
# Plot feature importances
# %matplotlib inline
import matplotlib.pyplot as plt
n = 30
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='blue');
# -
# !conda install -c conda-forge eli5
# + jupyter={"outputs_hidden": true}
import eli5
from eli5.sklearn import PermutationImportance
# +
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = transformers.fit_transform(X_train)
X_test_transformed = transformers.transform(X_test)
# the target is continuous, so use a regressor and an R^2 permutation score
model = RandomForestRegressor(n_estimators=20, random_state=42, n_jobs=-1)
model.fit(X_train_transformed, y_train)
feature_names = X_test.columns.tolist()
permuter = PermutationImportance(
    model,
    scoring='r2',
    n_iter=5,
    random_state=42
)
permuter.fit(X_test_transformed, y_test)
eli5.show_weights(
permuter,
top=None,
feature_names=feature_names
)
# +
processor = make_pipeline(
ce.OneHotEncoder(),
SimpleImputer(strategy='median')
)
X_train_processed = processor.fit_transform(X_train)
X_test_processed = processor.transform(X_test)
eval_set = [(X_train_processed, y_train),
(X_test_processed, y_test)]
# -
# !conda install -c conda-forge pdpbox
from pdpbox.pdp import pdp_isolate, pdp_plot
# +
feature = 'down'
isolated = pdp_isolate(
model= pipeline,
dataset= X_test,
model_features=X_test.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature, plot_lines=True);
# -
from pdpbox.pdp import pdp_interact, pdp_interact_plot
# +
features = ['down', 'yrdln']
interaction = pdp_interact(
model=pipeline,
dataset=X_test,
model_features=X_test.columns,
features=features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=features)
# +
processor = make_pipeline(
ce.OneHotEncoder(),
SimpleImputer()
)
X_train_processed = processor.fit_transform(X_train)
# -
X_test_processed = processor.transform(X_test)
eval_set = [(X_train_processed, y_train),
(X_test_processed, y_test)]
# fit a tree model on the processed features so that shap's TreeExplainer can use it
model = RandomForestRegressor(n_estimators=100, n_jobs=-1)
model.fit(X_train_processed, y_train)
# !conda install -c conda-forge shap
from shap import TreeExplainer, GradientExplainer
import shap
row = X_test_processed[[10]]
row
# What was the actual yardage gained on this play?
y_test.iloc[[10]]
# What does the model predict for this play?
model.predict(row)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row)
# +
from scipy.stats import randint, uniform
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
param_distributions = {
'n_estimators': randint(50, 500),
'max_depth': [5, 10, 15, 20, None],
'max_features': uniform(0, 1), }
search = RandomizedSearchCV(
RandomForestRegressor(random_state=42),
param_distributions=param_distributions,
n_iter=10,
cv=2,
scoring='neg_mean_absolute_error',
verbose=50,
return_train_score=True,
n_jobs=6,
random_state=42)
search.fit(X_train, y_train);
# -
print('Best hyperparameters', search.best_params_)
print('Cross-validation MAE', -search.best_score_)
model = search.best_estimator_
enc = ce.OneHotEncoder(handle_unknown='ignore')
|
module4-model-interpretation/Untitled1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: S2S Challenge
# language: python
# name: s2s
# ---
# %load_ext autoreload
# %autoreload 2
# # Unbiased ECMWF
#
# Here we propose a small model which is a debiased ECMWF forecast according to the data we have.
# The plan is
# * Compute the bias between the ECMWF model and the observations
# * Make a debiased model
# * Turn this model into a probabilistic forecast
#
# For this notebook we want to do it on precipitation and temperature, for weeks 1-2, 3-4, and 5-6.
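#
# As a minimal sketch of the mean-bias correction described above (toy numpy data with hypothetical values — the real computation below works on the xarray datasets):

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(10.0, 2.0, size=500)              # hypothetical observations
fcst = obs + 1.5 + rng.normal(0.0, 0.5, size=500)  # forecast with a +1.5 bias

# mean bias between observations and forecast over the training sample
bias = (obs - fcst).mean()
debiased = fcst + bias

print(bias)  # close to -1.5
```

# After adding the mean bias back, the corrected forecast's mean error is zero by construction.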
import dask
import dask.array as da
import dask.distributed
import datetime
import matplotlib.pyplot as plt
import os
import numpy as np
import pandas as pd
import pathlib
import scipy.stats
import typing
import xarray as xr
import xskillscore as xs
from crims2s.dask import create_dask_cluster
from crims2s.util import fix_dataset_dims
INPUT_TRAIN = '***BASEDIR***training-input/0.3.0/netcdf'
OBSERVATIONS = '***BASEDIR***/processed/training-output-reference/'
BENCHMARK = '***BASEDIR***training-output-benchmark/'
# ## Boost dask cluster
cluster = create_dask_cluster()
cluster.scale(jobs=2)
client = dask.distributed.Client(cluster)
client
# ## Generic Functions
def extract_train_validation_from_lead_time(xr_data) -> typing.Tuple:
xr_data_sub_train = xr_data.sel(forecast_year=slice(None, 2018))
xr_data_sub_val = xr_data.sel(forecast_year=slice(2019, None))
return xr_data_sub_train, xr_data_sub_val
def compute_and_correct_bias(data_center_train, data_center_val, obs_train):
bias = (obs_train - data_center_train).mean(dim=['lead_time', 'forecast_year'])
corrected_bias = data_center_val + bias
return bias, corrected_bias
def add_biweekly_dim(dataset):
weeklys = []
for s in [slice('0D', '13D'), slice('14D', '27D'), slice('28D', '41D')]:
weekly_forecast = dataset.sel(lead_time=s)
first_lead = pd.to_timedelta(weekly_forecast.lead_time[0].item())
weekly_forecast = weekly_forecast.expand_dims(dim='biweekly_forecast').assign_coords(biweekly_forecast=[first_lead])
weekly_forecast = weekly_forecast.assign_coords(lead_time=(weekly_forecast.lead_time - first_lead))
weeklys.append(weekly_forecast)
return xr.concat(weeklys, dim='biweekly_forecast').transpose('forecast_year', 'forecast_dayofyear', 'biweekly_forecast', ...)
# ## Read data
# ### ECMWF Precipitation
CENTER = 'ecmwf'
FIELD = 'tp'
input_path = pathlib.Path(INPUT_TRAIN)
input_files_tp = sorted([f for f in input_path.iterdir() if CENTER in f.stem and FIELD in f.stem])
input_files_tp[:10]
ecmwf_tp_raw = xr.open_mfdataset(input_files_tp, preprocess=fix_dataset_dims)
ecmwf_tp_raw = ecmwf_tp_raw.assign_coords(lead_time=ecmwf_tp_raw.lead_time - ecmwf_tp_raw.lead_time[0])
# Fix the lead times by starting them at 0. To be validated with the organizers.
ecmwf_tp = add_biweekly_dim(ecmwf_tp_raw)
ecmwf_tp
# ### Observations
obs_path = pathlib.Path(OBSERVATIONS)
obs_files = [f for f in obs_path.iterdir() if 'tp' in f.stem]
obs_files[:4]
obs_tp_raw = xr.open_mfdataset(obs_files)
obs_tp_raw = obs_tp_raw.assign_coords(lead_time=obs_tp_raw.lead_time - obs_tp_raw.lead_time[0])
obs_tp = add_biweekly_dim(obs_tp_raw)
obs_tp
# For precipitation we first have to take the biweekly total precip. We can't compute the difference directly on the daily forecasts.
ecmwf_tp = ecmwf_tp.isel(lead_time=-1) - ecmwf_tp.isel(lead_time=0)
ecmwf_tp.isel(biweekly_forecast=1, forecast_dayofyear=0, forecast_year=0, realization=0).compute().tp.plot()
obs_tp = obs_tp.isel(lead_time=-1) - obs_tp.isel(lead_time=0)
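# The step above turns a cumulative precipitation field into a biweekly total by differencing the last and first lead times. A numpy stand-in for the xarray `isel` difference (toy values, assuming `tp` is cumulative):

```python
import numpy as np

# toy cumulative precipitation over 14 daily lead times (1 unit of rain per day)
cumulative = np.cumsum(np.ones(14))
# total accumulated between the first and the last lead time
biweekly_total = cumulative[-1] - cumulative[0]
print(biweekly_total)  # 13.0
```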
ecmwf_tp_train, ecmwf_tp_val = extract_train_validation_from_lead_time(ecmwf_tp)
obs_tp_train, obs_tp_val = extract_train_validation_from_lead_time(obs_tp)
ecmwf_tp_train
obs_tp_train
# ## Debiasing
ecmwf_tp_train
# ### Compute bias using training data
ecmwf_tp_bias = (obs_tp_train - ecmwf_tp_train).mean(dim=['forecast_year'])
ecmwf_tp_bias
# ### Bias correct ECMWF
ecmwf_tp_val_corrected = ecmwf_tp_val + ecmwf_tp_bias
# + tags=[]
ecmwf_tp_val_corrected
# -
ecmwf_tp_val_corrected_comp = ecmwf_tp_val_corrected.compute()
# ## Turn into probabilistic forecast
# ### Get thresholds from train observations
obs_tp_train_thresholds = obs_tp_train.chunk({'forecast_year': -1}).quantile([0.33, 0.67], dim=['forecast_year'])
obs_tp_train_thresholds
obs_tp_train_thresholds_comp = obs_tp_train_thresholds.compute()
# ### Compute p of thresholds according to the model
#
# There are two ways to do this.
# We can either count the number of members that fall within each category.
# Or compute a distribution of all the members of the model, and then compute the value of the CDF for each threshold.
#
# Here we do it using the distribution method.
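#
# The distribution method boils down to evaluating the normal CDF at the two tercile thresholds. A sketch with made-up numbers (hypothetical ensemble mean/std and thresholds):

```python
from scipy.stats import norm

loc, scale = 1.0, 0.5    # hypothetical ensemble mean and spread
lower, upper = 0.8, 1.2  # hypothetical terciles from the training climatology

p_below = norm.cdf(lower, loc, scale)
p_near = norm.cdf(upper, loc, scale) - p_below
p_above = 1.0 - norm.cdf(upper, loc, scale)
print(p_below, p_near, p_above)  # the three category probabilities sum to 1
```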
# #### Compute a distribution of the members of the model
ecmwf_tp_val_corrected_mean = ecmwf_tp_val_corrected_comp.mean(dim=['realization'])
ecmwf_tp_val_corrected_std = ecmwf_tp_val_corrected_comp.std(dim=['realization'])
# #### Compute the value of the CDF for each threshold
ecmwf_tp_val_corrected_mean
ecmwf_tp_val_corrected_mean.isel(biweekly_forecast=1, forecast_dayofyear=25).tp.plot()
obs_tp_train_thresholds_comp.isel(biweekly_forecast=2, quantile=0, forecast_dayofyear=40).tp.plot()
def make_probabilistic(forecast, thresholds):
loc = forecast.mean(dim=['realization']).compute().tp
scale = forecast.std(dim=['realization']).compute().tp
cdfs = xr.apply_ufunc(scipy.stats.norm.cdf, thresholds.tp, dask='allowed', kwargs={'loc': loc, 'scale': scale})
below = cdfs.isel(quantile=0).drop_vars('quantile')
normal = (cdfs.isel(quantile=1) - cdfs.isel(quantile=0))
above = xr.ones_like(normal) - cdfs.isel(quantile=1).drop_vars('quantile')
return xr.Dataset({'tp': xr.concat([below, normal, above], 'category').assign_coords(category=['below normal', 'near normal', 'above normal'])})
val_probabilistic_forecast = make_probabilistic(ecmwf_tp_val_corrected_comp, obs_tp_train_thresholds_comp)
val_probabilistic_forecast = val_probabilistic_forecast.expand_dims('forecast_year').assign_coords(forecast_year=ecmwf_tp_val_corrected_comp.forecast_year)
# +
#val_probabilistic_forecast = val_probabilistic_forecast.assign_coords(valid_time=ecmwf_t2m_val_corrected_comp.valid_time)
# -
val_probabilistic_forecast.biweekly_forecast.data
val_probabilistic_forecast
val_probabilistic_forecast = val_probabilistic_forecast.rename_dims({'biweekly_forecast': 'lead_time'}).assign_coords(lead_time=val_probabilistic_forecast.biweekly_forecast.data)
val_probabilistic_forecast
val_probabilistic_forecast.to_netcdf('***BASEDIR***/test_tp_forecast.nc')
val_probabilistic_forecast.isel(category=2, forecast_dayofyear=40, lead_time=1).tp.plot()
val_probabilistic_forecast.isel(category=1, forecast_dayofyear=40, lead_time=0).tp.plot()
# ### Sanity check
val_probabilistic_forecast.sum(dim='category').isel(forecast_dayofyear=0, lead_time=2).tp.plot()
# ## Make submission file out of it
val_probabilistic_forecast_unfixed = val_probabilistic_forecast.stack(forecast_time=['forecast_year', 'forecast_dayofyear'])
val_probabilistic_forecast_unfixed
forecast_times = []
for f in val_probabilistic_forecast_unfixed.forecast_time:
year, dayofyear = f.data.item()
year = pd.to_datetime(f'{year}-01-01')
dayofyear = pd.Timedelta(dayofyear - 1, 'D')
forecast_times.append(year + dayofyear)
forecast_time = xr.DataArray(forecast_times, dims='forecast_time')
val_probabilistic_forecast_unfixed.assign_coords(forecast_time=forecast_time).to_netcdf('***BASEDIR***/test_tp_forecast.nc')
|
notebooks/debias-ecmwf-04-tp.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install PyDrive2
# +
from pydrive2.auth import GoogleAuth
from pydrive2.drive import GoogleDrive
directorio_credenciales = 'credentials_module.json'

# Log in to Google Drive
def login():
    gauth = GoogleAuth()
    gauth.LoadCredentialsFile(directorio_credenciales)
    if gauth.credentials is None:
        # no saved credentials yet: run the browser-based auth flow
        gauth.LocalWebserverAuth()
    elif gauth.access_token_expired:
        gauth.Refresh()
    else:
        gauth.Authorize()
    gauth.SaveCredentialsFile(directorio_credenciales)
    return GoogleDrive(gauth)
# Create a simple text file
def crear_archivo_texto(nombre_archivo, contenido, id_folder):
    credenciales = login()
    archivo = credenciales.CreateFile({'title': nombre_archivo,
                                       'parents': [{'kind': 'drive#fileLink', 'id': id_folder}]})
    archivo.SetContentString(contenido)
    archivo.Upload()
# Upload a file
def subir_archivo(ruta_archivo, id_folder):
    credenciales = login()
    archivo = credenciales.CreateFile({'parents': [{'kind': 'drive#fileLink', 'id': id_folder}]})
    archivo['title'] = ruta_archivo.split('/')[-1]
    archivo.SetContentFile(ruta_archivo)
    archivo.Upload()
# -
if __name__ == "__main__":
crear_archivo_texto('HolaDriveDeNuevo.txt','Hola Jupyter Drive!, de nuevo','1TmdDKYhA2P1sPSoS2n24PQ2OB0KKRxmX')
subir_archivo('C:/Users/julio/OneDrive/Escritorio/Hola Jupyter Drive.xlsx','1TmdDKYhA2P1sPSoS2n24PQ2OB0KKRxmX')
|
Archivos Drive.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a name="building-language-model"></a>
# # Building the language model
#
# <a name="count-matrix"></a>
# ### Count matrix
#
# To calculate the n-gram probability, you will need to count frequencies of n-grams and n-gram prefixes in the training dataset. In some of the code assignment exercises, you will store the n-gram frequencies in a dictionary.
#
# In other parts of the assignment, you will build a count matrix that keeps counts of (n-1)-gram prefix followed by all possible last words in the vocabulary.
#
# The following code shows how to check, retrieve and update counts of n-grams in the word count dictionary.
# +
# manipulate n_gram count dictionary
n_gram_counts = {
('i', 'am', 'happy'): 2,
('am', 'happy', 'because'): 1}
# get count for an n-gram tuple
print(f"count of n-gram {('i', 'am', 'happy')}: {n_gram_counts[('i', 'am', 'happy')]}")
# check if n-gram is present in the dictionary
if ('i', 'am', 'learning') in n_gram_counts:
    print(f"n-gram {('i', 'am', 'learning')} found")
else:
    print(f"n-gram {('i', 'am', 'learning')} missing")
# update the count in the word count dictionary
n_gram_counts[('i', 'am', 'learning')] = 1
if ('i', 'am', 'learning') in n_gram_counts:
    print(f"n-gram {('i', 'am', 'learning')} found")
else:
    print(f"n-gram {('i', 'am', 'learning')} missing")
# -
# The next code snippet shows how to merge two tuples in Python. That will be handy when creating the n-gram from the prefix and the last word.
# +
# concatenate tuple for prefix and tuple with the last word to create the n_gram
prefix = ('i', 'am', 'happy')
word = 'because'
# note here the syntax for creating a tuple for a single word
n_gram = prefix + (word,)
print(n_gram)
# -
# In the lecture, you've seen that the count matrix could be made in a single pass through the corpus. Here is one approach to do that.
# +
import numpy as np
import pandas as pd
from collections import defaultdict
def single_pass_trigram_count_matrix(corpus):
    """
    Creates the trigram count matrix from the input corpus in a single pass through the corpus.

    Args:
        corpus: Pre-processed and tokenized corpus.

    Returns:
        bigrams: list of all bigram prefixes, row index
        vocabulary: list of all found words, the column index
        count_matrix: pandas dataframe with bigram prefixes as rows,
                      vocabulary words as columns
                      and the counts of the bigram/word combinations (i.e. trigrams) as values
    """
    bigrams = []
    vocabulary = []
    count_matrix_dict = defaultdict(dict)

    # go through the corpus once with a sliding window
    for i in range(len(corpus) - 3 + 1):
        # the sliding window starts at position i and contains 3 words
        trigram = tuple(corpus[i : i + 3])

        bigram = trigram[0 : -1]
        if bigram not in bigrams:
            bigrams.append(bigram)

        last_word = trigram[-1]
        if last_word not in vocabulary:
            vocabulary.append(last_word)

        if (bigram, last_word) not in count_matrix_dict:
            count_matrix_dict[bigram, last_word] = 0
        count_matrix_dict[bigram, last_word] += 1

    # convert count_matrix_dict to an np.array, filling in the missing combinations with zeros
    count_matrix = np.zeros((len(bigrams), len(vocabulary)))
    for trigram_key, trigram_count in count_matrix_dict.items():
        count_matrix[bigrams.index(trigram_key[0]),
                     vocabulary.index(trigram_key[1])] = trigram_count

    # np.array to pandas dataframe conversion
    count_matrix = pd.DataFrame(count_matrix, index=bigrams, columns=vocabulary)
    return bigrams, vocabulary, count_matrix
corpus = ['i', 'am', 'happy', 'because', 'i', 'am', 'learning', '.']
bigrams, vocabulary, count_matrix = single_pass_trigram_count_matrix(corpus)
print(count_matrix)
# -
# <a name="probability-matrix"></a>
# ### Probability matrix
# The next step is to build a probability matrix from the count matrix.
#
# You can use an object dataframe from library pandas and its methods [sum](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sum.html?highlight=sum#pandas.DataFrame.sum) and [div](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.div.html) to normalize the cell counts with the sum of the respective rows.
# create the probability matrix from the count matrix
row_sums = count_matrix.sum(axis=1)
print(row_sums.shape)
print(row_sums)
# divide each row by its sum
prob_matrix = count_matrix.div(row_sums, axis=0)
# note: count_matrix / row_sums would be wrong here, since it is equivalent to count_matrix.div(row_sums, axis=1):
# it would divide the first column by 2, the second column by 1, the third column by 1, and so on
print(prob_matrix)
# The probability matrix now helps you to find a probability of an input trigram.
# +
# find the probability of a trigram in the probability matrix
trigram = ('i', 'am', 'happy')
# find the prefix bigram
bigram = trigram[:-1]
print(f'bigram: {bigram}')
# find the last word of the trigram
word = trigram[-1]
print(f'word: {word}')
# we are using the pandas dataframes here, column with vocabulary word comes first, row with the prefix bigram second
trigram_probability = prob_matrix[word][bigram]
print(f'trigram_probability: {trigram_probability}')
# -
# In the code assignment, you will be searching for the most probable words starting with a prefix. You can use the method [str.startswith](https://docs.python.org/3/library/stdtypes.html#str.startswith) to test if a word starts with a prefix.
#
# Here is a code snippet showing how to use this method.
# +
# lists all words in vocabulary starting with a given prefix
vocabulary = ['i', 'am', 'happy', 'because', 'learning', '.', 'have', 'you', 'seen','it', '?']
starts_with = 'ha'
print(f'words in vocabulary starting with prefix: {starts_with}\n')
for word in vocabulary:
    if word.startswith(starts_with):
        print(word)
# -
# <a name="language-model-evaluation"></a>
# ## Language model evaluation
# <a name="train-validation-test-split"></a>
# ### Train/validation/test split
# In the videos, you saw that to evaluate language models, you need to keep some of the corpus data for validation and testing.
#
# The choice of the test and validation data should correspond as much as possible to the distribution of the data coming from the actual application. If nothing but the input corpus is known, then random sampling from the corpus is used to define the test and validation subset.
#
# Here is a code similar to what you'll see in the code assignment. The following function allows you to randomly sample the input data and return train/validation/test subsets in a split given by the method parameters.
# +
# we only need train and validation %, test is the remainder
import random
def train_validation_test_split(data, train_percent, validation_percent):
    """
    Splits the input data to train/validation/test according to the percentage provided

    Args:
        data: Pre-processed and tokenized corpus, i.e. list of sentences.
        train_percent: integer 0-100, defines the portion of input corpus allocated for training
        validation_percent: integer 0-100, defines the portion of input corpus allocated for validation

        Note: train_percent + validation_percent need to be <= 100
              the remainder to 100 is allocated for the test set
    Returns:
        train_data: list of sentences, the training part of the corpus
        validation_data: list of sentences, the validation part of the corpus
        test_data: list of sentences, the test part of the corpus
    """
    # fixed seed here for reproducibility
    random.seed(87)
    # reshuffle all input sentences
    random.shuffle(data)

    train_size = int(len(data) * train_percent / 100)
    train_data = data[0:train_size]

    validation_size = int(len(data) * validation_percent / 100)
    validation_data = data[train_size:train_size + validation_size]

    test_data = data[train_size + validation_size:]

    return train_data, validation_data, test_data
data = [x for x in range (0, 100)]
train_data, validation_data, test_data = train_validation_test_split(data, 80, 10)
print("split 80/10/10:\n",f"train data:{train_data}\n", f"validation data:{validation_data}\n",
f"test data:{test_data}\n")
train_data, validation_data, test_data = train_validation_test_split(data, 98, 1)
print("split 98/1/1:\n",f"train data:{train_data}\n", f"validation data:{validation_data}\n",
f"test data:{test_data}\n")
# -
# <a name="perplexity"></a>
# ### Perplexity
#
# In order to implement the perplexity formula, you'll need to know how to implement m-th order root of a variable.
#
# \begin{equation*}
# PP(W)=\sqrt[M]{\prod_{i=1}^{M}{\frac{1}{P(w_i|w_{i-1})}}}
# \end{equation*}
#
# Remember from calculus:
#
# \begin{equation*}
# \sqrt[M]{\frac{1}{x}} = x^{-\frac{1}{M}}
# \end{equation*}
#
# Here is a code that will help you with the formula.
# to calculate the exponent, use the following syntax
p = 10 ** (-250)
M = 100
perplexity = p ** (-1 / M)
print(perplexity)
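# The two pieces combine into a full perplexity computation. Below is a minimal sketch, assuming you already have a list of conditional probabilities P(w_i|w_{i-1}) for the sequence; the probability values are made-up illustration numbers, not taken from the count matrix above. Working in log space avoids the underflow that tiny products of probabilities would otherwise cause.

```python
import math

def perplexity(word_probabilities):
    """Perplexity of a sequence given per-word conditional probabilities P(w_i | w_{i-1})."""
    M = len(word_probabilities)
    # sum of -log P(w_i | w_{i-1}) equals the log of the product of 1/P(w_i | w_{i-1})
    log_sum = sum(-math.log(p) for p in word_probabilities)
    # dividing by M and exponentiating takes the M-th root
    return math.exp(log_sum / M)

# illustrative probabilities for a 4-word sequence
probs = [0.5, 0.25, 0.5, 0.1]
print(perplexity(probs))
```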
# That's all for the lab for "N-gram language model" lesson of week 3.
|
Natural Language Processing with Probabilistic Models/week3-Building the language model.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PythonData
# language: python
# name: pythondata
# ---
import pandas as pd
import numpy as np
import json
df = pd.DataFrame(np.array(([1, 2, 3], [4, 5, 6])),
index=['mouse', 'rabbit'],
columns=['one', 'two', 'three'])
df
df.keys()
df2 = pd.DataFrame({'col1': [1, 2],
'col2': [0.5, 0.75]},
index=['row1', 'row2'])
df2
dd = dict(df2)
dd
dd2 = df2.to_dict('series')
dd2
list([{col:dd2[col]} for col in dd2 if col != 'col1'])
# Open and read the Wikipedia data JSON file.
with open('wikipedia-movies.json', mode='r') as wiki_movies:
    wiki_movies_raw = json.load(wiki_movies)
wiki_movies_df = pd.DataFrame(wiki_movies_raw)
wiki_movies_df.head()
wiki_movies_df['Box office'].isnull().sum()
|
testing_stuff.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Modularity and Community Detection
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import community # --> http://perso.crans.org/aynaud/communities/
import networkx as nx
import numpy as np
from igraph import *
# +
g = nx.full_rary_tree(3, 6)
g = nx.karate_club_graph()
N = g.number_of_nodes()
partition = community.best_partition(g)
# nx.draw(g)
print "Louvain Modularity: ", community.modularity(partition, g)
print "Louvain Partition: ", partition.values()
# -
edges = g.edges()
g = Graph(edges=edges, directed=False)
cl = g.community_fastgreedy()
k=2
cl.as_clustering(k).membership
# ## Finding the Best Number of Partitions
#
# - Community Fast Greedy: a hierarchical agglomerative method from igraph. The cells above show how to run it and cut the resulting dendrogram into k clusters with `as_clustering(k)`.
#
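# Both Louvain and fast greedy maximize modularity, so it helps to see what is being maximized. Below is a small numpy sketch of Newman's modularity Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j); it illustrates the objective only, and is not igraph's implementation.

```python
import numpy as np

def modularity(adj, communities):
    """Newman's modularity Q for an undirected graph, given one community label per node."""
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)            # node degrees
    two_m = adj.sum()              # 2m: every undirected edge is counted twice in the matrix
    labels = np.asarray(communities)
    same = labels[:, None] == labels[None, :]   # delta(c_i, c_j)
    return ((adj - np.outer(k, k) / two_m) * same).sum() / two_m

# two triangles joined by a single edge: splitting them apart scores higher
A = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))   # each triangle its own community
print(modularity(A, [0, 0, 0, 0, 0, 0]))   # one big community gives Q = 0
```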
# ## Another approach to finding community
# +
# Create the graph
vertices = [i for i in range(N)]
# edges = [(0,2),(0,1),(0,3),(1,0),(1,2),(1,3),(2,0),(2,1),(2,3),(3,0),(3,1),(3,2),(2,4),(4,5),(4,6),(5,4),(5,6),(6,4),(6,5)]
g = Graph(vertex_attrs={"label":vertices}, edges=edges, directed=False)
# +
visual_style = {}
# Scale vertices based on degree
outdegree = g.outdegree()
visual_style["vertex_size"] = [x/max(outdegree)*20+10 for x in outdegree]
# # Set bbox and margin
visual_style["bbox"] = (800,800)
visual_style["margin"] = 100
# +
# Define colors used for outdegree visualization
colours = ['#fecc5c', '#a31a1c']
# Order vertices in bins based on outdegree
bins = np.linspace(0, max(outdegree), len(colours))
digitized_degrees = np.digitize(outdegree, bins)
# Set colors according to bins
g.vs["color"] = [colours[x-1] for x in digitized_degrees]
# Also color the edges
for ind, color in enumerate(g.vs["color"]):
    edges = g.es.select(_source=ind)
    edges["color"] = [color]
# Don't curve the edges
visual_style["edge_curved"] = False
visual_style["vertex_label_size"] = 4
# +
# Community detection
communities = g.community_edge_betweenness(directed=True)
clusters = communities.as_clustering()
# Set edge weights based on communities
weights = {v: len(c) for c in clusters for v in c}
g.es["weight"] = [weights[e.tuple[0]] + weights[e.tuple[1]] for e in g.es]
# Choose the layout
N = len(vertices)
visual_style["layout"] = g.layout_fruchterman_reingold(weights=g.es["weight"], maxiter=1000, area=N**3, repulserad=N**3)
# Plot the graph
print visual_style
plot(g, **visual_style)
# -
# ## A different framework to do the same
# [<NAME>](http://www.traag.net/2013/10/25/easy-flexible-framework-for-community-detection/#zp-ID-163-1383957-EPZH5I7G) posted a new framework in 2013
#
import igraph as ig
import louvain as louvain
G = ig.Graph.Tree(n=100, children=3);
opt = louvain.Optimiser();
partition = opt.find_partition(graph=G,
partition_class=louvain.SignificanceVertexPartition);
# +
from igraph import *
import numpy as np
vertices = [i for i in range(7)]
edges = [(0,2),(0,1),(0,3),(1,0),(1,2),(1,3),(2,0),(2,1),(2,3),(3,0),(3,1),(3,2),(2,4),(4,5),(4,6),(5,4),(5,6),(6,4),(6,5)]
g = Graph(vertex_attrs={"label":vertices}, edges=edges, directed=True)
visual_style = {}
# Scale vertices based on degree
outdegree = g.outdegree()
visual_style["vertex_size"] = [x/max(outdegree)*25+50 for x in outdegree]
# Set bbox and margin
visual_style["bbox"] = (600,600)
visual_style["margin"] = 100
# Define colors used for outdegree visualization
colours = ['#fecc5c', '#a31a1c']
# Order vertices in bins based on outdegree
bins = np.linspace(0, max(outdegree), len(colours))
digitized_degrees = np.digitize(outdegree, bins)
# Set colors according to bins
g.vs["color"] = [colours[x-1] for x in digitized_degrees]
# Also color the edges
for ind, color in enumerate(g.vs["color"]):
    edges = g.es.select(_source=ind)
    edges["color"] = [color]
# Don't curve the edges
visual_style["edge_curved"] = False
# Community detection
communities = g.community_edge_betweenness(directed=True)
clusters = communities.as_clustering()
# Set edge weights based on communities
weights = {v: len(c) for c in clusters for v in c}
g.es["weight"] = [weights[e.tuple[0]] + weights[e.tuple[1]] for e in g.es]
# Choose the layout
N = len(vertices)
visual_style["layout"] = g.layout_fruchterman_reingold(weights=g.es["weight"], maxiter=1000, area=N**3, repulserad=N**3)
# Plot the graph
plot(g, **visual_style)
# -
|
Modularity/ModularityCommunityClustering.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Google Mobility Data Visualization
#
# ## <NAME>
# 02/25/2021
# ---
# Google community mobility data is an open-source data set reflecting the mobility changes across global communities in response to policies aimed at combating the COVID-19 crisis. The data set tracks daily movement trends in six categories: retail and recreation, groceries and pharmacy, parks, transit stations, workplaces, and residential. In this notebook, we demonstrate how to query Google mobility data using the BigQuery API client python library, and visualize the mobility changes over time since February 2020.
#
# ## Understanding Google mobility data
# - Google documentation on mobility data is [here](https://www.google.com/covid19/mobility/data_documentation.html).
# - Mobility report does not give an absolute visitor number, but relative changes instead. These values were calculated by comparing to a baseline derived from dates before the pandemic started.
# - The baseline is the median value, for the corresponding day of the week, during the 5-week period Jan 3–Feb 6, 2020. We advise users to interpret the Google mobility trend cautiously since the baseline mobility patterns are impacted by other region-specific factors, such as weather and local events.
# - There are gaps and missing data for dates that do not meet the privacy threshold.
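# As a rough sketch of how such a weekday-specific baseline works (an illustration of the idea only, not Google's actual pipeline; all numbers below are synthetic), you can take the median visits per day of the week over the baseline window and report each day as a percent change from it:

```python
import pandas as pd
import numpy as np

# synthetic daily visit counts, slightly higher on weekends, for illustration
dates = pd.date_range("2020-01-03", periods=70, freq="D")
rng = np.random.default_rng(0)
visits = pd.Series(100 + 10 * (dates.dayofweek >= 5) + rng.integers(-5, 5, len(dates)),
                   index=dates, dtype=float)

# baseline: median visits for each day of the week over Jan 3 - Feb 6, 2020 (exactly 5 weeks)
baseline_window = visits["2020-01-03":"2020-02-06"]
baseline = baseline_window.groupby(baseline_window.index.dayofweek).median()

# percent change of every day from its weekday-specific baseline
weekday_baseline = visits.index.dayofweek.map(baseline)
pct_change = (visits / weekday_baseline.to_numpy() - 1) * 100
print(pct_change.tail())
```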
#
# ## Setup Notebook
# Using the Google BigQuery API Client library requires **authentication** to be set up. Please follow the [Instruction](
# https://cloud.google.com/bigquery/docs/reference/libraries#setting_up_authentication) and download a JSON key to your local computer.
#
# Uncomment the line (by removing the #) to install python packages if needed.
# +
# #!pip install numpy
# #!pip install pandas
# #!pip install geopandas
# #!pip install --upgrade 'google-cloud-bigquery[bqstorage,pandas]'
# #!pip install plotly==4.14.3
# #!pip install --upgrade google-cloud-bigquery
import pandas as pd
import geopandas as gpd
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import numpy as np
import json
import os
import math
from google.cloud import bigquery
# Setup the Google application credential
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="<PATH_to_Google_JSON_key>"
# Create Client object for query purpose
client = bigquery.Client()
# -
# ## The Google mobility data table
# +
# Query for data description in google mobility data
table_sql="""
SELECT *
FROM `bigquery-public-data.covid19_google_mobility`.INFORMATION_SCHEMA.COLUMNS
"""
query_job = client.query(table_sql)
mobility_col_df = query_job.to_dataframe()
mobility_col_df[["column_name", "data_type"]]
# -
# ## Visualize mobility data across the United States
# Query for mobility data (State level) of the US
sql_us="""
SELECT
country_region,
sub_region_1,
iso_3166_2_code,
date,
retail_and_recreation_percent_change_from_baseline as recreation,
grocery_and_pharmacy_percent_change_from_baseline as grocery,
parks_percent_change_from_baseline as park,
transit_stations_percent_change_from_baseline as transit,
workplaces_percent_change_from_baseline as workplace,
residential_percent_change_from_baseline as residential
FROM `bigquery-public-data.covid19_google_mobility.mobility_report`
WHERE
country_region_code = 'US' AND
sub_region_1 is not null AND
iso_3166_2_code is not null
order by sub_region_1, date
"""
us_query_job = client.query(sql_us)
us_mobility_df = us_query_job.to_dataframe()
us_mobility_df.describe()
# ### Visualize changes in retail and recreation across the US
# +
# Change the format of "iso_3166_2_code" and "date"
us_mobility_df["iso_3166_2_code"]=us_mobility_df["iso_3166_2_code"].str.replace('US-', '', regex=True)
us_mobility_df = us_mobility_df.fillna(0).copy()
us_mobility_df['date'] = pd.to_datetime(us_mobility_df['date'])
us_mobility_df['date'] = us_mobility_df['date'].dt.strftime('%m/%d/%Y')
# Visualize the mobility changes at "Retail and Recreation" in the US over time
# To visualize the mobility changes in other categories, simply change the color="<category>"
us_fig =px.choropleth(us_mobility_df,
locations = 'iso_3166_2_code',
color="recreation",
animation_frame="date",
color_continuous_scale="Inferno",
locationmode='USA-states',
scope="usa",
range_color=(-100, 20),
labels={'recreation':'Retail and Recreation'}
)
us_fig.update_layout(height=650,
margin={"r":0,"t":100,"l":0,"b":0},
title = "Google Mobility Trend in the US: Changes in Retail and Recreation Visits Since Feb 2020",
title_font_size=20)
us_fig.show()
# -
# ### Summary
# - The baseline was calculated between Jan and Feb 2020. As expected, the decline in retail and recreation mobility started around mid-March last year, when COVID-19 was declared as a National Emergency.
# - Drastic mobility declines (> 50%) in retail and recreation were observed throughout the country during holidays, such as Easter (April 12th) and Christmas (Dec 25).
# - Currently, a 20% decline in retail and recreation is still widely observed in the US.
#
# ### Visualize mobility changes between States
# +
# Compare moblity trend between States
# Four States were selected for demonstration purpose, including California, Florida, Illinois, and New York
us_df = us_mobility_df.drop(['country_region', 'sub_region_1'], axis=1)
us_df_melted = pd.melt(us_df, id_vars=["iso_3166_2_code", "date"] ,
value_vars=["recreation","grocery","park","transit","workplace", "residential"],
var_name="location", value_name='mobility')
us_df_subset = us_df_melted[us_df_melted["iso_3166_2_code"].isin(["CA", "FL", "IL", "NY"])]
fig = px.line(us_df_subset, x= "date", y= "mobility",color="location",
facet_col="iso_3166_2_code", facet_col_wrap=1,
line_shape='linear', render_mode="svg", hover_name="location",
labels=dict(date="Date", mobility="Mobility (%)", location="Location"))
fig.update_layout(height=900,
width=800,
margin={"r":0,"t":100,"l":0,"b":0},
title = "Mobility Trend in Four States Since Feb 2020",
title_font_size=25)
fig.add_hline(y=0, line_dash="dot",
annotation_text="Baseline",
annotation_position="bottom right")
fig.update_yaxes(matches=None)
fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
fig.update_xaxes(tickvals=["03/01/2020","05/01/2020","07/01/2020","09/01/2020","11/01/2020","01/01/2021"])
fig.add_vrect(x0="03/09/2020", x1="04/11/2020", col=1,
annotation_text="Decline", annotation_position="top left",
fillcolor="green", opacity=0.25, line_width=0)
fig.show()
# -
# ### Summary
# - A drastic decline in mobility was observed between mid-March and early April in all categories except "`Residential`" across all four states;
# - Among the declined categories, the smallest reduction was observed in "`Groceries and Pharmacy`" since the start of the COVID-19 pandemic;
# - A significant increase in "`Park`" mobility was observed during `summer` in both Illinois and New York, compared to their baselines calculated between Jan and Feb of 2020. However, the absence of a "`Park`" mobility increase in other states does not imply fewer visits than in states such as Illinois, because absolute visit numbers are not shared in the Google mobility data table.
#
# ## Visualize mobility data of counties in IL
# To visualize the mobility data in county data on the map, we would need the county shape files, such as `IL_BNDY_County_Py.shp`. These files can be downloaded with this [link](https://clearinghouse.isgs.illinois.edu/sites/clearinghouse.isgs/files/Clearinghouse/data/ISGS/Reference/zips/IL_BNDY_County.zip). Download the file and decompress the zip file before you run the cell below.
# +
# Add path of illinois shape file(IL_BNDY_County_py.shp) to the line below.
# This file can be found inside decompressed zip folder (IL_BNDY_County)
shape_f = "<PATH_to_IL_BNDY_County_py.shp>"
map_df = gpd.read_file(shape_f)
# Save the GeoJson file at local
map_df.to_file("IL_States_gpd.json", driver='GeoJSON')
# Load GeoJson file
with open("IL_States_gpd.json") as f:
illinois_state = json.load(f)
# Modify the fips id to align with the fips id in Google mobility table
def modify_fips(number):
    newnum = '{:03d}'.format(number)
    newnum = '17' + str(newnum)
    return newnum

for i in range(len(illinois_state["features"])):
    subdict = illinois_state["features"][i]
    newfips = modify_fips(subdict["properties"]["CO_FIPS"])
    illinois_state["features"][i]["properties"]["CO_FIPS"] = newfips
# Query county level monthly mobility data from Google Mobility data table
il_month_sql = """
SELECT
country_region_code,
sub_region_1,
sub_region_2,
census_fips_code,
EXTRACT (YEAR FROM date) as year,
EXTRACT (MONTH FROM date) as month,
ROUND(AVG(retail_and_recreation_percent_change_from_baseline),2) as recreation,
ROUND(AVG(grocery_and_pharmacy_percent_change_from_baseline),2) as grocery,
ROUND(AVG(parks_percent_change_from_baseline),2) as park,
ROUND(AVG(transit_stations_percent_change_from_baseline),2) as transit,
ROUND(AVG(workplaces_percent_change_from_baseline),2) as workplace,
ROUND(AVG(residential_percent_change_from_baseline),2) as residential
FROM `bigquery-public-data.covid19_google_mobility.mobility_report`
WHERE
country_region_code = 'US' AND
sub_region_1 = "Illinois" AND
census_fips_code is not null
GROUP BY country_region_code, sub_region_1, sub_region_2, census_fips_code, year, month
order by sub_region_1, sub_region_2, year, month
"""
query_job = client.query(il_month_sql)
il_month_df = query_job.to_dataframe()
il_month_df.fillna(0, inplace=True)
il_month_df.head()
# +
# Create a new date format
il_month_df['date'] = il_month_df["year"].astype(str) + "-" + il_month_df["month"].astype(str)
# Visualize the residential mobility change in all counties of IL since Feb 2020
il_residential_df = il_month_df.drop(['recreation', 'grocery', 'park', 'transit', 'workplace'], axis=1)
il_residential_df['date'] = pd.to_datetime(il_residential_df['date'])
il_residential_df['date'] = il_residential_df['date'].dt.strftime('%m/%Y')
fig = px.choropleth_mapbox(il_residential_df, geojson=illinois_state,
locations='census_fips_code',
color='residential',
color_continuous_scale="redor",
range_color=(-1, 23),
featureidkey="properties.CO_FIPS",
mapbox_style="carto-positron",
opacity=0.6,
center = {"lat": 40, "lon": -89.3985},
zoom=5.7,
hover_name = "sub_region_2",
animation_frame='date')
fig.update_geos(fitbounds="locations",visible=False)
fig.update_layout(height=800,
width=750,
margin={"r":0,"t":50,"l":0,"b":0},
title = "Residential Mobility Trend in IL Since Feb 2020",
title_font_size=30)
fig.show()
# -
# ### Summary
# - Significant increase in residential mobility was observed during March and April, especially in the Metropolitan area, in response to a stay-at-home order issued by Governor <NAME> back in March 2020.
# - Missing residential data were observed in several counties of IL, such as Greene County, due to privacy reasons.
#
# ## Visualize mobility and COVID-19 data of Cook County, IL
# Here, we demonstrate how to query information from two tables and join the information through BigQuery
cook_county_sql = """
SELECT
date,
confirmed,
sub_region_1,
sub_region_2,
fips,
Recreation,
Park,
Transit,
Grocery,
Workplace,
Residential
FROM (
SELECT
date,
confirmed,
fips
FROM `bigquery-public-data.covid19_jhu_csse.summary`
WHERE
fips = "17031") a
FULL JOIN (
SELECT
sub_region_1,
sub_region_2,
census_fips_code as fips,
date,
retail_and_recreation_percent_change_from_baseline as Recreation,
grocery_and_pharmacy_percent_change_from_baseline as Grocery,
parks_percent_change_from_baseline as Park,
transit_stations_percent_change_from_baseline as Transit,
workplaces_percent_change_from_baseline as Workplace,
residential_percent_change_from_baseline as Residential
FROM `bigquery-public-data.covid19_google_mobility.mobility_report`
WHERE census_fips_code = "17031") b
USING (date, fips)
ORDER By date
"""
query_job = client.query(cook_county_sql)
cook_df = query_job.to_dataframe()
cook_df.head()
cook_df["sub_region_1"]=["Illinois"]*(cook_df.shape[0])
cook_df["sub_region_2"]=["Cook County"]*(cook_df.shape[0])
cook_df.head()
# +
# Calculate daily new cases in Cook County using "confirmed" column
cook_confirm = cook_df["confirmed"]
new_case = [None] * len(cook_confirm)
# start at 1: the first day has no previous value to subtract
for i in range(1, len(cook_confirm)):
    if math.isnan(cook_confirm[i]) or math.isnan(cook_confirm[i - 1]):
        continue
    new_case[i] = cook_confirm[i] - cook_confirm[i - 1]
cook_df["new_cases"] = new_case
# Prepare dataframes for plotting
cook_case_df = cook_df[["date", "confirmed", "new_cases"]]
cook_mobility_df = cook_df[["date", "Recreation", "Transit", "Park", "Grocery", "Workplace", "Residential"]]
# Plot both dataframes in one figure
colors=["rgb(166,206,227)", "rgb(31,120,180)", "rgb(178,223,138)",
"rgb(51,160,44)", "rgb(251,154,153)", "rgb(227,26,28)"]
fig = make_subplots(rows=2, cols=1,
subplot_titles=("Mobility Trend in Cook County, IL since Feb, 2020", "Daily Confirmed New Cases in Cook County, IL since Feb, 2020 "),
row_heights=[0.7, 0.3])
fig.add_trace(
go.Scatter(x=cook_mobility_df["date"],
y=cook_mobility_df["Residential"],
mode = 'lines',
line_shape='spline',
name="Residential",
line=dict(width=1.5, color=colors[0])),
row=1, col=1
)
fig.add_trace(
go.Scatter(x=cook_mobility_df["date"],
y=cook_mobility_df["Workplace"],
mode = 'lines',
line_shape='spline',
name="Workplace",
line=dict(width=1.5, color=colors[1])),
row=1, col=1
)
fig.add_trace(
go.Scatter(x=cook_mobility_df["date"],
y=cook_mobility_df["Park"],
mode = 'lines',
name="Park",
line_shape='spline',
line=dict(width=1.5, color=colors[3])),
row=1, col=1
)
fig.add_trace(
go.Scatter(x=cook_mobility_df["date"],
y=cook_mobility_df["Recreation"],
mode = 'lines',
name="Recreation",
line_shape='spline',
line=dict(width=1.5, color=colors[2])),
row=1, col=1
)
fig.add_trace(
go.Scatter(x=cook_mobility_df["date"],
y=cook_mobility_df["Grocery"],
mode = 'lines',
name="Grocery",
line_shape='spline',
line=dict(width=1.5, color=colors[4])),
row=1, col=1
)
fig.add_trace(
go.Scatter(x=cook_mobility_df["date"],
y=cook_mobility_df["Transit"],
mode = 'lines',
name="Transit",
line_shape='spline',
line=dict(width=1.5, color=colors[5])),
row=1, col=1
)
fig.add_trace(
go.Scatter(x=cook_case_df["date"], y=cook_case_df["new_cases"],
mode = 'lines', line_shape='spline', name="New Confirmed Cases",
line=dict(width=2)),
row=2, col=1
)
fig.update_layout(plot_bgcolor='rgba(0,0,0,0)', margin={"r":0,"t":100,"l":0,"b":0})
fig.update_xaxes(title_text="Date")
fig.show()
# -
# ### Summary
# - A significant increase in park visits was observed in Cook County between June and October 2020.
# - Mobility in "`Transit`" and "`Workplace`" has decreased around 50% since the pandemic started.
# - Large numbers of daily confirmed COVID-19 cases were observed between October 2020 and January 2021 in Cook County, compared to the spring of 2020. However, the lower number of confirmed cases during the spring of 2020 is likely caused by limited testing capacity at the beginning of the pandemic.
|
covid19-notebooks/google_mobility.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: kenny_tf
# language: python
# name: kenny_tf
# ---
# +
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version
import coco
# %matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")
# +
class InferenceConfig(coco.CocoConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
# +
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
# -
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard',
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush']
# +
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
# Run detection
results = model.detect([image], verbose=1)
# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
class_names, r['scores'])
# -
|
samples/demo_tf.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 32-bit
# language: python
# name: python3
# ---
# +
for x in array:
    if x < pivot:
        less.append(x)
    else:
        greater.append(x)
a=5;b=6;c=7
c
# +
results = []
for line in file_handle:
    results.append(line.replace('foo', 'bar'))
print("Reached this line")
# -
a=5
type(a)
a=4.5
b=2
print('a is {0}, b is {1}'.format(type(a),type(b)))
a/b
a=5
isinstance(a,int)
a=5; b=4.5
isinstance(a,(int, float))
isinstance(b, (int,float))
a='foo'
a.<Press Tab>
a='foo'
a.upper()
getattr(a,'split')
def isiterable(obj):
    try:
        iter(obj)
        return True
    except TypeError:
        return False
isiterable('a string')
isiterable([1,2,3])
isiterable(b)
# +
import some_module
result = some_module.f(5)
# +
pi = some_module.PI
pi
# -
from some_module import f, g, PI
result = g(5,PI)
result
# +
import some_module as sm
from some_module import PI as pi, g as gf
r1 =sm.f(pi)
r2 = gf(6,pi)
# -
r1
r2
a= [1,2,3]
b=a
c=list(a)
a is b
a is not c
a==c
a= None
a is None
a_list = ['foo',2,[4,5]]
a_list[2]=(3,4)
a_list
a_tuple =(3,5,(4,5))
a_tuple[1] = 'four'  # raises TypeError: tuples are immutable
ival=17239871
ival **6
fval= 7.243
fval2=6.78e-5
type(3/2)
3//2
type(3//2)
a='one way of writing a string'
b="another way"
c="""
This is a longer string that
spans multiple lines
"""
c
c.count('\n')
a='this is a string'
a[10] = 'f'  # raises TypeError: strings are immutable
b= a.replace('string','longer string')
b
a
a=5.6
s=str(a)
print(s)
s='python'
list(s)
s[:3]
print('12\n34')
s='12\\34'
print(s)
a='this is the first half'
b='and this is the second half'
a+b
template = '{0:.2f} {1:s} are worth US${2:d}'
template
template.format(4.5560,'Argentine Pesos',1)
template.format(1263.23, 'won',1)
True and True
False or True
s= '3.14159'
fval = float(s)
type(fval)
int(fval)
bool(fval)
bool(0)
a=None
a is None
b=5
b is not None
def add_and_maybe_multiply(a,b,c=None):
result = a+b
if c is not None:
result = result *c
return result
add_and_maybe_multiply(5,3)
add_and_maybe_multiply(5,3,10)
type(None)
from datetime import datetime, date, time
dt = datetime(2011, 10, 29, 20, 30, 21)
dt
dt.day
dt.minute
dt.date()
dt.time()
dt.strftime('%Y/%m/%d %H:%M')
datetime.strptime('20091031', '%Y%m%d')
dt.replace(minute=0,second=0)
dt2 = datetime(2011,11,15,22,30)
delta =dt2 - dt
delta
type(delta)
dt
dt+delta
# +
x=-5
if x<0:
print('It is negative')
elif x==0:
print('Equal to zero')
elif 0<x<5:
print('Positive but smaller than 5')
else:
print('Positive and larger than or equal to 5')
# -
a=5; b=7
c=8; d=4
if a <b or c>d:
print('Made it')
4>3>2>1
3>5 or 2>1
3>5>2>1
# +
sequence = [1,2,None,4,None,5]
total=0
for value in sequence:
    if value is None:
        continue
    total += value
# -
total
sequence = [1,2,0,4,6,5,2,1]
total_until_5 =0
for value in sequence:
if value ==5:
break
total_until_5+=value
total_until_5
for i in range(4):
for j in range(4):
if j > i:
break
print((i,j))
for a,b,c in [[1,2,3],[4,5,6],[7,8,9]]:
print(a,b,c)
x=256
total = 0
while x>0:
if total >500:
break
total +=x
x= x//2
total
x
256+128+64+32+16+8
# +
x=-1
if x<0:
print('negative!')
elif x==0:
pass
else:
print('positive!')
# -
range(10)
list(range(10))
list(range(0,20,2))
list(range(5,0,-1))
seq=[1,2,3,4]
for i in range(len(seq)):
val=seq[i]
val
total = 0
for i in range(100000):
    # % is the modulo operator
    if i % 3 == 0 or i % 5 == 0:
        total += i
x=5
'Non-negative' if x >= 0 else 'Negative'
x=5
a=100 if x>=0 else -100
a
|
_notebooks/python_basic.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _networks::
#
# |
# |
#
# Download This Notebook: :download:`Networks.ipynb`
#
# -
# # Networks
# ## Introduction
# This tutorial gives an overview of the microwave network analysis
# features of **skrf**. For this tutorial, and the rest of the scikit-rf documentation, it is assumed that **skrf** has been imported as `rf`. Whether or not you follow this convention in your own code is up to you.
import skrf as rf
from pylab import *
# If this produces an import error, please see [Installation ](Installation.ipynb).
# ## Creating Networks
#
# **skrf** provides an object for a N-port microwave [Network](../api/network.rst). A [Network](../api/network.rst) can be created in a number of ways. One way is from data stored in a touchstone file.
# +
from skrf import Network, Frequency
ring_slot = Network('data/ring slot.s2p')
# -
#
# A short description of the network will be printed out if entered onto the command line
#
ring_slot
# Networks can also be created by directly passing values for the `frequency`, `s`-parameters and port impedance `z0`.
freq = Frequency(1,10,101,'ghz')
ntwk = Network(frequency=freq, s= [-1, 1j, 0], z0=50, name='slippy')
ntwk
#
# See [Network](../api/network.rst) for more information on network creation.
# ## Basic Properties
#
#
# The basic attributes of a microwave [Network](../api/network.rst) are provided by the
# following properties :
#
# * `Network.s` : Scattering Parameter matrix.
# * `Network.z0` : Port Characteristic Impedance matrix.
# * `Network.frequency` : Frequency Object.
# The [Network](../api/network.rst) object has numerous other properties and methods. If you are using IPython, then these properties and methods can be 'tabbed' out on the command line.
#
# In [1]: ring_slot.s<TAB>
# ring_slot.line.s ring_slot.s_arcl ring_slot.s_im
# ring_slot.line.s11 ring_slot.s_arcl_unwrap ring_slot.s_mag
# ...
#
# All of the network parameters are represented internally as complex `numpy.ndarray`. The s-parameters are of shape (nfreq, nport, nport)
shape(ring_slot.s)
# ## Slicing
# You can slice the `Network.s` attribute any way you want.
ring_slot.s[:10, 1, 0]  # get the first 10 values of S21
# Slicing by frequency can also be done directly on Network objects like so
ring_slot[0:10] # Network for the first 10 frequency points
# or with a human friendly string,
ring_slot['80-90ghz']
# Notice that slicing directly on a Network **returns a Network**. So, a nice way to express slicing in both dimensions is
ring_slot.s11['80-90ghz']
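The slicing behavior above can be sketched with plain NumPy, assuming a dummy s-parameter array of shape (nfreq, nport, nport); the names `nfreq` and `s` below are illustrative stand-ins, not part of skrf:

```python
import numpy as np

# Illustrative stand-in for Network.s: 201 frequency points, 2 ports
nfreq, nport = 201, 2
s = np.zeros((nfreq, nport, nport), dtype=complex)

s21_first10 = s[:10, 1, 0]  # first 10 values of S21
band = s[80:91, ...]        # a frequency sub-band, all ports

print(s21_first10.shape)    # (10,)
print(band.shape)           # (11, 2, 2)
```

Slicing a Network object directly (as above) wraps this same indexing but returns a Network rather than a bare array.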
# ## Plotting
# Amongst other things, the methods of the [Network](../api/network.rst) class provide convenient ways to plot components of the network parameters,
#
# * `Network.plot_s_db` : plot magnitude of s-parameters in log scale
# * `Network.plot_s_deg` : plot phase of s-parameters in degrees
# * `Network.plot_s_smith` : plot complex s-parameters on Smith Chart
# * ...
#
# If you would like to use skrf's plot styling,
# %matplotlib inline
rf.stylely()
#
# To plot all four s-parameters of the `ring_slot` on the Smith Chart.
ring_slot.plot_s_smith()
# Combining this with the slicing features,
# +
from matplotlib import pyplot as plt
plt.title('Ring Slot $S_{11}$')
ring_slot.s11.plot_s_db(label='Full Band Response')
ring_slot.s11['82-90ghz'].plot_s_db(lw=3,label='Band of Interest')
# -
# For more detailed information about plotting see [Plotting](Plotting.ipynb).
#
# ## Operators
#
# ### Arithmetic Operations
#
# Element-wise mathematical operations on the scattering parameter matrices are accessible through overloaded operators. To illustrate their usage, load a couple Networks stored in the `data` module.
# +
from skrf.data import wr2p2_short as short
from skrf.data import wr2p2_delayshort as delayshort
short - delayshort
short + delayshort
short * delayshort
short / delayshort
# -
# All of these operations return [Network](../api/network.rst) types. For example, to plot the complex difference between `short` and `delay_short`,
difference = (short - delayshort)
difference.plot_s_mag(label='Mag of difference')
# Another common application is calculating the phase difference using the division operator,
(delayshort/short).plot_s_deg(label='Detrended Phase')
# Linear operators can also be used with scalars or a `numpy.ndarray` that is the same length as the [Network](../api/network.rst).
hopen = (short*-1)
hopen.s[:3,...]
rando = hopen *rand(len(hopen))
rando.s[:3,...]
# + raw_mimetype="text/restructuredtext" active=""
# .. notice ::
#
# Note that if you multiply a Network by a `numpy.ndarray`, be sure to place the array on the right side.
# -
# ## Comparisons of Networks
# Comparison operators also work with Networks:
short == delayshort
short != delayshort
# ### Cascading and De-embedding
#
# Cascading and de-embedding 2-port Networks can also be done through operators. The `cascade` function can be called through the power operator, `**`. To calculate a new network which is the cascaded connection of the two individual Networks `line` and `short`,
short = rf.data.wr2p2_short
line = rf.data.wr2p2_line
delayshort = line ** short
# De-embedding can be accomplished by cascading the *inverse* of a network. The inverse of a network is accessed through the property `Network.inv`. To de-embed the `short` from `delay_short`,
# +
short_2 = line.inv ** delayshort
short_2 == short
# -
# The cascading operator also works for 2N-port Networks. This is illustrated in [this example on balanced Networks](../examples/networktheory/Balanced%20Network%20De-embedding.ipynb). For example, assuming you want to cascade three 4-port Networks `ntw1`, `ntw2` and `ntw3`, you can use:
# `resulting_ntw = ntw1 ** ntw2 ** ntw3`
# The port scheme assumed by the ** cascading operator with 4-port networks is:
# + active=""
# ntw1 ntw2 ntw3
# +----+ +----+ +----+
# 0-|0 2|--|0 2|--|0 2|-2
# 1-|1 3|--|1 3|--|1 3|-3
# +----+ +----+ +----+
# -
# ## Connecting Multi-ports
#
# **skrf** supports the connection of arbitrary ports of N-port networks. It accomplishes this using an algorithm called sub-network growth[[1]](#References), available through the function `connect()`. Terminating one port of an ideal 3-way splitter can be done like so,
tee = rf.data.tee
tee
#
#
# To connect port `1` of the tee, to port `0` of the delay short,
terminated_tee = rf.connect(tee,1,delayshort,0)
terminated_tee
# Note that this function takes into account port impedances. If two connected ports have different port impedances, an appropriate impedance mismatch is inserted.
#
# For advanced connections between arbitrary N-port Networks, the `Circuit` object is more relevant since it allows defining explicitly the connections between ports and Networks. For more information, please refer to the [Circuit documentation](Circuit.ipynb).
#
# ## Interpolation and Concatenation
#
# A common need is to change the number of frequency points of a [Network](../api/network.rst). For instance, the operators and cascading functions require that the Networks involved share the same frequency axis. If two Networks have different frequency information, then an error will be raised,
# +
from skrf.data import wr2p2_line1 as line1
line1
# -
# line1+line
#
# ---------------------------------------------------------------------------
# IndexError Traceback (most recent call last)
# <ipython-input-49-82040f7eab08> in <module>()
# ----> 1 line1+line
#
# /home/alex/code/scikit-rf/skrf/network.py in __add__(self, other)
# 500
# 501 if isinstance(other, Network):
# --> 502 self.__compatable_for_scalar_operation_test(other)
# 503 result.s = self.s + other.s
# 504 else:
#
# /home/alex/code/scikit-rf/skrf/network.py in __compatable_for_scalar_operation_test(self, other)
# 701 '''
# 702 if other.frequency != self.frequency:
# --> 703 raise IndexError('Networks must have same frequency. See `Network.interpolate`')
# 704
# 705 if other.s.shape != self.s.shape:
#
# IndexError: Networks must have same frequency. See `Network.interpolate`
#
#
# This problem can be solved by interpolating one of the Networks along the frequency axis using `Network.resample`.
line1.resample(201)
line1
# And now we can do things
line1 + line
# You can also interpolate from a `Frequency` object. For example,
line.interpolate_from_f(line1.frequency)
# A related application is the need to combine Networks which cover different frequency ranges. Two Networks can be concatenated (aka stitched) together using `stitch`, which concatenates networks along their frequency axis. To combine a WR-2.2 Network with a WR-1.5 Network,
#
# +
from skrf.data import wr2p2_line, wr1p5_line
big_line = rf.stitch(wr2p2_line, wr1p5_line)
big_line
# -
# ## Reading and Writing
#
#
# For long term data storage, **skrf** has support for reading and partial support for writing [touchstone file format](http://en.wikipedia.org/wiki/Touchstone_file). Reading is accomplished with the Network initializer as shown above, and writing with the method `Network.write_touchstone()`.
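As a rough illustration of what a touchstone file contains (this is not skrf's actual parser, just a sketch assuming real/imaginary format), a 2-port `.s2p` data line holds a frequency followed by four complex S-parameters in the column order S11, S21, S12, S22:

```python
# Minimal sketch of parsing one .s2p data line in RI (real/imaginary) format.
# Illustrative only; use skrf's Network reader for real files.
def parse_s2p_ri_line(line):
    vals = [float(v) for v in line.split()]
    freq = vals[0]
    # touchstone 2-port column order: S11 S21 S12 S22
    s = [complex(vals[i], vals[i + 1]) for i in range(1, 9, 2)]
    return freq, s

freq, s = parse_s2p_ri_line("75.0 0.1 -0.2 0.9 0.05 0.9 0.05 0.1 -0.2")
print(freq)  # 75.0
print(s[1])  # S21 as a complex number
```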
#
# For **temporary** data storage, **skrf** object can be [pickled](http://docs.python.org/2/library/pickle.html) with the functions `skrf.io.general.read` and `skrf.io.general.write`. The reason to use temporary pickles over touchstones is that they store all attributes of a network, while touchstone files only store partial information.
rf.write('data/myline.ntwk',line) # write out Network using pickle
ntwk = Network('data/myline.ntwk') # read Network using pickle
# + raw_mimetype="text/restructuredtext" active=""
# .. warning::
#
# Pickling methods can't support long-term data storage because they require the structure of the object being written to remain unchanged, something that cannot be guaranteed in future versions of skrf. (see http://docs.python.org/2/library/pickle.html)
#
# -
# Frequently there is an entire directory of files that need to be analyzed. `rf.read_all` creates Networks from all files in a directory quickly. To load all **skrf** files in the `data/` directory which contain the string `'wr2p2'`.
dict_o_ntwks = rf.read_all(rf.data.pwd, contains = 'wr2p2')
dict_o_ntwks
# Other times you know the list of files that need to be analyzed. `rf.read_all` also accepts a files parameter. This example file list contains only files within the same directory, but you can store files however your application would benefit from.
import os
dict_o_ntwks_files = rf.read_all(files=[os.path.join(rf.data.pwd, test_file) for test_file in ['ntwk1.s2p', 'ntwk2.s2p']])
dict_o_ntwks_files
# ## Other Parameters
#
# This tutorial focuses on s-parameters, but other network representations are available as well. Impedance and Admittance Parameters can be accessed through the parameters `Network.z` and `Network.y`, respectively. Scalar components of complex parameters, such as `Network.z_re`, `Network.z_im` and plotting methods are available as well.
#
# Other parameters are only available for 2-port networks, such as wave cascading parameters (`Network.t`), and ABCD-parameters (`Network.a`)
ring_slot.z[:3,...]
ring_slot.plot_z_im(m=1,n=0)
# ## Conclusion
#
# There are many more features of Networks that can be found in [networks](networks.rst)
# ## References
#
#
# [1] <NAME>.; , "Perspectives in microwave circuit analysis," Circuits and Systems, 1989., Proceedings of the 32nd Midwest Symposium on , vol., no., pp.716-718 vol.2, 14-16 Aug 1989. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=101955&isnumber=3167
#
|
doc/source/tutorials/Networks.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# ### Exercise 1
#
# Using the Pokemon API, generate a DataFrame like the one in the image with the columns:
#
# - **"height", "id", "order", "weight", "types"**
#
# ### Exercise 2
#
# Have you noticed that the "types" column of the DataFrame contains dictionaries? For each pokemon, add the columns needed to hold all of the "types" information (keys only) to the DataFrame.
#
# 
|
API_Pokemon/api_pokemon_dataframe.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ZaAQc-XXhwDl" colab_type="text"
# # Sample Steam Reviews with word-level RNN
#
# Code inspired from https://github.com/woctezuma/sample-steam-reviews
# + id="EJKaWG2ghrA5" colab_type="code" outputId="e04225dc-2add-48ce-b9b5-c47635c17864" colab={"base_uri": "https://localhost:8080/", "height": 35}
from google.colab import drive
mount_folder = '/content/gdrive'
drive.mount(mount_folder)
# + id="0iXYkA6wh-G2" colab_type="code" outputId="3d82c2af-15ae-4f6d-f7b5-629eb3fbb214" colab={"base_uri": "https://localhost:8080/", "height": 35}
# %cd '/content/gdrive/My Drive/'
# + id="8mv11ec_g4Wf" colab_type="code" colab={}
# #!rm -rf sample-steam-reviews/
# #!git clone https://github.com/woctezuma/sample-steam-reviews.git
# + id="A23sSmypg7km" colab_type="code" outputId="b8febbb6-36fe-4a0d-9a52-e8d617667417" colab={"base_uri": "https://localhost:8080/", "height": 35}
# %cd sample-steam-reviews/
# + id="nhZ1FxF6g9oK" colab_type="code" outputId="90d33bde-1d15-4944-afbb-95151e9c9893" colab={"base_uri": "https://localhost:8080/", "height": 72}
# !git pull
# !git checkout stacked-lstm
# + id="RUVySvRMimRw" colab_type="code" colab={}
# !pip install -r requirements.txt
# + id="CfKVsqNLt8Vc" colab_type="code" colab={}
# #!python word_level_rnn.py
# + id="851IG8VOt-hw" colab_type="code" outputId="d8a7c944-8c5d-4c04-c50f-eaac52548eb0" colab={"base_uri": "https://localhost:8080/", "height": 35}
from download_review_data import get_artifact_app_id
from export_review_data import get_output_file_name
from word_level_rnn import train, generate_from_word_level_rnn
# + id="IiPIDBwiwirS" colab_type="code" colab={}
app_id = get_artifact_app_id()
text_file_name = get_output_file_name(app_id)
# + id="anCPRjeBwPg4" colab_type="code" outputId="fa86505c-9bfb-4c1d-a818-d77776b18f27" colab={"base_uri": "https://localhost:8080/", "height": 146}
maxlen=20
model = train(path=text_file_name,
maxlen=maxlen)
# + id="wfXGOV76wQP1" colab_type="code" colab={}
response = generate_from_word_level_rnn(path=text_file_name)
|
word_level_rnn.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="raw" id="_5KgGZt_QES3"
# | run | task |
# |-----------|-----------------------------------------------|
# | 1 | Baseline, eyes open |
# | 2 | Baseline, eyes closed |
# | 3, 7, 11 | Motor execution: left vs right hand |
# | 4, 8, 12 | Motor imagery: left vs right hand |
# | 5, 9, 13 | Motor execution: hands vs feet |
# | 6, 10, 14 | Motor imagery: hands vs feet |
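The run/task table above can be encoded as a lookup; this is a sketch, and the `RUN_TASKS` name is ours, not part of MNE:

```python
# Map each EEGBCI run number to its task, following the table above.
RUN_TASKS = {1: 'baseline, eyes open', 2: 'baseline, eyes closed'}
for runs, task in [((3, 7, 11), 'motor execution: left vs right hand'),
                   ((4, 8, 12), 'motor imagery: left vs right hand'),
                   ((5, 9, 13), 'motor execution: hands vs feet'),
                   ((6, 10, 14), 'motor imagery: hands vs feet')]:
    for r in runs:
        RUN_TASKS[r] = task

print(RUN_TASKS[4])  # motor imagery: left vs right hand
```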
# + colab={"base_uri": "https://localhost:8080/", "height": 224} colab_type="code" id="X31haKOdQRGB" outputId="5711eb37-74b7-4322-9ff9-1eb861df4b36"
# # !pip install mne
# + colab={} colab_type="code" id="PgeMza0fRDK-"
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Permute, Dropout
from tensorflow.keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
from tensorflow.keras.layers import Conv1D, MaxPooling1D, AveragePooling1D
from tensorflow.keras.layers import SeparableConv2D, DepthwiseConv2D
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import SpatialDropout2D
from tensorflow.keras.regularizers import l1_l2
from tensorflow.keras.layers import Input, Flatten
from tensorflow.keras.constraints import max_norm
from tensorflow.keras import backend as K
def EEGNet(nb_classes, Chans = 64, Samples = 128,
dropoutRate = 0.5, kernLength = 64, F1 = 8,
D = 2, F2 = 16, norm_rate = 0.25, dropoutType = 'Dropout', gpu=True):
""" Keras Implementation of EEGNet
http://iopscience.iop.org/article/10.1088/1741-2552/aace8c/meta
Note that this implements the newest version of EEGNet and NOT the earlier
version (version v1 and v2 on arxiv). We strongly recommend using this
architecture as it performs much better and has nicer properties than
our earlier version. For example:
1. Depthwise Convolutions to learn spatial filters within a
temporal convolution. The use of the depth_multiplier option maps
exactly to the number of spatial filters learned within a temporal
filter. This matches the setup of algorithms like FBCSP which learn
spatial filters within each filter in a filter-bank. This also limits
the number of free parameters to fit when compared to a fully-connected
convolution.
2. Separable Convolutions to learn how to optimally combine spatial
filters across temporal bands. Separable Convolutions are Depthwise
Convolutions followed by (1x1) Pointwise Convolutions.
While the original paper used Dropout, we found that SpatialDropout2D
sometimes produced slightly better results for classification of ERP
signals. However, SpatialDropout2D significantly reduced performance
on the Oscillatory dataset (SMR, BCI-IV Dataset 2A). We recommend using
the default Dropout in most cases.
Assumes the input signal is sampled at 128Hz. If you want to use this model
for any other sampling rate you will need to modify the lengths of temporal
kernels and average pooling size in blocks 1 and 2 as needed (double the
kernel lengths for double the sampling rate, etc). Note that we haven't
tested the model performance with this rule so this may not work well.
The model with default parameters gives the EEGNet-8,2 model as discussed
in the paper. This model should do pretty well in general, although it is
advised to do some model searching to get optimal performance on your
particular dataset.
We set F2 = F1 * D (number of input filters = number of output filters) for
the SeparableConv2D layer. We haven't extensively tested other values of this
parameter (say, F2 < F1 * D for compressed learning, and F2 > F1 * D for
overcomplete). We believe the main parameters to focus on are F1 and D.
Inputs:
nb_classes : int, number of classes to classify
Chans, Samples : number of channels and time points in the EEG data
dropoutRate : dropout fraction
kernLength : length of temporal convolution in first layer. We found
that setting this to be half the sampling rate worked
well in practice. For the SMR dataset in particular
since the data was high-passed at 4Hz we used a kernel
length of 32.
F1, F2 : number of temporal filters (F1) and number of pointwise
filters (F2) to learn. Default: F1 = 8, F2 = F1 * D.
D : number of spatial filters to learn within each temporal
convolution. Default: D = 2
dropoutType : Either SpatialDropout2D or Dropout, passed as a string.
"""
if gpu:
        # ConfigProto/Session live under tf.compat.v1 in TensorFlow 2.x
        config = tf.compat.v1.ConfigProto(device_count={'GPU': 1, 'CPU': 4})
        sess = tf.compat.v1.Session(config=config)
        tf.compat.v1.keras.backend.set_session(sess)
if dropoutType == 'SpatialDropout2D':
dropoutType = SpatialDropout2D
elif dropoutType == 'Dropout':
dropoutType = Dropout
else:
raise ValueError('dropoutType must be one of SpatialDropout2D '
'or Dropout, passed as a string.')
input1 = Input(shape = (1, Chans, Samples))
##################################################################
block1 = Conv2D(F1, (1, kernLength), padding = 'same', input_shape = (1, Chans, Samples), use_bias = False, data_format='channels_first')(input1)
block1 = BatchNormalization(axis = 1)(block1)
block1 = DepthwiseConv2D((Chans, 1), use_bias = False, depth_multiplier = D, data_format='channels_first', depthwise_constraint = max_norm(1.))(block1)
block1 = BatchNormalization(axis = 1)(block1)
block1 = Activation('elu')(block1)
block1 = AveragePooling2D((1, 4), data_format='channels_first')(block1)
block1 = dropoutType(dropoutRate)(block1)
block2 = SeparableConv2D(F2, (1, 16), use_bias = False, padding = 'same')(block1)
block2 = BatchNormalization(axis = 1)(block2)
block2 = Activation('relu')(block2)
block2 = AveragePooling2D((1, 8), data_format='channels_first')(block2)
block2 = dropoutType(dropoutRate)(block2)
block3 = SeparableConv2D(F2, (1, 16), use_bias = False, padding = 'same')(block2)
block3 = BatchNormalization(axis = 1)(block3)
block3 = Activation('relu')(block3)
block3 = AveragePooling2D((1, 8), data_format='channels_first')(block3)
block3 = dropoutType(dropoutRate)(block3)
block4 = SeparableConv2D(F2, (1, 16), use_bias = False, padding = 'same')(block3)
block4 = BatchNormalization(axis = 1)(block4)
block4 = Activation('relu')(block4)
block4 = AveragePooling2D((1, 8), data_format='channels_first')(block4)
block4 = dropoutType(dropoutRate)(block4)
block5 = SeparableConv2D(F2, (1, 16), use_bias = False, padding = 'same')(block4)
block5 = BatchNormalization(axis = 1)(block5)
block5 = Activation('elu')(block5)
block5 = AveragePooling2D((1, 8), data_format='channels_first')(block5)
block5 = dropoutType(dropoutRate)(block5)
# block6 = SeparableConv2D(F2, (1, 16), use_bias = False, padding = 'same')(block5)
# block6 = BatchNormalization(axis = 1)(block6)
# block6 = Activation('elu')(block6)
# block6 = AveragePooling2D((1, 8), data_format='channels_first')(block6)
# block6 = dropoutType(dropoutRate)(block6)
flatten = Flatten(name = 'flatten')(block5)
# dense = Dense(50,name='dense1', kernel_constraint = max_norm(norm_rate))(flatten)
dense = Dense(1, name = 'out', kernel_constraint = max_norm(norm_rate))(flatten)
softmax = Activation('sigmoid', name = 'sigmoid')(dense)
return Model(inputs=input1, outputs=softmax)
def my_EEGNet(input_shape, batch_size=1, n_classes=2):
"""
My Special EEGNet
Arguments:
input_shape: (n_channels, sequence)
batch_size : number of batches
n_classes : number of output classes
"""
model = Sequential()
# model.add(Input(input_shape, batch_size=batch_size, name='input'))
model.add(Conv1D(60, 15, input_shape=input_shape, batch_size=batch_size, activation='relu', name='Conv1_1'))
model.add(Conv1D(40, 15, activation='relu', name='Conv1_2'))
model.add(MaxPooling1D(name='pooling1_1'))
model.add(Conv1D(10, 10, input_shape=input_shape, batch_size=batch_size, activation='relu', name='Conv2_1'))
model.add(Conv1D(15, 5, activation='relu', name='Conv2_2'))
model.add(MaxPooling1D(name='pooling2_1'))
model.add(Flatten())
model.add(Dense(30 , activation='relu', name='Dense1'))
model.add(Dense(1,activation='sigmoid', name='output'))
return model
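To make the docstring's point about depthwise and separable convolutions concrete, here is a quick weight count for the first two EEGNet blocks, ignoring biases and batch-norm parameters; the numbers plug in the EEGNet-8,2 defaults from the function above:

```python
# Weight counts (no biases) for EEGNet's first two blocks, defaults:
chans, kern_length, F1, D, F2 = 64, 64, 8, 2, 16

temporal = F1 * 1 * kern_length             # Conv2D(F1, (1, kernLength))
depthwise = F1 * D * chans * 1              # DepthwiseConv2D((Chans, 1), depth_multiplier=D)
separable = F1 * D * 1 * 16 + F1 * D * F2   # depthwise (1, 16) kernel + (1x1) pointwise

print(temporal, depthwise, separable)       # 512 1024 512
```

A full (non-depthwise) spatial convolution producing the same 16 channels would instead need 8 * 64 * 16 = 8192 weights, which is the parameter saving the docstring refers to.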
# + colab={} colab_type="code" id="8MvKK8WcQES5"
# %load_ext autoreload
# %autoreload
import numpy as np
from tensorflow.keras import *
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn import svm
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.linear_model import LassoLarsCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score, train_test_split
from mne import Epochs, pick_types, events_from_annotations
from mne.channels import read_layout
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
import mne
import pandas as pd
from matplotlib import pyplot as plt
from mne import time_frequency
# from ipywidgets import *
# + [markdown] colab_type="text" id="hqf9U_VbQES8"
# # Download dataset from https://physionet.org/pn4/eegmmidb/
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="ywcdMQo5QES-" outputId="0f5d4613-0980-4c9e-9aef-628dbad44776"
def download_dataset(subjects, runs):
raws_list = []
for i in subjects:
try:
raws_list.append(eegbci.load_data(i, runs))
except Exception as e:
print(i,' => ' ,e)
return raws_list
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="L144-iezQETC" outputId="c156b14a-9355-46be-c175-73958696ec96" slideshow={"slide_type": "-"}
def load_dataset(subjects, runs):
# check if dataset is downloaded
download_dataset(subjects, runs)
raw = None
for i in subjects:
for f in eegbci.load_data(i, runs):
if raw is None:
raw = read_raw_edf(f, preload=True)
else:
try:
raw = concatenate_raws([raw, read_raw_edf(f, preload=True)])
except:
                    print('subject {} failed to concatenate'.format(i))
return raw
def preprocess(raw, event_id, use_filter = True, low_freq=7, high_freq=30, tmin=1, tmax=2):
# strip channel names of "." characters
raw.rename_channels(lambda x: x.strip('.'))
# Apply band-pass filter
if use_filter:
raw.filter(low_freq, high_freq)
events, _ = get_events(raw)
picks = get_picks(raw)
# Read epochs (train will be done only between 1 and 2s)
# Testing will be done with a running classifier
epochs = get_epochs(raw, events, event_id)
epochs_train = epochs.copy().crop(tmin=tmin, tmax=tmax)
labels = epochs_train.events[:, -1] - 2
labels = labels.reshape((labels.shape[0],1))
epochs_data_train = epochs_train.get_data()
epochs_data_train = epochs_data_train.reshape((epochs_data_train.shape[0],1, epochs_data_train.shape[1], epochs_data_train.shape[2]))
return epochs_data_train, labels
def get_events(raw, event_id=dict(T1=2, T2=3)):
return events_from_annotations(raw, event_id=event_id)
def get_picks(raw):
return pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
def get_epochs(raw,events, event_id):
return Epochs(raw, events, event_id, -1, 4, proj=True, picks=get_picks(raw),
baseline=(None, None), preload=True)
# + [markdown] colab_type="text" id="yaFklGvVQETI"
# ## Load Dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="tkgOVPCAQETQ" outputId="67f45a35-4177-4553-c16a-28bfa0b74aed"
subjects = [i for i in range(1,2)]
runs = [4, 8, 12] # Motor imagery: left hand vs right hand
raw = load_dataset(subjects, runs)
raw.get_data().shape
# raw = concatenate_raws(raw, load_dataset(subjects, [3, 7, 11]))
# + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="BvNw9areQETW" outputId="07cbce5a-01b8-4665-b787-a7b25b0d1001"
event_id = dict(left=2, right=3)
epochs_data_train, labels = preprocess(raw, event_id,use_filter=True,low_freq=5, high_freq=60, tmax=0.5, tmin=-0.2)
# + [markdown] colab_type="text" id="lxEBi8BeQETZ"
# ## Plot the montage of sensors
# + colab={"base_uri": "https://localhost:8080/", "height": 88} colab_type="code" id="gSDu4QcwQETa" outputId="0fd99a93-9d9c-4430-9ed8-a1324cc4cf8f"
montage = mne.channels.read_montage('biosemi64')
# %matplotlib qt5
raw.set_montage(montage)
raw.plot_sensors('3d')
# # %matplotlib qt5
# raw.plot_psd(fmax=30)
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="Gl8QI_P_QETg" outputId="8d0fbcb7-b28a-4d65-f736-635eceb833b7"
# # Visualizations
# + colab={} colab_type="code" id="gJfGo_suQET6"
# df=pd.DataFrame(get_events(raw), columns = ['time','x','event'])
# df.head()
events, _ = get_events(raw)
epochs = get_epochs(raw, events, event_id)
epochs.info
# -
# +
freqs = np.arange(5, 160., 2)
vmin, vmax = -3., 3. # Define our color limits.
n_cycles = freqs / 2.
time_bandwidth = 2 # Least possible frequency-smoothing (1 taper)
power = time_frequency.tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
time_bandwidth=time_bandwidth, return_itc=False)
# Plot results. Baseline correct based on first 100 ms.
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax,
title='Sim: Least smoothing, most variance')
# + colab={} colab_type="code" id="c6-_b0hqQET9"
# fig, ax = plt.subplots()
# ax.scatter(df.index, df['event'])
# indexA = np.array(df.index[df['event']==3])
# for i in indexA:
# ax.axvline(x=i, ymin=0, ymax=1, c='r', linestyle='--')
# + colab={} colab_type="code" id="SftEa6ndQEUF"
# mne.viz.plot_events(events, raw.info['sfreq'], raw.first_samp,
# event_id=event_id)
# plt.show()
# + colab={} colab_type="code" id="FaZrl5u8QEUK"
# df.hist()
# + colab={} colab_type="code" id="FDDKX0JXQEUP"
# # Extract data from the first 5 channels
n_ch = 5
sfreq = raw.info['sfreq']
data, times = raw[:n_ch, int(sfreq * 1):int(sfreq * 5)]
fig = plt.subplots(figsize=(10,8))
plt.plot(times, data.T);
plt.xlabel('Seconds')
plt.ylabel('$\mu V$')
plt.title('Channels: 1-' + str(n_ch));
plt.legend(raw.ch_names[:n_ch]);
# + [markdown] colab_type="text" id="W1NVMjjzQEUU"
# ## Train test split
# ### using mne method
# + colab={} colab_type="code" id="NPkLfZ2jQEUU"
# Define a monte-carlo cross-validation generator (reduce variance):
scores = []
cv = ShuffleSplit(10, test_size=0.25, random_state=42)
cv_split = cv.split(epochs_data_train)
# + colab={} colab_type="code" id="aUMqFd-TQEUX"
# c = []
# for train_idx, test_idx in cv_split:
# y_train, y_test = labels[train_idx], labels[test_idx]
# print(epochs_data_train[train_idx])
# c.append(csp.fit_transform(epochs_data_train[train_idx], y_train))
# # c[0]
# + [markdown] colab_type="text" id="I9q1Uo4HQEUa"
# # Baseline models
# + colab={} colab_type="code" id="9wC15620QEUa"
classifier = svm.SVC(gamma='auto')
# classifier = LogisticRegression(solver='lbfgs')
# classifier = LinearDiscriminantAnalysis()
# classifier = GaussianNB()
# classifier = LassoLarsCV()
# + colab={} colab_type="code" id="cDQdcuhsQEUh"
csp = CSP(n_components=2, reg=None, log=True, norm_trace=False)
# Use a scikit-learn Pipeline with the cross_val_score function
clf = Pipeline([('CSP', csp), ('Classifier', classifier)])
scores = cross_val_score(clf, epochs_data_train, labels, cv=cv, n_jobs=1)
# Print the results
class_balance = np.mean(labels == labels[0])
class_balance = max(class_balance, 1. - class_balance)
print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores),
class_balance))
# Plot CSP patterns estimated on the full data for visualization
csp.fit_transform(epochs_data, labels)
layout = read_layout('EEG1005')
csp.plot_patterns(epochs.info, layout=layout, ch_type='eeg',
units='Patterns (AU)', size=2)
# + [markdown] colab={} colab_type="code" id="8CRqxW9JQEUl"
# # Neural Network Implementation
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="ESzKL25qQEUp" outputId="a7078029-843b-4e8f-c87b-4df9e5c0f8b8"
X_train, X_test, y_train, y_test = train_test_split(epochs_data_train, labels, test_size=0.2, random_state=42)
print('shape of train-set-X: {} \nshape of test-set-X: {}'.format(X_train.shape, X_test.shape))
print('shape of train-set-y: {} \nshape of test-set-y: {}'.format(y_train.shape, y_test.shape))
# + colab={} colab_type="code" id="VNXqv_awQEUv"
batch_size = 50  # note: model.fit below passes batch_size=500, so this value is unused
# model = EEGNet.my_EEGNet(input_shape=(64,161), batch_size=None,n_classes=2)
# model = EEGNet.EEGNet(2,Samples=161, F1=8, kernLength=20)
model = EEGNet(2, Chans = 64, Samples = X_train.shape[-1],
dropoutRate = 0, kernLength = 96, F1 = 16,
D = 3, F2 = 48, norm_rate = 0.1, dropoutType = 'Dropout')
# model.summary()
# + colab={} colab_type="code" id="fojLKP2GQEUy"
optimizer = optimizers.Adam(lr=0.0001,decay=0.00003)
model.compile(optimizer , 'binary_crossentropy', metrics=['acc'])
# + colab={} colab_type="code" id="K5h9bv0bQEU2"
# model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="x1vW_WAHQEU8" outputId="98bdfcbf-2270-439f-f98e-44145604f5dd"
history = model.fit(X_train, y_train, batch_size=500,epochs=300,validation_data=(X_test, y_test))
# + colab={} colab_type="code" id="wKfWdBkZQEVC"
plt.plot(history.history['acc'])
plt.plot(history.history['loss'])
# plt.legend((history.history['acc'],history.history['loss']), ('Accuracy', 'Loss'),)
# + colab={} colab_type="code" id="fx5saH71QEVF"
model.save('model_' + str(X_train.shape[-1]) + '.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Q9S0VlMIQEVI" outputId="70ac2874-39f1-42b1-de9f-faff1c34efcf"
# ravel() the (n, 1) predictions so the comparison doesn't broadcast to (n, n)
np.mean(y_test == (model.predict(X_test) > 0.5).ravel())
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="aD40FRD0Zh6n" outputId="b22de05e-2a08-40d5-c7a5-f4d055cd8f98"
test_epochs, test_labels = preprocess(load_dataset([i for i in range(95,100)], runs), event_id, use_filter=False, tmax=2.5, tmin=0.5)
# +
def my_EEGNet(channels=64, samples=321, dropoutRate=0.0, n_classes=2):
"""
My Special EEGNet
Arguments:
input_shape: (n_channels, sequence)
batch_size : number of batches
n_classes : number of output classes
"""
input_shape = (1, channels, samples)
model = Sequential()
# model.add(Input(input_shape, name='input'))
model.add(Conv2D(16, (1, channels), padding='same',input_shape=input_shape, data_format='channels_first', name='Conv1_1'))
model.add(BatchNormalization(axis=1))
model.add(DepthwiseConv2D((channels, 1), padding='valid',data_format='channels_first', depth_multiplier=3))
model.add(BatchNormalization(axis=1))
model.add(Activation('elu'))
model.add(AveragePooling2D((1, 4),data_format='channels_first', name='pooling1'))
model.add(Dropout(dropoutRate))
model.add(SeparableConv2D(48, (1, 16), padding='same', data_format='channels_first'))
model.add(BatchNormalization(axis=1))
model.add(Activation('elu'))
model.add(AveragePooling2D((1, 8),data_format='channels_first', name='pooling2'))
model.add(Dropout(dropoutRate))
# NOTE: the classification head below is left commented out, so this
# returns a headless feature extractor without an output layer
# model.add(Flatten())
# model.add(Dense(30 , activation='relu', name='Dense1'))
# model.add(Dense(1,activation='sigmoid', name='output'))
return model
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="2A6HX_goijue" outputId="676b3b2b-17c2-4d49-f916-f6ccd98df55c"
# ravel() the (n, 1) predictions so the comparison doesn't broadcast to (n, n)
np.mean(test_labels == (model.predict(test_epochs) > 0.5).ravel())
# + colab={} colab_type="code" id="QlWqcmDpkbqp"
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Permute, Dropout
from tensorflow.keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
from tensorflow.keras.layers import Conv1D, MaxPooling1D, AveragePooling1D
from tensorflow.keras.layers import SeparableConv2D, DepthwiseConv2D
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import SpatialDropout2D
from tensorflow.keras.regularizers import l1_l2
from tensorflow.keras.layers import Input, Flatten
from tensorflow.keras.constraints import max_norm
from tensorflow.keras import backend as K
def EEGNet(nb_classes, Chans = 64, Samples = 128,
dropoutRate = 0.5, kernLength = 64, F1 = 8,
D = 2, F2 = 16, norm_rate = 0.25, dropoutType = 'Dropout', gpu=True):
""" Keras Implementation of EEGNet
http://iopscience.iop.org/article/10.1088/1741-2552/aace8c/meta
Note that this implements the newest version of EEGNet and NOT the earlier
version (version v1 and v2 on arxiv). We strongly recommend using this
architecture as it performs much better and has nicer properties than
our earlier version. For example:
1. Depthwise Convolutions to learn spatial filters within a
temporal convolution. The use of the depth_multiplier option maps
exactly to the number of spatial filters learned within a temporal
filter. This matches the setup of algorithms like FBCSP which learn
spatial filters within each filter in a filter-bank. This also limits
the number of free parameters to fit when compared to a fully-connected
convolution.
2. Separable Convolutions to learn how to optimally combine spatial
filters across temporal bands. Separable Convolutions are Depthwise
Convolutions followed by (1x1) Pointwise Convolutions.
While the original paper used Dropout, we found that SpatialDropout2D
sometimes produced slightly better results for classification of ERP
signals. However, SpatialDropout2D significantly reduced performance
on the Oscillatory dataset (SMR, BCI-IV Dataset 2A). We recommend using
the default Dropout in most cases.
Assumes the input signal is sampled at 128Hz. If you want to use this model
for any other sampling rate you will need to modify the lengths of temporal
kernels and average pooling size in blocks 1 and 2 as needed (double the
kernel lengths for double the sampling rate, etc). Note that we haven't
tested the model performance with this rule so this may not work well.
The model with default parameters gives the EEGNet-8,2 model as discussed
in the paper. This model should do pretty well in general, although it is
advised to do some model searching to get optimal performance on your
particular dataset.
We set F2 = F1 * D (number of input filters = number of output filters) for
the SeparableConv2D layer. We haven't extensively tested other values of this
parameter (say, F2 < F1 * D for compressed learning, and F2 > F1 * D for
overcomplete). We believe the main parameters to focus on are F1 and D.
Inputs:
nb_classes : int, number of classes to classify
Chans, Samples : number of channels and time points in the EEG data
dropoutRate : dropout fraction
kernLength : length of temporal convolution in first layer. We found
that setting this to be half the sampling rate worked
well in practice. For the SMR dataset in particular
since the data was high-passed at 4Hz we used a kernel
length of 32.
F1, F2 : number of temporal filters (F1) and number of pointwise
filters (F2) to learn. Default: F1 = 8, F2 = F1 * D.
D : number of spatial filters to learn within each temporal
convolution. Default: D = 2
dropoutType : Either SpatialDropout2D or Dropout, passed as a string.
"""
if gpu:
    # ConfigProto/Session are TF1 APIs; use the compat module when running TF2
    config = tf.compat.v1.ConfigProto(device_count={'GPU': 1, 'CPU': 4})
    sess = tf.compat.v1.Session(config=config)
    tf.compat.v1.keras.backend.set_session(sess)
if dropoutType == 'SpatialDropout2D':
dropoutType = SpatialDropout2D
elif dropoutType == 'Dropout':
dropoutType = Dropout
else:
raise ValueError('dropoutType must be one of SpatialDropout2D '
'or Dropout, passed as a string.')
input1 = Input(shape = (1, Chans, Samples))
##################################################################
block1 = Conv2D(F1, (1, kernLength), padding = 'same', input_shape = (1, Chans, Samples), use_bias = True, data_format='channels_first')(input1)
block1 = BatchNormalization(axis = 1)(block1)
block1 = DepthwiseConv2D((Chans, 1), use_bias = True, depth_multiplier = D, data_format='channels_first', depthwise_constraint = max_norm(1.))(block1)
block1 = BatchNormalization(axis = 1)(block1)
block1 = Activation('elu')(block1)
block1 = AveragePooling2D((1, 4), data_format='channels_first')(block1)
block1 = dropoutType(dropoutRate)(block1)
block2 = SeparableConv2D(F2, (1, 16), use_bias = True, padding = 'same')(block1)
block2 = BatchNormalization(axis = 1)(block2)
block2 = Activation('relu')(block2)
block2 = AveragePooling2D((1, 8), data_format='channels_first')(block2)
block2 = dropoutType(dropoutRate)(block2)
block3 = SeparableConv2D(F2, (1, 16), use_bias = True, padding = 'same')(block2)
block3 = BatchNormalization(axis = 1)(block3)
block3 = Activation('relu')(block3)
block3 = AveragePooling2D((1, 8), data_format='channels_first')(block3)
block3 = dropoutType(dropoutRate)(block3)
block4 = SeparableConv2D(F2, (1, 16), use_bias = True, padding = 'same')(block3)
block4 = BatchNormalization(axis = 1)(block4)
block4 = Activation('relu')(block4)
block4 = AveragePooling2D((1, 8), data_format='channels_first')(block4)
block4 = dropoutType(dropoutRate)(block4)
block5 = SeparableConv2D(F2, (1, 16), use_bias = True, padding = 'same')(block4)
block5 = BatchNormalization(axis = 1)(block5)
block5 = Activation('elu')(block5)
block5 = AveragePooling2D((1, 8), data_format='channels_first')(block5)
block5 = dropoutType(dropoutRate)(block5)
# block6 = SeparableConv2D(F2, (1, 16), use_bias = False, padding = 'same')(block5)
# block6 = BatchNormalization(axis = 1)(block6)
# block6 = Activation('elu')(block6)
# block6 = AveragePooling2D((1, 8), data_format='channels_first')(block6)
# block6 = dropoutType(dropoutRate)(block6)
flatten = Flatten(name = 'flatten')(block5)
# dense = Dense(50,name='dense1', kernel_constraint = max_norm(norm_rate))(flatten)
dense = Dense(1, name = 'out', kernel_constraint = max_norm(norm_rate))(flatten)
output = Activation('sigmoid', name = 'sigmoid')(dense)
return Model(inputs=input1, outputs=output)
# -
model = EEGNet(2, Samples=321,kernLength=96, Chans=64, F1=16, D=3, F2=48, dropoutRate=0, norm_rate=0)
model.summary()
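The EEGNet docstring above motivates depthwise and separable convolutions by their parameter savings over fully-connected convolutions. A stdlib-only sketch of the parameter counts (biases ignored), using the 48-channel, (1, 16)-kernel sizes of the separable blocks above, purely for illustration:

```python
def conv2d_params(c_in, c_out, kh, kw):
    # standard convolution: one (kh x kw x c_in) kernel per output channel
    return c_out * kh * kw * c_in

def separable_conv2d_params(c_in, c_out, kh, kw, depth_multiplier=1):
    # depthwise: one (kh x kw) kernel per input channel (times depth_multiplier),
    # followed by a 1x1 pointwise convolution that mixes channels
    depthwise = c_in * depth_multiplier * kh * kw
    pointwise = c_out * 1 * 1 * (c_in * depth_multiplier)
    return depthwise + pointwise

# a (1, 16) kernel mapping 48 -> 48 channels, as in the blocks above
full = conv2d_params(48, 48, 1, 16)           # 36864
sep = separable_conv2d_params(48, 48, 1, 16)  # 48*16 + 48*48 = 3072
print(full, sep)
```

The roughly 12x reduction is why the docstring recommends separable convolutions for combining spatial filters across temporal bands.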
|
BCI_new.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# name: python385jvsc74a57bd0ef8d51b6e765846ba3dbc6b588704097c20c1d9961af5b0a6ba5d38dba7ef221
# ---
# + [markdown] _uuid="1a67880eb71b1350b7b1280a612430450082d4e9"
# # Exercises 04 - Lists
# + [markdown] _uuid="47f2a23f94e26e3a8b4d97dba00bfceb4c47c4fe"
# ## 1. Second Element
#
# Complete the function below according to its docstring.
#
# *HINT*: Python starts counting at 0.
# + _uuid="0abcaf62d979ca49db3f5a6ffd53d3aa3f47772d"
def select_second(L):
"""Return the second element of the given list.
If the list has no second element, return None.
"""
try:
return L[1]
except IndexError:
return None
# -
L = []
print(select_second(L))
# + [markdown] _uuid="582c4e6ca2d9714e9eb780540c0cf3bab59c5eca"
# ## 2. Captain of the Worst Team
#
# You are analyzing sports teams. Members of each team are stored in a list. The **Coach** is the first name in the list, the **Captain** is the second name in the list, and other players are listed after that.
# These lists are stored in another list, which starts with the best team and proceeds through the list to the worst team last. Complete the function below to select the **captain** of the worst team.
# + _uuid="9d3ff53f43d375faf107ec0d15778f1f9758b81a"
def losing_team_captain(teams):
"""Given a list of teams, where each team is a list of names, return the 2nd player (captain)
from the last listed team
"""
return select_second(teams[-1])
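A quick self-contained check of the negative-indexing idea, with the helper restated so the cell runs on its own (team names here are made up):

```python
def select_second(L):
    # return the second element, or None if there isn't one
    return L[1] if len(L) > 1 else None

def losing_team_captain(teams):
    # the worst team is listed last; its captain is the second name
    return select_second(teams[-1])

teams = [["Coach A", "Captain A", "Player A"],
         ["Coach B", "Captain B", "Player B"]]
print(losing_team_captain(teams))  # Captain B
```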
# + [markdown] _uuid="e03348668601e4c4a73e49fea71f46a274c44c54"
# ## 3. Purple Shell item
#
# The next iteration of <NAME> will feature an extra-infuriating new item, the ***Purple Shell***. When used, it warps the last place racer into first place and the first place racer into last place. Complete the function below to implement the Purple Shell's effect.
# + _uuid="24a46b1ffedf35797fdddc1e394bfe1ae12d8c68"
def purple_shell(racers):
"""Given a list of racers, set the first place racer (at the front of the list) to last
place and vice versa.
>>> r = ["Mario", "Bowser", "Luigi"]
>>> purple_shell(r)
>>> r
["Luigi", "Bowser", "Mario"]
"""
racers[0],racers[-1] = racers[-1],racers[0]
return racers
purple_shell(["Mario", "Bowser", "Luigi"])
# + [markdown] _uuid="af48fc9e1e88852c0018e3b7132e8a223c41303c"
# ## 4. Guess the Length!
#
# What are the lengths of the following lists? Fill in the variable `lengths` with your predictions. (Try to make a prediction for each list *without* just calling `len()` on it.)
# + _uuid="c4a4fcf09f84c3f7c6cb5911b1aea5ce4bfe0f96"
a = [1, 2, 3]
b = [1, [2, 3]]
c = []
d = [1, 2, 3][1:]
# Put your predictions in the list below. Lengths should contain 4 numbers, the
# first being the length of a, the second being the length of b and so on.
lengths = [3,2,0,2]
# -
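To confirm the predictions above, the same lists can be checked directly with `len()`:

```python
a = [1, 2, 3]
b = [1, [2, 3]]    # the nested list counts as a single element
c = []
d = [1, 2, 3][1:]  # slicing drops the first element
lengths = [len(x) for x in (a, b, c, d)]
print(lengths)  # [3, 2, 0, 2]
```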
# + [markdown] _uuid="deee98f41c11c20c366999414a20cbdbe347ac05"
# ## 5. Fashionably Late <span title="A bit spicy" style="color: darkgreen ">🌶️</span>
#
# We're using lists to record people who attended our party and what order they arrived in. For example, the following list represents a party with 7 guests, in which Adela showed up first and Ford was the last to arrive:
#
# party_attendees = ['Adela', 'Fleda', 'Owen', 'May', 'Mona', 'Gilbert', 'Ford']
#
# A guest is considered **'fashionably late'** if they arrived after at least half of the party's guests. However, they must not be the very last guest (that's taking it too far). In the above example, Mona and Gilbert are the only guests who were fashionably late.
#
# Complete the function below which takes a list of party attendees as well as a person, and tells us whether that person is fashionably late.
# + _uuid="ce4940fda8abd40d9c360e5f19d759244644fed1"
def fashionably_late(arrivals, name):
"""Given an ordered list of arrivals to the party and a name, return whether the guest with that
name was fashionably late.
"""
half_size = int(len(arrivals)/2)
late_list = arrivals[-(half_size):-1]
if name in late_list:
return True
return False
# -
party_attendees = ['Adela', 'Fleda', 'Owen', 'May', 'Mona', 'Gilbert']
fashionably_late(party_attendees,'Gilbert')
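Cross-checking against the 7-guest example from the prompt, using an index-based variant of the function (restated so the cell is standalone):

```python
def fashionably_late(arrivals, name):
    order = arrivals.index(name)
    # arrived after at least half the guests, but not the very last one
    return order >= len(arrivals) / 2 and order != len(arrivals) - 1

party = ['Adela', 'Fleda', 'Owen', 'May', 'Mona', 'Gilbert', 'Ford']
late = [g for g in party if fashionably_late(party, g)]
print(late)  # ['Mona', 'Gilbert']
```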
# + [markdown] _uuid="8ac91b9a71f70ebbcf6cca84722c5b590c62afba"
#
# # Keep Going 💪
|
python-for-data/Ex04 - Lists.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/patchikoooo/data-science-from-scratch/blob/master/ConfidenceIntervals_py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Gz1DRy_KjlqB" colab_type="text"
# # **Confidence Intervals**
# + id="b0zQTdGqjkmu" colab_type="code" colab={}
import math
import random
# + id="N566knD32NiG" colab_type="code" colab={}
def normal_cdf(x, mu=0, sigma=1):
return (1 + math.erf((x-mu) / math.sqrt(2) / sigma)) / 2
def inverse_normal_cdf(p, mu=0, sigma=1, tolerance=0.00001):
# find approximate inverse using binary search
# if not standard, compute standard and rescale
if mu != 0 or sigma != 1:
return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
low_z, low_p = -10.0, 0 # normalcdf(-10) is (very close to) 0
hi_z, hi_p = 10.0, 1 # normal_cdf(10) is (very close to) 1
while hi_z - low_z > tolerance:
mid_z = (low_z + hi_z) / 2 # consider the midpoint
mid_p = normal_cdf(mid_z) # and the cdf's value there
if mid_p < p:
# midpoint is still too low, search above it
low_z, low_p = mid_z, mid_p
elif mid_p > p:
# midpoint is still too high, search below it
hi_z, hi_p = mid_z, mid_p
else:
break
return mid_z
normal_probability_below = normal_cdf
def normal_approximation_to_binomial(n, p):
# finds mu and sigma corresponding to Binomial(n, p)
mu = p*n
sigma = math.sqrt(n*p*(1-p))
return mu, sigma
# + id="qdMpV0yg2PER" colab_type="code" colab={}
# we can use normal_cdf to figure out that probability that its realized value
# lies within (or outside) a particular interval
# the normal cdf is the probability the variable is below a threshold
normal_probability_below = normal_cdf
# it's above the threshold if it's not below the threshold
def normal_probability_above(lo, mu=0, sigma=1):
return 1 - normal_cdf(lo, mu, sigma)
# it's between if it's less than hi, but not less than lo
def normal_probability_between(lo, hi, mu=0, sigma=1):
return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)
# it's outside if it's not between
def normal_probability_outside(lo, hi, mu=0, sigma=1):
return 1 - normal_probability_between(lo, hi, mu, sigma)
# + id="z8zMmOkv2Tfa" colab_type="code" colab={}
# We can also do reverse -- find either the nontail region or the (symmetric) interval
# around the mean that accounts for a certain level of likelihood.
# For example, if we want to find an interval centered at the mean and containing
# 60% probability, then we find the cutoffs where the upper and lower tails
# each contain 20% of the probability (leaving 60%)
# z --> z score
def normal_upper_bound(probability, mu=0, sigma=1):
"""returns the z for which P(Z <= z) = probability"""
return inverse_normal_cdf(probability, mu, sigma)
def normal_lower_bound(probability, mu=0, sigma=1):
"""returns the z for which P(Z >= z)"""
return inverse_normal_cdf(1 - probability, mu, sigma)
def normal_two_sided_bounds(probability, mu=0, sigma=1):
"""returns the symmetric (about the mean) bounds that contain
the specified probability"""
tail_probability = (1 - probability) / 2
# upper bound should have tail_probability above it
upper_bound = normal_lower_bound(tail_probability, mu, sigma)
# lower bound should have tail probability below it
lower_bound = normal_upper_bound(tail_probability, mu, sigma)
return lower_bound, upper_bound
def two_sided_p_value(x, mu=0, sigma=1):
if x >= mu:
# if x is greater than the mean, the tail is what's greater than x
return 2 * normal_probability_above(x, mu, sigma)
else:
# if x is less than the mean, the tail is what's less than x
return 2 * normal_probability_below(x, mu, sigma)
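As a sanity check on the binary-search implementation above, the standard library's `statistics.NormalDist` gives the familiar 95% z-bounds directly:

```python
from statistics import NormalDist

def two_sided_bounds(probability, mu=0.0, sigma=1.0):
    # symmetric interval: put half the leftover probability in each tail
    tail = (1 - probability) / 2
    dist = NormalDist(mu, sigma)
    return dist.inv_cdf(tail), dist.inv_cdf(1 - tail)

lo, hi = two_sided_bounds(0.95)
print(round(lo, 2), round(hi, 2))  # -1.96 1.96
```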
# + id="nrR8SOifkjlP" colab_type="code" outputId="69cc78e2-4f0e-4b98-ae23-f33a016110a1" colab={"base_uri": "https://localhost:8080/", "height": 197}
""" For example, we can estimate the probability of the unfair coin by looking
at the average value of the Bernoulli variables corresponding to each flip.
If we observe 525 heads out of 1,000 flips, then we can estimate the
observed value of the parameter.
How confident can we be about this estimate? Well, if we knew the exact
value of p, CLT tells us that the average of those Bernoulli variables
should be approximately normal, with mean p and standard deviation """
math.sqrt(p * (1-p) / 1000)  # note: p is not defined in this notebook; this cell only illustrates the formula
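For concreteness, plugging in the fair-coin value p = 0.5 (a hypothetical choice, since `p` is never defined in this notebook) gives:

```python
import math

p = 0.5   # hypothetical known heads probability
n = 1000  # number of flips
sigma = math.sqrt(p * (1 - p) / n)
print(round(sigma, 4))  # 0.0158
```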
# + id="Zbs0l1sulJXN" colab_type="code" outputId="056ccada-bf65-4be2-9a48-1936f79c10e5" colab={"base_uri": "https://localhost:8080/", "height": 51}
# since we don't know our p, we use p_hat
p_hat = 525 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)
print("(mu, sigma)", (mu, sigma))
# This is not entirely justified, but people seem to do it anyway.
# Using the normal approximation, we conclude that we are "95% confident"
# that the following interval contains the true parameter p.
normal_two_sided_bounds(0.95, mu, sigma)
# + id="JNiCNr_gllbE" colab_type="code" outputId="bb771226-0e48-4af8-99f8-c94fd738dc29" colab={"base_uri": "https://localhost:8080/", "height": 51}
# trying 540 heads out of 1000, we'd have:
p_hat = 540 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)
print("(mu, sigma)", (mu, sigma))
normal_two_sided_bounds(0.95, mu, sigma)
# Here, "fair coin" doesn't lie in the confidence interval
# + [markdown] id="fKoY0j103jc5" colab_type="text"
# # **P-hacking**
# + id="xfe_8S693i2B" colab_type="code" colab={}
""" A procedure that erroneously rejects the null hypothesis only 5% of the time
- by definition - 5% of the time erroneously reject the hypothesis """
def run_experiment():
""" flip a fair coin 1000 times, True = heads, False = tails """
return [random.random() < 0.5 for _ in range(1000)]
def reject_fairness(experiment):
""" using the 5% significance levels """
num_heads = len([flip for flip in experiment if flip])
return num_heads < 469 or num_heads > 531
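The cutoffs 469 and 531 are roughly the two-sided 95% bounds for Binomial(1000, 0.5) under the normal approximation; a stdlib cross-check:

```python
import math
from statistics import NormalDist

n, p = 1000, 0.5
mu = n * p                           # 500
sigma = math.sqrt(n * p * (1 - p))   # ~15.81
dist = NormalDist(mu, sigma)
lo, hi = dist.inv_cdf(0.025), dist.inv_cdf(0.975)
print(round(lo), round(hi))  # 469 531
```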
# + id="13qmMKLs3H7b" colab_type="code" outputId="0148f6c2-710c-46df-deb0-4af2a4b89835" colab={"base_uri": "https://localhost:8080/", "height": 34}
random.seed(0)
experiments = [run_experiment() for _ in range(1000)]
num_rejections = len([experiment for experiment in experiments
if reject_fairness(experiment)])
num_rejections
# + id="YYLvZljI6jge" colab_type="code" outputId="fcc169b2-9a8c-4bec-cdec-c235367e4f17" colab={"base_uri": "https://localhost:8080/", "height": 54}
""" Running AB Test """
""" One of your primary responsibilities at DataSciencester is experience optimization,
which is a euphemism for trying to get people to click on advertisements.
One of your advertisers has developed a new energy drink targeted at data scientists,
and the VP of Advertisements wants your help choosing between advertisement A
(“tastes great!”) and advertisement B (“less bias!”). Being a scientist,
you decide to run an experiment by randomly showing site visitors one
of the two advertisements and tracking how many people click on each one.
If 990 out of 1,000 A-viewers click their ad while only 10 out of 1,000
B-viewers click their ad, you can be pretty confident that A is the better
ad. But what if the differences are not so stark? Here’s where you’d use
statistical inference. """
# + id="MxpvRYVdiLF3" colab_type="code" colab={}
def estimated_parameters(N, n):
# n is the number of successes and N is the total number of trials
p = n / N
sigma = math.sqrt(p * (1 - p) / N)
return p, sigma
# + id="MRPQhaGTiaMr" colab_type="code" outputId="325cdb20-5bc9-4549-d758-490f3562c578" colab={"base_uri": "https://localhost:8080/", "height": 34}
estimated_parameters(1000, 800)
# + id="qsYlAonWifST" colab_type="code" colab={}
""" If we assume those two normals are independent, then their difference should
also be normal with mean Pb - Pa and standard deviation
math.sqrt(sigmaA^2 + sigmaB^2)
Note that this is sort of CHEATING. The math only works out exactly like this
if you know the standard deviations. Here we're estimating them from the data,
which means that we really should be using a t-distribution. But for large
enough data sets, it's close enought that it doesn't make much of a
difference.
This means we can test the null hypothesis that Pa and Pb are the same.
that is, that Pa - Pb == 0;
using the statistic: """
def a_b_test_statistic(N_A, n_A, N_B, n_B):
p_A, sigma_A = estimated_parameters(N_A, n_A)
p_B, sigma_B = estimated_parameters(N_B, n_B)
# mean difference and sigma difference
# sigma difference will also be sum since squaring sigma will yield
# positive number
return (p_B - p_A) / math.sqrt(sigma_A ** 2 + sigma_B ** 2)
# this should be approximately be a standard normal.
# + id="HpSdaK4Qmadn" colab_type="code" outputId="e13f6bf2-758f-454a-b87c-ba11771911a9" colab={"base_uri": "https://localhost:8080/", "height": 34}
""" For example, if "tastes great" gets 200 clicks our of 1000 views and "less
bias" gets 180 clicks out of 1000 views, the statistic equals: """
z = a_b_test_statistic(1000, 200, 1000, 180)
print(z)
# + id="8PUUJyPSm2Od" colab_type="code" outputId="6d38363b-912b-4bba-aa91-8461f81e1ada" colab={"base_uri": "https://localhost:8080/", "height": 34}
""" The probability of seeing such a large difference if the means
were actually equal would be: """
# Which is large enough that you can't conclude there's much of a difference.
# + id="Ym-C5LCUnTX-" colab_type="code" outputId="3e81d08b-1d9d-496d-c18f-c77dcc330866" colab={"base_uri": "https://localhost:8080/", "height": 34}
# On the other hand, if "less bias" only got 150 clicks, we'd have:
z = a_b_test_statistic(1000, 200, 1000, 150)
print(two_sided_p_value(z))
# which means there's only a 0.003 probability you'd see such a large
# difference if the ads were equally effective.
# + id="oWv1_LnGoZGO" colab_type="code" colab={}
# + [markdown] id="rjqGL2LmoqjG" colab_type="text"
# # **Bayesian Inference**
# + id="YA41F3HzovEG" colab_type="code" colab={}
"""
The procedures we've looked at have involved making probability statements about
our tests: "there's only a 3% chance you'd observe such an extreme statistic
if our null hypothesis were true.
"""
""" An alternative approach to inference involves treating the unknown parameters
themselves as random variables. The analyst ( that's you ) starts with a
prior distribution for the parameters and then uses the observed data
and Baye's Theorem to get an updated posterioir distribution for the
parameters. Rather than making probability judgments about the tests,
you make probability judgments about the parameters themselves. """
# For example, when unknown parameter is a probability, we often use a prior
# from the Beta distribution, which puts all its probability between 0 and 1:
def B(alpha, beta):
# normalizing constant so that the total probability is 1
return math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)
def beta_pdf(x, alpha, beta):
if x < 0 or x > 1: # no weight outside of [0, 1]
return 0
return x ** (alpha - 1) * (1 - x) ** (beta - 1) / B(alpha, beta)
# this distribution centers its weight at:
# alpha / (alpha + beta)
# and the larger alpha and beta are, the "tighter" the distribution is
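The conjugate update described below is easy to demonstrate numerically: starting from Beta(alpha, beta) and observing h heads and t tails yields Beta(alpha + h, beta + t), whose mean is alpha' / (alpha' + beta'):

```python
def beta_posterior(alpha, beta, heads, tails):
    # Beta prior + Binomial likelihood -> Beta posterior (conjugacy)
    return alpha + heads, beta + tails

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

# uniform prior Beta(1, 1), then 30 heads in 50 flips
a, b = beta_posterior(1, 1, 30, 20)
print(a, b, round(beta_mean(a, b), 3))  # 31 21 0.596
```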
# + id="xqO9Qt1erXUL" colab_type="code" outputId="bd98919d-6fbb-4195-b69d-ac22907cadab" colab={"base_uri": "https://localhost:8080/", "height": 54}
"""
For example, if alpha and beta are both 1, it's just the uniform distribution
(centered at 0.5, very dispersed). If alpha is much larger than beta,
most of the weight is near 1. And if alpha is much smaller than beta,
most of the weight is near zero.
"""
""" Let's assume a priod distribution on p. Maybe we don't want to take
a stand on whether the coin if fair, and we choose alpha and beta
to both equal 1. Or maybe we have a strong belief that it lands heads
55% of the time, and we choose alpha equals 55, beta equals 45.
Then we flip our coin a bunch of times and see h heads and t tails.
Baye's Theorem ( and some mathematics that's too tedious for us to go
through here) tells us that the posterioir distribution for p is again
a Beta distribution with parameters alpha + h and beta + t. """
""" NOTE:
It is coincidence that the posterior distribution was again a Beta distribution.
The number of heads given by a Binomial distribution, and the Beta is the
conjugate prioir to the binomial distribution. This means that whenever you
update a Beta prior using observations from the corresponding binomial, you
will get back a Beta posterior.
"""
# + id="J5p7wtgMtk37" colab_type="code" colab={}
# end page 142
# + [markdown] id="ngPq4oc5TxqM" colab_type="text"
# **For Further Exploration**
#
#
#
# * We've barely scratched the surface of what you should know about statistical inference. The books recommended at the end of Chapter 5 go into a lot more detail.
# * Coursera offers a [Data Analysis and Statistical Inference](https://www.coursera.org/specializations/statistics) course that covers many of these topics.
#
#
# + id="X9aAPYu2ttKB" colab_type="code" colab={}
|
patch/ConfidenceIntervals_py.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
my_list = [1,2,3]
for item in my_list:
print(f'Item {item}')
# +
class PrintList:
def __init__(self, numberList):
self.numberList = numberList
def print_list(self):
for item in self.numberList:
print(f'Item {item}')
A = PrintList(my_list)
A.print_list()
# -
# %load_ext autoreload
# %autoreload 2
from oop_lecture import print_list
from oop_lecture import Game
dir(Game)
help(Game)
game = Game()
game.round
game.round = 4
game.round
game.add_rounds()
game.round
game2 = Game()
game2.round
game = Game('Bruno', 'Harvey')
game.print_players()
# +
from oop_lecture import TicTacToe
#Show how TicTacToe class inherits parameters from game class
ttt = TicTacToe()
ttt.print_players()
print(ttt.round)
ttt.add_rounds()
print(ttt.round)
# -
# Show how TicTacToe class inherits methods from Game class
ttt.print_players()
ttt.winner()
# Python guidelines!!!
import this
|
module2-oop-code-style-and-reviews/DSPT1_LS312_OOP.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.6 64-bit (''mestrado_env'': venv)'
# name: python3
# ---
# # EDA & Feature Engineering
# @Author: <NAME>
#
# Goals: Investigate some distributions, correlations and associations. Find Multicollinearity, plot low dimension representations to find patterns, perform feature engineering and feature selection algorithms to find the best set of features to use in a classification task.
# +
# Libs
import os
import json
import importlib
import numpy as np
import pandas as pd
import utils.dev.feature as ft
import utils.dev.cleaning as cl
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
from operator import itemgetter
from statsmodels.graphics.mosaicplot import mosaic
from sklearn.manifold import TSNE
pd.set_option('display.max_columns', 100)
# -
# Paths and Filenames
DATA_INPUT_PATH = 'data/interim'
DATA_OUTPUT_PATH = DATA_INPUT_PATH
DATA_INPUT_NAME = 'train.csv'
DATA_OUTPUT_NAME = 'train_selected_features.csv'
df_twitter_train = pd.read_csv(os.path.join('..',DATA_INPUT_PATH, DATA_INPUT_NAME))
# # 1) Feature Engineering
# Let's create some features using the raw data we have.
df_twitter_train.columns
list_columns_colors = df_twitter_train.filter(regex='color').columns.tolist()
# Length of name
df_twitter_train['name'] = df_twitter_train['name'].apply(lambda x: len(x) if x is not np.nan else 0)
# Length of screen name
df_twitter_train['screen_name'] = df_twitter_train['screen_name'].apply(lambda x: len(x) if x is not np.nan else 0)
# has profile location
df_twitter_train['profile_location'] = df_twitter_train['profile_location'].apply(lambda x: 'true' if x is not np.nan else 'false')
# Length of description
df_twitter_train['description'] = df_twitter_train['description'].apply(lambda x: len(x) if x is not np.nan else 0)
# rate (friends_count/followers_count)
df_twitter_train['rate_friends_followers'] = df_twitter_train['friends_count']/df_twitter_train['followers_count']
df_twitter_train['rate_friends_followers'] = df_twitter_train['rate_friends_followers'].replace({np.inf:0})
# has background image
df_twitter_train['has_background_image'] = df_twitter_train['profile_background_image_url'].apply(lambda x: 'true' if x is not np.nan else 'false')
# has image
df_twitter_train['profile_image_url_https'] = df_twitter_train['profile_image_url_https'].apply(lambda x: 'true' if x is not np.nan else 'false')
# Length Unique Colors
df_twitter_train['unique_colors'] = df_twitter_train[list_columns_colors].stack().groupby(level=0).nunique()
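The `stack()`/`groupby(level=0)` idiom above is easy to misread; a small self-contained sketch (with made-up color values) shows what it computes, namely the number of distinct non-null colors per row:

```python
import pandas as pd

# Toy frame standing in for the profile color columns (values are made up)
colors = pd.DataFrame({
    "profile_background_color": ["000000", "1A1A1A", None],
    "profile_link_color":       ["000000", "FFFFFF", "ABCDEF"],
    "profile_text_color":       ["333333", "FFFFFF", "ABCDEF"],
})

# stack() turns each row into a (row, column) MultiIndexed Series;
# grouping on level 0 (the row) and taking nunique() counts the
# distinct non-null colors per profile.
unique_colors = colors.stack().groupby(level=0).nunique()
print(unique_colors.tolist())  # [2, 2, 1]
```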
df_twitter_train.columns
columns_to_drop_useless = ['id', 'id_str', 'location', 'created_at', 'profile_background_color', 'profile_background_image_url', 'profile_background_image_url_https', 'profile_image_url',
'profile_link_color', 'profile_sidebar_border_color', 'profile_sidebar_fill_color', 'profile_text_color']
df_twitter_train.drop(columns_to_drop_useless, axis=1, inplace=True)
# # 2) Feature Selection
dict_dtypes = cl.check_dtypes(df_twitter_train)
dict_dtypes
num_columns = dict_dtypes['int64']+dict_dtypes['float64']
cat_columns = dict_dtypes['object']
# ## 2.1) Chi2
df_chi2, columns_to_drop_chi2, logs = ft.chi_squared(df_twitter_train, y='label', cat_columns=cat_columns, significance=0.05)
columns_to_drop_chi2
df_chi2
# ## 2.2) Point-biserial Correlation
df_pb, columns_to_drop_pb = ft.point_biserial(df_twitter_train, y='label', num_columns=num_columns, significance=0.05)
columns_to_drop_pb
df_pb
df_twitter_train.drop(columns_to_drop_chi2+columns_to_drop_pb, axis=1, inplace=True)
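The chi-squared selection above relies on project-specific helpers in `utils.dev.feature`; the underlying test statistic for a category-vs-label contingency table is simple to compute by hand. A minimal sketch with toy counts (not the Twitter data):

```python
def chi_squared_statistic(table):
    """Pearson chi-squared statistic for a contingency table.

    table: list of rows of observed counts, e.g. category x label.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Strong association: one category is mostly label 0, the other mostly label 1
print(chi_squared_statistic([[30, 10], [10, 30]]))  # 20.0
```

The statistic is then compared against a chi-squared distribution with (rows-1)(cols-1) degrees of freedom to get the p-value used for the 0.05 significance cut.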
# ## 2.3) Boruta
X = df_twitter_train.drop('label', axis=1)
y = df_twitter_train['label']
boruta_selector = ft.Boruta()
boruta_selector.fit(X, y, cat_columns=True, num_columns=True)
boruta_selector._columns_remove_boruta
columns_to_drop_boruta = boruta_selector._columns_remove_boruta
df_twitter_train.drop(columns_to_drop_boruta, axis=1, inplace=True)
df_twitter_train
# # 3) EDA
dict_dtypes = cl.check_dtypes(df_twitter_train)
num_columns = dict_dtypes['int64']+dict_dtypes['float64']
cat_columns = dict_dtypes['object']
for col_num in tqdm(num_columns):
plt.figure(figsize=(5,5))
    df_twitter_train[df_twitter_train['label']==0][col_num].hist(alpha = 0.5, color ='blue', label = 'Human')
    df_twitter_train[df_twitter_train['label']==1][col_num].hist(alpha = 0.5, color = 'red', label = 'Bot')
plt.legend()
plt.xlabel(col_num)
plt.show()
df_twitter_train
#df_twitter_train['friends_count_log'] = np.log(df_twitter_train['friends_count'])
# Create the plot grid
fig, ax = plt.subplots(1, 2, figsize=(16,8))
sns.scatterplot(x='followers_count', y='friends_count', hue='label', data=df_twitter_train, alpha=0.1, ax=ax[0],legend= False)
ax[0].set_yscale('log')
ax[0].set_xscale('log')
sns.scatterplot(x='unique_colors', y='name', hue='label', data=df_twitter_train, alpha=0.1, ax=ax[1])
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), ncol=1)
plt.show()
fig, ax = plt.subplots(1, 2, figsize=(16,8))
sns.scatterplot(x='listed_count', y='favourites_count', hue='label', data=df_twitter_train, alpha=0.1, ax=ax[0],legend= False)
ax[0].set_yscale('log')
ax[0].set_xscale('log')
sns.scatterplot(x='unique_colors', y='listed_count', hue='label', data=df_twitter_train, alpha=0.1, ax=ax[1])
ax[1].set_yscale('log')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), ncol=1)
plt.show()
for col in cat_columns:
mosaic(df_twitter_train, [col, 'label'])
plt.title(f'{col} vs target')
df_twitter_train['label'].value_counts(normalize=True)
df_twitter_train = df_twitter_train.replace({'False ':'FALSE', 'True ':'TRUE', 'true':'TRUE', 'false':'FALSE'})
X_tsne = df_twitter_train.replace({'TRUE':1, 'FALSE':0})
tsne = TSNE(n_components=2, verbose=1, random_state=123)
z = tsne.fit_transform(X_tsne)
X_tsne['tsne-2d-one'] = z[:,0]
X_tsne['tsne-2d-two'] = z[:,1]
X_tsne['label'] = df_twitter_train['label']
plt.figure(figsize=(16,10))
sns.scatterplot(
x='tsne-2d-one', y='tsne-2d-two',
hue='label',
palette=sns.color_palette('hls', 2),
data=X_tsne,
legend='full',
alpha=0.3
)
plt.show()
# # 4) Saving the Data
# Before we finish this step, we export the engineered and selected features; everything kept here will be used as a predictor in the modeling step.
df_twitter_train.columns
columns_to_keep = df_twitter_train.columns
# exporting final data to the interim folder
df_twitter_train.to_csv(os.path.join('..', DATA_OUTPUT_PATH, DATA_OUTPUT_NAME), index=False)
os.path.join('..', DATA_OUTPUT_PATH, DATA_OUTPUT_NAME)
# Source notebook: notebooks/eda_ft_engineering.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: cuda
# language: python
# name: cuda
# ---
import torch
import torch.nn.init
import torchvision
from torch.autograd import Variable
import torchvision.utils as utils
import torchvision.datasets as dsets
import torchvision.transforms as transforms
# +
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5,0.5,0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128,
shuffle=True, num_workers=10)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=10)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# -
#GPU
cuda = torch.device('cuda') # Default CUDA device
# +
import matplotlib.pylab as plt
import numpy as np
def imshow(img):
    img = img / 2 + 0.5     # unnormalize from [-1, 1] back to [0, 1]
npimg = img.numpy()
plt.imshow(np.transpose(npimg,(1,2,0)))
plt.show()
dataiter = iter(trainloader)
images, labels = next(dataiter)  # .next() was removed from DataLoader iterators
imshow(torchvision.utils.make_grid(images))
print(''.join('%5s' % classes[labels[j]] for j in range(4)))
# +
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 24, 5)
self.b1 = nn.BatchNorm2d(24)
self.pool = nn.MaxPool2d(2,2)
self.conv2 = nn.Conv2d(24, 64, 5)
self.b2 = nn.BatchNorm2d(64)
self.fc1 = nn.Linear(64 * 5 * 5, 240)
self.fc2 = nn.Linear(240, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.b1((self.conv1(x)))))
x = self.pool(F.relu(self.b2(self.conv2(x))))
x = x.view(-1, 64 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
#GPU
net = net.cuda()
net
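The `64 * 5 * 5` in `fc1` and in the `view` call is not arbitrary: it follows from the CIFAR-10 input size (32x32) passed through the two conv/pool stages. A quick sanity check of the arithmetic, using the standard convolution output formula (stride 1, no padding):

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Standard formula: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

size = 32                 # CIFAR-10 spatial size
size = conv_out(size, 5)  # conv1 (5x5): 32 -> 28
size = size // 2          # 2x2 max pool: 28 -> 14
size = conv_out(size, 5)  # conv2 (5x5): 14 -> 10
size = size // 2          # 2x2 max pool: 10 -> 5
flat_features = 64 * size * size
print(size, flat_features)  # 5 1600
```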
# +
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters())
# +
for epoch in range(10,30):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
inputs, labels = data
inputs, labels = Variable(inputs), Variable(labels)
#GPU
inputs = inputs.cuda()
labels = labels.cuda()
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
        if i % 128 == 127:    # print average loss every 128 mini-batches
            print('[%d, %5d] loss : %.3f' % (epoch + 1, i + 1, running_loss / 128))
running_loss = 0.0
correct = 0
total = 0
for data in testloader:
images, labels = data
#GPU
images = images.cuda()
labels = labels.cuda()
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
    print('Accuracy on 10000 test images : %d %%' % (100 * correct / total))
print('Finished Training')
torch.save(net.state_dict(), 'pkl/CNN_CIFAR-10.pkl')
# +
dataiter = iter(testloader)
images, labels = next(dataiter)
imshow(torchvision.utils.make_grid(images))
print('GroundTruth : ',' '.join('%8s' % classes[labels[j]] for j in range(4)))
print('Predicted    : ',' '.join('%8s' % classes[predicted[j]] for j in range(4)))
# -
# +
from torch.autograd import Variable
correct = 0
total = 0
for data in testloader:
    images, labels = data
    # Move the batch to the GPU to match the model's device
    images = images.cuda()
    labels = labels.cuda()
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()
print('Accuracy on 10000 test images : %d %%' % (100 * correct / total))
# +
class_correct = [0. for i in range(10)]
class_total = [0. for i in range(10)]
for data in testloader:
    images, labels = data
    # Move the batch to the GPU to match the model's device
    images = images.cuda()
    labels = labels.cuda()
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    c = (predicted == labels).squeeze()
    for i in range(4):
        label = labels[i].item()
        class_correct[label] += c[i].item()
        class_total[label] += 1
# -
class_correct
classes[1]
class_correct[2]
for i in range(10):
    print('Accuracy of %s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
# Source notebook: Tutorials/.ipynb_checkpoints/11_CNN_CIFAR-10-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## <span style="color:Blue"> Feature Selection : Embedded Method or Intrinsic or Implicit Method
#
# - Youtube Video Explanation for Regularization Feature selection : https://youtu.be/uLBqlU9Q3F8
# - Youtube Video Explanation for Tree based Feature selection : https://youtu.be/OIkAx9OTjvA
# Wrapper methods provide better results in terms of performance, but they also cost a lot of computation time and resources. What if we could include the feature selection process in the model training itself? That could lead us to even better features for that model, in a shorter amount of time. This is where embedded methods come into play.
#
# **Embedded methods perform feature selection during model training, which is why we call them embedded methods.**
#
# **Embedded Methods: Advantages**
#
# The embedded method solves both issues we encountered with the filter and wrapper methods by combining their advantages. Here’s how:
#
# - They take into consideration the interaction of features like wrapper methods do.
# - They are faster like filter methods.
# - They are more accurate than filter methods.
# - They find the feature subset for the algorithm being trained.
# - They are much less prone to overfitting.
# **Feature Selection Process: Recursive Feature Elimination**
#
# First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through a coef_ attribute or through a feature_importances_ attribute. Then, the least important features are pruned from the current set of features. That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached. It then ranks the features based on the order of their elimination.
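The recursive elimination procedure described above can be sketched in a few lines of plain Python. This is a toy illustration, not scikit-learn's `RFE`; the `score_fn` callback is a hypothetical stand-in for refitting the estimator and reading `coef_` / `feature_importances_` on the remaining features each round:

```python
def recursive_feature_elimination(features, score_fn, n_select):
    """Toy RFE: repeatedly drop the least important feature.

    Returns the selected features and sklearn-style ranks
    (selected = 1, last eliminated = 2, first eliminated = highest).
    """
    remaining = list(features)
    eliminated = []
    while len(remaining) > n_select:
        # Re-score on the pruned set, then drop the weakest feature
        worst = min(remaining, key=lambda f: score_fn(f, remaining))
        remaining.remove(worst)
        eliminated.append(worst)
    ranking = {f: 1 for f in remaining}
    for i, f in enumerate(reversed(eliminated)):
        ranking[f] = i + 2
    return remaining, ranking

# Fixed importances for the demo (a real run would refit each round)
scores = {"a": 0.5, "b": 0.1, "c": 0.3, "d": 0.4}
selected, ranks = recursive_feature_elimination(
    scores, lambda f, rem: scores[f], n_select=2)
print(selected, ranks)  # ['a', 'd'] {'a': 1, 'd': 1, 'c': 2, 'b': 3}
```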
# ## <span style="color:green"> Feature Selection using Regularization
#
# As model complexity increases, the bias of the model decreases and variance increases (and vice-versa). By using various regularization techniques, we can try to achieve low training and testing error so that we’re able to trade-off bias and variance perfectly.
#
# Regularization in machine learning adds a penalty to the different parameters of a model to reduce its freedom(avoid overfitting). This penalty is applied to the coefficient that multiplies each of the features in the linear model, and is done to avoid overfitting, make the model robust to noise, and to improve its generalization.
#
# There are three main types of regularization for linear models:
# 1. lasso regression or L1 regularization
# 2. ridge regression or L2 regularization
# 3. elastic nets or L1/L2 regularization
# #### <span style="color:red"> Lasso Regression L1 Regularisation
#
# Lasso is similar to Ridge in that it also adds a penalty, but instead of the squared slope/coefficient/weight, it adds the absolute value of the slope/weight as the penalty to the sum-of-squared-errors loss function.
#
# <img src="https://raw.githubusercontent.com/atulpatelDS/Machine_Learning/master/Images/Lasso.png" width="240" height="100" align="left"/>
# If we fit linear regressions with both Lasso and Ridge regularization over a range of the penalty parameter lambda (alpha), we notice that Lasso quickly drives the coefficient of X to exactly zero, whereas Ridge only shrinks it toward zero; even with lambda as large as 100 or 1000, Ridge cannot make it exactly zero.
# In this way, Lasso can eliminate unneeded features very quickly.
# **A tuning parameter, λ controls the strength of the L1. λ is basically the amount of shrinkage:**
#
# - When λ = 0, no parameters are eliminated; the estimate equals the ordinary linear regression fit.
# - As λ increases, more and more coefficients are set to zero and eliminated (theoretically, when λ = ∞, all coefficients are eliminated).
# - As λ increases, bias increases.
# - As λ decreases, variance increases.
#
# If an intercept is included in the model, it is usually left unchanged.
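The mechanism by which Lasso zeroes coefficients is the soft-thresholding operator: under an orthonormal design, each coefficient is shrunk toward zero by λ and clipped at exactly zero once its magnitude falls below λ. A minimal sketch:

```python
def soft_threshold(beta, lam):
    """Lasso's coordinate update under an orthonormal design:
    shrink |beta| by lam, and set it to exactly 0 once |beta| <= lam."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

# Small coefficients are eliminated outright; large ones are shrunk.
print([soft_threshold(b, 0.5) for b in (2.0, 0.3, -0.2, -1.5)])
# [1.5, 0.0, 0.0, -1.0]
```

Ridge's analogous update divides by (1 + λ) instead, which shrinks coefficients but never reaches zero, matching the L2 behavior described below.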
# #### <span style="color:red"> L2 regularization
#
# It does not set coefficients exactly to zero, it only shrinks them toward zero; that is why we use only L1 in feature selection.
#
# #### <span style="color:red"> L1/L2 regularization or Elastic Nets
#
# It is a combination of the L1 and L2. It incorporates their penalties, and therefore we can end up with features with zero as a coefficient—similar to L1.
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier as knn
from sklearn.linear_model import LogisticRegression as LGR,Lasso
from sklearn.ensemble import RandomForestClassifier as rfc
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn import metrics
# Note: load_boston was deprecated and removed in scikit-learn 1.2;
# run this with an older scikit-learn or substitute another regression dataset.
from sklearn.datasets import load_boston
boston = load_boston()
bos = pd.DataFrame(boston.data, columns = boston.feature_names)
bos['Price'] = boston.target
X = bos.drop("Price", axis=1) # feature matrix
y = bos['Price'] # target feature
features = X.columns
features
X_train,X_test,y_train,y_test = train_test_split(X,y,random_state = 42,test_size =0.30)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# Lets apply the Lasso Model -- in Linear Dataset
lasso = Lasso(alpha=0.1)
lasso.fit(X_train,y_train)
coeff = lasso.coef_
coeff
df_coeff = pd.DataFrame({"features":features,"coeff":coeff})
df_coeff.sort_values("coeff")
# Lets plot the coeff with features
plt.plot(range(len(features)),coeff, color='red', linestyle='dashdot', linewidth = 1,marker='o',
markerfacecolor='blue', markersize=5)
plt.xticks(range(len(features)),features,rotation=60)
plt.ylabel("Features_Coeff")
plt.show()
# Use Bar chart to show coeff
df_coeff.set_index('coeff')
# sort in ascending order to better visualization.
df_coeff = df_coeff.sort_values('coeff')
# plot the feature coeff in bars.
plt.figure(figsize=(10,3))
plt.xticks(rotation=45)
sns.barplot(x="features",y= "coeff", data=df_coeff)
# ## <span style="color:green"> Tree-based Feature Selection
#
#
# Tree-based algorithms and models (e.g. random forests) are well-established algorithms that not only offer good predictive performance but can also provide us with what we call feature importance as a way to select features.
#
# Feature importance tells us which variables are more important in making accurate predictions on the target variable/class. In other words, it identifies which features are the most used by the machine learning algorithm in order to predict the target.
# Random forests provide us with feature importance using straightforward methods — mean decrease impurity and mean decrease accuracy.
#
#
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
# Import the RFE from sklearn library
from sklearn.feature_selection import RFE,SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn import metrics
import seaborn as sns
#Load the dataset #https://www.kaggle.com/burak3ergun/loan-data-set
df_loan = pd.read_csv("https://raw.githubusercontent.com/atulpatelDS/Data_Files/master/Loan_Dataset/loan_data_set.csv")
df_loan.head()
# Remove all null value
df_loan.dropna(inplace=True)
# drop the uninformative column ("Loan_ID")
df_loan.drop(labels=["Loan_ID"],axis=1,inplace=True)
df_loan.reset_index(drop=True,inplace=True)
#df_loan.info()
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
cols = df_loan.columns.tolist()
for column in cols:
if df_loan[column].dtype == 'object':
df_loan[column] = le.fit_transform(df_loan[column])
X = df_loan.iloc[:,0:-1]
y = df_loan["Loan_Status"]
X.shape,y.shape
X_train,X_test,y_train,y_test = train_test_split(X,y,random_state = 42,test_size =0.30)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# Without feature selection, check accuracy with a random forest
rf_w = RandomForestClassifier(random_state=100, n_estimators=50)
rf_w.fit(X_train, y_train)
y_pred_rf_w = rf_w.predict(X_test)
metrics.accuracy_score(y_test,y_pred_rf_w)
# +
# get the importance of the resulting features.
importances = rf_w.feature_importances_
# create a data frame for visualization.
final_df = pd.DataFrame({"Features": X_train.columns, "Importances":importances})
final_df.set_index('Importances')
# sort in ascending order to better visualization.
final_df = final_df.sort_values('Importances')
# plot the feature importances in bars.
plt.figure(figsize=(10,3))
plt.xticks(rotation=45)
sns.barplot(x="Features",y= "Importances", data=final_df)
# +
# With feature selection, check accuracy with a random forest
# The following example shows how to retrieve the 7 most informative features
model_tree = RandomForestClassifier(n_estimators=100,random_state=42)
# use RFE to eliminate the least important features
sel_rfe_tree = RFE(estimator=model_tree, n_features_to_select=7, step=1)
X_train_rfe_tree = sel_rfe_tree.fit_transform(X_train, y_train)
print(sel_rfe_tree.get_support())
print(sel_rfe_tree.ranking_)
#Reduce X to the selected features and then predict using the predict
y_pred_rf = sel_rfe_tree.predict(X_test)
metrics.accuracy_score(y_test,y_pred_rf)
# -
# The features which were selected are rank 1.
# find the number of selected features with the help of the following script:
selected_cols = [column for column in X_train.columns if column in X_train.columns[sel_rfe_tree.get_support()]]
selected_cols
X_train.columns
# Source notebook: Feature_Engineering/Feature_Selection_Tutorial_10_11_Embedded_Method.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ising fitter for capped homopolymer repeat proteins.
#
# Authors: <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
#
# This notebook performs an Ising model fit to consensus Ankyrin repeat proteins (cANK). It reads data from Aviv data files, converts the data to normalized unfolding transitions, generates partition functions and expressions for fraction folded, and uses these expressions to fit the normalized transitions. Data and fits are plotted in various ways, and bootstrap analysis is performed. Correlation plots are generated for pairs of bootstrap parameter values.
# ## Imports, path, and project name
#
# Path and project name should be set by the user. Note that because of the kernel restart below, these must be specified in subsequent scripts, along with any other imports that are needed.
# +
import numpy as np
import glob
import csv
import json
import os
import time
import pandas as pd
import sympy as sp
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import lmfit
import math
path = os.getcwd() # change this to package path once we make setup.py
proj_name = 'cANK'
# -
# ## Data conversion.
#
# Data are read from an Aviv.dat file.
#
# Outputs are
#
# 1. A numpy data file for each melt, containing [denaturant], normalized signal, construct ID, and melt ID.
#
# 2. A list of constructs.
#
# 3. A list of melts.
# +
def extract_cd_signal_from_aviv(filepath):
"""Extracts the X,Y values (denaturant, CD signal) from an Aviv .dat file
Accepts:
-filepath: str the filepath location of the .dat file
Returns:
- np.array(float[])
"""
xylist = []
    with open(filepath, 'r') as f:
lines = f.read().splitlines() #define the beginning and end of the data
begin = 0
end = 0
while not lines[begin] == '$DATA':
begin = begin + 1
begin = begin + 4
while not lines[end] == '$ENDDATA':
end = end + 1
for row in range(begin, end - 1): #extract the [denat] and CD signal
line = lines[row]
n = line.split()
xylist.append([float(n[0]), float(n[1])])
return np.array(xylist)
def normalize_y_values(xy):
"""Normalizes the y values of the signal
Accepts: np.array(float[])
Returns: float[]
"""
maxval = max(xy[:,1])
minval = min(xy[:,1])
normylist = [float(((xy[i,1] - maxval)/(minval - maxval))) for i in range(len(xy))]
return normylist
def organize_constructs_by_name(melts):
'''
This loop puts melts in order of type (NRxC, NRx, RxC) and length. This is useful for the
plotting script below, putting the by_melt legends in a sensible order
Accepts: str[] - construct names
Returns: str[] - ordered construct names
'''
NRClist = []
NRlist = []
RClist = []
melts.sort() # Puts in order based on length
for melt in melts:
if melt[0] == 'N':
if melt[-3] == 'C':
NRClist.append(melt)
else:
NRlist.append(melt)
else:
RClist.append(melt)
melts = NRClist + NRlist + RClist
return melts
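Note the direction of the normalization in `normalize_y_values`: the maximum signal maps to 0 and the minimum to 1 (for a CD unfolding melt where the folded state has the most negative signal, an assumption here, 1 then corresponds to folded). A tiny standalone check of the same formula:

```python
def normalize(signal):
    # Same formula as normalize_y_values: (y - max) / (min - max)
    hi, lo = max(signal), min(signal)
    return [(y - hi) / (lo - hi) for y in signal]

print(normalize([0.0, -5.0, -10.0]))  # [0.0, 0.5, 1.0]
```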
# +
start = time.time()
den_nsig_const_melt = []
constructs = [] # List of constructs used to build partition functions and
melts = [] # List of melts to be used in fitting.
den_nsig_const_melt_df = []
# Gets file names, and extracts information including construct name, melt number.
for num, filename in enumerate(glob.glob(os.path.join(path, "NRC_data", "*.dat"))):
num = num + 1
base = os.path.basename(filename)
melt = base.split(".")[0]
# Store the names of each construct to map to partition functions
construct_name = melt[:-2]
if construct_name not in constructs:
constructs.append(construct_name)
# Reads the data portion of Aviv file
xyarray = extract_cd_signal_from_aviv(filename)
# Normalize the y-values from 0-1
normylist = normalize_y_values(xyarray)
# a melt number to use as an ID for fitting in ising script.
single_melt_dncm = []
for i in range(len(xyarray)):
x_y_name_num = [xyarray[i,0], normylist[i], construct_name, num]
den_nsig_const_melt.append(x_y_name_num)
single_melt_dncm.append(x_y_name_num)
# Build a numpy array for each melt and output for Ising fitter.
# Columns are denaturant, normalized CD, construct, melt number.
melt_array = np.array(single_melt_dncm)
np.save(os.path.join(path, melt), melt_array) # Writes an npy file to disk for each melt.
melt_df = pd.DataFrame(melt_array, columns=['denat','signal','construct_melt','dataset'])
den_nsig_const_melt_df.append(melt_df)
melts.append(melt)
den_nsig_const_melt_df = pd.concat(den_nsig_const_melt_df)
den_nsig_const_melt_df.to_csv(os.path.join(path, f"{proj_name}_combined_data.csv"), index=False, header=False)
melts = organize_constructs_by_name(melts)
# Write out the results.
with open(os.path.join(path, f"{proj_name}_constructs.json"), 'w') as r:
json.dump(constructs, r)
with open(os.path.join(path, f"{proj_name}_melts.json"), 'w') as s:
json.dump(melts, s)
stop = time.time()
runtime = stop - start
print('\nThe elapsed time was ' + str(runtime) + ' sec')
# -
den_nsig_const_melt_df
# ## Generate a partition function and fraction folded expressions for fitting.
#
# Inputs the constructs.json, melts.json, and processed .npy data files from the data processing script above.
#
# Generates a dictionary of partition functions using the capped homopolymer 1D-Ising model, and converts these to dictionaries of fraction-folded expressions (**fraction_folded_dict**) for fitting by partial differentiation. Manipulations are done using the sympy module, which allows symbolic math operations. This is important for partial differentiation, but also for "simplification" of the fraction-folded expressions. This simplification factors common terms, significantly decreasing the time it takes to fit and bootstrap below. The fraction-folded dictionary is exported in json format.
#
# Because the numpy exponential function (np.exp) gets reassigned in this script, and I cannot figure out how to undo this, the kernel must be restarted at the bottom of the script (exit()). The user will be prompted to accept.
#
# Though the path, project name, and most (but not all imports) are redundant with the command above, the kernel restart at the end of this script can create problems, if the script is run more than once. For this reason I am keeping them associated with this script (and with subsequent scripts--fitting, plotting, etc).
#
# Note that on 2020_05_05, I am changing the equation that gives the denaturant dependence of DGi to DGi + mi denat in the three equations for N, R, and C. This corresponds to a positive m-value (free energies become more positive with denaturant). Also change initial guess in the fitting cell.
# +
proj_name = 'cANK'
start = time.time()
print('Generating partition functions and fraction folded expressions. This may take a minute...')
# Parameters for partition function calculation. Note these are sympy symbols.
RT = sp.Symbol('RT')
dGN = sp.Symbol('dGN')
dGR = sp.Symbol('dGR')
dGC = sp.Symbol('dGC')
mi = sp.Symbol('mi')
denat = sp.Symbol('denat')
Kn = sp.Symbol('Kn')
Kr = sp.Symbol('Kr')
Kc = sp.Symbol('Kc')
dGinter = sp.Symbol('dGinter')
W = sp.Symbol('W')
#np.exp = sp.Function('np.exp')
exp = sp.Function('np.exp')
with open(os.path.join(path, f'{proj_name}_constructs.json'), 'r') as cons:
constructs = json.load(cons)
# define matrices and end vectors to be used to calculate partition functions
begin = sp.Matrix([[0,1]])
N = sp.Matrix([[(Kn*W),1],[Kn,1]])
R = sp.Matrix([[(Kr*W),1],[Kr,1]])
C = sp.Matrix([[(Kc*W),1],[Kc,1]])
end = sp.Matrix([[1],[1]])
# Build dictionaries of partition functions, partial derivs with respect
# to K, and fraction folded.
q_dict = {}
dqdKn_dict = {}
dqdKr_dict = {}
dqdKc_dict = {}
frac_folded_dict = {}
# Number of repeats of each type. Seems like they should be floats, but
# I get an error in the matrix multiplication (q_dict) if they are declared to be.
for construct in constructs:
# Make partition function dictionary and expressions for fraction folded.
# Note, only one pf is generated per construct, even when there are multiple melts.
matrixlist = construct.split('_')
q_dict[construct + '_q'] = begin
for i in range(0,len(matrixlist)):
num_Ni = 0
num_Ri = 0
num_Ci = 0
if matrixlist[i] == 'N':
num_Ni=1
if matrixlist[i] == 'R':
num_Ri=1
if matrixlist[i] == 'C':
num_Ci=1
q_dict[construct + '_q'] = q_dict[construct + '_q'] *\
np.linalg.matrix_power(N, num_Ni) * np.linalg.matrix_power(R, num_Ri) *\
np.linalg.matrix_power(C, num_Ci)
q_dict[construct + '_q'] = q_dict[construct + '_q'] * end
# Next two lines convert from sp.Matrix to np.array to something else.
# Not sure the logic here, but it works.
q_dict[construct + '_q'] = np.array(q_dict[construct + '_q'])
q_dict[construct + '_q'] = q_dict[construct + '_q'].item(0)
# Partial derivs wrt Kn dictionary.
dqdKn_dict[construct + '_dqdKn'] \
= sp.diff(q_dict[construct + '_q'], Kn)
# Partial derivs wrt Kr dictionary.
dqdKr_dict[construct + '_dqdKr'] \
= sp.diff(q_dict[construct + '_q'], Kr)
# Partial derivs wrt Kc dictionary.
dqdKc_dict[construct + '_dqdKc'] \
= sp.diff(q_dict[construct + '_q'], Kc)
# Fraction folded dictionary.
frac_folded_dict[construct + '_frac_folded'] \
= (Kn/( q_dict[construct + '_q']) * dqdKn_dict[construct + '_dqdKn'] \
+ Kr/(q_dict[construct + '_q']) * dqdKr_dict[construct + '_dqdKr'] \
+ Kc/( q_dict[construct + '_q']) * dqdKc_dict[construct + '_dqdKc']) \
/ (len(matrixlist))
# The loop below replaces K's and W's the fraction folded terms in the
# dictionary with DGs, ms, and denaturant concentrations. The simplify line
# is really important for making compact expressions for fraction folded.
# This simplification greatly speeds up fitting. The last line
# converts from a sympy object to a string, to allow for json dump.
for construct in frac_folded_dict:
frac_folded_dict[construct] = frac_folded_dict[construct].subs({
Kn:(exp(-((dGN + (mi*denat))/RT))),
Kr:(exp(-((dGR + (mi*denat))/RT))),
Kc:(exp(-((dGC + (mi*denat))/RT))),
W:(exp(-dGinter/RT)) })
frac_folded_dict[construct] = sp.simplify(frac_folded_dict[construct])
frac_folded_dict[construct] = str(frac_folded_dict[construct])
with open(os.path.join(path, f'{proj_name}_frac_folded_dict.json'), 'w') as f:
json.dump(frac_folded_dict, f)
stop = time.time()
runtime = stop - start
print('\nThe elapsed time was ' + str(runtime) + ' sec')
# -
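As a quick numeric sanity check that the transfer-matrix product reproduces the brute-force partition function, consider a toy two-repeat homopolymer with per-repeat equilibrium constant K and interfacial weight W (arbitrary values, using the same matrix convention as above: W multiplies the folded-folded diagonal entry):

```python
# Plain-Python matrix multiply so the check is dependency-free
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

K, W = 2.0, 3.0
begin = [[0.0, 1.0]]
M = [[K * W, 1.0], [K, 1.0]]   # same layout as the N/R/C matrices above
end = [[1.0], [1.0]]

# Transfer-matrix partition function for two identical repeats
q = matmul(matmul(matmul(begin, M), M), end)[0][0]

# Brute-force enumeration of the four states: uu, fu, uf, ff
brute = 1 + 2 * K + K**2 * W
print(q, brute)  # 17.0 17.0
```

The ff state picks up one interfacial weight W because the two folded repeats are adjacent, which is exactly what the off-diagonal structure of the matrix encodes.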
# ## Calculate the rank of the coefficient matrix
#
# The construct list is used to build a matrix of coefficients for each energy term. The user must input a list containing the thermodynamic parameters. In principle, these could be extracted from the SymPy symbols above, or maybe from the initial guesses list below, but that would require a fit to be done first, which is probably not a good idea for models that have incomplete rank.
# +
with open(os.path.join(path, f'{proj_name}_constructs.json'), 'r') as cons:
constructs = json.load(cons)
num_constructs = len(constructs)
thermo_param_list = ['dGN','dGR','dGC','dGinter']
num_params = len(thermo_param_list)
coeff_matrix = np.zeros((num_constructs, num_params))
row = 0
for construct in constructs:
repeats_list = construct.split('_')
for repeat in repeats_list:
if repeat == 'N':
coeff_matrix[row, 0] = coeff_matrix[row, 0] + 1
elif repeat == 'R':
coeff_matrix[row, 1] = coeff_matrix[row, 1] + 1
else:
coeff_matrix[row, 2] = coeff_matrix[row, 2] + 1
coeff_matrix[row, 3] = len(repeats_list) - 1
row = row + 1
rank = np.linalg.matrix_rank(coeff_matrix)
if rank == num_params:
    print(f"\nThe coefficient matrix has full column rank (r={rank})\n")
else:
    print(f"\nThe coefficient matrix has incomplete column rank (r={rank}).")
    print("You should revise your model or include the necessary constructs to obtain full rank.\n")
# -
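For intuition, here is the same rank computation on a hypothetical four-construct design (N_R_C, N_R, R_C, R_R) against the parameters (dGN, dGR, dGC, dGinter); as in the cell above, the interface column is the repeat count minus one:

```python
import numpy as np

# Rows: constructs N_R_C, N_R, R_C, R_R (a made-up design)
# Cols: dGN, dGR, dGC, dGinter
coeffs = np.array([
    [1, 1, 1, 2],
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [0, 2, 0, 1],
])
print(np.linalg.matrix_rank(coeffs))  # 4: all four parameters are identifiable
```

Dropping any construct leaves only three rows, fewer equations than the four parameters, so the rank test above would then flag the model as under-determined.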
# ## Fitting the data with the Ising model
#
# Processed data files are imported along with the fraction-folded dictionary and construct and melt lists. The fit is performed with the lmfit module, which has extra functionality over fitting routines in scipy.
#
# Note that if your initial guesses are poor, the fit may be slowed significantly or the fit may not converge.
#
# Fitted thermodynamic parameters are outputted to the screen and are written to a csv file. Baseline parameters are also written to a csv file.
# +
print("\nFitting the data...\n")
start = time.time()
plt.close()
plt.clf()
RT = 0.001987 * 298.15 # R in kcal/mol/K, T in Kelvin.
# Dictionary of frac folded eqns from partition function generator script.
with open(os.path.join(path, f'{proj_name}_frac_folded_dict.json'), 'r') as ffd:
frac_folded_dict = json.load(ffd)
with open(os.path.join(path, f'{proj_name}_constructs.json'), 'r') as construct:
constructs = json.load(construct)
with open(os.path.join(path, f'{proj_name}_melts.json'), 'r') as m:
melts = json.load(m)
num_melts = len(melts)
num_constructs = len(constructs)
melt_data_dict = {melt: np.load(os.path.join(path, f'{melt}.npy')) for melt in melts}
# Compile fraction folded expressions.
comp_frac_folded_dict = {}
for construct in constructs:
frac_folded_string = frac_folded_dict[construct + '_frac_folded']
comp_frac_folded = compile(frac_folded_string, '{}_comp_ff'.format(construct), 'eval')
    comp_frac_folded_dict[construct + '_comp_ff'] = comp_frac_folded
# CREATE INITIAL GUESSES
# First, thermodynamic parameters. These are Global.
init_guesses = lmfit.Parameters()
init_guesses.add('dGN', value = 6)
init_guesses.add('dGR', value = 5)
init_guesses.add('dGC', value = 6)
init_guesses.add('dGinter', value = -12)
init_guesses.add('mi', value = 1.0)
# Next, baseline parameters. These are local.
for melt in melts:
init_guesses.add('af_{}'.format(melt), value=0.02)
init_guesses.add('bf_{}'.format(melt), value=1)
init_guesses.add('au_{}'.format(melt), value=0.0)
init_guesses.add('bu_{}'.format(melt), value=0.0)
# Transfers init_guesses to params for fitting. Note this is an alias, not a copy.
params = init_guesses
def fitting_function(params, denat, frac_folded, melt):
    # Local baseline parameters for this melt. The global thermodynamic
    # parameters enter through frac_folded, which is evaluated by the caller.
    af = params['af_{}'.format(melt)].value
    bf = params['bf_{}'.format(melt)].value
    au = params['au_{}'.format(melt)].value
    bu = params['bu_{}'.format(melt)].value
    return ((af * denat) + bf) * frac_folded + (((au * denat) + bu) * (1 - frac_folded))
# Objective function creates an array of residuals to be used by lmfit minimize.
def objective(params):
resid_dict = {}
dGN = params['dGN'].value
dGR = params['dGR'].value
dGC = params['dGC'].value
dGinter = params['dGinter'].value
mi = params['mi'].value
for melt in melts:
denat = melt_data_dict[melt][:,0] # A numpy array of type str
norm_sig = melt_data_dict[melt][:,1] # A numpy array of type str
denat = denat.astype(float) # A numpy array of type float
norm_sig = norm_sig.astype(float) # A numpy array of type float
        frac_folded = eval(comp_frac_folded_dict[melt[:-2] + '_comp_ff'])
        # eval uses denat and the dG/mi values defined above; melt[:-2] strips the _1, _2, etc. suffix to get the construct name.
af = params['af_{}'.format(melt)].value
bf = params['bf_{}'.format(melt)].value
au = params['au_{}'.format(melt)].value
bu = params['bu_{}'.format(melt)].value
resid = norm_sig - fitting_function(params, denat, frac_folded, melt)
resid_dict[melt + '_resid'] = resid
residuals = np.concatenate(list(resid_dict.values()))
return residuals
# Fit with lmfit
result = lmfit.minimize(objective, init_guesses)
fit_resid = result.residual
# Print out features of the data, the fit, and optimized param values
print("There are a total of {} data sets.".format(num_melts))
print("There are {} observations.".format(result.ndata))
print("There are {} fitted parameters.".format(result.nvarys))
print("There are {} degrees of freedom. \n".format(result.nfree))
print("The sum of squared residuals (SSR) is: {0:7.4f}".format(result.chisqr))
print("The reduced SSR (SSR/DOF): {0:8.6f} \n".format(result.redchi))
dGN = result.params['dGN'].value
dGR = result.params['dGR'].value
dGC = result.params['dGC'].value
dGinter = result.params['dGinter'].value
mi = result.params['mi'].value
print('Optimized parameter values:')
print('dGN = {0:8.4f}'.format(result.params['dGN'].value))
print('dGR = {0:8.4f}'.format(result.params['dGR'].value))
print('dGC = {0:8.4f}'.format(result.params['dGC'].value))
print('dGinter = {0:8.4f}'.format(result.params['dGinter'].value))
print('mi = {0:8.4f}'.format(result.params['mi'].value))
print("\nWriting best fit parameter and baseline files")
# Compile a list of optimized Ising params and write to file.
fitted_ising_params = [["dGN", result.params['dGN'].value],
["dGR", result.params['dGR'].value],
["dGC", result.params['dGC'].value],
["dGinter", result.params['dGinter'].value],
["mi", result.params['mi'].value],
["Chi**2",result.chisqr],
["RedChi",result.redchi]]
with open(os.path.join(path, f'{proj_name}_fitted_Ising_params.csv'), "w") as n:
writer = csv.writer(n, delimiter=',')
writer.writerows(fitted_ising_params)
n.close()
# Compile a list of optimized baseline params and write to file.
fitted_base_params = []
for melt in melts:
af = result.params['af_%s' % (melt)].value
bf = result.params['bf_%s' % (melt)].value
au = result.params['au_%s' % (melt)].value
bu = result.params['bu_%s' % (melt)].value
fitted_base_params.append([melt, af, bf, au, bu])
with open(os.path.join(path, f'{proj_name}_fitted_baseline_params.csv'), "w") as m:
writer = csv.writer(m, delimiter=',')
writer.writerows(fitted_base_params)
m.close()
stop = time.time()
runtime = stop - start
print('\nThe elapsed time was ' + str(runtime) + ' sec')
# -
# ## Plotting the results of the fit
#
# This cell generates four plots. Two show "normalized" data (the data that were actually fit in the script above) along with the fits; the other two show fraction-folded data and fits. In each pair, one plot shows all melts (ideally multiple melts per construct), allowing every fit to be inspected; the other shows only a single melt per construct (the first one in each construct's melt list), simplifying the plot.
#
# The resulting plots are dumped to the screen below the cell, and are saved as png files.
#
# Note that this script is meant to be run after the fitting script. If the fit has not been performed in the current session (or the kernel was restarted after the fit--*not usually the case*), the imports, data, and fitted parameters would all have to be reloaded. That would be a pain, so just re-run the fit if you find yourself in this situation.
# +
print("\nPlotting results...\n")
# The function "baseline_adj" gives an adjusted y value based on fitted baseline
# parameters (fraction folded).
def baseline_adj(y, x, params, construct):
    af = params['af_{}'.format(construct)].value
    bf = params['bf_{}'.format(construct)].value
    au = params['au_{}'.format(construct)].value
    bu = params['bu_{}'.format(construct)].value
    return (y - (bu + (au * x))) / ((bf + (af * x)) - (bu + (au * x)))
# Defining global best-fit parameters
dGN = result.params['dGN'].value
dGR = result.params['dGR'].value
dGC = result.params['dGC'].value
dGinter =result.params['dGinter'].value
mi = result.params['mi'].value
# The function fit_model used for plotting best-fit lines and for adding
# residuals to best-fit lines in bootstrapping. Normalized, not frac folded.
def fit_model(params, x, melt):
    denat = x
    af = params['af_{}'.format(melt)].value
    bf = params['bf_{}'.format(melt)].value
    au = params['au_{}'.format(melt)].value
    bu = params['bu_{}'.format(melt)].value
    # The global dG/mi values are picked up from the enclosing scope by the
    # precompiled expression; [:-2] strips the _1, _2, etc. from the melt id.
    frac_folded = eval(comp_frac_folded_dict[melt[:-2] + '_comp_ff'])
    return ((af * denat) + bf) * frac_folded + \
        (((au * denat) + bu) * (1 - frac_folded))
# Finding the maximum denaturant value out of all the melts to set the
# x-axis bound. Convert to float before taking the max: a lexicographic
# max over the stored strings would be wrong (e.g. '9.5' > '10.0').
denat_maxer = np.zeros(0)
for melt in melts:
    denat_maxer = np.concatenate((denat_maxer, melt_data_dict[melt][:, 0].astype(float)))
denat_max = float(denat_maxer.max())
denat_bound = np.around(denat_max, 1) + 0.2
# Denaturant values to use when evaluating fits. The third argument (300)
# below determines how smooth the fitted curve will be. This grid is used for
# the fraction-folded plots; for the normalized plots, a local set of points
# is used for each melt, so as not to extrapolate the baselines too far.
denat_fit = np.linspace(0, denat_bound, 300)
#defining a dictionary using the first melt of each construct (construct_1)
#Move this to the plotting part, and why not do this for all constructs?
construct1_data_dict = {}
for construct in constructs:
construct1_data_dict[construct] = np.load(os.path.join(path, f'{construct}_1.npy'))
# The four dictionaries below define lower and upper denaturant limits used
# for plotting normalized curves, so that overly long baseline extrapolations
# are not shown. Limits are computed for melts and for construct 1; they are
# then used to create 300-point synthetic baselines in the two dictionaries that follow.
melt_lower_denat_dict = {}
melt_upper_denat_dict = {}
for melt in melts:
    melt_denat = melt_data_dict[melt][:, 0].astype(float)  # convert before min/max: string min/max is lexicographic
    melt_lower_denat_dict[melt] = round(float(melt_denat.min())) - 0.2
    melt_upper_denat_dict[melt] = round(float(melt_denat.max())) + 0.2
construct1_lower_denat_dict = {}
construct1_upper_denat_dict = {}
for construct in constructs:
    construct1_denat = construct1_data_dict[construct][:, 0].astype(float)
    construct1_lower_denat_dict[construct] = round(float(construct1_denat.min())) - 0.2
    construct1_upper_denat_dict[construct] = round(float(construct1_denat.max())) + 0.2
melt_denat_synthetic_dict = {}
for melt in melts:
melt_denat_synthetic_dict[melt] = np.linspace(melt_lower_denat_dict[melt],
melt_upper_denat_dict[melt], 300)
construct1_denat_synthetic_dict = {}
for construct in constructs:
construct1_denat_synthetic_dict[construct] = np.linspace(construct1_lower_denat_dict[construct],
construct1_upper_denat_dict[construct], 300)
''' Global Plot Aesthetics'''
# Defining how the plots are colored
num_melt_colors = num_melts
num_construct_colors = num_constructs
coloration = plt.get_cmap('hsv')
# Dictionary defining title font
title_font = {
'family': 'arial',
'color': 'black',
'weight': 'normal',
'size': 16
}
# Dictionary defining label font
label_font = {
'family': 'arial',
'color': 'black',
'weight': 'normal',
'size': 14
}
'''First Plot: Fraction Folded by Melt'''
#extracting the melt data and creating plot lines for each melt
colorset = 0 # counter to control color of curves and points
for melt in melts:
colorset = colorset + 1
denat = melt_data_dict[melt][:,0] # A numpy array of type str
norm_sig = melt_data_dict[melt][:,1] # A numpy array of type str
denat = denat.astype(float) # A numpy array of type float
norm_sig = norm_sig.astype(float) # A numpy array of type float
y_adj = baseline_adj(norm_sig, denat, result.params, melt)
y_fit = fit_model(result.params, denat_fit, melt)
y_fit_adj = baseline_adj(y_fit, denat_fit, result.params, melt)
plt.plot(denat, y_adj, 'o', color = coloration(colorset/num_melt_colors),
label = melt[:-2] + ' melt ' + melt[-1])
plt.plot(denat_fit, y_fit_adj, '-', color = coloration(colorset/num_melt_colors))
#set axis limits
axes=plt.gca()
axes.set_xlim([-0.1, denat_bound])
axes.set_ylim([-0.1,1.1])
axes.set_aspect(5.5)
#plot aesthetics and labels
plt.legend(loc = 'center', bbox_to_anchor = (1.25, 0.5), fontsize=8)
plt.title('Fraction Folded by Melt', fontdict = title_font)
plt.xlabel('Denaturant (Molar)', fontdict = label_font)
plt.ylabel('Fraction Folded', fontdict = label_font)
#saving plot in individual doc
plt.savefig(os.path.join(path, f'{proj_name}_plot_frac_folded_by_melt.png'),\
dpi = 500, bbox_inches='tight')
#show plot in iPython window and then close
plt.show()
plt.close()
plt.clf()
'''Second Plot: Normalized Signal by Melt'''
colorset = 0
for melt in melts:
colorset = colorset + 1
denat = melt_data_dict[melt][:,0] # A numpy array of type str
norm_sig = melt_data_dict[melt][:,1] # A numpy array of type str
denat = denat.astype(float) # A numpy array of type float
norm_sig = norm_sig.astype(float) # A numpy array of type float
y_fit = fit_model(result.params, melt_denat_synthetic_dict[melt], melt)
plt.plot(denat, norm_sig, 'o', color=coloration(colorset/num_melt_colors),
label = melt[:-2] + ' melt ' + melt[-1])
plt.plot(melt_denat_synthetic_dict[melt], y_fit, '-', \
color=coloration(colorset/num_melt_colors))
#set axis limits
axes=plt.gca()
axes.set_xlim([-0.1, denat_bound])
axes.set_ylim([-0.1,1.1])
axes.set_aspect(5.5)
#plot aesthetics and labels
plt.legend(loc = 'center', bbox_to_anchor = (1.25, 0.5), fontsize=8)
plt.title('Normalized Signal by Melt', fontdict = title_font)
plt.xlabel('Denaturant (Molar)', fontdict = label_font)
plt.ylabel('Normalized Signal', fontdict = label_font)
#saving plot in individual doc
plt.savefig(os.path.join(path, f'{proj_name}_plot_normalized_by_melt.png'),\
dpi=500, bbox_inches='tight')
#show plot in iPython window and then close
plt.show()
plt.close()
plt.clf()
'''Third Plot: Fraction Folded by Construct'''
colorset = 0
for construct in constructs:
colorset = colorset + 1
    denat = construct1_data_dict[construct][:, 0].astype(float)     # denaturant values
    norm_sig = construct1_data_dict[construct][:, 1].astype(float)  # normalized signal
    y_adj = baseline_adj(norm_sig, denat, result.params, construct + '_1')
y_fit = fit_model(result.params, denat_fit, construct + '_1')
y_fit_adj = baseline_adj(y_fit, denat_fit, result.params, construct + '_1')
plt.plot(denat, y_adj, 'o', \
color=coloration(colorset/num_construct_colors), label = construct)
plt.plot(denat_fit, y_fit_adj, '-', \
color=coloration(colorset/num_construct_colors))
#set axis limits
axes=plt.gca()
axes.set_xlim([-0.1, denat_bound])
axes.set_ylim([-0.1,1.1])
axes.set_aspect(5.5)
#plot aesthetics and labels
plt.legend(loc = 'center', bbox_to_anchor = (1.15, 0.5), fontsize=8)
plt.title('Fraction Folded by Construct', fontdict = title_font)
plt.xlabel('Denaturant (Molar)', fontdict = label_font)
plt.ylabel('Fraction Folded', fontdict = label_font)
#saving plot in individual doc
plt.savefig(os.path.join(path, f'{proj_name}_plot_frac_folded_by_construct.png'),\
dpi=500, bbox_inches='tight')
#show plot in iPython window and then close
plt.show()
plt.close()
plt.clf()
'''Fourth Plot: Normalized Signal by Construct'''
colorset = 0
for construct in constructs:
colorset = colorset + 1
denat = construct1_data_dict[construct][:,0] # A numpy array of type str
norm_sig = construct1_data_dict[construct][:,1] # A numpy array of type str
denat = denat.astype(float) # A numpy array of type float
norm_sig = norm_sig.astype(float) # A numpy array of type float
y_fit = fit_model(result.params, construct1_denat_synthetic_dict[construct], \
construct + '_1')
plt.plot(denat, norm_sig, 'o', color = coloration(colorset/num_construct_colors),
label = construct)
plt.plot(construct1_denat_synthetic_dict[construct], y_fit, '-', \
color = coloration(colorset/num_construct_colors))
#set axis limits
axes=plt.gca()
axes.set_xlim([-0.1, denat_bound])
axes.set_ylim([-0.1,1.1])
axes.set_aspect(5.5)
#plot aesthetics and labels
plt.legend(loc = 'center', bbox_to_anchor = (1.15, 0.5), fontsize=8)
plt.title('Normalized Signal by Construct', fontdict = title_font)
plt.xlabel('Denaturant (Molar)', fontdict = label_font)
plt.ylabel('Normalized Signal', fontdict = label_font)
#saving plot in individual doc
plt.savefig(os.path.join(path, f'{proj_name}_plot_normalized_by_construct.png'),\
dpi=500, bbox_inches='tight')
#show plot in iPython window and then close
plt.show()
plt.close()
plt.clf()
# -
# ## Bootstrap analysis
#
# Asks the user to input the number of bootstrap iterations. Bootstrap parameters are stored in a list of lists (**bs_param_values**). After performing the specified number of iterations, bootstrapped thermodynamic parameters are written to a csv file.
#
# Again, bootstrapping is meant to be performed after fitting above. Otherwise, the data and the fit model will have to be re-imported, and the params list and objective function will need to be generated. Just run the fit again if needed.
#
# If needed, a brief pause (e.g. a `time.sleep` call) can be inserted every 50 or so bootstrap iterations to let things cool down.
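# The residual-resampling scheme used below can be illustrated on a toy problem: fit a model, pool its residuals, add residuals drawn with replacement back onto the fitted curve, and refit each synthetic data set. This sketch uses a straight line and numpy's `polyfit` purely for illustration; the actual analysis below applies the same idea to the Ising fit.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

# Initial fit and its residuals.
slope, intercept = np.polyfit(x, y, 1)
y_fit = slope * x + intercept
resid = y - y_fit

# Residual-resampling bootstrap: add residuals drawn with replacement
# to the fitted curve, then refit each synthetic data set.
boot_slopes = []
for _ in range(500):
    y_boot = y_fit + rng.choice(resid, size=resid.size, replace=True)
    boot_slope, _boot_intercept = np.polyfit(x, y_boot, 1)
    boot_slopes.append(boot_slope)

lo, hi = np.percentile(boot_slopes, [2.5, 97.5])  # 95% CI for the slope
```

# The spread of the bootstrapped parameters (here just the slope) estimates the parameter uncertainty, exactly as the percentile statistics computed for the Ising parameters later in this notebook.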
# +
'''BootStrap analysis'''
# Create list to store bootstrap iterations of values and define column titles
bs_param_values = []
bs_param_values.append(['Bootstrap Iter', 'dGN', 'dGR', 'dGC', 'dGinter', 'mi',
'redchi**2','bestchi**2'])
#total number of bootstrap iterations
bs_iter_tot = input("How many bootstrap iterations? ")
# bs_iter_tot = 10 # You would use this if you did not want user input from screen
bs_iter_count = 0 # Iteration counter
fit_resid_index= len(fit_resid) - 1
y_fitted_dict = {}
# Dictionary of 'true' normalized y values from fit at each denaturant value.
for melt in melts:
denat = melt_data_dict[melt][:,0] # A numpy array of type str
denat = denat.astype(float) # A numpy array of type float
y_fitted_dict[melt] = np.array(fit_model(result.params, denat, melt))
# Arrays to store bs fitted param values
dGN_vals = []
dGR_vals = []
dGC_vals = []
dGinter_vals = []
mi_vals = []
# Add residuals chosen at random (with replacement) to expected
# y values. Note-residuals are combined ACROSS melts.
for j in range(int(bs_iter_tot)):
    rand_resid_dict = {} # Clears the random residuals for each bootstrap iteration
bs_iter_count = bs_iter_count + 1
print("Bootstrap iteration {0} out of {1}".format(bs_iter_count,
bs_iter_tot))
for melt in melts:
rand_resid =[]
denat = melt_data_dict[melt][:,0] # A numpy array of type str
denat = denat.astype(float) # A numpy array of type float
        for x in range(0, len(denat)): # Creates a list of random residuals
            rand_int = np.random.randint(0, len(fit_resid))  # randint's upper bound is exclusive, so use the full length
rand_resid.append(fit_resid[rand_int])
rand_resid_dict[melt] = np.array(rand_resid)
        y_bootstrap = y_fitted_dict[melt] + rand_resid_dict[melt]
        z_max, z_min = y_bootstrap.max(), y_bootstrap.min()
        melt_data_dict[melt] = melt_data_dict[melt].astype(float)  # avoid writing floats into a string array, which silently truncates them
        melt_data_dict[melt][:, 1] = (y_bootstrap - z_min) / (z_max - z_min)
bs_result = lmfit.minimize(objective, init_guesses)
bs_chisqr = bs_result.chisqr
bs_red_chisqr= bs_result.redchi
dGN = bs_result.params['dGN'].value
dGR = bs_result.params['dGR'].value
dGC = bs_result.params['dGC'].value
dGinter = bs_result.params['dGinter'].value
mi = bs_result.params['mi'].value
# Store each value in a list for plotting and for downstream statistical analysis
dGN_vals.append(dGN)
dGR_vals.append(dGR)
dGC_vals.append(dGC)
dGinter_vals.append(dGinter)
mi_vals.append(mi)
    # Append bootstrapped global parameter values for output to a file
bs_param_values.append([bs_iter_count, dGN, dGR, dGC, dGinter, mi,
bs_red_chisqr,bs_chisqr])
with open(os.path.join(path, f'{proj_name}_bootstrap_params.csv'), "w") as n:
writer = csv.writer(n, delimiter = ',')
writer.writerows(bs_param_values)
n.close()
# -
# ## The next cell calculates statistical properties of bootstrap parameters and outputs a file
#
# I plan to merge this with the bootstrap cell, but it is much more convenient to code it separately.
#
# The structure that currently holds the bootstrap parameter values (*bs_param_values*) is a list of lists. So it needs to be converted to a numpy array, and it needs to have only values, not column heads, in order to do numerical calculations. Pandas would clearly be the right way to go with this, but not today.
#
# *path* (for writing out the data frame) is taken from the fitting cell above.
# +
bs_param_values_fullarray = np.array(bs_param_values)
bs_param_values_array = bs_param_values_fullarray[1:, 1:-2].astype(float) # End at -2 since last two columns
# are chi square statistics
bs_param_names = bs_param_values_fullarray[0][1:-2]
statistics = ['mean','median','stdev','2.5% CI','16.7% CI','83.3% CI','97.5% CI']
bs_statistics_df = pd.DataFrame(columns = statistics)
i = 0
for param in bs_param_names:
bs_statistics = []
bs_statistics.append(np.mean(bs_param_values_array[:,i]))
bs_statistics.append(np.median(bs_param_values_array[:,i]))
bs_statistics.append(np.std(bs_param_values_array[:,i]))
bs_statistics.append(np.percentile(bs_param_values_array[:,i],2.5))
bs_statistics.append(np.percentile(bs_param_values_array[:,i],16.7))
bs_statistics.append(np.percentile(bs_param_values_array[:,i],83.3))
bs_statistics.append(np.percentile(bs_param_values_array[:,i],97.5))
bs_statistics_df.loc[param] = bs_statistics
i = i + 1
bs_statistics_df.to_csv(os.path.join(path, f'{proj_name}_bootstrap_stats.csv'))
corr_coef_matrix = np.corrcoef(bs_param_values_array, rowvar = False)
corr_coef_df = pd.DataFrame(corr_coef_matrix, columns = bs_param_names, index = bs_param_names)
corr_coef_df.to_csv(os.path.join(path, f'{proj_name}_bootstrap_corr_coefs.csv'))
# -
bs_statistics_df
corr_coef_matrix
corr_coef_df
# ## Bootstrap histograms and correlation plots
#
# Plots are generated for the thermodynamic parameters of interest (currently, baseline parameters are not included, though this would not be hard to add). Histograms are generated for each parameter. Scatter plots are generated for each pair of parameters (not including self-correlation) and arrayed in a grid along with a linear fit. Shared axes are used in the grid to minimize the white space that would result from labelling each axis. Thinking of the output as a matrix, the histograms are on the main diagonal, and the correlation plots are the off-diagonal elements populating the upper triangle.
#
# The plot grid is dumped to the screen below, and is also written as a pdf file.
#
# As with the plotting and bootstrapping scripts above, this is meant to be run after the fitting script (and after the bootstrapping script immediately above). If you have not done that, re-run fit and bootstrap.
# +
# Specify the names of parameters to be compared to see correlation.
corr_params = ['dGN', 'dGR', 'dGC', 'dGinter', 'mi']
# These are a second set of parameter names, in the same order as corr_params.
# They are formatted using TeX-style names so that Deltas and subscripts will
# be rendered. They would not be good key names for dictionaries.
corr_param_labels = [r'$\Delta$G$_N$', r'$\Delta$G$_R$', r'$\Delta$G$_C$',
                     r'$\Delta$G$_{i, i-1}$', r'm$_i$']
num_corr_params = len(corr_params)
gridsize = num_corr_params # Determines the size of the plot grid.
# Dictionary of fitted parameter values.
corr_params_dict = {'dGN': dGN_vals, 'dGR': dGR_vals, 'dGC': dGC_vals,\
'dGinter': dGinter_vals, 'mi': mi_vals}
# PDF that stores a grid of the correlation plots
with PdfPages(os.path.join(path, f'{proj_name}_Corr_Plots.pdf')) as pdf:
fig, axs = plt.subplots(ncols=gridsize, nrows=gridsize, figsize=(12, 12))
    # Turn off axes on the lower triangle
    for row in range(1, gridsize):
        for col in range(row):
            axs[row, col].axis('off')
    # Histograms along the main diagonal, one per fitted parameter
hist_param_counter = 0
while hist_param_counter < num_corr_params:
hist_param_label = corr_param_labels[hist_param_counter]
hist_param = corr_params[hist_param_counter]
# Start fixing labels here
#plt.xticks(fontsize=8)
#axs[hist_param_counter, hist_param_counter].tick_params(fontsize=8)
#axs[hist_param_counter, hist_param_counter].yticks(fontsize=8)
axs[hist_param_counter, hist_param_counter].hist(corr_params_dict[hist_param])
axs[hist_param_counter, hist_param_counter].set_xlabel(hist_param_label,
fontsize=14, labelpad = 5)
hist_param_counter = hist_param_counter + 1
# This part generates the correlation plots
y_param_counter = 0
while y_param_counter < num_corr_params - 1:
# Pulls the parameter name for the y-axis label (with TeX formatting)
yparam_label = corr_param_labels[y_param_counter]
# Pulls the parameter name to be plotted on the y-axis
yparam = corr_params[y_param_counter]
        # Defines the position of the x parameter from the array of params.
        # The + 1 offset avoids correlating a parameter with itself.
x_param_counter = y_param_counter + 1
while (x_param_counter < num_corr_params):
#pulls the parameter name for the x-axis label (with TeX formatting)
xparam_label = corr_param_labels[x_param_counter]
# Pulls the parameter name to be plotted on the x-axis
xparam = corr_params[x_param_counter]
x_vals= corr_params_dict[xparam]
y_vals = corr_params_dict[yparam]
#plt.xticks(fontsize=8)
#plt.yticks(fontsize=8)
#plotting scatters with axes. +1 shifts a plot to the right from main diagonal
axs[y_param_counter, x_param_counter].plot(x_vals, y_vals, '.')
# The if statement below turns off numbers on axes if not the right column and
# not the main diagonal.
if x_param_counter < num_corr_params - 1:
axs[y_param_counter, x_param_counter].set_xticklabels([])
axs[y_param_counter, x_param_counter].set_yticklabels([])
if y_param_counter == 0: # Puts labels above axes on top row
axs[y_param_counter, x_param_counter].xaxis.set_label_position('top')
axs[y_param_counter, x_param_counter].set_xlabel(xparam_label,
labelpad = 10, fontsize=14)
axs[y_param_counter, x_param_counter].xaxis.tick_top()
if x_param_counter < num_corr_params - 1: # Avoids eliminating y-scale from upper right corner
axs[y_param_counter, x_param_counter].set_yticklabels([])
if x_param_counter == num_corr_params - 1: # Puts labels right of right column
axs[y_param_counter, x_param_counter].yaxis.set_label_position('right')
axs[y_param_counter, x_param_counter].set_ylabel(yparam_label,
rotation = 0, labelpad = 30, fontsize=14)
axs[y_param_counter, x_param_counter].set_xticklabels([])
axs[y_param_counter, x_param_counter].yaxis.tick_right()
            # Determine correlation coefficient and display under subplot title
# Note, there is no code that displays this value at the moment.
#corr_coef = np.around(np.corrcoef(x_vals, y_vals), 3)
#min and max values of the x param
x_min = min(x_vals)
x_max = max(x_vals)
#fitting a straight line to the correlation scatterplot
fit_array = np.polyfit(x_vals, y_vals, 1)
fit_deg1_coef = fit_array[0]
fit_deg0_coef = fit_array[1]
fit_x_vals = np.linspace(x_min, x_max, 10)
fit_y_vals = fit_deg1_coef*fit_x_vals + fit_deg0_coef
#plotting correlation line fits
axs[y_param_counter, x_param_counter].plot(fit_x_vals,
fit_y_vals)
plt.subplots_adjust(wspace=0, hspace=0)
x_param_counter = x_param_counter + 1
y_param_counter = y_param_counter + 1
pdf.savefig(bbox_inches='tight')
|
homopolymer_fit/ising_fitter_homopolymer.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span><ul class="toc-item"><li><span><a href="#Parametrized-SVD" data-toc-modified-id="Parametrized-SVD-1.1"><span class="toc-item-num">1.1 </span>Parametrized SVD</a></span></li><li><span><a href="#Computing-the-SVD-derivatives" data-toc-modified-id="Computing-the-SVD-derivatives-1.2"><span class="toc-item-num">1.2 </span>Computing the SVD derivatives</a></span></li></ul></li><li><span><a href="#Example:-Control-system" data-toc-modified-id="Example:-Control-system-2"><span class="toc-item-num">2 </span>Example: Control system</a></span></li></ul></div>
# +
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:85% !important; }</style>"))
import numpy
numpy.set_printoptions(edgeitems=30, linewidth=100000)
from matplotlib import pyplot as plt
# %matplotlib notebook
# -
# # Introduction
# The [singular value decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition) has many practical uses in science and engineering. It states that any real $m\times n$ matrix $A$ can be written as $A = U\Sigma V^T$ where:
# * $U$ and $V$ have [orthonormal columns](https://en.wikipedia.org/wiki/Orthogonal_matrix), with shapes $m\times d$ and $n\times d$ respectively (the "thin" SVD; in the full SVD they are square orthogonal matrices of shapes $m\times m$ and $n\times n$).
# * $\Sigma$ is a non-negative diagonal matrix with shape $d\times d$ where $d=\min(m,n)$. The diagonal elements $\Sigma_{ii}\equiv\sigma_i\geq0$ are sorted in descending order so that $\sigma_1 \geq \ldots \geq \sigma_d$.
#
# ## Parametrized SVD
#
# By taking a matrix $A(k)$ that depends on a parameter $k$ we can define functions $U(k)$, $\Sigma(k)$, $V(k)$ through the singular value decomposition $A(k) = U(k)\Sigma(k)V^T(k)$. It turns out that if $A(k)$ is continuous the components of the singular value decomposition are also continuous and differentiable with respect to $k$ in many cases*.
#
# \*We have to assume there is a unique choice of $U(k), \Sigma(k)$ and $V(k)$ for each given $k$, which isn't strictly the case. But in practice the full trajectory should be well defined after fixing a single point $U(k_0)$, as long as there are no repeated singular values (points where $\sigma_i = \sigma_j$ for $i\neq j$). See e.g. this [stack exchange thread](https://math.stackexchange.com/questions/644327/how-unique-are-u-and-v-in-the-singular-value-decomposition) for some interesting discussion of this topic.
#
# ## Computing the SVD derivatives
# It turns out that there's a relatively simple way to compute the derivatives of the SVD matrices with respect to $k$. The derivation can be found for example [here](https://projects.ics.forth.gr/_publications/2000_eccv_SVD_jacobian.pdf) and [here](https://j-towns.github.io/papers/svd-derivative.pdf) but I'll repeat it here with a notation that matches the python implementation in this repo. We start by implicitly differentiating both sides of the SVD relationship, letting $'$ denote derivative:
# $$A' = U' \Sigma V^T + U \Sigma' V^T + U \Sigma V'^T$$
# Then multiply by $U^T$ from the left and $V$ from the right and letting $R = U^TA' V$, $S^U = U^TU'$, $S^V = V'^TV$:
# $$R = S^U\Sigma + \Sigma' + \Sigma S^V.$$
#
# Since $U$ and $V$ are orthogonal, differentiating $U^TU = I$ and $V^TV = I$ shows that $S^U$ and $S^V$ are antisymmetric: their diagonals vanish and $S^U_{ji} = -S^U_{ij}$, $S^V_{ji} = -S^V_{ij}$. This means we have:
# * $\sigma'_i = R_{ii}$
# * $R_{ij} = S^U_{ij}\sigma_j + \sigma_i S^V_{ij}$ for $i\neq j$
# * $R_{ji} = -\sigma_i S^U_{ij} - S^V_{ij} \sigma_j$ for $i\neq j$
#
# The first equation gives us the derivative of $\sigma_i$! Relying on the assumption that all singular values are distinct, we can solve the remaining two equations for $S^U_{ij}$ and $S^V_{ij}$ to get:
# * $S^{U}_{ij} = (\sigma_jR_{ij} + \sigma_iR_{ji}) / (\sigma_j^2 - \sigma_i^2)$
# * $S^{V}_{ij} = -(\sigma_iR_{ij} + \sigma_jR_{ji}) / (\sigma_j^2 - \sigma_i^2)$
#
# Which can be written on matrix form as:
# * $S^U = D\circ(R\Sigma + \Sigma R^T)$
# * $S^V = -D\circ(\Sigma R + R^T\Sigma)$
#
# where $D_{ij} = 1 / (\sigma_j^2 - \sigma_i^2)$ for $i\neq j$, $D_{ii} = 0$ and $\circ$ denotes elementwise multiplication.
#
# We can then finally compute the derivatives of $U$ and $V$ using $U' = US^U$ and $V'^T = S^VV^T$.
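# The formulas above can be sketched directly in NumPy for the square case. The function name `svd_derivative_square` is made up for this illustration (the repo's `svd_derivative` module, imported in the example below, is the actual implementation, which also handles non-square matrices); the check here simply verifies the product rule $A' = U'\Sigma V^T + U\Sigma'V^T + U\Sigma V'^T$.

```python
import numpy as np

def svd_derivative_square(A, A_dot):
    # A = U @ diag(s) @ Vt; assumes A is square with distinct singular values.
    U, s, Vt = np.linalg.svd(A)
    d = s.size
    R = U.T @ A_dot @ Vt.T                      # R = U^T A' V
    s_dot = np.diag(R).copy()                   # sigma'_i = R_ii
    # D_ij = 1/(s_j^2 - s_i^2) off the diagonal, 0 on it.
    diff = s[None, :]**2 - s[:, None]**2
    off_diag = ~np.eye(d, dtype=bool)
    D = np.divide(1.0, diff, out=np.zeros((d, d)), where=off_diag)
    Sig = np.diag(s)
    SU = D * (R @ Sig + Sig @ R.T)              # S^U = D o (R Sigma + Sigma R^T)
    SV = -D * (Sig @ R + R.T @ Sig)             # S^V = -D o (Sigma R + R^T Sigma)
    return U @ SU, s_dot, SV @ Vt               # U' = U S^U,  V'^T = S^V V^T

# Check against the product rule: A' = U' Sig Vt + U Sig' Vt + U Sig Vt'.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A_dot = rng.normal(size=(5, 5))
U, s, Vt = np.linalg.svd(A)
U_dot, s_dot, Vt_dot = svd_derivative_square(A, A_dot)
recon = U_dot @ np.diag(s) @ Vt + U @ np.diag(s_dot) @ Vt + U @ np.diag(s) @ Vt_dot
```

# A random matrix has distinct singular values with probability one, so the reconstruction should match `A_dot` to machine precision, and `s_dot` should agree with a finite-difference estimate.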
# # Example: Control system
#
# As an illustrative example we consider a feedback system with input $x\in\mathrm{R}^n$ and output $y\in\mathrm{R}^m$ given by:
# $$y = H(x + kFy)$$
# where $k$ is a tunable scalar parameter and $H$, $F$ are fixed system matrices defined by some physical process. Typically, $H$ and $F$ will depend on the complex frequency $s=\sigma + j\omega$ of the input but let's ignore that for now.
#
# Solving the above relationship for $y$ we get:
# $$y = (1 - kHF)^{-1}Hx \equiv A(k)x$$
# where $A(k)$ is the effective system matrix after taking the feedback control into account. A typical problem in control is to tune $k$ such that $A(k)$ has some desirable property like stability or response time for certain inputs. Such properties are often defined in terms of the singular value decomposition of $A(k)$, so being able to compute their derivatives with respect to $k$ could be interesting for e.g. sensitivity analysis.
#
# To use the formulas above we need to first get a formula for $A'(k)$. We differentiate both sides of the relationship $(1-kHF)A(k) = H$ to get
# $$(1-kHF)A'(k) - HFA(k) = 0$$
# and thus $$A'(k) = (1-kHF)^{-1}HFA(k) = A(k)FA(k).$$
#
# Below we verify that the SVD derivative corresponds to the numeric derivative for this example.
# +
m, n = 10, 8
d = min(m, n)
H = numpy.random.normal(size=(m, n))
F = numpy.random.normal(size=(n, m))
def A(k):
return numpy.linalg.inv(numpy.eye(m, m) - k * H.dot(F)).dot(H)
def A_dot(k):
A_k = A(k)
return A_k.dot(F).dot(A_k)
# -
def svd_derivative_numeric(k, h):
return [(numpy.linalg.svd(A(k+h), full_matrices=False)[i] - numpy.linalg.svd(A(k), full_matrices=False)[i]) / h for i in range(3)]
from svd_derivative import svd_derivative
for k in numpy.linspace(-10, 10, 10):
A_k = A(k)
A_dot_k = A_dot(k)
u_dot, sigma_dot, vt_dot = svd_derivative(A_k, A_dot_k)
u_dot_numeric, sigma_dot_numeric, vt_dot_numeric = svd_derivative_numeric(k, 1e-8)
print(numpy.linalg.norm(u_dot - u_dot_numeric), numpy.linalg.norm(sigma_dot - sigma_dot_numeric), numpy.linalg.norm(vt_dot - vt_dot_numeric))
|
svd_derivative.ipynb
|
% ---
% jupyter:
% jupytext:
% text_representation:
% extension: .m
% format_name: light
% format_version: '1.5'
% jupytext_version: 1.14.4
% kernelspec:
% display_name: Matlab
% language: matlab
% name: matlab
% ---
% # Condition numbers
%
% In numerical computing, we constantly make small errors in representing real numbers and the operations on them. Consequently we need to know whether the problems we want to solve are very sensitive to perturbations. The condition number measures this sensitivity.
%
% ## Polynomial roots
%
% The roots of a polynomial become sensitive to the values of the coefficients in the monomial basis when roots are relatively close to one another. Consider, for example,
p = poly([1,1,1,0.4,2.2]); % polynomial with these as roots
q = p + 1e-12*randn(1,6); % small changes to its coefficients
roots(q)
% You can see that the triple root at 1 changed a lot more than the size of the perturbation would suggest; the other two roots changed by an amount less than $10^{-9}$. The effect of bad conditioning can be more dramatically shown using the _Wilkinson polynomial_.
p = poly(1:20);
plot(1:20,zeros(1,20),"kx")   % mark the exact roots; y must match x in length
hold on
for k = 1:200
q = p + 1e-6*randn(1,21);
r = roots(q);
plot(real(r),imag(r),"b.")
end
axis("equal")
% Clearly, having roots close together is not the only way to get sensitivity in the roots.
% ## Matrix condition number
%
% We have particular interest in the condition number of the problem "given square matrix $A$ and vector $b$, find vector $x$ such that $Ax=b$." More simply: "map $b$ to $A^{-1}b$." The relative condition number of this problem is bounded above by the *matrix condition number* $\kappa(A)=\|A\|\,\|A^{-1}\|$. Furthermore, in any particular case there exist perturbations to the data such that the upper bound is achieved.
A = hilb(5)
kappa = cond(A)
% The importance of _relative_ condition numbers is that they explain accuracy in dimensionless terms, i.e. significant digits. This condition number says we could "lose" up to 5 or so digits in the passage from data to result. So we make relative perturbations to $b$ and see the relative effect on the result.
% +
perturb = @(z,ep) z.*(1 + ep*(2*rand(size(z))-1));
x = 0.3 + (1:5)'; b = A*x;
maxerr = -Inf;
toterr = 0;
for k = 1:50000
bb = perturb(b,1e-12);
err = norm(A\bb - x)/norm(x);
maxerr = max(maxerr,err);
toterr = toterr + err;
end
fprintf(" average relative error = %.2e\n",(toterr/50000))
fprintf(" max relative error found = %.2e\n",maxerr)
fprintf(" condition number bound = %.2e\n",(1e-12*kappa));
% -
% The same holds for perturbations to $A$, though the error has higher-order terms that vanish only in the limit of infinitesimal perturbations.
maxerr = -Inf;
toterr = 0;
for k = 1:50000
AA = perturb(A,1e-12);
err = norm(AA\b - x)/norm(x);
maxerr = max(maxerr,err);
toterr = toterr + err;
end
fprintf(" average relative error = %.2e\n",(toterr/50000))
fprintf(" max relative error found = %.2e\n",maxerr)
fprintf(" condition number bound = %.2e\n",(1e-12*kappa));
|
matlab/Conditioning.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: fengine
# language: python
# name: fengine
# ---
# ## Target guided encodings
#
# In the previous lectures in this section, we learned how to convert a label into a number by using one hot encoding, replacing it with an arbitrary digit, or replacing it with the frequency or count of observations. These methods are simple, make (almost) no assumptions, and generally work well in different scenarios.
#
# There are, however, methods that allow us to capture information about the target while pre-processing the labels of categorical variables. These methods include:
#
# - Ordering the labels according to the target
# - Replacing labels by the target mean (mean encoding / target encoding)
# - Replacing the labels by the probability ratio of the target being 1 or 0
# - Weight of evidence.
#
# All of the above methods have something in common:
#
# - the encoding is **guided by the target**, and
# - they create a **monotonic relationship** between the variable and the target.
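As a concrete sketch of one of these encodings, the weight of evidence for a binary target can be computed per category. The data and the exact formulation here are illustrative assumptions (a simplified log-odds version), not the course's implementation:

```python
import numpy as np
import pandas as pd

# toy binary-target data (hypothetical)
df = pd.DataFrame({
    "colour": ["red", "red", "red", "blue", "blue", "blue", "blue"],
    "target": [1, 0, 0, 1, 1, 1, 0],
})

p1 = df.groupby("colour")["target"].mean()  # P(target=1 | category)
woe = np.log(p1 / (1 - p1))                 # log-odds per category

df["colour_woe"] = df["colour"].map(woe)
```

Categories with a higher proportion of positives receive larger encodings, so the encoded variable is monotonic with the target by construction.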
#
#
# ### Monotonicity
#
# A monotonic relationship is a relationship that does one of the following:
#
# - (1) as the value of one variable increases, so does the value of the other variable; or
# - (2) as the value of one variable increases, the value of the other variable decreases.
#
# In this case, as the value of the independent variable (predictor) increases, so does the target, or conversely, as the value of the variable increases, the target value decreases.
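A quick way to check this definition on a sequence of per-category target means — the values below are made up for illustration:

```python
# hypothetical mean target per encoded category, in encoding order
means = [0.10, 0.18, 0.27, 0.35]

increasing = all(a <= b for a, b in zip(means, means[1:]))
decreasing = all(a >= b for a, b in zip(means, means[1:]))
monotonic = increasing or decreasing  # True for this sequence
```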
#
#
#
# ### Advantages of target guided encodings
#
# - Capture information within the category, therefore creating more predictive features
# - Create a monotonic relationship between the variable and the target, therefore suitable for linear models
# - Do not expand the feature space
#
#
# ### Limitations
#
# - Prone to cause over-fitting
# - Difficult to cross-validate with current libraries
#
#
# ### Note
#
# The methods discussed in this and the coming 3 lectures can also be used on numerical variables, after discretisation. This creates a monotonic relationship between the numerical variable and the target, and therefore improves the performance of linear models. I will discuss this in more detail in the section "Discretisation".
#
# ===============================================================================
#
# ## Ordered Integer Encoding
#
# Ordering the categories according to the target means assigning a number to the category from 1 to k, where k is the number of distinct categories in the variable, but this numbering is informed by the mean of the target for each category.
#
# For example, we have the variable city with values London, Manchester and Bristol; if the default rate is 30% in London, 20% in Bristol and 10% in Manchester, then we replace London by 1, Bristol by 2 and Manchester by 3.
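The city example can be sketched directly in Python, using the default rates quoted above:

```python
# default rates per city, as quoted in the example above
default_rate = {"London": 0.30, "Bristol": 0.20, "Manchester": 0.10}

# order categories by descending default rate and number them from 1
ordered = sorted(default_rate, key=default_rate.get, reverse=True)
mapping = {city: rank for rank, city in enumerate(ordered, start=1)}
# mapping == {"London": 1, "Bristol": 2, "Manchester": 3}
```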
#
# ## In this demo:
#
# We will see how to perform ordered integer encoding with:
# - pandas
# - Feature-Engine
#
# And the advantages and limitations of these implementations using the House Prices dataset.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# to split the datasets
from sklearn.model_selection import train_test_split
# for encoding with feature-engine
from feature_engine.encoding import OrdinalEncoder
# +
# load dataset
data = pd.read_csv(
'../houseprice.csv',
usecols=['Neighborhood', 'Exterior1st', 'Exterior2nd', 'SalePrice'])
data.head()
# +
# let's have a look at how many labels each variable has
for col in data.columns:
print(col, ': ', len(data[col].unique()), ' labels')
# -
# let's explore the unique categories
data['Neighborhood'].unique()
data['Exterior1st'].unique()
data['Exterior2nd'].unique()
# ### Important
#
# We select which digit to assign each category using the train set, and then use those mappings in the test set.
#
# **Note that to do this technique with pandas, we need to keep the target within the training set**
# +
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(
data[['Neighborhood', 'Exterior1st', 'Exterior2nd', 'SalePrice']], # this time we keep the target!!
data['SalePrice'], # target
test_size=0.3, # percentage of obs in test set
random_state=0) # seed to ensure reproducibility
X_train.shape, X_test.shape
# -
# ### Explore original relationship between categorical variables and target
# +
# let's explore the relationship of the categories with the target
for var in ['Neighborhood', 'Exterior1st', 'Exterior2nd']:
fig = plt.figure()
fig = X_train.groupby([var])['SalePrice'].mean().plot()
fig.set_title('Relationship between {} and SalePrice'.format(var))
fig.set_ylabel('Mean SalePrice')
plt.show()
# -
# You can see that the relationship between the target and the categories of the categorical variables goes up and down, depending on the category.
#
#
# ## Ordered Integer encoding with pandas
#
#
# ### Advantages
#
# - quick
# - returns pandas dataframe
#
# ### Limitations of pandas:
#
# - it does not preserve information from train data to propagate to test data
#
# We need to store the encoding maps separately if planning to use them in production.
# +
# let's order the labels according to the mean target value
X_train.groupby(['Neighborhood'])['SalePrice'].mean().sort_values()
# -
# In the above cell, we ordered the categories from the neighbourhood with the cheapest house sale prices (IDOTRR) to the neighbourhood where houses are, on average, most expensive (NoRidge).
#
# In the next cells, we will replace those categories, ordered as they are, by the numbers 0 to k, where k is the number of different categories minus 1, in this case 25 - 1 = 24.
#
# So IDOTRR will be replaced by 0 and NoRidge by 24, just to be clear.
# +
# first we generate an ordered list with the labels
ordered_labels = X_train.groupby(['Neighborhood'
])['SalePrice'].mean().sort_values().index
ordered_labels
# +
# next let's create a dictionary with the mappings of categories to numbers
ordinal_mapping = {k: i for i, k in enumerate(ordered_labels, 0)}
ordinal_mapping
# +
# now, we replace the labels with the integers
X_train['Neighborhood'] = X_train['Neighborhood'].map(ordinal_mapping)
X_test['Neighborhood'] = X_test['Neighborhood'].map(ordinal_mapping)
# +
# let's explore the result
X_train['Neighborhood'].head(10)
# +
# we can turn the previous commands into 2 functions
def find_category_mappings(df, variable, target):
# first we generate an ordered list with the labels
ordered_labels = df.groupby([variable
])[target].mean().sort_values().index
# return the dictionary with mappings
return {k: i for i, k in enumerate(ordered_labels, 0)}
def integer_encode(train, test, variable, ordinal_mapping):
train[variable] = train[variable].map(ordinal_mapping)
test[variable] = test[variable].map(ordinal_mapping)
# +
# and now we run a loop over the remaining categorical variables
for variable in ['Exterior1st', 'Exterior2nd']:
mappings = find_category_mappings(X_train, variable, 'SalePrice')
integer_encode(X_train, X_test, variable, mappings)
# +
# let's see the result
X_train.head()
# +
# let's inspect the newly created monotonic relationship
# between the variables and the target
for var in ['Neighborhood', 'Exterior1st', 'Exterior2nd']:
fig = plt.figure()
fig = X_train.groupby([var])['SalePrice'].mean().plot()
fig.set_title('Monotonic relationship between {} and SalePrice'.format(var))
fig.set_ylabel('Mean SalePrice')
plt.show()
# -
# We see from the plots above that the relationship between the categories and the target is now monotonic and, for the first 2 variables, almost linear, which helps improve the performance of linear models.
#
# ### Note
#
# Monotonic does not mean strictly linear. Monotonic means that it consistently increases, or consistently decreases.
#
# Replacing categorical labels with this code and method will generate missing values for categories present in the test set that were not seen in the training set. Therefore, it is extremely important to handle rare labels beforehand. I will explain how to do this in a later notebook.
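A minimal sketch of what happens with an unseen label, and one possible fallback (imputing a hypothetical sentinel value rather than leaving NaNs):

```python
import pandas as pd

# hypothetical subset of a fitted mapping
ordinal_mapping = {"IDOTRR": 0, "NoRidge": 24}

# "Edwards" was not seen when the mapping was built
test_series = pd.Series(["NoRidge", "Edwards", "IDOTRR"])

encoded = test_series.map(ordinal_mapping)  # unseen label -> NaN
encoded = encoded.fillna(-1).astype(int)    # sentinel for unseen categories
# encoded.tolist() == [24, -1, 0]
```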
# ## Integer Encoding with Feature-Engine
#
# If using Feature-Engine, instead of pandas, we do not need to keep the target variable in the training dataset.
# +
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(
data[['Neighborhood', 'Exterior1st', 'Exterior2nd']], # predictors
data['SalePrice'], # target
test_size=0.3, # percentage of obs in test set
random_state=0) # seed to ensure reproducibility
X_train.shape, X_test.shape
# -
ordinal_enc = OrdinalEncoder(
    # NOTE that we indicate 'ordered' in the encoding_method, otherwise it assigns numbers arbitrarily
encoding_method='ordered',
variables=['Neighborhood', 'Exterior1st', 'Exterior2nd'])
# +
# when fitting the transformer, we need to pass the target as well
# just like with any Scikit-learn predictor class
ordinal_enc.fit(X_train, y_train)
# +
# in the encoder dict we can observe the mapping of categories
# to numbers learned for each of the variables
ordinal_enc.encoder_dict_
# +
# this is the list of variables that the encoder will transform
ordinal_enc.variables
# +
X_train = ordinal_enc.transform(X_train)
X_test = ordinal_enc.transform(X_test)
# let's explore the result
X_train.head()
# -
# **Note**
#
# If the argument variables is left to None, then the encoder will automatically identify all categorical variables. Is that not sweet?
#
# The encoder will not encode numerical variables. So if some of your numerical variables are in fact categories, you will need to re-cast them as object before using the encoder.
#
# Finally, if there is a label in the test set that was not present in the train set, the encoder will throw an error to alert you of this behaviour.
|
06.05-Ordered-Integer-Encoding.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Comparison to CCL
#
# This notebook compares the implementation from jax_cosmo to CCL
# +
# %pylab inline
import os
os.environ['JAX_ENABLE_X64']='True'
import pyccl as ccl
import jax
from jax_cosmo import Cosmology, background
# +
# We first define equivalent CCL and jax_cosmo cosmologies
cosmo_ccl = ccl.Cosmology(
Omega_c=0.3, Omega_b=0.05, h=0.7, sigma8 = 0.8, n_s=0.96, Neff=0,
transfer_function='eisenstein_hu', matter_power_spectrum='halofit')
cosmo_jax = Cosmology(Omega_c=0.3, Omega_b=0.05, h=0.7, sigma8 = 0.8, n_s=0.96,
Omega_k=0., w0=-1., wa=0.)
# -
# ## Comparing background cosmology
# Test array of scale factors
a = np.linspace(0.01, 1.)
# +
# Testing the radial comoving distance
chi_ccl = ccl.comoving_radial_distance(cosmo_ccl, a)
chi_jax = background.radial_comoving_distance(cosmo_jax, a)/cosmo_jax.h
plot(a, chi_ccl, label='CCL')
plot(a, chi_jax, '--', label='jax_cosmo')
legend()
xlabel('a')
ylabel('radial comoving distance [Mpc]')
# +
# Testing the angular comoving distance
chi_ccl = ccl.comoving_angular_distance(cosmo_ccl, a)
chi_jax = background.transverse_comoving_distance(cosmo_jax, a)/cosmo_jax.h
plot(a, chi_ccl, label='CCL')
plot(a, chi_jax, '--', label='jax_cosmo')
legend()
xlabel('a')
ylabel('angular comoving distance [Mpc]')
# +
# Testing the angular diameter distance
chi_ccl = ccl.angular_diameter_distance(cosmo_ccl, a)
chi_jax = background.angular_diameter_distance(cosmo_jax, a)/cosmo_jax.h
plot(a, chi_ccl, label='CCL')
plot(a, chi_jax, '--', label='jax_cosmo')
legend()
xlabel('a')
ylabel('angular diameter distance [Mpc]')
# -
# Comparing the growth factor
plot(a, ccl.growth_factor(cosmo_ccl,a), label='CCL')
plot(a, background.growth_factor(cosmo_jax, a), '--', label='jax_cosmo')
legend()
xlabel('a')
ylabel('Growth factor')
# Comparing linear growth rate
plot(a, ccl.growth_rate(cosmo_ccl,a), label='CCL')
plot(a, background.growth_rate(cosmo_jax, a), '--', label='jax_cosmo')
legend()
xlabel('a')
ylabel('growth rate')
# ## Comparing matter power spectrum
from jax_cosmo.power import linear_matter_power, nonlinear_matter_power
k = np.logspace(-3,-0.5)
# +
#Let's have a look at the linear power
pk_ccl = ccl.linear_matter_power(cosmo_ccl, k, 1.0)
pk_jax = linear_matter_power(cosmo_jax, k/cosmo_jax.h, a=1.0)
loglog(k,pk_ccl,label='CCL')
loglog(k,pk_jax/cosmo_jax.h**3, '--', label='jax_cosmo')
legend()
xlabel('k [Mpc]')
ylabel('pk');
# +
k = np.logspace(-3,1)
#Let's have a look at the non linear power
pk_ccl = ccl.nonlin_matter_power(cosmo_ccl, k, 1.0)
pk_jax = nonlinear_matter_power(cosmo_jax, k/cosmo_jax.h, a=1.0)
loglog(k,pk_ccl,label='CCL')
loglog(k,pk_jax/cosmo_jax.h**3, '--', label='jax_cosmo')
legend()
xlabel('k [Mpc]')
ylabel('pk');
# -
# ## Comparing angular cl
# +
from jax_cosmo.redshift import smail_nz
# Let's define a redshift distribution
# with a Smail distribution with a=1, b=2, z0=1
nz = smail_nz(1.,2., 1.)
# -
z = linspace(0,4,1024)
plot(z, nz(z))
xlabel(r'Redshift $z$');
title('Normalized n(z)');
# +
from jax_cosmo.angular_cl import angular_cl
from jax_cosmo import probes
# Let's first compute some Weak Lensing cls
tracer_ccl = ccl.WeakLensingTracer(cosmo_ccl, (z, nz(z)), use_A_ia=False)
tracer_jax = probes.WeakLensing([nz])
ell = np.logspace(0.1,3)
cl_ccl = ccl.angular_cl(cosmo_ccl, tracer_ccl, tracer_ccl, ell)
cl_jax = angular_cl(cosmo_jax, ell, [tracer_jax])
# +
loglog(ell, cl_ccl,label='CCL')
loglog(ell, cl_jax[0], '--', label='jax_cosmo')
legend()
xlabel(r'$\ell$')
ylabel(r'Lensing angular $C_\ell$')
# +
# Let's try galaxy clustering now
from jax_cosmo.bias import constant_linear_bias
# We define a trivial bias model
bias = constant_linear_bias(1.)
tracer_ccl_n = ccl.NumberCountsTracer(cosmo_ccl,
has_rsd=False,
dndz=(z, nz(z)),
bias=(z, bias(cosmo_jax, z)))
tracer_jax_n = probes.NumberCounts([nz], bias)
cl_ccl = ccl.angular_cl(cosmo_ccl, tracer_ccl_n, tracer_ccl_n, ell)
cl_jax = angular_cl(cosmo_jax, ell, [tracer_jax_n])
# +
import jax_cosmo.constants as cst
loglog(ell, cl_ccl,label='CCL')
loglog(ell, cl_jax[0], '--', label='jax_cosmo')
legend()
xlabel(r'$\ell$')
ylabel(r'Clustering angular $C_\ell$')
# +
# And finally.... a cross-spectrum
cl_ccl = ccl.angular_cl(cosmo_ccl, tracer_ccl, tracer_ccl_n, ell)
cl_jax = angular_cl(cosmo_jax, ell, [tracer_jax, tracer_jax_n])
# +
ell = np.logspace(1,3)
loglog(ell, cl_ccl,label='CCL')
loglog(ell, cl_jax[1], '--', label='jax_cosmo')
legend()
xlabel(r'$\ell$')
ylabel(r'shape-position angular $C_\ell$')
# -
|
docs/notebooks/CCL_comparison.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="xyVv1tr3ZLt5"
# # BERT4Rec on ML-1m in PyTorch
#
# In this tutorial, we build the BERT4Rec model in PyTorch and then train it on the MovieLens 1M dataset.
# + [markdown] id="aTzyO6mfZ5cQ"
# ## Setup
# + [markdown] id="YiFQCtLZ63rc"
# ### Installations
# + id="pwdnWWZnZygU"
# !pip install -q wget
# + [markdown] id="WauAGrcH65HI"
# ### Imports
# + id="w2zR_C419Hmy"
import os
import sys
import wget
import math
import json
import random
import zipfile
import shutil
import pickle
import tempfile
from abc import *
import numpy as np
import pandas as pd
import pprint as pp
from pathlib import Path
from datetime import date
from tqdm import tqdm, trange
import torch
import torch.backends.cudnn as cudnn
from torch import optim as optim
import torch.utils.data as data_utils
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
tqdm.pandas()
# + [markdown] id="xmTE2oIn66bq"
# ### Params
# + id="nfjPTfJaKcUc"
STATE_DICT_KEY = 'model_state_dict'
OPTIMIZER_STATE_DICT_KEY = 'optimizer_state_dict'
RAW_DATASET_ROOT_FOLDER = '/content/ml-1m'
# + id="pEDDwB7JBMG7"
class Args:
mode = 'train'
test_model_path = '/content/models'
# Dataset
dataset_code = 'ml-1m'
min_rating = 0
min_uc = 5
min_sc = 0
split = 'leave_one_out'
dataset_split_seed = 42
eval_set_size = 500
# Dataloader
dataloader_code = 'bert'
dataloader_random_seed = 0.0
train_batch_size = 128
val_batch_size = 128
test_batch_size = 128
# NegativeSampler
train_negative_sampler_code = 'random'
train_negative_sample_size = 0
train_negative_sampling_seed = 0
test_negative_sampler_code = 'random'
test_negative_sample_size = 100
test_negative_sampling_seed = 42
# Trainer
trainer_code = 'bert'
device = 'cuda'
num_gpu = 1
device_idx = '0'
optimizer='Adam'
lr=0.001
weight_decay=0
momentum=None
enable_lr_schedule = True
decay_step=25
gamma=1.0
num_epochs=10
log_period_as_iter=12800
metric_ks=[1, 5, 10, 20, 50, 100]
best_metric='NDCG@10'
find_best_beta=False
total_anneal_steps=2000
anneal_cap=0.2
# Model
model_code='bert'
model_init_seed=0
bert_max_len=100
bert_num_items=None
bert_hidden_units=256
bert_num_blocks=2
bert_num_heads=4
bert_dropout=0.1
bert_mask_prob=0.15
# Experiment
experiment_dir='experiments'
experiment_description='test'
args = Args()
# + [markdown] id="pQ0_HSBbZ_AL"
# ## Utils
# + [markdown] id="AGjLR7B27VmM"
# ### Basic
# + id="66tGWsEfFuJ1"
def download(url, savepath):
wget.download(url, str(savepath))
def unzip(zippath, savepath):
zip = zipfile.ZipFile(zippath)
zip.extractall(savepath)
zip.close()
def get_count(tp, id):
groups = tp[[id]].groupby(id, as_index=False)
count = groups.size()
return count
# + [markdown] id="ZxL_DVsw7FKi"
# ### Metrics
# + id="74kcufQKKpX6"
def recall(scores, labels, k):
scores = scores
labels = labels
rank = (-scores).argsort(dim=1)
cut = rank[:, :k]
hit = labels.gather(1, cut)
return (hit.sum(1).float() / torch.min(torch.Tensor([k]).to(hit.device), labels.sum(1).float())).mean().cpu().item()
def ndcg(scores, labels, k):
scores = scores.cpu()
labels = labels.cpu()
rank = (-scores).argsort(dim=1)
cut = rank[:, :k]
hits = labels.gather(1, cut)
position = torch.arange(2, 2+k)
weights = 1 / torch.log2(position.float())
dcg = (hits.float() * weights).sum(1)
idcg = torch.Tensor([weights[:min(int(n), k)].sum() for n in labels.sum(1)])
ndcg = dcg / idcg
return ndcg.mean()
def recalls_and_ndcgs_for_ks(scores, labels, ks):
metrics = {}
scores = scores
labels = labels
answer_count = labels.sum(1)
labels_float = labels.float()
rank = (-scores).argsort(dim=1)
cut = rank
for k in sorted(ks, reverse=True):
cut = cut[:, :k]
hits = labels_float.gather(1, cut)
metrics['Recall@%d' % k] = \
(hits.sum(1) / torch.min(torch.Tensor([k]).to(labels.device), labels.sum(1).float())).mean().cpu().item()
position = torch.arange(2, 2+k)
weights = 1 / torch.log2(position.float())
dcg = (hits * weights.to(hits.device)).sum(1)
idcg = torch.Tensor([weights[:min(int(n), k)].sum() for n in answer_count]).to(dcg.device)
ndcg = (dcg / idcg).mean()
metrics['NDCG@%d' % k] = ndcg.cpu().item()
return metrics
# + [markdown] id="QogDOi3E7SQW"
# ### Experiment setup
# + id="dqWM-UoAK_43"
def setup_train(args):
set_up_gpu(args)
export_root = create_experiment_export_folder(args)
export_experiments_config_as_json(args, export_root)
pp.pprint({k: v for k, v in vars(args).items() if v is not None}, width=1)
return export_root
def create_experiment_export_folder(args):
experiment_dir, experiment_description = args.experiment_dir, args.experiment_description
if not os.path.exists(experiment_dir):
os.mkdir(experiment_dir)
experiment_path = get_name_of_experiment_path(experiment_dir, experiment_description)
os.mkdir(experiment_path)
print('Folder created: ' + os.path.abspath(experiment_path))
return experiment_path
def get_name_of_experiment_path(experiment_dir, experiment_description):
experiment_path = os.path.join(experiment_dir, (experiment_description + "_" + str(date.today())))
idx = _get_experiment_index(experiment_path)
experiment_path = experiment_path + "_" + str(idx)
return experiment_path
def _get_experiment_index(experiment_path):
idx = 0
while os.path.exists(experiment_path + "_" + str(idx)):
idx += 1
return idx
def load_weights(model, path):
pass
def save_test_result(export_root, result):
filepath = Path(export_root).joinpath('test_result.txt')
with filepath.open('w') as f:
json.dump(result, f, indent=2)
def export_experiments_config_as_json(args, experiment_path):
with open(os.path.join(experiment_path, 'config.json'), 'w') as outfile:
json.dump(vars(args), outfile, indent=2)
def fix_random_seed_as(random_seed):
random.seed(random_seed)
torch.manual_seed(random_seed)
torch.cuda.manual_seed_all(random_seed)
np.random.seed(random_seed)
cudnn.deterministic = True
cudnn.benchmark = False
def set_up_gpu(args):
os.environ['CUDA_VISIBLE_DEVICES'] = args.device_idx
args.num_gpu = len(args.device_idx.split(","))
def load_pretrained_weights(model, path):
chk_dict = torch.load(os.path.abspath(path))
model_state_dict = chk_dict[STATE_DICT_KEY] if STATE_DICT_KEY in chk_dict else chk_dict['state_dict']
model.load_state_dict(model_state_dict)
def setup_to_resume(args, model, optimizer):
chk_dict = torch.load(os.path.join(os.path.abspath(args.resume_training), 'models/checkpoint-recent.pth'))
model.load_state_dict(chk_dict[STATE_DICT_KEY])
optimizer.load_state_dict(chk_dict[OPTIMIZER_STATE_DICT_KEY])
def create_optimizer(model, args):
if args.optimizer == 'Adam':
return optim.Adam(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
return optim.SGD(model.parameters(), lr=args.lr, weight_decay=args.weight_decay, momentum=args.momentum)
class AverageMeterSet(object):
def __init__(self, meters=None):
self.meters = meters if meters else {}
def __getitem__(self, key):
if key not in self.meters:
meter = AverageMeter()
meter.update(0)
return meter
return self.meters[key]
def update(self, name, value, n=1):
if name not in self.meters:
self.meters[name] = AverageMeter()
self.meters[name].update(value, n)
def reset(self):
for meter in self.meters.values():
meter.reset()
def values(self, format_string='{}'):
return {format_string.format(name): meter.val for name, meter in self.meters.items()}
def averages(self, format_string='{}'):
return {format_string.format(name): meter.avg for name, meter in self.meters.items()}
def sums(self, format_string='{}'):
return {format_string.format(name): meter.sum for name, meter in self.meters.items()}
def counts(self, format_string='{}'):
return {format_string.format(name): meter.count for name, meter in self.meters.items()}
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val
self.count += n
self.avg = self.sum / self.count
def __format__(self, format):
return "{self.val:{format}} ({self.avg:{format}})".format(self=self, format=format)
# + [markdown] id="7K7DrpeL7Z1f"
# ### Logging
# + id="5rfOwq6nLLEI"
def save_state_dict(state_dict, path, filename):
torch.save(state_dict, os.path.join(path, filename))
class LoggerService(object):
def __init__(self, train_loggers=None, val_loggers=None):
self.train_loggers = train_loggers if train_loggers else []
self.val_loggers = val_loggers if val_loggers else []
def complete(self, log_data):
for logger in self.train_loggers:
logger.complete(**log_data)
for logger in self.val_loggers:
logger.complete(**log_data)
def log_train(self, log_data):
for logger in self.train_loggers:
logger.log(**log_data)
def log_val(self, log_data):
for logger in self.val_loggers:
logger.log(**log_data)
class AbstractBaseLogger(metaclass=ABCMeta):
@abstractmethod
def log(self, *args, **kwargs):
raise NotImplementedError
def complete(self, *args, **kwargs):
pass
class RecentModelLogger(AbstractBaseLogger):
def __init__(self, checkpoint_path, filename='checkpoint-recent.pth'):
self.checkpoint_path = checkpoint_path
if not os.path.exists(self.checkpoint_path):
os.mkdir(self.checkpoint_path)
self.recent_epoch = None
self.filename = filename
def log(self, *args, **kwargs):
epoch = kwargs['epoch']
if self.recent_epoch != epoch:
self.recent_epoch = epoch
state_dict = kwargs['state_dict']
state_dict['epoch'] = kwargs['epoch']
save_state_dict(state_dict, self.checkpoint_path, self.filename)
def complete(self, *args, **kwargs):
save_state_dict(kwargs['state_dict'], self.checkpoint_path, self.filename + '.final')
class BestModelLogger(AbstractBaseLogger):
def __init__(self, checkpoint_path, metric_key='mean_iou', filename='best_acc_model.pth'):
self.checkpoint_path = checkpoint_path
if not os.path.exists(self.checkpoint_path):
os.mkdir(self.checkpoint_path)
self.best_metric = 0.
self.metric_key = metric_key
self.filename = filename
def log(self, *args, **kwargs):
current_metric = kwargs[self.metric_key]
if self.best_metric < current_metric:
print("Update Best {} Model at {}".format(self.metric_key, kwargs['epoch']))
self.best_metric = current_metric
save_state_dict(kwargs['state_dict'], self.checkpoint_path, self.filename)
class MetricGraphPrinter(AbstractBaseLogger):
def __init__(self, writer, key='train_loss', graph_name='Train Loss', group_name='metric'):
self.key = key
self.graph_label = graph_name
self.group_name = group_name
self.writer = writer
def log(self, *args, **kwargs):
if self.key in kwargs:
self.writer.add_scalar(self.group_name + '/' + self.graph_label, kwargs[self.key], kwargs['accum_iter'])
else:
self.writer.add_scalar(self.group_name + '/' + self.graph_label, 0, kwargs['accum_iter'])
def complete(self, *args, **kwargs):
self.writer.close()
# + [markdown] id="XcQfTrxmaIFS"
# ## Dataset
# + [markdown] id="KeeF9O8d7dmq"
# ### Abstract class
# + id="Bg1ChmjtF9Vi"
class AbstractDataset(metaclass=ABCMeta):
def __init__(self, args):
self.args = args
self.min_rating = args.min_rating
self.min_uc = args.min_uc
self.min_sc = args.min_sc
self.split = args.split
assert self.min_uc >= 2, 'Need at least 2 ratings per user for validation and test'
@classmethod
@abstractmethod
def code(cls):
pass
@classmethod
def raw_code(cls):
return cls.code()
@classmethod
@abstractmethod
def url(cls):
pass
@classmethod
def is_zipfile(cls):
return True
@classmethod
def zip_file_content_is_folder(cls):
return True
@classmethod
def all_raw_file_names(cls):
return []
@abstractmethod
def load_ratings_df(self):
pass
def load_dataset(self):
self.preprocess()
dataset_path = self._get_preprocessed_dataset_path()
dataset = pickle.load(dataset_path.open('rb'))
return dataset
def preprocess(self):
dataset_path = self._get_preprocessed_dataset_path()
if dataset_path.is_file():
print('Already preprocessed. Skip preprocessing')
return
if not dataset_path.parent.is_dir():
dataset_path.parent.mkdir(parents=True)
self.maybe_download_raw_dataset()
df = self.load_ratings_df()
df = self.make_implicit(df)
df = self.filter_triplets(df)
df, umap, smap = self.densify_index(df)
train, val, test = self.split_df(df, len(umap))
dataset = {'train': train,
'val': val,
'test': test,
'umap': umap,
'smap': smap}
with dataset_path.open('wb') as f:
pickle.dump(dataset, f)
def maybe_download_raw_dataset(self):
folder_path = self._get_rawdata_folder_path()
if folder_path.is_dir() and\
all(folder_path.joinpath(filename).is_file() for filename in self.all_raw_file_names()):
print('Raw data already exists. Skip downloading')
return
print("Raw file doesn't exist. Downloading...")
if self.is_zipfile():
tmproot = Path(tempfile.mkdtemp())
tmpzip = tmproot.joinpath('file.zip')
tmpfolder = tmproot.joinpath('folder')
download(self.url(), tmpzip)
unzip(tmpzip, tmpfolder)
if self.zip_file_content_is_folder():
tmpfolder = tmpfolder.joinpath(os.listdir(tmpfolder)[0])
shutil.move(tmpfolder, folder_path)
shutil.rmtree(tmproot)
print()
else:
tmproot = Path(tempfile.mkdtemp())
tmpfile = tmproot.joinpath('file')
download(self.url(), tmpfile)
folder_path.mkdir(parents=True)
shutil.move(tmpfile, folder_path.joinpath('ratings.csv'))
shutil.rmtree(tmproot)
print()
def make_implicit(self, df):
print('Turning into implicit ratings')
df = df[df['rating'] >= self.min_rating]
# return df[['uid', 'sid', 'timestamp']]
return df
def filter_triplets(self, df):
print('Filtering triplets')
if self.min_sc > 0:
item_sizes = df.groupby('sid').size()
good_items = item_sizes.index[item_sizes >= self.min_sc]
df = df[df['sid'].isin(good_items)]
if self.min_uc > 0:
user_sizes = df.groupby('uid').size()
good_users = user_sizes.index[user_sizes >= self.min_uc]
df = df[df['uid'].isin(good_users)]
return df
def densify_index(self, df):
print('Densifying index')
umap = {u: i for i, u in enumerate(set(df['uid']))}
smap = {s: i for i, s in enumerate(set(df['sid']))}
df['uid'] = df['uid'].map(umap)
df['sid'] = df['sid'].map(smap)
return df, umap, smap
def split_df(self, df, user_count):
if self.args.split == 'leave_one_out':
print('Splitting')
user_group = df.groupby('uid')
user2items = user_group.progress_apply(lambda d: list(d.sort_values(by='timestamp')['sid']))
train, val, test = {}, {}, {}
for user in range(user_count):
items = user2items[user]
train[user], val[user], test[user] = items[:-2], items[-2:-1], items[-1:]
return train, val, test
elif self.args.split == 'holdout':
print('Splitting')
np.random.seed(self.args.dataset_split_seed)
eval_set_size = self.args.eval_set_size
# Generate user indices
permuted_index = np.random.permutation(user_count)
train_user_index = permuted_index[ :-2*eval_set_size]
val_user_index = permuted_index[-2*eval_set_size: -eval_set_size]
test_user_index = permuted_index[ -eval_set_size: ]
# Split DataFrames
train_df = df.loc[df['uid'].isin(train_user_index)]
val_df = df.loc[df['uid'].isin(val_user_index)]
test_df = df.loc[df['uid'].isin(test_user_index)]
# DataFrame to dict => {uid : list of sid's}
train = dict(train_df.groupby('uid').progress_apply(lambda d: list(d['sid'])))
val = dict(val_df.groupby('uid').progress_apply(lambda d: list(d['sid'])))
test = dict(test_df.groupby('uid').progress_apply(lambda d: list(d['sid'])))
return train, val, test
else:
raise NotImplementedError
def _get_rawdata_root_path(self):
return Path(RAW_DATASET_ROOT_FOLDER)
def _get_rawdata_folder_path(self):
root = self._get_rawdata_root_path()
return root.joinpath(self.raw_code())
def _get_preprocessed_root_path(self):
root = self._get_rawdata_root_path()
return root.joinpath('preprocessed')
def _get_preprocessed_folder_path(self):
preprocessed_root = self._get_preprocessed_root_path()
folder_name = '{}_min_rating{}-min_uc{}-min_sc{}-split{}' \
.format(self.code(), self.min_rating, self.min_uc, self.min_sc, self.split)
return preprocessed_root.joinpath(folder_name)
def _get_preprocessed_dataset_path(self):
folder = self._get_preprocessed_folder_path()
return folder.joinpath('dataset.pkl')
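The leave-one-out branch of `split_df` above reserves each user's last two interactions (in timestamp order) for validation and test. A minimal standalone sketch of that slicing, with toy data made up for illustration:

```python
# Leave-one-out split: last item -> test, second-to-last -> validation,
# everything earlier -> train. Mirrors items[:-2], items[-2:-1], items[-1:] above.
def leave_one_out(user2items):
    train, val, test = {}, {}, {}
    for user, items in user2items.items():
        train[user] = items[:-2]
        val[user] = items[-2:-1]
        test[user] = items[-1:]
    return train, val, test

# Toy example: one user who interacted with items 10..14 in timestamp order.
train, val, test = leave_one_out({0: [10, 11, 12, 13, 14]})
print(train[0], val[0], test[0])  # [10, 11, 12] [13] [14]
```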
# + [markdown] id="WO4WVLTg7f8V"
# ### ML1M class
# + id="ft3ctaOXGSrx"
class ML1MDataset(AbstractDataset):
@classmethod
def code(cls):
return 'ml-1m'
@classmethod
def url(cls):
return 'http://files.grouplens.org/datasets/movielens/ml-1m.zip'
@classmethod
def zip_file_content_is_folder(cls):
return True
@classmethod
def all_raw_file_names(cls):
return ['README',
'movies.dat',
'ratings.dat',
'users.dat']
def load_ratings_df(self):
folder_path = self._get_rawdata_folder_path()
file_path = folder_path.joinpath('ratings.dat')
        # The multi-character '::' separator requires the python parsing engine
        df = pd.read_csv(file_path, sep='::', header=None, engine='python')
df.columns = ['uid', 'sid', 'rating', 'timestamp']
return df
# + [markdown] id="uDMosTT57h0M"
# ### Manager
# + id="UgnbdrGxFh6B"
DATASETS = {
ML1MDataset.code(): ML1MDataset
}
def dataset_factory(args):
dataset = DATASETS[args.dataset_code]
return dataset(args)
# + [markdown] id="FU-1tB3SaK7u"
# ## Negative sampling
# + [markdown] id="WLQbqies7mEe"
# ### Abstract class
# + id="77GSVXs7IZgP"
class AbstractNegativeSampler(metaclass=ABCMeta):
def __init__(self, train, val, test, user_count, item_count, sample_size, seed, save_folder):
self.train = train
self.val = val
self.test = test
self.user_count = user_count
self.item_count = item_count
self.sample_size = sample_size
self.seed = seed
self.save_folder = save_folder
@classmethod
@abstractmethod
def code(cls):
pass
@abstractmethod
def generate_negative_samples(self):
pass
def get_negative_samples(self):
savefile_path = self._get_save_path()
if savefile_path.is_file():
            print('Negative samples exist. Loading.')
negative_samples = pickle.load(savefile_path.open('rb'))
return negative_samples
print("Negative samples don't exist. Generating.")
negative_samples = self.generate_negative_samples()
with savefile_path.open('wb') as f:
pickle.dump(negative_samples, f)
return negative_samples
def _get_save_path(self):
folder = Path(self.save_folder)
filename = '{}-sample_size{}-seed{}.pkl'.format(self.code(), self.sample_size, self.seed)
return folder.joinpath(filename)
# + [markdown] id="sm1cpuBx7pVU"
# ### Random negative sampling
# + id="vuvvsiHFIgnP"
class RandomNegativeSampler(AbstractNegativeSampler):
@classmethod
def code(cls):
return 'random'
def generate_negative_samples(self):
assert self.seed is not None, 'Specify seed for random sampling'
np.random.seed(self.seed)
negative_samples = {}
print('Sampling negative items')
for user in trange(self.user_count):
if isinstance(self.train[user][1], tuple):
seen = set(x[0] for x in self.train[user])
seen.update(x[0] for x in self.val[user])
seen.update(x[0] for x in self.test[user])
else:
seen = set(self.train[user])
seen.update(self.val[user])
seen.update(self.test[user])
samples = []
for _ in range(self.sample_size):
item = np.random.choice(self.item_count) + 1
while item in seen or item in samples:
item = np.random.choice(self.item_count) + 1
samples.append(item)
negative_samples[user] = samples
return negative_samples
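The rejection loop in `generate_negative_samples` keeps drawing until it finds an item the user has never interacted with and has not already been sampled. A stripped-down sketch with the stdlib `random` module (item ids assumed to run from 1 to `item_count`, as in the code above):

```python
import random

def sample_negatives(seen, item_count, sample_size, rng):
    # Draw until we have `sample_size` distinct items outside `seen`.
    samples = []
    while len(samples) < sample_size:
        item = rng.randint(1, item_count)  # ids are 1..item_count
        if item not in seen and item not in samples:
            samples.append(item)
    return samples

rng = random.Random(0)
negs = sample_negatives(seen={1, 2, 3}, item_count=10, sample_size=4, rng=rng)
assert len(set(negs)) == 4 and not set(negs) & {1, 2, 3}
```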
# + [markdown] id="ZzZe59fr7s-B"
# ### Manager
# + id="tYSSmJg2IZdu"
NEGATIVE_SAMPLERS = {
RandomNegativeSampler.code(): RandomNegativeSampler,
}
def negative_sampler_factory(code, train, val, test, user_count, item_count, sample_size, seed, save_folder):
negative_sampler = NEGATIVE_SAMPLERS[code]
return negative_sampler(train, val, test, user_count, item_count, sample_size, seed, save_folder)
# + [markdown] id="F-RX320PaV6e"
# ## Dataloader
# + [markdown] id="f9t02l5u7wu8"
# ### Abstract class
# + id="z6c630DnIJLm"
class AbstractDataloader(metaclass=ABCMeta):
def __init__(self, args, dataset):
self.args = args
seed = args.dataloader_random_seed
self.rng = random.Random(seed)
self.save_folder = dataset._get_preprocessed_folder_path()
dataset = dataset.load_dataset()
self.train = dataset['train']
self.val = dataset['val']
self.test = dataset['test']
self.umap = dataset['umap']
self.smap = dataset['smap']
self.user_count = len(self.umap)
self.item_count = len(self.smap)
@classmethod
@abstractmethod
def code(cls):
pass
@abstractmethod
def get_pytorch_dataloaders(self):
pass
# + [markdown] id="3dqOj8Np71bf"
# ### BERT dataloader
# + id="fqLUnykjIJIT"
class BertDataloader(AbstractDataloader):
def __init__(self, args, dataset):
super().__init__(args, dataset)
args.num_items = len(self.smap)
self.max_len = args.bert_max_len
self.mask_prob = args.bert_mask_prob
self.CLOZE_MASK_TOKEN = self.item_count + 1
code = args.train_negative_sampler_code
train_negative_sampler = negative_sampler_factory(code, self.train, self.val, self.test,
self.user_count, self.item_count,
args.train_negative_sample_size,
args.train_negative_sampling_seed,
self.save_folder)
code = args.test_negative_sampler_code
test_negative_sampler = negative_sampler_factory(code, self.train, self.val, self.test,
self.user_count, self.item_count,
args.test_negative_sample_size,
args.test_negative_sampling_seed,
self.save_folder)
self.train_negative_samples = train_negative_sampler.get_negative_samples()
self.test_negative_samples = test_negative_sampler.get_negative_samples()
@classmethod
def code(cls):
return 'bert'
def get_pytorch_dataloaders(self):
train_loader = self._get_train_loader()
val_loader = self._get_val_loader()
test_loader = self._get_test_loader()
return train_loader, val_loader, test_loader
def _get_train_loader(self):
dataset = self._get_train_dataset()
dataloader = data_utils.DataLoader(dataset, batch_size=self.args.train_batch_size,
shuffle=True, pin_memory=True)
return dataloader
def _get_train_dataset(self):
dataset = BertTrainDataset(self.train, self.max_len, self.mask_prob, self.CLOZE_MASK_TOKEN, self.item_count, self.rng)
return dataset
def _get_val_loader(self):
return self._get_eval_loader(mode='val')
def _get_test_loader(self):
return self._get_eval_loader(mode='test')
def _get_eval_loader(self, mode):
batch_size = self.args.val_batch_size if mode == 'val' else self.args.test_batch_size
dataset = self._get_eval_dataset(mode)
dataloader = data_utils.DataLoader(dataset, batch_size=batch_size,
shuffle=False, pin_memory=True)
return dataloader
def _get_eval_dataset(self, mode):
answers = self.val if mode == 'val' else self.test
dataset = BertEvalDataset(self.train, answers, self.max_len, self.CLOZE_MASK_TOKEN, self.test_negative_samples)
return dataset
class BertTrainDataset(data_utils.Dataset):
def __init__(self, u2seq, max_len, mask_prob, mask_token, num_items, rng):
self.u2seq = u2seq
self.users = sorted(self.u2seq.keys())
self.max_len = max_len
self.mask_prob = mask_prob
self.mask_token = mask_token
self.num_items = num_items
self.rng = rng
def __len__(self):
return len(self.users)
def __getitem__(self, index):
user = self.users[index]
seq = self._getseq(user)
tokens = []
labels = []
for s in seq:
prob = self.rng.random()
if prob < self.mask_prob:
prob /= self.mask_prob
if prob < 0.8:
tokens.append(self.mask_token)
elif prob < 0.9:
tokens.append(self.rng.randint(1, self.num_items))
else:
tokens.append(s)
labels.append(s)
else:
tokens.append(s)
labels.append(0)
tokens = tokens[-self.max_len:]
labels = labels[-self.max_len:]
mask_len = self.max_len - len(tokens)
tokens = [0] * mask_len + tokens
labels = [0] * mask_len + labels
return torch.LongTensor(tokens), torch.LongTensor(labels)
def _getseq(self, user):
return self.u2seq[user]
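The masking in `__getitem__` above follows BERT's cloze scheme: each position is selected with probability `mask_prob`, and a selected position becomes the mask token 80% of the time, a random item 10% of the time, and stays unchanged 10% of the time. A sketch of just that logic (minus truncation and left-padding), on a hypothetical toy sequence:

```python
import random

def cloze_mask(seq, mask_prob, mask_token, num_items, rng):
    # Same 80/10/10 scheme as BertTrainDataset.__getitem__.
    tokens, labels = [], []
    for s in seq:
        prob = rng.random()
        if prob < mask_prob:
            prob /= mask_prob  # rescale to [0, 1) for the 80/10/10 split
            if prob < 0.8:
                tokens.append(mask_token)
            elif prob < 0.9:
                tokens.append(rng.randint(1, num_items))
            else:
                tokens.append(s)
            labels.append(s)   # loss is computed only at selected positions
        else:
            tokens.append(s)
            labels.append(0)   # 0 is skipped via CrossEntropyLoss(ignore_index=0)
    return tokens, labels

tokens, labels = cloze_mask([5, 6, 7, 8], mask_prob=1.0, mask_token=99,
                            num_items=10, rng=random.Random(0))
assert labels == [5, 6, 7, 8]  # with mask_prob=1.0 every position gets a label
```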
class BertEvalDataset(data_utils.Dataset):
def __init__(self, u2seq, u2answer, max_len, mask_token, negative_samples):
self.u2seq = u2seq
self.users = sorted(self.u2seq.keys())
self.u2answer = u2answer
self.max_len = max_len
self.mask_token = mask_token
self.negative_samples = negative_samples
def __len__(self):
return len(self.users)
def __getitem__(self, index):
user = self.users[index]
seq = self.u2seq[user]
answer = self.u2answer[user]
negs = self.negative_samples[user]
candidates = answer + negs
labels = [1] * len(answer) + [0] * len(negs)
seq = seq + [self.mask_token]
seq = seq[-self.max_len:]
padding_len = self.max_len - len(seq)
seq = [0] * padding_len + seq
return torch.LongTensor(seq), torch.LongTensor(candidates), torch.LongTensor(labels)
# + [markdown] id="UB5TW2DJ745i"
# ### Manager
# + id="vljL5_uSGmB3"
DATALOADERS = {
BertDataloader.code(): BertDataloader,
}
def dataloader_factory(args):
dataset = dataset_factory(args)
dataloader = DATALOADERS[args.dataloader_code]
dataloader = dataloader(args, dataset)
train, val, test = dataloader.get_pytorch_dataloaders()
return train, val, test
# + [markdown] id="v3r-1B3Oaiq7"
# ## Model
# + id="k3NFXrgtJBXj"
class LayerNorm(nn.Module):
"Construct a layernorm module (See citation for details)."
def __init__(self, features, eps=1e-6):
super(LayerNorm, self).__init__()
self.a_2 = nn.Parameter(torch.ones(features))
self.b_2 = nn.Parameter(torch.zeros(features))
self.eps = eps
def forward(self, x):
mean = x.mean(-1, keepdim=True)
std = x.std(-1, keepdim=True)
return self.a_2 * (x - mean) / (std + self.eps) + self.b_2
class SublayerConnection(nn.Module):
"""
A residual connection followed by a layer norm.
Note for code simplicity the norm is first as opposed to last.
"""
def __init__(self, size, dropout):
super(SublayerConnection, self).__init__()
self.norm = LayerNorm(size)
self.dropout = nn.Dropout(dropout)
def forward(self, x, sublayer):
"Apply residual connection to any sublayer with the same size."
return x + self.dropout(sublayer(self.norm(x)))
class GELU(nn.Module):
"""
    Paper Section 3.4, last paragraph: note that BERT uses the GELU activation instead of ReLU
"""
def forward(self, x):
return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
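The class above implements the tanh approximation of GELU. For reference, the exact GELU is x·Φ(x), with Φ the standard normal CDF; a quick stdlib check that the approximation stays close to the exact form:

```python
import math

def gelu_tanh(x):
    # Tanh approximation used by the GELU module above.
    return 0.5 * x * (1 + math.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * x ** 3)))

def gelu_exact(x):
    # Exact form: x * Phi(x), with Phi the standard normal CDF via erf.
    return 0.5 * x * (1 + math.erf(x / math.sqrt(2)))

for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    assert abs(gelu_tanh(x) - gelu_exact(x)) < 1e-3
```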
class PositionwiseFeedForward(nn.Module):
"Implements FFN equation."
def __init__(self, d_model, d_ff, dropout=0.1):
super(PositionwiseFeedForward, self).__init__()
self.w_1 = nn.Linear(d_model, d_ff)
self.w_2 = nn.Linear(d_ff, d_model)
self.dropout = nn.Dropout(dropout)
self.activation = GELU()
def forward(self, x):
return self.w_2(self.dropout(self.activation(self.w_1(x))))
# + id="RLgZ4CpJJBRU"
class Attention(nn.Module):
"""
    Compute 'Scaled Dot Product Attention'
"""
def forward(self, query, key, value, mask=None, dropout=None):
scores = torch.matmul(query, key.transpose(-2, -1)) \
/ math.sqrt(query.size(-1))
if mask is not None:
scores = scores.masked_fill(mask == 0, -1e9)
p_attn = F.softmax(scores, dim=-1)
if dropout is not None:
p_attn = dropout(p_attn)
return torch.matmul(p_attn, value), p_attn
class MultiHeadedAttention(nn.Module):
"""
Take in model size and number of heads.
"""
def __init__(self, h, d_model, dropout=0.1):
super().__init__()
assert d_model % h == 0
# We assume d_v always equals d_k
self.d_k = d_model // h
self.h = h
self.linear_layers = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(3)])
self.output_linear = nn.Linear(d_model, d_model)
self.attention = Attention()
self.dropout = nn.Dropout(p=dropout)
def forward(self, query, key, value, mask=None):
batch_size = query.size(0)
# 1) Do all the linear projections in batch from d_model => h x d_k
query, key, value = [l(x).view(batch_size, -1, self.h, self.d_k).transpose(1, 2)
for l, x in zip(self.linear_layers, (query, key, value))]
# 2) Apply attention on all the projected vectors in batch.
x, attn = self.attention(query, key, value, mask=mask, dropout=self.dropout)
# 3) "Concat" using a view and apply a final linear.
x = x.transpose(1, 2).contiguous().view(batch_size, -1, self.h * self.d_k)
return self.output_linear(x)
# + id="sQlJoNe-JcX4"
class PositionalEmbedding(nn.Module):
def __init__(self, max_len, d_model):
super().__init__()
        # Learned positional embeddings (note: not the fixed sinusoidal encoding)
self.pe = nn.Embedding(max_len, d_model)
def forward(self, x):
batch_size = x.size(0)
return self.pe.weight.unsqueeze(0).repeat(batch_size, 1, 1)
class SegmentEmbedding(nn.Embedding):
def __init__(self, embed_size=512):
super().__init__(3, embed_size, padding_idx=0)
class TokenEmbedding(nn.Embedding):
def __init__(self, vocab_size, embed_size=512):
super().__init__(vocab_size, embed_size, padding_idx=0)
class BERTEmbedding(nn.Module):
"""
    BERT Embedding, composed of the following features:
        1. TokenEmbedding : normal embedding matrix
        2. PositionalEmbedding : learned positional information
        3. SegmentEmbedding : sentence segment info (sent_A:1, sent_B:2) -- unused here
    The sum of these features is the output of BERTEmbedding.
"""
def __init__(self, vocab_size, embed_size, max_len, dropout=0.1):
"""
:param vocab_size: total vocab size
:param embed_size: embedding size of token embedding
:param dropout: dropout rate
"""
super().__init__()
self.token = TokenEmbedding(vocab_size=vocab_size, embed_size=embed_size)
self.position = PositionalEmbedding(max_len=max_len, d_model=embed_size)
# self.segment = SegmentEmbedding(embed_size=self.token.embedding_dim)
self.dropout = nn.Dropout(p=dropout)
self.embed_size = embed_size
def forward(self, sequence):
x = self.token(sequence) + self.position(sequence) # + self.segment(segment_label)
return self.dropout(x)
# + id="6MUoT1WTKEPF"
class TransformerBlock(nn.Module):
"""
Bidirectional Encoder = Transformer (self-attention)
Transformer = MultiHead_Attention + Feed_Forward with sublayer connection
"""
def __init__(self, hidden, attn_heads, feed_forward_hidden, dropout):
"""
:param hidden: hidden size of transformer
:param attn_heads: head sizes of multi-head attention
:param feed_forward_hidden: feed_forward_hidden, usually 4*hidden_size
:param dropout: dropout rate
"""
super().__init__()
self.attention = MultiHeadedAttention(h=attn_heads, d_model=hidden, dropout=dropout)
self.feed_forward = PositionwiseFeedForward(d_model=hidden, d_ff=feed_forward_hidden, dropout=dropout)
self.input_sublayer = SublayerConnection(size=hidden, dropout=dropout)
self.output_sublayer = SublayerConnection(size=hidden, dropout=dropout)
self.dropout = nn.Dropout(p=dropout)
def forward(self, x, mask):
x = self.input_sublayer(x, lambda _x: self.attention.forward(_x, _x, _x, mask=mask))
x = self.output_sublayer(x, self.feed_forward)
return self.dropout(x)
# + id="exnahi0aJ9zn"
class BERT(nn.Module):
def __init__(self, args):
super().__init__()
fix_random_seed_as(args.model_init_seed)
# self.init_weights()
max_len = args.bert_max_len
num_items = args.num_items
n_layers = args.bert_num_blocks
heads = args.bert_num_heads
vocab_size = num_items + 2
hidden = args.bert_hidden_units
self.hidden = hidden
dropout = args.bert_dropout
# embedding for BERT, sum of positional, segment, token embeddings
self.embedding = BERTEmbedding(vocab_size=vocab_size, embed_size=self.hidden, max_len=max_len, dropout=dropout)
# multi-layers transformer blocks, deep network
self.transformer_blocks = nn.ModuleList(
[TransformerBlock(hidden, heads, hidden * 4, dropout) for _ in range(n_layers)])
def forward(self, x):
mask = (x > 0).unsqueeze(1).repeat(1, x.size(1), 1).unsqueeze(1)
# embedding the indexed sequence to sequence of vectors
x = self.embedding(x)
# running over multiple transformer blocks
for transformer in self.transformer_blocks:
x = transformer.forward(x, mask)
return x
def init_weights(self):
pass
# + id="UMR-_ls5Itog"
class BaseModel(nn.Module, metaclass=ABCMeta):
def __init__(self, args):
super().__init__()
self.args = args
@classmethod
@abstractmethod
def code(cls):
pass
# + id="iY6_tI_nItmA"
class BERTModel(BaseModel):
def __init__(self, args):
super().__init__(args)
self.bert = BERT(args)
self.out = nn.Linear(self.bert.hidden, args.num_items + 1)
@classmethod
def code(cls):
return 'bert'
def forward(self, x):
x = self.bert(x)
return self.out(x)
# + id="WWexEPvmIti7"
MODELS = {
BERTModel.code(): BERTModel,
}
def model_factory(args):
model = MODELS[args.model_code]
return model(args)
# + [markdown] id="EDi_Pj_1amv4"
# ## Training
# + id="YD47UKG7KR6M"
class AbstractTrainer(metaclass=ABCMeta):
def __init__(self, args, model, train_loader, val_loader, test_loader, export_root):
self.args = args
self.device = args.device
self.model = model.to(self.device)
self.is_parallel = args.num_gpu > 1
if self.is_parallel:
self.model = nn.DataParallel(self.model)
self.train_loader = train_loader
self.val_loader = val_loader
self.test_loader = test_loader
self.optimizer = self._create_optimizer()
if args.enable_lr_schedule:
self.lr_scheduler = optim.lr_scheduler.StepLR(self.optimizer, step_size=args.decay_step, gamma=args.gamma)
self.num_epochs = args.num_epochs
self.metric_ks = args.metric_ks
self.best_metric = args.best_metric
self.export_root = export_root
self.writer, self.train_loggers, self.val_loggers = self._create_loggers()
self.add_extra_loggers()
self.logger_service = LoggerService(self.train_loggers, self.val_loggers)
self.log_period_as_iter = args.log_period_as_iter
@abstractmethod
def add_extra_loggers(self):
pass
@abstractmethod
def log_extra_train_info(self, log_data):
pass
@abstractmethod
def log_extra_val_info(self, log_data):
pass
@classmethod
@abstractmethod
def code(cls):
pass
@abstractmethod
def calculate_loss(self, batch):
pass
@abstractmethod
def calculate_metrics(self, batch):
pass
def train(self):
accum_iter = 0
self.validate(0, accum_iter)
for epoch in range(self.num_epochs):
accum_iter = self.train_one_epoch(epoch, accum_iter)
self.validate(epoch, accum_iter)
self.logger_service.complete({
'state_dict': (self._create_state_dict()),
})
self.writer.close()
def train_one_epoch(self, epoch, accum_iter):
self.model.train()
if self.args.enable_lr_schedule:
self.lr_scheduler.step()
average_meter_set = AverageMeterSet()
tqdm_dataloader = tqdm(self.train_loader)
for batch_idx, batch in enumerate(tqdm_dataloader):
batch_size = batch[0].size(0)
batch = [x.to(self.device) for x in batch]
self.optimizer.zero_grad()
loss = self.calculate_loss(batch)
loss.backward()
self.optimizer.step()
average_meter_set.update('loss', loss.item())
tqdm_dataloader.set_description(
'Epoch {}, loss {:.3f} '.format(epoch+1, average_meter_set['loss'].avg))
accum_iter += batch_size
if self._needs_to_log(accum_iter):
tqdm_dataloader.set_description('Logging to Tensorboard')
log_data = {
'state_dict': (self._create_state_dict()),
'epoch': epoch+1,
'accum_iter': accum_iter,
}
log_data.update(average_meter_set.averages())
self.log_extra_train_info(log_data)
self.logger_service.log_train(log_data)
return accum_iter
def validate(self, epoch, accum_iter):
self.model.eval()
average_meter_set = AverageMeterSet()
with torch.no_grad():
tqdm_dataloader = tqdm(self.val_loader)
for batch_idx, batch in enumerate(tqdm_dataloader):
batch = [x.to(self.device) for x in batch]
metrics = self.calculate_metrics(batch)
for k, v in metrics.items():
average_meter_set.update(k, v)
description_metrics = ['NDCG@%d' % k for k in self.metric_ks[:3]] +\
['Recall@%d' % k for k in self.metric_ks[:3]]
description = 'Val: ' + ', '.join(s + ' {:.3f}' for s in description_metrics)
description = description.replace('NDCG', 'N').replace('Recall', 'R')
description = description.format(*(average_meter_set[k].avg for k in description_metrics))
tqdm_dataloader.set_description(description)
log_data = {
'state_dict': (self._create_state_dict()),
'epoch': epoch+1,
'accum_iter': accum_iter,
}
log_data.update(average_meter_set.averages())
self.log_extra_val_info(log_data)
self.logger_service.log_val(log_data)
def test(self):
print('Test best model with test set!')
best_model = torch.load(os.path.join(self.export_root, 'models', 'best_acc_model.pth')).get('model_state_dict')
self.model.load_state_dict(best_model)
self.model.eval()
average_meter_set = AverageMeterSet()
with torch.no_grad():
tqdm_dataloader = tqdm(self.test_loader)
for batch_idx, batch in enumerate(tqdm_dataloader):
batch = [x.to(self.device) for x in batch]
metrics = self.calculate_metrics(batch)
for k, v in metrics.items():
average_meter_set.update(k, v)
description_metrics = ['NDCG@%d' % k for k in self.metric_ks[:3]] +\
['Recall@%d' % k for k in self.metric_ks[:3]]
                description = 'Test: ' + ', '.join(s + ' {:.3f}' for s in description_metrics)
description = description.replace('NDCG', 'N').replace('Recall', 'R')
description = description.format(*(average_meter_set[k].avg for k in description_metrics))
tqdm_dataloader.set_description(description)
average_metrics = average_meter_set.averages()
with open(os.path.join(self.export_root, 'logs', 'test_metrics.json'), 'w') as f:
json.dump(average_metrics, f, indent=4)
print(average_metrics)
def _create_optimizer(self):
args = self.args
if args.optimizer.lower() == 'adam':
return optim.Adam(self.model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
elif args.optimizer.lower() == 'sgd':
return optim.SGD(self.model.parameters(), lr=args.lr, weight_decay=args.weight_decay, momentum=args.momentum)
else:
raise ValueError
def _create_loggers(self):
root = Path(self.export_root)
writer = SummaryWriter(root.joinpath('logs'))
model_checkpoint = root.joinpath('models')
train_loggers = [
MetricGraphPrinter(writer, key='epoch', graph_name='Epoch', group_name='Train'),
MetricGraphPrinter(writer, key='loss', graph_name='Loss', group_name='Train'),
]
val_loggers = []
for k in self.metric_ks:
val_loggers.append(
MetricGraphPrinter(writer, key='NDCG@%d' % k, graph_name='NDCG@%d' % k, group_name='Validation'))
val_loggers.append(
MetricGraphPrinter(writer, key='Recall@%d' % k, graph_name='Recall@%d' % k, group_name='Validation'))
val_loggers.append(RecentModelLogger(model_checkpoint))
val_loggers.append(BestModelLogger(model_checkpoint, metric_key=self.best_metric))
return writer, train_loggers, val_loggers
def _create_state_dict(self):
return {
STATE_DICT_KEY: self.model.module.state_dict() if self.is_parallel else self.model.state_dict(),
OPTIMIZER_STATE_DICT_KEY: self.optimizer.state_dict(),
}
def _needs_to_log(self, accum_iter):
return accum_iter % self.log_period_as_iter < self.args.train_batch_size and accum_iter != 0
# + id="q3WwY7cIKR4L"
class BERTTrainer(AbstractTrainer):
def __init__(self, args, model, train_loader, val_loader, test_loader, export_root):
super().__init__(args, model, train_loader, val_loader, test_loader, export_root)
self.ce = nn.CrossEntropyLoss(ignore_index=0)
@classmethod
def code(cls):
return 'bert'
def add_extra_loggers(self):
pass
def log_extra_train_info(self, log_data):
pass
def log_extra_val_info(self, log_data):
pass
def calculate_loss(self, batch):
seqs, labels = batch
logits = self.model(seqs) # B x T x V
logits = logits.view(-1, logits.size(-1)) # (B*T) x V
labels = labels.view(-1) # B*T
loss = self.ce(logits, labels)
return loss
def calculate_metrics(self, batch):
seqs, candidates, labels = batch
scores = self.model(seqs) # B x T x V
scores = scores[:, -1, :] # B x V
scores = scores.gather(1, candidates) # B x C
metrics = recalls_and_ndcgs_for_ks(scores, labels, self.metric_ks)
return metrics
# + id="iM00f0PAKR1T"
TRAINERS = {
BERTTrainer.code(): BERTTrainer,
}
def trainer_factory(args, model, train_loader, val_loader, test_loader, export_root):
trainer = TRAINERS[args.trainer_code]
return trainer(args, model, train_loader, val_loader, test_loader, export_root)
# + colab={"base_uri": "https://localhost:8080/"} id="id-5bt57LmRW" outputId="98a71d76-b161-4f93-a682-806530d3c6ef"
def train():
export_root = setup_train(args)
train_loader, val_loader, test_loader = dataloader_factory(args)
model = model_factory(args)
trainer = trainer_factory(args, model, train_loader, val_loader, test_loader, export_root)
trainer.train()
test_model = (input('Test model with test dataset? y/[n]: ') == 'y')
if test_model:
trainer.test()
if __name__ == '__main__':
if args.mode == 'train':
train()
else:
raise ValueError('Invalid mode')
|
_notebooks/2022-01-19-bert4rec-movie.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
List1 = ['Nurul', 'Tauhid', 'Sohel', 'Tauhid', 0, 1, 2]
List1
Set = set(List1)
Set.add(12.2)
Set.remove(12.2)
Set1 = {1,2,2,"Tauhid"}
Set & Set1
Set.union(Set1)
Set1.issubset(Set)
Normal_set = set([1,2,'Tauhid'])
print(Normal_set)
Frozen_set = frozenset(['e', 'f', 2])
Frozen_set
# +
People = {'Tauhid', 'Bunty', 'Mubin', 'Tahi'}
print("People:", end="")
print(People)
People.add("Bappy")
for i in range(1, 6):
People.add(i)
print("\nAfter adding the elements:", end=" ")
print(People)
# -
p = {'A', 'B', 'C', 'D'}
q = {'A', 'Y', 'F', 'C'}
print(p.union(q))
print(p|q)
print(p.intersection(q))
print(p&q)
print(p.difference(q))
print(q-p)
print(p>q)
print(q>p)
print(p==q)
print(p>=q)
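Since the cells above contrast `set` and `frozenset`, one practical consequence is worth noting: only the immutable `frozenset` is hashable, so it can be used as a dictionary key or as a member of another set:

```python
fs = frozenset(['e', 'f', 2])
lookup = {fs: 'immutable sets can be dict keys'}
# Equality ignores order, so an equal frozenset finds the same entry.
assert lookup[frozenset([2, 'f', 'e'])] == 'immutable sets can be dict keys'

# A mutable set cannot be hashed:
try:
    {set([1, 2]): 'nope'}
except TypeError as e:
    print('unhashable:', e)
```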
|
Sets.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
from fuzzywuzzy import fuzz
from tqdm import tqdm
warnings.filterwarnings("ignore")
# %matplotlib inline
# -
df = pd.read_csv('../transfers1.2.csv')
df.head()
df.to_league_name.value_counts()
# +
# writing a loop to classify the clubs
PL = []
la_liga = []
serie_a = []
bundesliga = []
ligue_1 = []
premier_liga = []
liga_nos = []
eredivisie = []
for i in range(df.shape[0]):
if(df.to_league_name[i] == 'Premier League'):
PL.append(df.to_club_name[i])
elif(df.to_league_name[i] == 'Serie A'):
serie_a.append(df.to_club_name[i])
elif(df.to_league_name[i] == 'Primera Division'):
la_liga.append(df.to_club_name[i])
elif(df.to_league_name[i] == '1 Bundesliga'):
bundesliga.append(df.to_club_name[i])
elif(df.to_league_name[i] == 'Ligue 1'):
ligue_1.append(df.to_club_name[i])
elif(df.to_league_name[i] == 'Premier Liga'):
premier_liga.append(df.to_club_name[i])
elif(df.to_league_name[i] == 'Liga Nos'):
liga_nos.append(df.to_club_name[i])
else:
eredivisie.append(df.to_club_name[i])
#making all lists unique
PL = set(PL)
PL = list(PL)
la_liga = set(la_liga)
la_liga = list(la_liga)
serie_a = set(serie_a)
serie_a = list(serie_a)
bundesliga = set(bundesliga)
bundesliga = list(bundesliga)
ligue_1 = set(ligue_1)
ligue_1 = list(ligue_1)
premier_liga = set(premier_liga)
premier_liga = list(premier_liga)
liga_nos = set(liga_nos)
liga_nos = list(liga_nos)
eredivisie = set(eredivisie)
eredivisie = list(eredivisie)
df['normalized_from_club_name'] = 0
# -
# I tried to run all the following loops in one single loop, but it was too heavy computationally, hence I broke it down for each league.
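The seven near-identical loops differ only in the candidate club list and the league label, so the matching step itself can be folded into one helper. A sketch using `difflib.SequenceMatcher` from the stdlib as a stand-in for `fuzz.partial_ratio` (the behavior is not identical — `partial_ratio` matches substrings — but the 0–100 scoring and the threshold of 80 mirror the loops below; the club names here are made up):

```python
from difflib import SequenceMatcher

def best_match(name, candidates, threshold=80):
    # Return (best_candidate, score) if the best fuzzy score clears the
    # threshold, else (None, score). Scores scaled to 0-100 like fuzzywuzzy.
    scores = [SequenceMatcher(None, name.lower(), c.lower()).ratio() * 100
              for c in candidates]
    best = max(scores)
    if best > threshold:
        return candidates[scores.index(best)], best
    return None, best

club, score = best_match('Chelsea FC', ['Chelsea', 'Arsenal'])
assert club == 'Chelsea'
```

The same helper can then be called once per league list (`PL`, `serie_a`, ...) inside a single loop over the rows, instead of repeating the loop body per league.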
# +
# Premier League
from_league = []
for i in tqdm(range(df.shape[0])):
ratio_pl = []
for j in range(len(PL)):
ratio_pl.append(fuzz.partial_ratio(df.from_club_involved_name[i].lower(), PL[j].lower()))
ratio = max(ratio_pl)
if(ratio>80):
val = ratio_pl.index(ratio)
df['normalized_from_club_name'][i] = PL[val]
from_league.append('Premier League')
else:
from_league.append('Other')
df['from_league'] = from_league
# +
# Serie A
for i in tqdm(range(df.shape[0])):
if(df['from_league'][i] == 'Other'):
ratio_sa = []
for k in range(len(serie_a)):
ratio_sa.append(fuzz.partial_ratio(df.from_club_involved_name[i].lower(), serie_a[k].lower()))
# print(ratio_sa)
ratio = max(ratio_sa)
if(ratio>80):
val = ratio_sa.index(ratio)
df['normalized_from_club_name'][i] = serie_a[val]
df['from_league'][i] = 'Serie A'
        # no match above the threshold: this row's from_league stays 'Other'
        # (appending to the already-consumed from_league list here was a bug)
# +
# Bundesliga
for i in tqdm(range(df.shape[0])):
if(df['from_league'][i] == 'Other'):
ratio_bl = []
for m in range(len(bundesliga)):
ratio_bl.append(fuzz.partial_ratio(df.from_club_involved_name[i].lower(), bundesliga[m].lower()))
ratio = max(ratio_bl)
if(ratio>80):
val = ratio_bl.index(ratio)
df['normalized_from_club_name'][i] = bundesliga[val]
df['from_league'][i]= '1 Bundesliga'
        # no match above the threshold: this row's from_league stays 'Other'
# +
# Ligue 1
for i in tqdm(range(df.shape[0])):
if(df['from_league'][i] == 'Other'):
ratio_l = []
for m in range(len(ligue_1)):
ratio_l.append(fuzz.partial_ratio(df.from_club_involved_name[i].lower(), ligue_1[m].lower()))
ratio = max(ratio_l)
if(ratio>80):
val = ratio_l.index(ratio)
df['normalized_from_club_name'][i] = ligue_1[val]
df['from_league'][i]= 'Ligue 1'
        # no match above the threshold: this row's from_league stays 'Other'
# +
# Premier Liga
for i in tqdm(range(df.shape[0])):
if(df['from_league'][i] == 'Other'):
ratio_l = []
for m in range(len(premier_liga)):
ratio_l.append(fuzz.partial_ratio(df.from_club_involved_name[i].lower(), premier_liga[m].lower()))
ratio = max(ratio_l)
if(ratio>80):
val = ratio_l.index(ratio)
df['normalized_from_club_name'][i] = premier_liga[val]
df['from_league'][i]= 'Premier Liga'
        # no match above the threshold: this row's from_league stays 'Other'
# +
# Liga Nos
for i in tqdm(range(df.shape[0])):
if(df['from_league'][i] == 'Other'):
ratio_l = []
for m in range(len(liga_nos)):
ratio_l.append(fuzz.partial_ratio(df.from_club_involved_name[i].lower(), liga_nos[m].lower()))
ratio = max(ratio_l)
if(ratio>80):
val = ratio_l.index(ratio)
df['normalized_from_club_name'][i] = liga_nos[val]
df['from_league'][i]= 'Liga Nos'
        # no match above the threshold: this row's from_league stays 'Other'
# +
# Eredivisie
for i in tqdm(range(df.shape[0])):
if(df['from_league'][i] == 'Other'):
ratio_l = []
for m in range(len(eredivisie)):
ratio_l.append(fuzz.partial_ratio(df.from_club_involved_name[i].lower(), eredivisie[m].lower()))
ratio = max(ratio_l)
if(ratio>80):
val = ratio_l.index(ratio)
df['normalized_from_club_name'][i] = eredivisie[val]
df['from_league'][i]= 'Eredivisie'
        # no match above the threshold: this row's from_league stays 'Other'
# +
# Primera Division
for i in tqdm(range(df.shape[0])):
if(df['from_league'][i] == 'Other'):
ratio_l = []
for m in range(len(la_liga)):
ratio_l.append(fuzz.partial_ratio(df.from_club_involved_name[i].lower(), la_liga[m].lower()))
ratio = max(ratio_l)
if(ratio>80):
val = ratio_l.index(ratio)
df['normalized_from_club_name'][i] = la_liga[val]
df['from_league'][i]= 'Primera Division'
        # no match above the threshold: this row's from_league stays 'Other'
# -
df['from_league'].value_counts()
df.head()
df.to_csv('../transfers1.3.csv')
|
projects/causal moneyball/Causal-analysis-on-football-transfer-prices/Pre-processing notebooks/generating_from_league.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
class Business(object):
def __init__(self, name, location, id):
self.name = name
self.location = location
self.id = id
class Chain(object):
def __init__(self, name, frequency):
self.name = name
self.frequency = frequency
def __repr__(self):
return ''.join([self.name, ' ', str(self.frequency)])
# -
from collections import defaultdict
def detect_and_order_chain_businesses(businesses, location):
valid = defaultdict(list)
for business in businesses:
if business.location == location:
if business.id not in valid[business.name]:
valid[business.name].append(business.id)
chains = []
for key in valid.keys():
chains.append(Chain(key, len(set(valid[key]))))
chains.sort(key = lambda x: x.name)
chains.sort(key = lambda x: -1*x.frequency)
return chains
businesses = [Business('Whole-Food', 'San Francisco', 103), Business('Whole-Food', 'Berkeley', 103),
Business('Whole-Food', 'San Francisco', 105), Business('Sprouts', 'San Francisco', 103),
Business('Sprouts', 'San Francisco', 109), Business('Sprouts', 'San Francisco', 109),
Business('Trader-Joes', 'San Francisco', 109), Business('Trader-Joes', 'San Francisco', 109)]
detect_and_order_chain_businesses(businesses, 'San Francisco')
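# The two consecutive `sort` calls in `detect_and_order_chain_businesses` rely on Python's guarantee that `list.sort` is stable: sorting by name first and then by descending frequency leaves equal-frequency chains in alphabetical order. A small illustration:

```python
# list.sort is guaranteed stable, so sorting by name first and then by
# descending frequency keeps ties (equal frequencies) in alphabetical order.
pairs = [('b', 2), ('a', 1), ('c', 2), ('d', 1)]
pairs.sort(key=lambda p: p[0])   # alphabetical
pairs.sort(key=lambda p: -p[1])  # most frequent first; ties stay alphabetical
print(pairs)  # [('b', 2), ('c', 2), ('a', 1), ('d', 1)]
```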
|
ds_and_alg/.ipynb_checkpoints/Identify Chain Businesses-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ## Tox 21 dataset exploration
#
#
# The purpose of this notebook is as follows,
#
# - Explain [Tox21 dataset](https://tripod.nih.gov/tox21/challenge/): Check the labels and visualization of molecules to understand what kind of data are stored.
# - Explain internal structure of tox21 dataset in `chainer_chemistry`: We handle the dataset with `NumpyTupleDataset`.
# - Explain how `preprocessor` and `parser` work on `chainer_chemistry`: One concrete example using `NFPPreprocessor` is explained.
#
# It is out of the scope of this notebook to explain how to train a graph convolutional network using this dataset; please refer to the [document tutorial](http://chainer-chemistry.readthedocs.io/en/latest/tutorial.html#) or try `train_tox21.py` in the [tox21 example](https://github.com/pfnet-research/chainer-chemistry/tree/master/examples/tox21) for model training.
#
# [Note]
# This notebook was executed on March 1, 2018.
# The behavior of the tox21 dataset in `chainer_chemistry` might change in the future.
# Load modules and set the log level:
# +
import logging
from rdkit import RDLogger
from chainer_chemistry import datasets
# Disable errors by RDKit occurred in preprocessing Tox21 dataset.
lg = RDLogger.logger()
lg.setLevel(RDLogger.CRITICAL)
# show INFO level log from chainer chemistry
logging.basicConfig(level=logging.INFO)
# -
# The Tox21 dataset consists of train/validation/test splits, which can be downloaded automatically with chainer chemistry.
# The tox21 dataset is distributed as "sdf" files.
# You can check the file path of the downloaded sdf file with the `get_tox21_filepath` method.
# +
train_filepath = datasets.get_tox21_filepath('train')
val_filepath = datasets.get_tox21_filepath('val')
test_filepath = datasets.get_tox21_filepath('test')
print('train_filepath =', train_filepath)
print('val_filepath =', val_filepath)
print('test_filepath =', test_filepath)
# -
# The dataset contains 12 types of toxicity; the label names can be checked with the `get_tox21_label_names` method.
#
label_names = datasets.get_tox21_label_names()
print('tox21 label_names =', label_names)
# ### Preprocessing dataset
#
# Dataset extraction depends on the preprocessing method, which is determined by `preprocessor`.
#
# Here, let's look at an example of using the `NFPPreprocessor` for tox21 dataset extraction.
#
# Procedure is as follows,
#
# 1. Instantiate `preprocessor` (here `NFPPreprocessor` is used).
# 2. call `get_tox21` method with `preprocessor`.
# - The `labels=None` option extracts all labels; in this case, the 12 toxicity labels listed above are extracted.
#
# [Note]
# - `return_smiles` option can be used to get SMILES information together with the dataset itself.
# - Preprocessing results depend on the RDKit version.
# You might get different results due to differences in RDKit behavior between versions.
# +
import rdkit
print('RDKit version: ', rdkit.__version__)
# +
from chainer_chemistry.dataset.preprocessors.nfp_preprocessor import \
NFPPreprocessor
preprocessor = NFPPreprocessor()
train, val, test, train_smiles, val_smiles, test_smiles = datasets.get_tox21(preprocessor, labels=None, return_smiles=True)
# -
# Dataset extraction depends on the `preprocessor`, and you may use other type of `preprocessor` as well.
#
# Below is another example, using `GGNNPreprocessor` for dataset extraction. Since it takes a little time, you can skip it for the rest of this tutorial.
# +
from chainer_chemistry.dataset.preprocessors.ggnn_preprocessor import \
GGNNPreprocessor
# uncomment it if you want to try `GGNNPreprocessor`
ggnn_preprocessor = GGNNPreprocessor()
results = datasets.get_tox21(ggnn_preprocessor, labels=None, return_smiles=True)
train_ggnn, val_ggnn, test_ggnn, train_smiles_ggnn, val_smiles_ggnn, test_smiles_ggnn = results
# -
# ### Check extracted dataset
#
# First, let's check the number of entries in the train/validation/test datasets.
# +
print('dataset information...')
print('train', type(train), len(train))
print('val', type(val), len(val))
print('test', type(test), len(test))
print('smiles information...')
print('train_smiles', type(train_smiles), len(train_smiles))
# -
# There are 11757 entries in `train`, 295 in `val` and 645 in `test`, respectively.
# (You might get different numbers with a different version of `rdkit`.)
# Each dataset is an instance of `NumpyTupleDataset`, whose i-th entry's features can be accessed as `dataset[i]`.
#
# When `NFPPreprocessor` is used, each entry consists of the following features:
# 1. atom feature: the atomic numbers of the molecule's atoms.
# 2. adjacency matrix feature: the adjacency matrix of the molecule.
# 3. label feature: the toxicity labels of the molecule.
# Here, 0 indicates negative (non-toxic), 1 indicates positive (toxic) and -1 indicates the data is not available.
# Let's look at an example, the 6th entry of the train dataset:
# +
index = 6
print('index={}, SMILES={}'.format(index, train_smiles[index]))
atom, adj, labels = train[index]
# This molecule has N=12 atoms.
print('atom', atom.shape, atom)
# adjacency matrix is NxN matrix, where N is number of atoms in the molecule.
# Unlike usual adjacency matrix, diagonal elements are filled with 1, for NFP calculation purpose.
print('adj', adj.shape)
print(adj)
print('labels', labels)
# -
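# Since a label of -1 means the assay was not measured, any statistic over the labels should mask those entries first. A small sketch with a hypothetical label vector:

```python
import numpy as np

# Hypothetical label vector in the tox21 convention:
# 0 = negative, 1 = positive, -1 = measurement not available.
labels = np.array([0, 1, -1, 1, -1, 0])
mask = labels != -1          # keep only the measured assays
print(mask.sum())            # 4 measured labels
print(labels[mask].mean())   # positive rate over measured labels: 0.5
```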
# ### Visualizing the molecule
#
# One might want to visualize a molecule given its SMILES string.
# Here is an example:
#
# +
# This script is referred from http://rdkit.blogspot.jp/2015/02/new-drawing-code.html
# and http://cheminformist.itmol.com/TEST/wp-content/uploads/2015/07/rdkit_moldraw2d_2.html
from __future__ import print_function
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from IPython.display import SVG
from rdkit.Chem import rdDepictor
from rdkit.Chem.Draw import rdMolDraw2D
def moltosvg(mol,molSize=(450,150),kekulize=True):
mc = Chem.Mol(mol.ToBinary())
if kekulize:
try:
Chem.Kekulize(mc)
except Exception:
mc = Chem.Mol(mol.ToBinary())
if not mc.GetNumConformers():
rdDepictor.Compute2DCoords(mc)
drawer = rdMolDraw2D.MolDraw2DSVG(molSize[0],molSize[1])
drawer.DrawMolecule(mc)
drawer.FinishDrawing()
svg = drawer.GetDrawingText()
return svg
def render_svg(svg):
# It seems that the svg renderer used doesn't quite hit the spec.
# Here are some fixes to make it work in the notebook, although I think
# the underlying issue needs to be resolved at the generation step
return SVG(svg.replace('svg:',''))
# +
smiles = train_smiles[index]
mol = Chem.MolFromSmiles(train_smiles[index])
print('smiles:', smiles)
render_svg(moltosvg(mol))
# -
# [Note] SVG images cannot be displayed on GitHub, but you can see an image of the molecule when you execute this in a jupyter notebook.
# ### Interactively watch through the tox21 dataset
#
# Jupyter notebook provides handy widgets to check/visualize data.
# Here the `interact` function can be used to interactively browse the internals of the tox21 dataset.
# +
from ipywidgets import interact
def show_train_dataset(index):
atom, adj, labels = train[index]
smiles = train_smiles[index]
print('index={}, SMILES={}'.format(index, smiles))
print('atom', atom)
# print('adj', adj)
print('labels', labels)
mol = Chem.MolFromSmiles(train_smiles[index])
return render_svg(moltosvg(mol))
interact(show_train_dataset, index=(0, len(train) - 1, 1))
# -
# ### Appendix: how to save the molecule figure?
#
# ### 1. Save with SVG format
#
# The first method is to simply save the svg to a file.
#
# +
import os
dirpath = 'images'
if not os.path.exists(dirpath):
os.mkdir(dirpath)
# -
def save_svg(mol, filepath):
svg = moltosvg(mol)
with open(filepath, "w") as fw:
fw.write(svg)
# +
index = 6
save_filepath = os.path.join(dirpath, 'mol_{}.svg'.format(index))
print('drawing {}'.format(save_filepath))
mol = Chem.MolFromSmiles(train_smiles[index])
save_svg(mol, save_filepath)
# -
# ### 2. Save with png format
#
# `rdkit` provides the `Draw.MolToFile` method to render a mol instance and save it in png format.
# +
from rdkit.Chem import Draw
def save_png(mol, filepath, size=(600, 600)):
Draw.MolToFile(mol, filepath, size=size)
# +
index = 6
save_filepath = os.path.join(dirpath, 'mol_{}.png'.format(index))
print('drawing {}'.format(save_filepath))
mol = Chem.MolFromSmiles(train_smiles[index])
save_png(mol, save_filepath, size=(600, 600))
# -
|
examples/tox21/tox21_dataset_exploration.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import base
LAST_N = 15
# -
def get_success_and_fail_numbers_at_each_task():
grouped_users = base.get_dataset_and_group_by_user()
number_of_target_fails_top = {}
number_of_target_success_top = {}
number_of_target_fails_som = {}
number_of_target_success_som = {}
for username, group in grouped_users:
group = base.filter_out_mess(group)
group_by_target = group.groupby('target_id')
for target, target_group in group_by_target:
if not target in number_of_target_fails_top.keys():
number_of_target_fails_top[target] = 0
number_of_target_success_top[target] = 0
number_of_target_fails_som[target] = 0
number_of_target_success_som[target] = 0
if target_group.iloc[0]["display_type"] == "top":
if target_group[target_group['guess_video'] == target_group['target_video']].empty:
number_of_target_fails_top[target] += 1
else:
number_of_target_success_top[target]+=1
else:
if target_group[target_group['guess_video'] == target_group['target_video']].empty:
number_of_target_fails_som[target] += 1
else:
number_of_target_success_som[target]+=1
return number_of_target_success_top,number_of_target_fails_top, \
number_of_target_success_som,number_of_target_fails_som
# +
# %matplotlib inline
fig, ax = plt.subplots()
success_top, fail_top, success_som, fail_som = get_success_and_fail_numbers_at_each_task()
x = []
y1 = []
y2 = []
i = 0
prev = 100000
sort = {}
for target, _ in success_som.items():
sort[target] = fail_top[target] + success_top[target] + fail_som[target] + success_som[target]
for target in sorted(sort, key=sort.get, reverse=True):
if fail_top[target]+success_top[target]+ fail_som[target] + success_som[target] < 6:
continue
if fail_top[target]+success_top[target] + fail_som[target] + success_som[target] > prev:
raise ValueError("Targets are not sorted in order from most frequent!")
prev = fail_top[target]+success_top[target]+ fail_som[target] + success_som[target]
x.append(i)
i += 1
if (fail_top[target] + success_top[target]) != 0:
y1.append(fail_top[target] / (fail_top[target] + success_top[target]))
else:
y1.append(0)
if (fail_som[target] + success_som[target]) != 0:
y2.append(fail_som[target] / (fail_som[target] + success_som[target]))
else:
y2.append(0)
width = 0.35
x = np.asarray(x, dtype=np.float32)
plt.bar(x + width/2, y2,width, label='SOM')
plt.bar(x - width/2, y1, width, label='TOP')
plt.legend(loc="upper center")
plt.figtext(0.6, 2.4, 'Only targets which were searched by at least 6 users, targets ordered by number of users', ha='center', va='center')
plt.title("Success rate of each target")
plt.ylabel("Success rate")
plt.xlabel("Targets (ordered by number of users)")
# hide x-ticks
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False)
plt.figure(figsize=(20,20))
fig.subplots_adjust(0,0,2.5,2.5) # make plots bigger in Jupyter
plt.show()
# +
# this feels a bit messy, let's try order the targets by success (let's choose for example
# TOP success) and select only first 15 targets
# -
def get_success_and_fail_numbers_at_first_15_tasks():
grouped_users = base.get_dataset_and_group_by_user()
number_of_target_fails_top = {}
number_of_target_success_top = {}
number_of_target_fails_som = {}
number_of_target_success_som = {}
for username, group in grouped_users:
group = base.filter_out_mess(group)
# filter out not_used targets
most_searched_targets = pd.read_csv('targets.csv', sep=',', header=None).iloc[3:3+LAST_N]
group = group[group['target_id'].isin(most_searched_targets[0].to_list())]
group_by_target = group.groupby('target_id')
for target, target_group in group_by_target:
if not target in number_of_target_fails_top.keys():
number_of_target_fails_top[target] = 0
number_of_target_success_top[target] = 0
number_of_target_fails_som[target] = 0
number_of_target_success_som[target] = 0
if target_group.iloc[0]["display_type"] == "top":
if target_group[target_group['guess_video'] == target_group['target_video']].empty:
number_of_target_fails_top[target] += 1
else:
number_of_target_success_top[target]+=1
else:
if target_group[target_group['guess_video'] == target_group['target_video']].empty:
number_of_target_fails_som[target] += 1
else:
number_of_target_success_som[target]+=1
return number_of_target_success_top,number_of_target_fails_top, \
number_of_target_success_som,number_of_target_fails_som
# +
# %matplotlib inline
fig, ax = plt.subplots()
success_top, fail_top, success_som, fail_som = get_success_and_fail_numbers_at_first_15_tasks()
x = []
y1 = []
y2 = []
for index, value in fail_top.items():
x.append(index)
y1.append(success_top[index] / (success_top[index] + fail_top[index]))
for index in x:
y2.append(success_som[index] / (success_som[index] + fail_som[index]))
y12 = list(zip(y1, y2))
y12.sort(key=lambda tup: tup[0])
width = 0.35
x = np.asarray(list(range(LAST_N)), dtype=np.float32)
plt.bar(x + width/2, [i[1] for i in y12],width, label='SOM')
plt.bar(x - width/2, [i[0] for i in y12], width, label='TOP')
plt.legend(loc="upper center")
plt.title("Success rate of first 15 targets")
plt.ylabel("Success rate")
plt.xlabel("First 15 targets (ordered by TOP success rate)")
# hide x-ticks
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False)
plt.figure(figsize=(20,20))
fig.subplots_adjust(0,0,2.5,2.5) # make plots bigger in Jupyter
plt.show()
# +
# Yay, that's a bit nicer
# Let's add information about number of searches at each bar
# +
# %matplotlib inline
fig, ax = plt.subplots()
success_top, fail_top, success_som, fail_som = get_success_and_fail_numbers_at_first_15_tasks()
x = []
y1 = []
y2 = []
y1_count = []
y2_count = []
for index, value in fail_top.items():
x.append(index)
all_top = (success_top[index] + fail_top[index])
y1.append(success_top[index] / all_top)
y1_count.append(all_top)
for index in x:
all_som = (success_som[index] + fail_som[index])
y2.append(success_som[index] / all_som)
y2_count.append(all_som)
y12 = list(zip(y1, y2, y1_count, y2_count))
y12.sort(key=lambda tup: tup[0])
width = 0.35
x = np.asarray(list(range(LAST_N)), dtype=np.float32)
plt.bar(x + width/2, [i[1] for i in y12],width, label='SOM')
plt.bar(x - width/2, [i[0] for i in y12], width, label='TOP')
for i in range(len(y12)):
plt.rc('font', size=12)
plt.text(i - 0.3, y12[i][0], y12[i][2])
for i in range(len(y12)):
plt.rc('font', size=12)
plt.text(i + 0.03, y12[i][1], y12[i][3])
plt.legend(loc="upper center")
plt.title("Success rate of first 15 targets")
plt.ylabel("Success rate")
plt.xlabel("First 15 targets (ordered by TOP success rate)")
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False)
plt.figure(figsize=(20,20))
fig.subplots_adjust(0,0,2.5,2.5) # make plots bigger in Jupyter
plt.show()
# +
# Mmm, that's much better
# +
# Last but not least, let's try to make graph, where X-axis equals SOM-succes and Y-axis
# equals TOP-success
# -
def get_success_and_fail_numbers_with_session_lengths_at_first_15_targets():
grouped_users = base.get_dataset_and_group_by_user()
number_of_target_fails_top = {}
number_of_target_success_top = {}
number_of_target_fails_som = {}
number_of_target_success_som = {}
sum_session_lengths_of_target = {}
for username, group in grouped_users:
group = base.filter_out_mess(group)
# filter out not_used targets
most_searched_targets = pd.read_csv('targets.csv', sep=',', header=None).iloc[3:3 + LAST_N]
group = group[group['target_id'].isin(most_searched_targets[0].to_list())]
group_by_target = group.groupby('target_id')
for target, target_group in group_by_target:
if not target in number_of_target_fails_top.keys():
number_of_target_fails_top[target] = 0
number_of_target_success_top[target] = 0
number_of_target_fails_som[target] = 0
number_of_target_success_som[target] = 0
sum_session_lengths_of_target[target] = 0
if target_group.iloc[0]["display_type"] == "top":
if target_group[target_group['guess_video'] == target_group['target_video']].empty:
number_of_target_fails_top[target] += 1
else:
number_of_target_success_top[target] += 1
else:
if target_group[target_group['guess_video'] == target_group['target_video']].empty:
number_of_target_fails_som[target] += 1
else:
number_of_target_success_som[target] += 1
sum_session_lengths_of_target[target] += target_group.shape[0]
return number_of_target_success_top, number_of_target_fails_top, \
number_of_target_success_som, number_of_target_fails_som, sum_session_lengths_of_target
# +
# %matplotlib inline
fig, ax = plt.subplots()
success_top, fail_top, success_som, fail_som, sum_session_lengths_of_target = get_success_and_fail_numbers_with_session_lengths_at_first_15_targets()
targets = []
x_som = []
y_top = []
session_mean_length_of_target = []
for index, value in fail_top.items():
targets.append(index)
y_top.append(success_top[index] / (success_top[index] + fail_top[index]))
for index in targets:
x_som.append(success_som[index] / (success_som[index] + fail_som[index]))
session_mean_length_of_target.append(sum_session_lengths_of_target[index] / (success_top[index] +
success_som[index] +
fail_top[index] +
fail_som[index]))
# print number of entries (failure)
for i in range(len(session_mean_length_of_target)):
plt.rc('font', size=12)
plt.text(x_som[i] + 0.01, y_top[i], "{:.2f}".format(session_mean_length_of_target[i]))
plt.figtext(0.7, 2.4, 'Number at each point indicates average session length', ha='center', va='center')
plt.scatter(x_som, y_top, label="targets")
plt.legend(loc="upper left")
plt.title("Success rate scatter plot")
plt.xlabel('SOM success rate')
plt.ylabel('TOP success rate')
plt.figure(figsize=(20,20))
fig.subplots_adjust(0,0,2.5,2.5) # make plots bigger in Jupyter
plt.show()
# -
|
target_difficulty.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CSCA08 - LEC01
# # Week 9
# ## November 12th, 12:00pm - 1:00pm
#
# ## Testing
# +
def is_teenager(age: int) -> bool:
"""Return True iff age is a teenager's age, between 13 and 18 inclusive.
Precondition: age >= 0
"""
return age >= 13 and age <= 18
print(is_teenager(0))
# -
# When choosing test cases, first divide the possible input values into intervals,
# then always choose the boundary cases.
#
# Valid input only:
#
# Only specified (correct) types
#
# only values that satisfy preconditions
#
# No redundant cases!
# ## Test cases
#
# A boundary case is one that bounds an interval
#
# - boundary: age == 0
# - boundary: age == 13
# - boundary: age == 18
# - interval: 0 < age < 13
# - interval: 13 < age < 18
# - interval: age > 18
#
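# The cases above can be written directly as assertions; the function is restated here so the cell runs on its own (the chained comparison is equivalent to the original body):

```python
def is_teenager(age: int) -> bool:
    return 13 <= age <= 18

assert is_teenager(0) is False    # boundary 0
assert is_teenager(13) is True    # boundary 13
assert is_teenager(18) is True    # boundary 18
assert is_teenager(7) is False    # 0 < age < 13
assert is_teenager(15) is True    # 13 < age < 18
assert is_teenager(25) is False   # age > 18
print('all cases pass')
```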
# # Example 2:
# +
def all_fluffy(s: str) -> bool:
"""Return True iff every character in s is fluffy. Fluffy characters are
those that appear in the word 'fluffy'.
"""
for ch in s:
if ch not in 'fluy':
return False
return True
print(all_fluffy('flu'))
# -
# ## Test cases
#
# - empty string
# - 1 fluffy char
# - 1 non-fluffy char
# - all non-fluffy
# - some non-fluffy
# - all fluffy chars used ('fluffy')
# - some fluffy chars used
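# As before, these cases translate into one assertion each (the function is restated with an equivalent `all(...)` body so the cell runs on its own):

```python
def all_fluffy(s: str) -> bool:
    return all(ch in 'fluy' for ch in s)

assert all_fluffy('') is True          # empty string
assert all_fluffy('f') is True         # 1 fluffy char
assert all_fluffy('a') is False        # 1 non-fluffy char
assert all_fluffy('abc') is False      # all non-fluffy
assert all_fluffy('flub') is False     # some non-fluffy
assert all_fluffy('fluffy') is True    # all fluffy chars used
assert all_fluffy('fly') is True       # some fluffy chars used
print('all cases pass')
```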
|
CSCA08/Week 9/CSCA08 - Nov 12th.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Attention Basics
# In this notebook, we look at how attention is implemented. We will focus on implementing attention in isolation from a larger model. That's because when implementing attention in a real-world model, a lot of the focus goes into piping the data and juggling the various vectors rather than the concepts of attention themselves.
#
# We will implement attention scoring as well as calculating an attention context vector.
#
# ## Attention Scoring
# ### Inputs to the scoring function
# Let's start by looking at the inputs we'll give to the scoring function. We will assume we're in the first step of the decoding phase. The first input to the scoring function is the hidden state of the decoder (assuming a toy RNN with three hidden nodes -- not usable in real life, but easier to illustrate):
dec_hidden_state = [5, 1, 20]
# Let's visualize this vector:
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Let's visualize our decoder hidden state
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(dec_hidden_state)), annot=True,
cmap=sns.light_palette("purple", as_cmap=True), linewidths=1)
plt.show()
# -
# Our first scoring function will score a single annotation (encoder hidden state), which looks like this:
annotation = [3, 12, 45] #e.g. Encoder hidden state
# Let's visualize the single annotation
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(annotation)), annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
plt.show()
# ### IMPLEMENT: Scoring a Single Annotation
# Let's calculate the dot product of a single annotation. Numpy's [dot()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) is a good candidate for this operation
# +
def single_dot_attention_score(dec_hidden_state, enc_hidden_state):
# return the dot product of the two vectors
return np.dot(dec_hidden_state, enc_hidden_state)
single_dot_attention_score(dec_hidden_state, annotation)
# -
#
# ### Annotations Matrix
# Let's now look at scoring all the annotations at once. To do that, here's our annotation matrix:
annotations = np.transpose([[3, 12, 45], [59, 2, 5],
[1 ,43, 5], [4, 3, 45.3]])
# And it can be visualized like this (each column is a hidden state of an encoder time step):
# Let's visualize our annotation (each column is an annotation)
ax = sns.heatmap(annotations, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
# ### IMPLEMENT: Scoring All Annotations at Once
# Let's calculate the scores of all the annotations in one step using matrix multiplication. Let's continue to use the dot scoring method
#
# <img src="images/scoring_functions.png" />
#
# To do that, we'll have to transpose `dec_hidden_state` and [matrix multiply](https://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html) it with `annotations`.
# +
def dot_attention_score(dec_hidden_state, annotations):
# return the product of dec_hidden_state transpose and enc_hidden_states
return np.matmul(np.transpose(dec_hidden_state), annotations)
attention_weights_raw = dot_attention_score(dec_hidden_state, annotations)
attention_weights_raw
# -
# Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step?
#
# ## Softmax
# Now that we have our scores, let's apply softmax:
# <img src="images/softmax.png" />
# +
def softmax(x):
# Shift by the max for numerical stability: softmax is invariant to this shift,
# and it prevents overflow in exp() without needing the non-portable np.float128.
x = np.array(x, dtype=np.float64)
e_x = np.exp(x - x.max(axis=0))
return e_x / e_x.sum(axis=0)
attention_weights = softmax(attention_weights_raw)
attention_weights
# -
# Even knowing which annotation will get the most focus, it's interesting to see how drastically softmax sharpens the scores. The first and last annotations had raw scores of 927 and 929, but after softmax the attention they'll get is 0.12 and 0.88 respectively.
#
# # Applying the scores back on the annotations
# Now that we have our scores, let's multiply each annotation by its score to move closer to the attention context vector. This is the multiplication part of this formula (we'll tackle the summation part in later cells)
#
# <img src="images/Context_vector.png" />
# +
def apply_attention_scores(attention_weights, annotations):
# multiply the annotations by their weights
return attention_weights * annotations
applied_attention = apply_attention_scores(attention_weights, annotations)
applied_attention
# -
# Let's visualize how the context vector looks now that we've applied the attention scores back on it:
# Let's visualize our annotations after applying attention to them
ax = sns.heatmap(applied_attention, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
# Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced.
#
# # Calculating the Attention Context Vector
# All that remains to produce our attention context vector now is to sum up the four columns to produce a single attention context vector
#
# +
def calculate_attention_vector(applied_attention):
return np.sum(applied_attention, axis=1)
attention_vector = calculate_attention_vector(applied_attention)
attention_vector
# -
# Let's visualize the attention context vector
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(attention_vector)), annot=True, cmap=sns.light_palette("Blue", as_cmap=True), linewidths=1)
plt.show()
# Now that we have the context vector, we can concatenate it with the hidden state and pass it through a hidden layer to produce the result of this decoding time step.
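# A minimal sketch of that final step, with a randomly initialized matrix `W_c` standing in for the learned projection weights (the vectors below are illustrative placeholders, not the exact values computed in this notebook):

```python
import numpy as np

# Illustrative stand-ins for the decoder hidden state and the context vector.
dec_hidden_state = np.array([5.0, 1.0, 20.0])
attention_vector = np.array([4.0, 3.0, 45.3])

concat = np.concatenate([attention_vector, dec_hidden_state])  # shape (6,)
rng = np.random.default_rng(0)
W_c = 0.1 * rng.standard_normal((3, 6))   # hypothetical learned weights
attentional_hidden = np.tanh(W_c @ concat)  # shape (3,)
print(attentional_hidden.shape)
```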
|
attention/Attention Basics.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import KMeans, MiniBatchKMeans, Birch, DBSCAN
# +
import matplotlib.colors as colors
from itertools import cycle
def plotClustering(X, plt_labels):
plt_colors = cycle(colors.cnames.keys())
plt_K = np.unique(plt_labels).size
for k in range(plt_K):  # xrange and iterator .next() are Python 2 only
color = next(plt_colors)
mask = (plt_labels == k)
plt.plot(X[mask, 0], X[mask, 1], 'w', markerfacecolor=color, marker='o')
plt.show()
# +
from sklearn import datasets
centers_ = [[1, 1], [3, 3], [5, 1]]
X, labels = datasets.make_blobs(n_samples=3000, n_features=2, centers=centers_, cluster_std=0.5)
plotClustering(X, labels)
# -
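# The clustering classes imported at the top are never actually fit in this notebook; a minimal sketch of running `KMeans` on the same blobs (with a fixed seed for reproducibility) -- the resulting `km.labels_` can then be passed to `plotClustering` in place of the ground-truth labels:

```python
from sklearn import datasets
from sklearn.cluster import KMeans

# Same blob layout as above, seeded so the run is reproducible.
centers_ = [[1, 1], [3, 3], [5, 1]]
X, _ = datasets.make_blobs(n_samples=3000, n_features=2,
                           centers=centers_, cluster_std=0.5, random_state=0)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(len(set(km.labels_)))  # one cluster id per centre: 3
```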
|
4/l4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import os
from utilities import get_info_from_txt
from measurements import get_measurements_from_data
import matplotlib.pyplot as plt
# %matplotlib inline
name = r'G:\ImagesForNewModel\Processed_by_Folder\training - Original'
Files = os.listdir(name)
ext = ('.png', '.jpg', '.jpeg', '.bmp', '.tif', '.tiff', '.PNG', '.JPG', '.JPEG', '.BMP', '.TIF', '.TIFF')
Files = [i for i in Files if i.endswith(ext)]
i_want=[]
i_dont_want = []
for file in Files:
code_name = file.split('_')[-1].split('.')[0]
if code_name =='3' or code_name == '4':
i_want.append(file)
else :
i_dont_want.append(file)
# +
CalibrationType='Iris'
CalibrationValue=11.77
eye_left=[]
eye_right=[]
for file in i_want:
txt_file = os.path.join(name, file[:-3]+'txt')
shape_, lefteye_, righteye_, boundingbox_ = get_info_from_txt(txt_file)
try:
Left, Right, _, _,_= get_measurements_from_data(shape_, lefteye_,righteye_, CalibrationType, CalibrationValue)
eye_left.append(Left.PalpebralFissureHeight)
eye_right.append(Right.PalpebralFissureHeight)
except Exception:
pass  # skip files whose measurements cannot be computed
eye_left.sort()
eye_right.sort()
plt.figure(0)
plt.plot(eye_left,'ro', alpha=0.125)
plt.plot(eye_right,'bo', alpha = 0.125)
# -
len(eye_left), len(eye_right)
# +
CalibrationType='Iris'
CalibrationValue=11.77
eye_open_left=[]
eye_open_right=[]
for file in i_dont_want:
txt_file = os.path.join(name, file[:-3]+'txt')
shape_, lefteye_, righteye_, boundingbox_ = get_info_from_txt(txt_file)
try:
Left, Right, _, _,_= get_measurements_from_data(shape_, lefteye_,righteye_, CalibrationType, CalibrationValue)
eye_open_left.append(Left.PalpebralFissureHeight)
eye_open_right.append(Right.PalpebralFissureHeight)
except Exception:
pass  # skip files whose measurements cannot be computed
eye_open_left.sort()
eye_open_right.sort()
plt.figure(0)
plt.plot(eye_open_left, 'ro', alpha=0.125)
plt.plot(eye_open_right,'bo', alpha=0.125)
# -
|
eye_size_when_closed.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# # Diginetica Baseline Recommender
# + executionInfo={"elapsed": 728, "status": "ok", "timestamp": 1631276623063, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}, "user_tz": -330} id="aVC5awU1jAzf"
import os
project_name = "chef-session"; branch = "main"; account = "sparsh-ai"
project_path = os.path.join('/content', project_name)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 13, "status": "ok", "timestamp": 1631276623065, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}, "user_tz": -330} id="8WqnA1xDjAzj" outputId="abc73563-fa3c-4af1-daf9-27783f9794ed"
if not os.path.exists(project_path):
# !pip install -U -q dvc dvc[gdrive]
# !cp -r /content/drive/MyDrive/git_credentials/. ~
path = "/content/" + project_name;
# !mkdir "{path}"
# %cd "{path}"
# !git init
# !git remote add origin https://github.com/"{account}"/"{project_name}".git
# !git pull origin "{branch}"
# !git checkout "{branch}"
else:
# %cd "{project_path}"
# + id="ZoaSr0CxjAzk"
# !git status
# + id="fBsuxRGMjAzl"
# !git add . && git commit -m 'commit' && git push origin "{branch}"
# + id="jo4GmFulkgAJ"
# This is a sample baseline for the CIKM Personalization Cup 2016
# by <NAME> & <NAME>
import numpy as np
import pandas as pd
import datetime
start_time = datetime.datetime.now()
print("Running baseline. Now it's", start_time.isoformat())
# Loading queries (assuming the data is placed in dataset-train/)
queries = pd.read_csv('dataset-train/train-queries.csv', sep=';')[['queryId', 'items', 'is.test']]
print('Total queries', len(queries))
# Leaving only test queries (the ones which items we have to sort)
queries = queries[queries['is.test'] == True][['queryId', 'items']]
print('Test queries', len(queries))
queries.reset_index(inplace=True)
queries.drop(['index'], axis=1, inplace=True)
# Loading item views; taking itemId column
item_views = pd.read_csv('dataset-train/train-item-views.csv', sep=';')[['itemId']]
print('Item views', len(item_views))
# Loading clicks; taking itemId column
clicks = pd.read_csv('dataset-train/train-clicks.csv', sep=';')[['itemId']]
print('Clicks', len(clicks))
# Loading purchases; taking itemId column
purchases = pd.read_csv('dataset-train/train-purchases.csv', sep=';')[['itemId']]
print('Purchases', len(purchases))
# Calculating popularity as [Amount of views] * 1 + [Amount of clicks] * 2 + [Amount of purchases] * 3
print('Scoring popularity for each item ...')
prod_pop = {}
# enumerate(..., start=1) assigns weights 1, 2, 3 to views, clicks and purchases;
# without start=1 views would get weight 0, contradicting the scheme above
for cost, container in enumerate([item_views, clicks, purchases], start=1):
    for prod in container.values:
        product = str(prod[0])
        if product not in prod_pop:
            prod_pop[product] = cost
        else:
            prod_pop[product] += cost
print('Popularity scored for', len(prod_pop), 'products')
# For each query:
# parse items (comma-separated values in last column)
# sort them by score;
# write them to the submission file.
# This is the longest part; it usually takes around 5 minutes.
print('Sorting items per query by popularity...')
answers = []
step = int(len(queries) / 20)
with open('submission.txt', 'w+') as submission:
for i, q in enumerate(queries.values):
# Fancy progressbar
if i % step == 0:
print(5 * i / step, '%...')
# Splitting last column which contains comma-separated items
items = q[-1].split(',')
# Getting scores for each item. Also, inverting scores here, so we can use argsort
items_scores = list(map(lambda x: -prod_pop.get(x, 0), items))
# Sorting items using items_scores order permutation
sorted_items = np.array(items)[np.array(items_scores).argsort()]
# Squashing items together
s = ','.join(sorted_items)
# and writing them to submission
submission.write(str(q[0]) + " " + s + "\n")
end_time = datetime.datetime.now()
print("Done. Now it's ", end_time.isoformat())
print("Calculated baseline in ", (end_time - start_time).seconds, " seconds")
|
_notebooks/2022-01-27-diginetica-baseline.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1 align=center >Feature Extraction</h1>
#
# The `sklearn.feature_extraction` module can be used to extract features in a format supported by machine learning algorithms from datasets consisting of formats such as text and image.
#
# **Note:**
#
# Feature extraction is very different from `Feature selection`: the former consists in transforming arbitrary data, such as `text` or `images`, into numerical features usable for machine learning. The latter is a machine learning technique applied on these features.
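# To make the distinction concrete, here is a minimal toy sketch (the documents and labels below are made up for illustration): extraction turns raw text into a numeric matrix, and selection then keeps only the most informative columns of that matrix.

```python
# Feature extraction vs. feature selection on a toy text corpus
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["spam spam offer", "meeting at noon", "free offer now", "lunch meeting"]
labels = [1, 0, 1, 0]  # hypothetical spam/ham labels

# Extraction: raw text -> token-count matrix (one column per vocabulary word)
X = CountVectorizer().fit_transform(docs)
# Selection: keep the 3 columns most associated with the labels
X_best = SelectKBest(chi2, k=3).fit_transform(X, labels)
print(X.shape, X_best.shape)
```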
#
# ### Different Techniques for Feature Extraction
# 1. **Text:** The text submodule gathers utilities to build feature vectors from text documents.
# * CountVectorizer
# * TfidfVectorizer
# * TfidfTransformer
# * HashingVectorizer
# * DictVectorizer
# 2. **Image**
# * extract_patches_2d
# * grid_to_graph
# * PatchExtractor
# * img_to_graph
# * extract_patches
# * reconstruct_from_patches_2d
# ### Text Extraction
#
# 1. **DictVectorizer**
# The class `DictVectorizer` can be used to convert feature arrays represented as lists of standard Python dict objects to the NumPy/SciPy representation used by scikit-learn estimators.
#
# While not particularly fast to process, Python’s dict has the advantages of being convenient to use, being sparse (absent features need not be stored) and storing feature names in addition to values.
#
# DictVectorizer implements what is called `one-of-K or “one-hot”` coding for categorical (aka nominal, discrete) features. Categorical features are `“attribute-value”` pairs where the value is restricted to a list of discrete possibilities without ordering (e.g. topic identifiers, types of objects, tags, names…).
#
# In the following, “city” is a categorical attribute while “temperature” is a traditional numerical feature:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
measurements = [
{'city': 'Dubai', 'temperature': 33.},
{'city': 'London', 'temperature': 12.},
{'city': 'San Francisco', 'temperature': 18.},
{'city': 'India', 'temperature': 28.}
]
measurements
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
vec
vec.fit_transform(measurements).toarray()
# + jupyter={"outputs_hidden": true}
vec.get_feature_names()  # renamed to get_feature_names_out() in scikit-learn >= 1.0
# + jupyter={"outputs_hidden": true}
vec.feature_names_
# + jupyter={"outputs_hidden": true}
vec.vocabulary_
# -
# part-of-speech (POS) tagging window features
pos_windows = [
{
'word-2':'the',
'pos-2':'DT',
    'word-1':'Cat',
'pos-1':'NN',
'word+1':'on',
'pos+1':'pp',
'ÿ':'pp',
"Pop":"Reddy"
}
]
ord('P'),ord('p')
vec = DictVectorizer()
vec
pos_vectorization = vec.fit_transform(pos_windows)
pos_vectorization.toarray()
# + jupyter={"outputs_hidden": true}
vec.get_feature_names()
# + jupyter={"outputs_hidden": true}
vec.vocabulary_
# -
ord('-')
ord('+')
chr(43)
# ### CountVectorizer
#
# Convert a collection of text documents to a matrix of token counts
#
# This implementation produces a sparse representation of the counts using scipy.sparse.coo_matrix.
#
# If you do not provide an a-priori dictionary and you do not use an analyzer that does some kind of feature selection then the number of features will be equal to the vocabulary size found by analyzing the data.
#
# 
from sklearn.feature_extraction.text import CountVectorizer
# list of text documents
text = ["The quick brown fox jumped over the lazy dog,the"]
# create the transform
vectorizer = CountVectorizer()
# tokenize and build vocab
vectorizer.fit(text)
# summarize
print(vectorizer.vocabulary_)
# encode document
vector = vectorizer.transform(text)
# summarize encoded vector
print(vector.shape)
print(type(vector))
print(vector.toarray())
# encode another document
text2 = ["the puppy"]
vector = vectorizer.transform(text2)
print(vector.toarray())
# #### Example-2
#
sample_text = ["One of the most basic ways we can numerically represent words "
"is through the one-hot encoding method (also sometimes called "
"count vectorizing)."]
vectorizer.fit(sample_text)
print('Vocabulary: ')
print(vectorizer.vocabulary_)
vector = vectorizer.transform(sample_text)
# Our final vector:
print('Full vector: ')
print(vector.toarray())
# Or if we wanted to get the vector for one word:
print('Hot vector: ')
print(vectorizer.transform(['hot']).toarray())
# Or if we wanted to get multiple vectors at once to build matrices
print('Hot and one: ')
print(vectorizer.transform(['hot', 'one']).toarray())
# We could also do the whole thing at once with the fit_transform method:
print('One swoop:')
new_text = ['Today is the day that I do the thing today, today']
new_vectorizer = CountVectorizer()
print(new_vectorizer.fit_transform(new_text).toarray())
# #### Feature hashing
# The class FeatureHasher is a high-speed, low-memory vectorizer that uses a technique known as feature hashing, or the “hashing trick”. Instead of building a hash table of the features encountered in training, as the vectorizers do, instances of FeatureHasher apply a hash function to the features to determine their column index in sample matrices directly. The result is increased speed and reduced memory usage, at the expense of inspectability; the hasher does not remember what the input features looked like and has no inverse_transform method.
#
# 
# * John likes to watch movies.
# * Mary likes movies too.
# * John also likes football.
#
# |Term|Index|
# |----|---|
# |John|1|
# |likes|2|
# |to|3|
# |watch|4|
# |movies|5|
# |Mary|6|
# |too|7|
# |also|8|
# |football|9|
#
# 
D = [{'dog': 1, 'cat':2, 'elephant':4},{'dog': 2, 'run': 5}]
from sklearn.feature_extraction import FeatureHasher
h = FeatureHasher(n_features=4)
f = h.transform(D)
f.toarray()
# #### Example-2
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
data = pd.DataFrame(data)
def has_col(df ,col,vocab):
cols = [col + "=" + str(v) for v in vocab]
def xform(x):
temp = [0 for i in range(len(vocab))];
temp[vocab.index(x)]=1;
return pd.Series(temp,index=cols)
df[cols] =df[col].apply(xform)
return df.drop(col,axis=1)
has_col(data,'state',['Ohio', 'Nevada'])
# ### Text Features
#
# Another common need in feature engineering is to convert text to a set of representative numerical values. For example, most automatic mining of social media data relies on some form of encoding the text as numbers. One of the simplest methods of encoding data is by word counts: you take each snippet of text, count the occurrences of each word within it, and put the results in a table.
#
# For example, consider the following set of three phrases:
sample = pd.Series(['problem of evil problem',
'evil queen',
'horizone problem'])
sample
# + [markdown] jupyter={"source_hidden": true}
# For a vectorization of this data based on word count, we could construct a column representing the word "problem," the word "evil," the word "horizon," and so on. While doing this by hand would be possible, the tedium can be avoided by using Scikit-Learn's `CountVectorizer`:
#
#
# |Words|count|
# |-----|------|
# |problem|3|
# |of |1|
# |evil|2|
# |queen|1|
# |horizone|1|
# + jupyter={"outputs_hidden": true}
sample.str.lower()
# -
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer()
X = vec.fit_transform(sample)
X
# + jupyter={"outputs_hidden": true}
pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
# -
# There are some issues with this approach, however: the raw word counts lead to features which put too much weight on words that appear very frequently, and this can be sub-optimal in some classification algorithms.
# ## Term frequency
#
# Suppose we have a set of English text documents and wish to rank which document is most relevant to the query, "the brown cow". A simple way to start out is by eliminating documents that do not contain all three words "the", "brown", and "cow", but this still leaves many documents.
#
# To further distinguish them, we might count the number of times each term occurs in each document; the number of times a term occurs in a document is called its term frequency.
#
#
# $\text{tf-idf(t,d)}=\text{tf(t,d)} \times \text{idf(t)}$
#
#
#
# tf–idf means term-frequency times **inverse document-frequency**
#
# $\text{idf}(t) = \log{\frac{1 + n}{1+\text{df}(t)}} + 1$
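# The smoothed idf formula above (the scikit-learn default, `smooth_idf=True`) can be checked by hand against the `sample` corpus; note that the final tf-idf matrix is additionally L2-normalized per row, so we compare the `idf_` attribute rather than the transformed values. The document-frequency dictionary below is counted manually from the three phrases.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ['problem of evil problem', 'evil queen', 'horizone problem']
n = len(corpus)
# document frequency of each term, counted by hand
df = {'evil': 2, 'horizone': 1, 'of': 1, 'problem': 2, 'queen': 1}
# idf(t) = log((1 + n) / (1 + df(t))) + 1, in sorted-vocabulary order
idf_manual = np.array([np.log((1 + n) / (1 + df[t])) + 1 for t in sorted(df)])

vec = TfidfVectorizer()  # smooth_idf=True by default
vec.fit(corpus)
print(np.allclose(vec.idf_, idf_manual))
```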
from sklearn.feature_extraction.text import TfidfVectorizer,TfidfTransformer
Tan = TfidfVectorizer()
o = Tan.fit_transform(sample)
o
o.toarray()
pd.DataFrame(o.toarray(), columns=['evil','horizone','of','problem','queen'])
tt = TfidfTransformer(smooth_idf=False)
tt.fit_transform(o)
# ## Image Feature Extraction
from sklearn.feature_extraction import image
# + jupyter={"outputs_hidden": true}
array = np.arange(4*4*3).reshape((4,4,3))
array
# + jupyter={"outputs_hidden": true}
plt.imshow(array)
# -
array[:,:,0]
# + jupyter={"outputs_hidden": true}
plt.imshow(array[:,:,0])
# -
array.shape
# + jupyter={"outputs_hidden": true}
patches = image.extract_patches_2d(array,(2,2),max_patches=2,random_state=0)
patches
# -
patches.shape
# + jupyter={"outputs_hidden": true}
patches[:,:,:,0]
# + jupyter={"outputs_hidden": true}
plt.imshow(patches[:,:,:,0][0])
# + jupyter={"outputs_hidden": true}
plt.imshow(patches[:,:,:,0][1])
# -
reconstruct = image.reconstruct_from_patches_2d(patches,(4,4,3))
reconstruct.shape
p = image.PatchExtractor(patch_size=(2,2)).transform(patches)
p.shape
plt.imshow(p[0])
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from skimage.io import imread, imshow
path = "https://raw.githubusercontent.com/reddyprasade/Bird-Classifications-Problem/master/Parrot/1.jpg"
# + jupyter={"outputs_hidden": true}
image = imread(path)
plt.imshow(image)
# -
image.shape
# + jupyter={"outputs_hidden": true}
print(image)
# -
features = np.reshape(image,(622*960,3))
features.shape
# + jupyter={"outputs_hidden": true}
features
# -
from skimage.filters import prewitt_h,prewitt_v
from skimage.color import rgb2gray
# Prewitt kernels expect a 2D (grayscale) image, not the flattened
# (pixels, 3) feature matrix, so convert the image to grayscale first
gray_image = rgb2gray(image)
#calculating horizontal edges using prewitt kernel
edges_prewitt_horizontal = prewitt_h(gray_image)
# + jupyter={"outputs_hidden": true}
imshow(edges_prewitt_horizontal, cmap='gray')
# -
#calculating vertical edges using prewitt kernel
edges_prewitt_vertical = prewitt_v(gray_image)
# + jupyter={"outputs_hidden": true}
imshow(edges_prewitt_vertical, cmap='gray')
|
Datasets transformations/ML0101EN-11-Feature extraction .ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PyCharm (BehindTheScene)
# language: python
# name: pycharm-a7c08466
# ---
# # Transfer Learning
# A Convolutional Neural Network (CNN) for image classification is made up of multiple layers that extract features, such as edges, corners, etc; and then use a final fully-connected layer to classify objects based on these features. You can visualize this like this:
#
# <table>
# <tr><td rowspan=2 style='border: 1px solid black;'>⇒</td><td style='border: 1px solid black;'>Convolutional Layer</td><td style='border: 1px solid black;'>Pooling Layer</td><td style='border: 1px solid black;'>Convolutional Layer</td><td style='border: 1px solid black;'>Pooling Layer</td><td style='border: 1px solid black;'>Fully Connected Layer</td><td rowspan=2 style='border: 1px solid black;'>⇒</td></tr>
# <tr><td colspan=4 style='border: 1px solid black; text-align:center;'>Feature Extraction</td><td style='border: 1px solid black; text-align:center;'>Classification</td></tr>
# </table>
#
# *Transfer Learning* is a technique where you can take an existing trained model and re-use its feature extraction layers, replacing its final classification layer with a fully-connected layer trained on your own custom images. With this technique, your model benefits from the feature extraction training that was performed on the base model (which may have been based on a larger training dataset than you have access to) to build a classification model for your own specific set of object classes.
#
# How does this help? Well, think of it this way. Suppose you take a professional tennis player and a complete beginner, and try to teach them both how to play racquetball. It's reasonable to assume that the professional tennis player will be easier to train, because many of the underlying skills involved in racquetball are already learned. Similarly, a pre-trained CNN model may be easier to train to classify a specific set of objects because it's already learned how to identify the features of common objects, such as edges and corners.
#
# In this notebook, we'll see how to implement transfer learning for a classification model.
#
# > **Important**: The base model used in this exercise is large, and training is resource-intensive. Before running the code in this notebook, shut down all other notebooks in this library (In each open notebook other than this one, on the **File** menu, click **Close and Halt**). If you experience an Out-of-Memory (OOM) error when running code in this notebook, shut down this entire library, and then reopen it and open only this notebook.
#
# ## Using Transfer Learning to Train a CNN
#
# First, we'll import the latest version of Keras and prepare to load our training data.
# +
import keras
from keras import backend as K
print('Keras version:',keras.__version__)
# -
# ### Prepare the Data
# Before we can train the model, we need to prepare the data.
# +
import os
from keras.preprocessing.image import ImageDataGenerator
# The images are in a folder named 'shapes/training'
training_folder_name = '../data/shapes/training'
# The folder contains a subfolder for each class of shape
classes = sorted(os.listdir(training_folder_name))
print(classes)
# Our source images are 128x128, but the base model we're going to use was trained with 224x224 images
pretrained_size = (224,224)
batch_size = 15
print("Getting Data...")
datagen = ImageDataGenerator(rescale=1./255, # normalize pixel values
validation_split=0.3) # hold back 30% of the images for validation
print("Preparing training dataset...")
train_generator = datagen.flow_from_directory(
training_folder_name,
target_size=pretrained_size,
batch_size=batch_size,
class_mode='categorical',
subset='training') # set as training data
print("Preparing validation dataset...")
validation_generator = datagen.flow_from_directory(
training_folder_name,
target_size=pretrained_size,
batch_size=batch_size,
class_mode='categorical',
subset='validation') # set as validation data
# -
# ### Download a trained model to use as a base
# The VGG16 model is an image classifier that was trained on the ImageNet dataset - a huge dataset containing thousands of images of many kinds of object. We'll download the trained model, excluding its top layer, and set its input shape to match our image data.
#
# *Note: The **keras.applications** namespace includes multiple base models, some which may perform better for your dataset than others. We've chosen this model because it's fairly lightweight within the limited resources of the Azure Notebooks environment.*
from keras import applications
#Load the base model, not including its final connected layer, and set the input shape to match our images
base_model = keras.applications.vgg16.VGG16(weights='imagenet', include_top=False, input_shape=train_generator.image_shape)
# ### Freeze the already trained layers and add a custom output layer for our classes
# The existing feature extraction layers are already trained, so we just need to add a couple of layers so that the model output is the predictions for our classes.
# +
from keras import Model
from keras.layers import Flatten, Dense
from keras import optimizers
# Freeze the already-trained layers in the base model
for layer in base_model.layers:
layer.trainable = False
# Create layers for classification of our images
x = base_model.output
x = Flatten()(x)
prediction_layer = Dense(len(classes), activation='softmax')(x)  # softmax, since we train with categorical_crossentropy
model = Model(inputs=base_model.input, outputs=prediction_layer)
# Compile the model
opt = optimizers.Adam(lr=0.001)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
# Now print the full model, which will include the layers of the base model plus the dense layer we added
print(model.summary())
# -
# ### Train the Model
# With the layers of the CNN defined, we're ready to train the top layer using our image data. This will take a considerable amount of time on a CPU due to the complexity of the base model, so we'll train the model over only one epoch.
# Train the model over 1 epoch using 15-image batches and using the validation holdout dataset for validation
num_epochs = 1
history = model.fit_generator(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = num_epochs)
# ### Using the Trained Model
# Now that we've trained the model, we can use it to predict the class of an image.
# +
# Helper function to resize image
def resize_image(src_img, size=(128,128), bg_color="white"):
from PIL import Image
# rescale the image so the longest edge is the right size
src_img.thumbnail(size, Image.ANTIALIAS)
# Create a new image of the right shape
new_image = Image.new("RGB", size, bg_color)
# Paste the rescaled image onto the new background
new_image.paste(src_img, (int((size[0] - src_img.size[0]) / 2), int((size[1] - src_img.size[1]) / 2)))
# return the resized image
return new_image
# Function to predict the class of an image
def predict_image(classifier, image_array):
import numpy as np
# We need to format the input to match the training data
# The data generator loaded the values as floating point numbers
# and normalized the pixel values, so...
img_features = image_array.astype('float32')
img_features /= 255
# These are the classes our model can predict
classnames = ['circle', 'square', 'triangle']
# Predict the class of each input image
predictions = classifier.predict(img_features)
predicted_classes = []
for prediction in predictions:
        # The prediction for each image is the probability for each class, e.g. [0.8, 0.1, 0.1]
# So get the index of the highest probability
class_idx = np.argmax(prediction)
# And append the corresponding class name to the results
predicted_classes.append(classnames[int(class_idx)])
# Return the predictions
return predicted_classes
print("Functions created - ready to use model for inference.")
# +
import os
from random import randint
import numpy as np
from PIL import Image
from keras.models import load_model
from matplotlib import pyplot as plt
# %matplotlib inline
#get the list of test image files
test_folder = '../data/shapes/test'
test_image_files = os.listdir(test_folder)
# Empty array on which to store the images
image_arrays = []
size = (224,224)
background_color="white"
fig = plt.figure(figsize=(12, 8))
# Get the images and show the predicted classes
for file_idx in range(len(test_image_files)):
img = Image.open(os.path.join(test_folder, test_image_files[file_idx]))
# resize the image so it matches the training set - it must be the same size as the images on which the model was trained
resized_img = np.array(resize_image(img, size, background_color))
# Add the image to the array of images
image_arrays.append(resized_img)
# Get predictions from the array of image arrays
# Note that the model expects an array of 1 or more images - just like the batches on which it was trained
predictions = predict_image(model, np.array(image_arrays))
# plot each image with its corresponding prediction
for idx in range(len(predictions)):
a=fig.add_subplot(1,len(predictions),idx+1)
imgplot = plt.imshow(image_arrays[idx])
a.set_title(predictions[idx])
|
ComputerVision/3-Fine tuning & Overfiting/2-Applying Transfer Learning (Keras).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1 align='center'>9.2 Plotting with pandas and seaborn P I
# <h3>Line Plots
# Series and DataFrame each have a plot attribute for making some basic plot types.
#
# By default, plot() makes line plots
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
s = pd.Series(np.random.randn(10).cumsum(), index=np.arange(0, 100, 10))
# -
s.plot()
# The Series object’s index is passed to matplotlib for plotting on the x-axis, though you can disable this by passing use_index=False.
#
# The x-axis ticks and limits can be adjusted with the xticks and xlim options.
s.plot(use_index=False)
# Most of pandas’s plotting methods accept an optional ax parameter, which can be a matplotlib subplot object. This gives you more flexible placement of subplots in a grid layout.
# DataFrame’s plot method plots each of its columns as a different line on the same subplot, creating a legend automatically
df = pd.DataFrame(np.random.randn(10, 4).cumsum(0),
columns=['A', 'B', 'C', 'D'],
index=np.arange(0, 100, 10))
df.plot()
# 
#
# 
#
#
# <h3> Bar Plots
# The plot.bar() and plot.barh() make vertical and horizontal bar plots, respectively. In this case, the Series or DataFrame index will be used as the x (bar) or y (barh) ticks
fig, axes = plt.subplots(2, 1)
data = pd.Series(np.random.rand(16), index=list('abcdefghijklmnop'))
data.plot.bar(ax=axes[0], color='k', alpha=0.7)
data.plot.barh(ax=axes[1], color='k', alpha=0.7)
fig
# The options color='k' and alpha=0.7 set the color of the plots to black and use partial transparency on the fill
data.value_counts().plot.bar()
df = pd.DataFrame(np.random.rand(6, 4),
index=['one', 'two', 'three', 'four', 'five', 'six'],
columns=pd.Index(['A', 'B', 'C', 'D'], name='Genus'))
df.plot.bar()
df.plot.barh()
# Note that the name “Genus” on the DataFrame’s columns is used to title the legend
# We create stacked bar plots from a DataFrame by passing stacked=True, resulting in the value in each row being stacked together
df.plot.barh(stacked=True, alpha=0.5)
# Suppose we wanted to make a stacked bar plot showing the percentage of data points for each party size on each day.
tips = pd.read_csv(r'D:/tips.csv')
tips.head()
# Using pandas inbuilt plot() function
party_counts = pd.crosstab(tips['day'], tips['size'])
party_counts.head()
party_counts = party_counts.loc[:, 2:5]
party_pcts = party_counts.div(party_counts.sum(1), axis=0)
party_pcts
party_pcts.plot.bar()
# Using Seaborn
tips['tip_pct'] = tips['tip'] / (tips['total_bill'] - tips['tip'])
tips.head()
sns.barplot(x='tip_pct', y='day', data=tips, orient='h')
# Plotting functions in seaborn take a data argument, which can be a pandas DataFrame. The other arguments refer to column names. Because there are multiple observations for each value in day, the bars are the average value of tip_pct. The black lines drawn on the bars represent the 95% confidence interval
#
# seaborn.barplot has a hue option that enables us to split by an additional categorical value
sns.set(style="whitegrid")
sns.barplot(x='tip_pct', y='day', hue='time', data=tips, orient='h')
# <h3>Histograms and Density Plots
# A histogram is a kind of bar plot that gives a discretized display of value frequency. The data points are split into discrete, evenly spaced bins, and the number of data points in each bin is plotted.
tips['tip_pct'].plot.hist(bins=50)
# A related plot type is a density plot, which is formed by computing an estimate of a continuous probability distribution that might have generated the observed data. The usual procedure is to approximate this distribution as a mixture of “kernels”, that is, simpler distributions like the normal distribution. Thus, density plots are also known as kernel density estimate (KDE) plots.
tips['tip_pct'].plot.density()
# Seaborn makes histograms and density plots even easier through its distplot method, which can plot both a histogram and a continuous density estimate simultaneously.
p=sns.distplot(tips['tip_pct'], bins=20, color='k')
p.set_xlim(0.0, 1.0)  # xlim expects numeric bounds, not strings
|
Python-For-Data-Analysis/Chapter 9 Data Visualization/9.3 Plotting with pandas and seaborn P I.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Image Classification using `sklearn.svm`
# +
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib notebook
from sklearn import svm, metrics, datasets
from sklearn.utils import Bunch
from sklearn.model_selection import GridSearchCV, train_test_split
from skimage.io import imread
from skimage.transform import resize
# -
# ### Load images in structured directory like it's sklearn sample dataset
def load_image_files(container_path, dimension=(64, 64)):
"""
Load image files with categories as subfolder names
which performs like scikit-learn sample dataset
Parameters
----------
container_path : string or unicode
Path to the main folder holding one subfolder per category
dimension : tuple
size to which image are adjusted to
Returns
-------
Bunch
"""
image_dir = Path(container_path)
folders = [directory for directory in image_dir.iterdir() if directory.is_dir()]
categories = [fo.name for fo in folders]
    descr = "An image classification dataset"
images = []
flat_data = []
target = []
for i, direc in enumerate(folders):
for file in direc.iterdir():
            img = imread(file)
img_resized = resize(img, dimension, anti_aliasing=True, mode='reflect')
flat_data.append(img_resized.flatten())
images.append(img_resized)
target.append(i)
flat_data = np.array(flat_data)
target = np.array(target)
images = np.array(images)
return Bunch(data=flat_data,
target=target,
target_names=categories,
images=images,
DESCR=descr)
image_dataset = load_image_files("images/")
# ### Split data
X_train, X_test, y_train, y_test = train_test_split(
image_dataset.data, image_dataset.target, test_size=0.3,random_state=109)
# ### Train data with parameter optimization
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
svc = svm.SVC()
clf = GridSearchCV(svc, param_grid)
clf.fit(X_train, y_train)
# ### Predict
y_pred = clf.predict(X_test)
# ### Report
print("Classification report for - \n{}:\n{}\n".format(
clf, metrics.classification_report(y_test, y_pred)))
|
.ipynb_checkpoints/Image Classification using scikit-learn-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=1000, n_features=4,n_informative=2, n_redundant=0,random_state=0, shuffle=False)
clf = RandomForestClassifier(n_estimators=100, max_depth=2,random_state=0)
clf.fit(X, y)
|
datasets/random_foreast_classification_using.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0"
from __future__ import print_function
from time import time
import keras
from keras.datasets import mnist,cifar10
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dense,Dropout,Activation,Flatten
from keras.optimizers import Adam
from keras import backend as K
from matplotlib import pyplot as plt
import random
# -
#no. of convolutional filters to use
filters = 64
#size of pooling area for max pooling
pool_size = 2
#convolutional kernel size
kernel_size = 3
#load and split the data to train and test
(x_cifar_train, y_cifar_train), (x_cifar_test, y_cifar_test) = cifar10.load_data()
y_cifar_train = y_cifar_train.reshape(50000,)
y_cifar_test = y_cifar_test.reshape(10000,)
# +
x_train_lt5 = x_cifar_train[y_cifar_train < 5]
y_train_lt5 = y_cifar_train[y_cifar_train < 5]
x_test_lt5 = x_cifar_test[y_cifar_test < 5]
y_test_lt5 = y_cifar_test[y_cifar_test < 5]
x_train_gte5 = x_cifar_train[y_cifar_train >= 5]
y_train_gte5 = y_cifar_train[y_cifar_train >= 5] - 5
x_test_gte5 = x_cifar_test[y_cifar_test >= 5]
y_test_gte5 = y_cifar_test[y_cifar_test >= 5] - 5
# -
fig, ax = plt.subplots(2,10,figsize=(10,2.8))
fig.suptitle("Example of training images (from first 5 categories), for the first neural net\n", fontsize=15)
axes = ax.ravel()
for i in range(20):
# Pick a random number
idx=random.randint(1,1000)
axes[i].imshow(x_train_lt5[idx])
axes[i].axis('off')
fig.tight_layout(pad=0.5)
plt.show()
#set the no. of classes and the input shape
num_classes = 5
input_shape = (32,32,3)  # Keras expects 3-D image tensors: (height, width, channels); 3 = RGB
feature_layers = [
Conv2D(filters, kernel_size,
padding='valid',
input_shape=input_shape),
Activation('relu'),
Conv2D(filters, kernel_size),
Activation('relu'),
MaxPooling2D(pool_size=pool_size),
Dropout(0.25),
Flatten(),
]
classification_layers = [
Dense(128),
Activation('relu'),
Dropout(0.25),
Dense(num_classes),
Activation('softmax')
]
# +
# Assignment - Normalize the pixels for cifar images
# for the classification layers change the dense to Conv2D and see which one works better
# -
model_1 = Sequential(feature_layers + classification_layers)
def train_model(model, train, test, num_classes):
    # note: batch_size and epochs are read from module-level globals defined below
x_train = train[0].reshape((train[0].shape[0],) + input_shape)
x_test = test[0].reshape((test[0].shape[0],) + input_shape)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(train[1], num_classes)
y_test = keras.utils.to_categorical(test[1], num_classes)
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(learning_rate=0.001),  # use lr=0.001 on Keras < 2.3
                  metrics=['accuracy'])
t1 = time()
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
t2 = time()
t_delta = round(t2-t1,2)
print('Training time: {} seconds'.format(t_delta))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
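`keras.utils.to_categorical` inside `train_model` turns each integer label into a one-hot row vector; an equivalent numpy sketch (illustrative, not the Keras implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    # row i of the identity matrix is the one-hot vector for class i
    return np.eye(num_classes, dtype=int)[np.asarray(labels)]

print(one_hot([0, 2, 4], 5))
```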
batch_size = 128
epochs = 20
#train model for the first 5 categories of images
train_model(model_1,
(x_train_lt5, y_train_lt5),
(x_test_lt5, y_test_lt5), num_classes)
model_1.summary()
# freeze the features layer
for l in feature_layers:
l.trainable = False
model_2 = Sequential(feature_layers + classification_layers)
#train model on the last five categories (original labels 5-9, shifted to 0-4)
train_model(model_2,
(x_train_gte5, y_train_gte5),
(x_test_gte5, y_test_gte5), num_classes)
history_dict = model_1.history.history
print(history_dict.keys())
plt.title("Training and validation accuracy over epochs",fontsize=15)
plt.plot(model_1.history.history['accuracy'])
plt.plot(model_1.history.history['val_accuracy'])
# plt.plot(history.history['val_accuracy'],lw=3,c='k')
plt.grid(True)
plt.xlabel("Epochs",fontsize=14)
plt.ylabel("Accuracy",fontsize=14)
plt.xticks([2*i for i in range(11)],fontsize=14)
plt.yticks(fontsize=14)
plt.show()
# # My code
# load dataset
(trainX, trainy), (testX, testy) = cifar10.load_data()
# summarize loaded dataset
print('Train: X=%s, y=%s' % (trainX.shape, trainy.shape))
print('Test: X=%s, y=%s' % (testX.shape, testy.shape))
# plot first few images
for i in range(9):
    # define subplot in a 3x3 grid
    plt.subplot(330 + 1 + i)
    # plot raw pixel data (only plt was imported, via `from matplotlib import pyplot as plt`)
    plt.imshow(trainX[i])
# show the figure
plt.show()
Transfer Learning using CIFAR-10 dataset/cifar-10-transfer-learning.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.6 64-bit
#     language: python
# name: python3
# ---
# # Notebook 3.1: Web-App (UFOS!!)
#
# In this notebook we will train an ML model using the database from [NUFORC](https://nuforc.org) (The National UFO Reporting Center) about UFO sightings. The goal of the lesson is to serve the model from a Flask app so it can be used in a browser.
# ## About the Model:
#
# ### Data cleaning:
# +
import pandas as pd
import numpy as np
ufos = pd.read_csv('data/ufos.csv')
ufos.head()
# +
ufos = pd.DataFrame({'Seconds':ufos['duration (seconds)'],
'Country':ufos['country'],'Latitude':ufos['latitude'],
'Longitude':ufos['longitude']})
ufos.Country.unique()
# -
ufos.dropna(inplace=True)
ufos=ufos[(ufos['Seconds']>=1) & (ufos['Seconds']<=60)]
ufos.info()
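The two cleaning steps above (drop rows with missing values, then keep only sightings lasting 1-60 seconds) can be checked on a toy frame; the column name mirrors the real data:

```python
import pandas as pd

# toy frame: one NaN row and two out-of-range durations
df = pd.DataFrame({'Seconds': [0.5, 30, None, 120, 60]})
df = df.dropna()
df = df[(df['Seconds'] >= 1) & (df['Seconds'] <= 60)]
print(df['Seconds'].tolist())  # [30.0, 60.0]
```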
# +
from sklearn.preprocessing import LabelEncoder
ufos['Country']=LabelEncoder().fit_transform(ufos['Country'])
ufos.head()
# -
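`LabelEncoder` above maps the country strings to integers: classes are sorted alphabetically and numbered 0..n-1. A small standalone example of that mapping:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
codes = le.fit_transform(['us', 'gb', 'us', 'ca'])
# classes_ holds the sorted categories; codes are their positions
print(list(le.classes_), list(codes))  # ['ca', 'gb', 'us'] [2, 1, 2, 0]
```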
# ### Building the model:
# +
from sklearn.model_selection import train_test_split
features=['Seconds','Latitude','Longitude']
X,y=ufos[features],ufos['Country']
x_train,x_test,y_train,y_test=train_test_split(X,y,test_size=0.2,
random_state=0)
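With `test_size=0.2`, `train_test_split` above holds out 20% of the rows for testing; a toy run showing the resulting sizes (`random_state` makes the shuffle reproducible):

```python
from sklearn.model_selection import train_test_split

X = list(range(10))
y = [0, 1] * 5
x_tr, x_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print(len(x_tr), len(x_te))  # 8 2
```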
# +
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
model=LogisticRegression()
model.fit(x_train,y_train)
pred=model.predict(x_test)
print(classification_report(y_test,pred))
print('Predicted countries: ', pred)
print('Accuracy: ',accuracy_score(y_test,pred))
# -
# ### Pickle the model:
# +
import pickle
model_filename='ufo-model.pkl'
pickle.dump(model, open(model_filename, 'wb'))
# round-trip check: reload the pickled model and run one prediction
model=pickle.load(open('ufo-model.pkl','rb'))
print(model.predict([[50,44,-12]]))  # feature order: [Seconds, Latitude, Longitude]
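The Flask app itself lives outside this notebook. As a sketch only, a minimal app factory serving the pickled model might look like the following; the `/predict` route, the JSON field names, and the `create_app` helper are all assumptions for illustration, not part of the course files:

```python
from flask import Flask, request, jsonify

def create_app(model):
    # model: any object with a sklearn-style .predict() method
    app = Flask(__name__)

    @app.route('/predict', methods=['POST'])
    def predict():
        # expects JSON like {"seconds": 50, "latitude": 44, "longitude": -12}
        data = request.get_json()
        features = [[data['seconds'], data['latitude'], data['longitude']]]
        return jsonify({'country': int(model.predict(features)[0])})

    return app

if __name__ == '__main__':
    import pickle
    app = create_app(pickle.load(open('ufo-model.pkl', 'rb')))
    app.run(debug=True)
```

The factory pattern keeps the model injectable, so the app can be exercised with a stub model in tests without touching the pickle file.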
3-Web-App/1-Web-App/notebook.ipynb