# 6. Challenge Solution - A2C
So far we have implemented the REINFORCE agent for the CartPole environment; now let's build the A2C version. Since both models share a similar structure and training loop, this notebook leaves more parts for you to fill in.
### Theory
As mentioned in the Actor-Critic methods notebook, the A2C policy-gradient expression is almost identical to REINFORCE's: the only difference is that the return is replaced by the advantage function.
$$
\nabla_{\theta}J(\theta)=\sum_{t = 0}^T\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)A(s_t, a_t)
$$
$$
A(s_t, a_t) = Q(s_t, a_t) - V(s_t)
$$
From a structural perspective, the A2C model has two parts: an actor and a critic. The actor takes the state as input and outputs a probability distribution over actions, while the critic estimates the value of that state.
Your implementation should take the following form:
1. Extract environment information (state, action, etc.).
2. Pass the state through the model to obtain the actor and critic outputs.
3. Sample an action from the resulting probability distribution.
4. Calculate the discounted returns.
5. Compare the returns observed along the trajectory with the critic's estimates made at each step to form the loss.
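As a small standalone illustration of steps 4 and 5 (toy numbers and a hypothetical set of critic estimates, not the notebook's solution), the discounted returns and advantages for a short trajectory can be computed like this:

```python
import numpy as np

gamma = 0.99
rewards = [1.0, 1.0, 1.0]            # rewards collected along the trajectory
values = np.array([2.5, 1.8, 0.9])   # hypothetical critic estimates V(s_t)

# Accumulate discounted returns from the end of the episode backwards
returns = []
discounted_sum = 0.0
for r in reversed(rewards):
    discounted_sum = r + gamma * discounted_sum
    returns.insert(0, discounted_sum)

advantages = np.array(returns) - values  # A(s_t, a_t) ~ G_t - V(s_t)
print(returns)     # approximately [2.9701, 1.99, 1.0]
print(advantages)
```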
### Building model
```
import gym
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
seed = 42
# Discount value
gamma = 0.99
max_steps_per_episode = 10000
# Create the environment
env = gym.make("CartPole-v0")
env.seed(seed)
eps = np.finfo(np.float32).eps.item() # Smallest number such that 1.0 + eps != 1.0
```
### Defining model
As mentioned above, the A2C algorithm contains two parts, an actor and a critic, each of which could be modeled by a separate neural network. In our implementation they share the input and hidden layers.
```
num_states = env.observation_space.shape[0]
num_hidden = 128
num_actions = env.action_space.n
inputs = layers.Input(shape=(num_states,))
common = layers.Dense(num_hidden, activation="relu")(inputs)
action = layers.Dense(num_actions, activation="softmax")(common)
critic = layers.Dense(1)(common)
model = keras.Model(inputs=inputs, outputs=[action, critic])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
huber_loss = keras.losses.Huber()
action_probs_history = []
critic_value_history = []
rewards_history = []
running_reward = 0
episode_count = 0
while True:  # Run until solved
    state = env.reset()
    episode_reward = 0
    with tf.GradientTape() as tape:
        for timestep in range(1, max_steps_per_episode):
            # Show the attempts of the agent
            env.render()

            state = tf.convert_to_tensor(state)
            state = tf.expand_dims(state, 0)

            # Predict action probabilities and estimated future rewards
            # from environment state
            action_probs, critic_value = model(state)
            critic_value_history.append(critic_value[0, 0])

            # Sample action from action probability distribution
            action = np.random.choice(num_actions, p=np.squeeze(action_probs))
            action_probs_history.append(tf.math.log(action_probs[0, action]))

            # Apply the sampled action in our environment
            state, reward, done, _ = env.step(action)
            rewards_history.append(reward)
            episode_reward += reward

            if done:
                break

        # Update running reward to check condition for solving
        running_reward = 0.05 * episode_reward + (1 - 0.05) * running_reward

        # Calculate expected value from rewards
        # - At each timestep what was the total reward received after that timestep
        # - Rewards in the past are discounted by multiplying them with gamma
        # - These are the labels for our critic
        returns = []
        discounted_sum = 0
        for r in rewards_history[::-1]:
            discounted_sum = r + gamma * discounted_sum
            returns.insert(0, discounted_sum)

        # Normalize
        returns = np.array(returns)
        returns = (returns - np.mean(returns)) / (np.std(returns) + eps)
        returns = returns.tolist()

        # Calculating loss values to update our network
        history = zip(action_probs_history, critic_value_history, returns)
        actor_losses = []
        critic_losses = []
        for log_prob, value, ret in history:
            # At this point in history, the critic estimated that we would get a
            # total reward = `value` in the future. We took an action with log probability
            # of `log_prob` and ended up receiving a total reward = `ret`.
            # The actor must be updated so that it predicts an action that leads to
            # high rewards (compared to critic's estimate) with high probability.
            diff = ret - value
            actor_losses.append(-log_prob * diff)  # actor loss

            # The critic must be updated so that it predicts a better estimate of
            # the future rewards.
            critic_losses.append(
                huber_loss(tf.expand_dims(value, 0), tf.expand_dims(ret, 0))
            )

        # Backpropagation
        loss_value = sum(actor_losses) + sum(critic_losses)
        grads = tape.gradient(loss_value, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

        # Clear the loss and reward history
        action_probs_history.clear()
        critic_value_history.clear()
        rewards_history.clear()

    # Log details
    episode_count += 1
    if episode_count % 10 == 0:
        template = "running reward: {:.2f} at episode {}"
        print(template.format(running_reward, episode_count))

    if running_reward > 195:  # Condition to consider the task solved
        print("Solved at episode {}!".format(episode_count))
        break
```
# Drawing Bloch vector of a density matrix
We build a small matplotlib tool that draws the Bloch vector of a given single-qubit density matrix.
From Exercise 2.72 in *Quantum Computation and Quantum Information* by Nielsen and Chuang, an arbitrary density matrix for a single qubit can be written as
$$ \rho = \frac{I + \vec{r} \cdot \vec{\sigma}}{2}$$
where $\vec{\sigma}$ is the vector of Pauli $X$, $Y$, and $Z$ matrices. The $3$-dimensional real vector $\vec{r}$ is the Bloch vector.
A state $\rho$ is pure if and only if $\vec{r}$ is a unit vector.
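As a quick sanity check of the formula (a standalone snippet, not part of the tool below), the Bloch vector $\vec{r} = (0, 0, 1)$ should give $\rho = |0\rangle\langle 0|$:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

r = np.array([0.0, 0.0, 1.0])  # Bloch vector pointing at the north pole
rho = (np.eye(2) + r[0] * X + r[1] * Y + r[2] * Z) / 2

print(rho)  # [[1, 0], [0, 0]], i.e. |0><0|
```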
```
import numpy as np
# Global variables
eps = 0.0001
def gen_density_matrix(states, probs):
    """
    Generate a density matrix from an array of 1-qubit states with an array
    of their corresponding probabilities.
    """
    if len(states) != len(probs):
        raise ValueError('Size of `states` and `probs` arrays must be the same.')
    if abs(np.sum(probs) - 1) > eps:
        raise ValueError('Probabilities must sum to 1.')
    for state in states:
        if np.shape(state) != (2,):
            raise ValueError('Each state must be a 2-dimensional vector.')
        if np.linalg.norm(state) < 1 - eps or np.linalg.norm(state) > 1 + eps:
            raise ValueError('Each state must have norm 1.')
    # Use a complex dtype so states with complex amplitudes are handled correctly
    rho = np.zeros((2, 2), dtype=complex)
    for i in range(len(states)):
        state = np.asarray(states[i], dtype=complex)
        # Add probs[i] * |state><state| to rho
        rho += probs[i] * np.outer(state, np.conj(state))
    return rho
def validate(rho):
    """
    Validate rho as a valid single-qubit density matrix.
    Raise an error if rho fails.
    """
    # Verify rho is 2x2
    if np.shape(rho) != (2, 2):
        raise ValueError('Rho must be a 2x2 matrix.')
    # Verify rho has trace 1
    trace = np.real(np.trace(rho))
    if trace < 1 - eps or trace > 1 + eps:
        raise ValueError('Rho must have trace 1.')
    # Verify rho is a positive operator, using Sylvester's criterion for
    # positive semidefiniteness: the determinants of all principal minors
    # must be >= 0
    if np.real(rho[0][0]) < 0 or np.real(rho[1][1]) < 0 or np.real(np.linalg.det(rho)) < 0:
        raise ValueError('Rho must be a positive operator.')

def bloch_vector_from_density(rho):
    """
    Returns the Bloch vector r for the density matrix rho.
    """
    # Check that rho is a legitimate density matrix
    validate(rho)
    # Rearrange the formula to isolate r . sigma on one side of the equation
    pauli_weighted = 2 * rho - np.eye(2)
    # Define Pauli matrices
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])
    # Project onto Pauli X, Y, Z to get the r vector coefficients
    r = np.zeros(3)
    r[0] = np.real(HS_inner_product(X, pauli_weighted) / HS_inner_product(X, X))
    r[1] = np.real(HS_inner_product(Y, pauli_weighted) / HS_inner_product(Y, Y))
    r[2] = np.real(HS_inner_product(Z, pauli_weighted) / HS_inner_product(Z, Z))
    return r

def HS_inner_product(M1, M2):
    # The Hilbert-Schmidt inner product is Tr(M1^dagger M2)
    return np.trace(np.dot(np.conjugate(np.transpose(M1)), M2))
bloch_vector_from_density(gen_density_matrix([[1/np.sqrt(2), 1/np.sqrt(2)]], [1]))
```
## Plotting Bloch vector
Using [this guide](https://jakevdp.github.io/PythonDataScienceHandbook/04.12-three-dimensional-plotting.html).
```
from mpl_toolkits import mplot3d
%matplotlib inline
#%matplotlib notebook
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection='3d')
# Data for a three-dimensional line
zline = np.linspace(0, 15, 1000)
xline = np.sin(zline)
yline = np.cos(zline)
ax.plot3D(xline, yline, zline, 'gray')
# Data for three-dimensional scattered points
zdata = 15 * np.random.random(100)
xdata = np.sin(zdata) + 0.1 * np.random.randn(100)
ydata = np.cos(zdata) + 0.1 * np.random.randn(100)
ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Greens');
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
# theta = np.linspace(-4 * np.pi, 4 * np.pi, 100)
# z = np.linspace(-2, 2, 100)
# r = z**2 + 1
# x = r * np.sin(theta)
# y = r * np.cos(theta)
# ax.plot(x, y, z)
ax.plot([0, 1], [0, 0], [0, 1])
plt.savefig('foo.png', bbox_inches='tight')
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
from itertools import product, combinations
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
#ax.set_aspect("equal")
# draw sphere
u, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]
x = np.cos(u)*np.sin(v)
y = np.sin(u)*np.sin(v)
z = np.cos(v)
ax.plot_wireframe(x, y, z, color="r")
# draw a point
ax.scatter([0], [0], [0], color="g", s=100)
# draw a vector
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
class Arrow3D(FancyArrowPatch):
    def __init__(self, xs, ys, zs, *args, **kwargs):
        FancyArrowPatch.__init__(self, (0, 0), (0, 0), *args, **kwargs)
        self._verts3d = xs, ys, zs

    def draw(self, renderer):
        xs3d, ys3d, zs3d = self._verts3d
        xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
        self.set_positions((xs[0], ys[0]), (xs[1], ys[1]))
        FancyArrowPatch.draw(self, renderer)

a = Arrow3D([0, 1], [0, 1], [0, 1], mutation_scale=20,
            lw=1, arrowstyle="-|>", color="k")
ax.add_artist(a)
plt.savefig('foo.png', bbox_inches='tight')
```
## Visualizing expressibility of different ansatze
```
import qutip as qtp
from qiskit import *
thetas = [i * 2 * np.pi / 500 for i in range(500)]
statevectors = []
for theta in thetas:
    qc = QuantumCircuit(1)
    qc.h(0)
    qc.rz(theta, 0)
    backend = Aer.get_backend('statevector_simulator')
    statevectors.append(execute(qc, backend).result().get_statevector())
rhos = [gen_density_matrix([statevector], [1]) for statevector in statevectors]
vecs = [bloch_vector_from_density(rho) for rho in rhos]
print(len(vecs))
b = qtp.Bloch()
b.clear()
b.add_points(vecs)
b.show()
```
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Copulas Primer
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Copula.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Copula.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
```
!pip install -q tensorflow-probability
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
```
A [copula](https://en.wikipedia.org/wiki/Copula_(probability_theory%29) is a classical approach for capturing the dependence between random variables. More formally, a copula is a multivariate distribution $C(U_1, U_2, ...., U_n)$ such that marginalizing gives $U_i \sim \text{Uniform}(0, 1)$.
Copulas are interesting because we can use them to create multivariate distributions with arbitrary marginals. This is the recipe:
* The [Probability Integral Transform](https://en.wikipedia.org/wiki/Probability_integral_transform) turns an arbitrary continuous R.V. $X$ into a uniform one, $F_X(X)$, where $F_X$ is the CDF of $X$.
* Given a copula (say bivariate) $C(U, V)$, we have that $U$ and $V$ have uniform marginal distributions.
* Now given our R.V.s of interest $X, Y$, create a new distribution $C'(X, Y) = C(F_X(X), F_Y(Y))$. The marginals for $X$ and $Y$ are the ones we desired.
Marginals are univariate and thus may be easier to measure and/or model. A copula enables starting from marginals yet also achieving arbitrary correlation between dimensions.
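As a quick NumPy illustration of the first step of the recipe (standalone, independent of the TFP code below): applying the exponential CDF $F_X(x) = 1 - e^{-x}$ to exponential samples yields approximately uniform values.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)  # X ~ Exponential(1)

u = 1.0 - np.exp(-x)  # F_X(X): the probability integral transform

# A Uniform(0, 1) variable has mean 1/2 and variance 1/12
print(u.mean(), u.var())
```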
# Gaussian Copula
To illustrate how copulas are constructed, consider the case of capturing dependence according to multivariate Gaussian correlations. A Gaussian Copula is one given by $C(u_1, u_2, ...u_n) = \Phi_\Sigma(\Phi^{-1}(u_1), \Phi^{-1}(u_2), ... \Phi^{-1}(u_n))$ where $\Phi_\Sigma$ represents the CDF of a MultivariateNormal, with covariance $\Sigma$ and mean 0, and $\Phi^{-1}$ is the inverse CDF for the standard normal.
Applying the normal's inverse CDF warps the uniform dimensions to be normally distributed. Applying the multivariate normal's CDF then squashes the distribution to be marginally uniform and with Gaussian correlations.
Thus, what we get is that the Gaussian Copula is a distribution over the unit hypercube $[0, 1]^n$ with uniform marginals.
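The sampling direction of this construction can be sketched in plain NumPy (a standalone illustration, separate from the TFP implementation below): draw correlated normals, then push each coordinate through the standard normal CDF.

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(x):
    # Phi(x) applied elementwise, via the error function
    return 0.5 * (1.0 + np.vectorize(erf)(x / sqrt(2.0)))

rng = np.random.default_rng(0)
rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])

z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=50_000)
u = std_normal_cdf(z)  # samples from the Gaussian copula on [0, 1]^2

# Marginals are uniform (mean ~ 0.5), but the coordinates stay correlated
print(u.mean(axis=0))
print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])  # close to (6/pi) * arcsin(rho/2)
```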
Defined as such, the Gaussian Copula can be implemented with `tfd.TransformedDistribution` and appropriate `Bijector`. That is, we are transforming a MultivariateNormal, via the use of the Normal distribution's inverse CDF. A `Bijector` modelling this is listed below.
```
class NormalCDF(tfb.Bijector):
    """Bijector that encodes normal CDF and inverse CDF functions.

    We follow the convention that the `inverse` represents the CDF
    and `forward` the inverse CDF (the reason for this convention is
    that inverse CDF methods for sampling are expressed a little more
    tersely this way).
    """
    def __init__(self):
        self.normal_dist = tfd.Normal(loc=0., scale=1.)
        super(NormalCDF, self).__init__(
            forward_min_event_ndims=0,
            validate_args=False,
            name="NormalCDF")

    def _forward(self, y):
        # Inverse CDF of normal distribution.
        return self.normal_dist.quantile(y)

    def _inverse(self, x):
        # CDF of normal distribution.
        return self.normal_dist.cdf(x)

    def _inverse_log_det_jacobian(self, x):
        # Log PDF of the normal distribution.
        return self.normal_dist.log_prob(x)
```
Below, we implement a Gaussian Copula with one simplifying assumption: the covariance is parameterized by a Cholesky factor (hence a covariance for `MultivariateNormalTriL`). One could use other `tf.linalg.LinearOperator`s to encode different matrix-free assumptions.
```
class GaussianCopulaTriL(tfd.TransformedDistribution):
    """Takes a location, and lower triangular matrix for the Cholesky factor."""
    def __init__(self, loc, scale_tril):
        super(GaussianCopulaTriL, self).__init__(
            distribution=tfd.MultivariateNormalTriL(
                loc=loc,
                scale_tril=scale_tril),
            bijector=tfb.Invert(NormalCDF()),
            validate_args=False,
            name="GaussianCopulaTriLUniform")

# Plot an example of this.
unit_interval = np.linspace(0.01, 0.99, num=200, dtype=np.float32)
x_grid, y_grid = np.meshgrid(unit_interval, unit_interval)
coordinates = np.concatenate(
    [x_grid[..., np.newaxis],
     y_grid[..., np.newaxis]], axis=-1)

pdf = GaussianCopulaTriL(
    loc=[0., 0.],
    scale_tril=[[1., 0.8], [0., 0.6]],
).prob(coordinates)

# Plot its density.
with tf.Session() as sess:
    pdf_eval = sess.run(pdf)
plt.contour(x_grid, y_grid, pdf_eval, 100, cmap=plt.cm.jet);
```
The real power of such a model, however, comes from combining it with the Probability Integral Transform so that the copula can be applied to arbitrary R.V.s. This way, we can specify arbitrary marginals and use the copula to stitch them together.
We start with a model:
$$\begin{align*}
X &\sim \text{Kumaraswamy}(a, b) \\
Y &\sim \text{Gumbel}(\mu, \beta)
\end{align*}$$
and use the copula to get a bivariate R.V. $Z$, which has marginals [Kumaraswamy](https://en.wikipedia.org/wiki/Kumaraswamy_distribution) and [Gumbel](https://en.wikipedia.org/wiki/Gumbel_distribution).
We'll start by plotting the product distribution generated by those two R.V.s. This is just to serve as a comparison point to when we apply the Copula.
```
a = 2.0
b = 2.0
gloc = 0.
gscale = 1.
x = tfd.Kumaraswamy(a, b)
y = tfd.Gumbel(loc=gloc, scale=gscale)
# Plot the distributions, assuming independence
x_axis_interval = np.linspace(0.01, 0.99, num=200, dtype=np.float32)
y_axis_interval = np.linspace(-2., 3., num=200, dtype=np.float32)
x_grid, y_grid = np.meshgrid(x_axis_interval, y_axis_interval)
pdf = x.prob(x_grid) * y.prob(y_grid)
# Plot its density
with tf.Session() as sess:
    pdf_eval = sess.run(pdf)
plt.contour(x_grid, y_grid, pdf_eval, 100, cmap=plt.cm.jet);
```
Now we use a Gaussian copula to couple the distributions together, and plot that. Again our tool of choice is `TransformedDistribution` applying the appropriate `Bijector`.
Specifically, we define a `Concat` bijector which applies different bijectors at different parts of the vector (which is still a bijective transformation).
```
class Concat(tfb.Bijector):
    """This bijector concatenates bijectors who act on scalars.

    More specifically, given [F_0, F_1, ... F_n] which are scalar transformations,
    this bijector creates a transformation which operates on the vector
    [x_0, ... x_n] with the transformation [F_0(x_0), F_1(x_1) ..., F_n(x_n)].

    NOTE: This class does no error checking, so use with caution.
    """
    def __init__(self, bijectors):
        self._bijectors = bijectors
        super(Concat, self).__init__(
            forward_min_event_ndims=1,
            validate_args=False,
            name="ConcatBijector")

    @property
    def bijectors(self):
        return self._bijectors

    def _forward(self, x):
        split_xs = tf.split(x, len(self.bijectors), -1)
        transformed_xs = [b_i.forward(x_i) for b_i, x_i in zip(
            self.bijectors, split_xs)]
        return tf.concat(transformed_xs, -1)

    def _inverse(self, y):
        split_ys = tf.split(y, len(self.bijectors), -1)
        transformed_ys = [b_i.inverse(y_i) for b_i, y_i in zip(
            self.bijectors, split_ys)]
        return tf.concat(transformed_ys, -1)

    def _forward_log_det_jacobian(self, x):
        split_xs = tf.split(x, len(self.bijectors), -1)
        fldjs = [
            b_i.forward_log_det_jacobian(x_i, event_ndims=0) for b_i, x_i in zip(
                self.bijectors, split_xs)]
        return tf.squeeze(sum(fldjs), axis=-1)

    def _inverse_log_det_jacobian(self, y):
        split_ys = tf.split(y, len(self.bijectors), -1)
        ildjs = [
            b_i.inverse_log_det_jacobian(y_i, event_ndims=0) for b_i, y_i in zip(
                self.bijectors, split_ys)]
        return tf.squeeze(sum(ildjs), axis=-1)
```
Now we can define the Copula we want. Given a list of target marginals (encoded as bijectors), we can easily construct
a new distribution that uses the copula and has the specified marginals.
Note that $C'(X, Y) = C(F_X(X), F_Y(Y))$. This is mathematically equivalent to using `Concat([F_X, F_Y])`.
```
class WarpedGaussianCopula(tfd.TransformedDistribution):
    """Application of a Gaussian Copula on a list of target marginals.

    This implements an application of a Gaussian Copula. Given [x_0, ... x_n]
    which are distributed marginally (with CDF) [F_0, ... F_n],
    `GaussianCopula` represents an application of the Copula, such that the
    resulting multivariate distribution has the above specified marginals.

    The marginals are specified by `marginal_bijectors`: These are
    bijectors whose `inverse` encodes the CDF and `forward` the inverse CDF.
    """
    def __init__(self, loc, scale_tril, marginal_bijectors):
        super(WarpedGaussianCopula, self).__init__(
            distribution=GaussianCopulaTriL(loc=loc, scale_tril=scale_tril),
            bijector=Concat(marginal_bijectors),
            validate_args=False,
            name="GaussianCopula")
```
Finally, let's actually use this Gaussian Copula. We'll use a Cholesky of $\begin{bmatrix}1 & 0\\\rho & \sqrt{(1-\rho^2)}\end{bmatrix}$, which will correspond to variances 1, and correlation $\rho$ for the multivariate normal.
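A quick standalone NumPy check (with an illustrative $\rho = 0.8$) confirms that this Cholesky factor produces unit variances and correlation $\rho$:

```python
import numpy as np

rho = 0.8
L = np.array([[1.0, 0.0],
              [rho, np.sqrt(1.0 - rho ** 2)]])

cov = L @ L.T  # covariance implied by the Cholesky factor
print(cov)     # diagonal entries 1.0, off-diagonal entries rho
```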
We'll look at a few cases:
```
# Create our coordinates:
coordinates = np.concatenate(
    [x_grid[..., np.newaxis], y_grid[..., np.newaxis]], -1)

def create_gaussian_copula(correlation):
    # Use Gaussian Copula to add dependence.
    return WarpedGaussianCopula(
        loc=[0., 0.],
        scale_tril=[[1., 0.], [correlation, tf.sqrt(1. - correlation ** 2)]],
        # These encode the marginals we want. In this case we want X_0 to have
        # a Kumaraswamy marginal, and X_1 to have a Gumbel marginal.
        marginal_bijectors=[
            tfb.Kumaraswamy(a, b),
            # Kumaraswamy follows the above convention, while
            # Gumbel does not, and has to be inverted.
            tfb.Invert(tfb.Gumbel(loc=0., scale=1.))])

# Note that the zero case will correspond to independent marginals!
correlations = [0., -0.8, 0.8]
copulas = []
probs = []
for correlation in correlations:
    copula = create_gaussian_copula(correlation)
    copulas.append(copula)
    probs.append(copula.prob(coordinates))

# Plot its density
with tf.Session() as sess:
    copula_evals = sess.run(probs)
for correlation, copula_eval in zip(correlations, copula_evals):
    plt.figure()
    plt.contour(x_grid, y_grid, copula_eval, 100, cmap=plt.cm.jet)
    plt.title('Correlation {}'.format(correlation))
```
Finally, let's verify that we actually get the marginals we want.
```
def kumaraswamy_pdf(x):
    return tfd.Kumaraswamy(a, b).prob(np.float32(x)).eval()

def gumbel_pdf(x):
    return tfd.Gumbel(gloc, gscale).prob(np.float32(x)).eval()

samples = []
for copula in copulas:
    samples.append(copula.sample(10000))

with tf.Session() as sess:
    copula_evals = sess.run(samples)

# Let's marginalize out on each, and plot the samples.
for correlation, copula_eval in zip(correlations, copula_evals):
    k = copula_eval[..., 0]
    g = copula_eval[..., 1]

    plt.figure()
    _, bins, _ = plt.hist(k, bins=100, normed=True)
    plt.plot(bins, kumaraswamy_pdf(bins), 'r--')
    plt.title('Kumaraswamy from Copula with correlation {}'.format(correlation))

    plt.figure()
    _, bins, _ = plt.hist(g, bins=100, normed=True)
    plt.plot(bins, gumbel_pdf(bins), 'r--')
    plt.title('Gumbel from Copula with correlation {}'.format(correlation))
```
# Conclusion
And there we go! We've demonstrated that we can construct Gaussian Copulas using the `Bijector` API.
More generally, writing bijectors using the `Bijector` API and composing them with a distribution can create rich families of distributions for flexible modelling.
# Attribute access methods
https://github.com/alexopryshko/advancedpython/tree/master/1
The previous topic covered descriptors, which allow an attribute to redefine how it is accessed from inside the attribute itself. In addition, Python has a group of magic methods that are invoked when attributes are accessed through an instance of the class:
- `__getattribute__(self, name)` - called on every attempt to read an attribute's value. If this method is overridden, the standard attribute lookup machinery is bypassed. By default it is this method that looks into the object's `__dict__` and, on failure, calls `__getattr__`.
- `__getattr__(self, name)` - called when the requested attribute is not found by the normal mechanism (in the `__dict__` of the instance, the class, and so on).
- `__setattr__(self, name, value)` - called on every attempt to set an instance attribute's value. If it is overridden, the standard assignment mechanism is bypassed.
- `__delattr__(self, name)` - used when an attribute is deleted.
The following example shows that `__getattr__` is called only when the attribute cannot be found by the standard means (looking into the `__dict__` of the object and the class). Note that in our case the method succeeds for any name, never raising `AttributeError`.
```
class A:
    def __getattr__(self, attr):
        print('__getattr__')
        return 42

    field = 'field'

a = A()
a.name = 'name'

print(a.__dict__, A.__dict__, end='\n\n\n')
print('a.name', a.name, end='\n\n')
print('a.field', a.field, end='\n\n')
print('a.random', a.random, end='\n\n')
a.asdlubaslifubasfuib
```
And if we override `__getattribute__`, we cannot even look at `__dict__`.
```
class A:
    def __getattribute__(self, item):
        # Accessing self.__dict__ here triggers __getattribute__ again,
        # so this implementation recurses infinitely.
        if item in self.__dict__:
            return self.__dict__[item]
        if item in self.__class__.__dict__:
            return self.__class__.__dict__[item]
        if item in object.__dict__:
            return object.__dict__[item]
        if '__getattr__' in self.__class__.__dict__:
            return self.__getattr__(self, item)
        raise AttributeError
        print('__getattribute__')  # unreachable
        return 42

    def __len__(self):
        return 0

    def test(self):
        print('test', self)

    field = 'field'

a = A()
a.name = 'name'
print('__dict__', getattr(a, "__dict__"), end='\n\n')
print('a.name', a.name, end='\n\n')
print('a.field', a.field, end='\n\n')
print('a.random', a.random, end='\n\n')
print('a.__len__', a.__len__, end='\n\n')
print('len(a)', len(a), end='\n\n')
print('type(a)...', type(a).__dict__['test'](a), end='\n\n')
print('A.field', A.field, end='\n\n')
a.test()
```
By overriding `__setattr__`, we risk not seeing the attributes we assign on the object in its `__dict__`.
```
class A:
    def __setattr__(self, key, value):
        print('__setattr__')

    field = 'field'

a = A()
a.field = 1
a.a = 1
print('a.__dict__', a.__dict__, end='\n\n')
A.field = 'new'
print('A.field', A.field, end='\n\n')
A.__dict__
dir(a)
a.a
```
And in this way we can make our object expose only those attributes whose names start with the word `test`. In theory this trick could be used to implement truly private attributes, but why would you?
```
class A:
    def __getattribute__(self, item):
        if 'test' in item or '__dict__' == item:
            return super().__getattribute__(item)
        else:
            raise AttributeError

a = A()
a.test_name = 1
a.name = 1
print('a.__dict__', a.__dict__)
print('a.test_name', a.test_name)
print('a.name', a.name)

class A:
    def __init__(self):
        self.obj_field = 4

    class_field = 5
    # DataDescriptor and NonDataDescriptor are defined in the previous
    # notebook on descriptors
    data_descr = DataDescriptor()
    nondata_descr = NonDataDescriptor()

a = A()
a.liunyiuynliun
```
## General attribute lookup algorithm
To get the value of attribute attrname on instance a:
- If the method `a.__class__.__getattribute__()` is defined, it is called and its result is returned.
- If attrname is a special (Python-defined) attribute such as `__class__` or `__doc__`, its value is returned.
- `a.__class__.__dict__` is checked for an entry named attrname. If it exists and its value is a data descriptor, the result of calling the descriptor's `__get__()` method is returned. All base classes are checked as well.
- If `a.__dict__` contains an entry named attrname, its value is returned.
- `a.__class__.__dict__` is checked: if it contains an entry named attrname that is a non-data descriptor, the result of the descriptor's `__get__()` is returned; if the entry exists but is not a descriptor, its value is returned. Base classes are searched as well.
- If the method `a.__class__.__getattr__()` exists, it is called and its result is returned. If there is no such method, `AttributeError` is raised.
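The lookup order above can be observed directly (a standalone sketch with two minimal illustrative descriptors): a data descriptor on the class wins over the instance `__dict__`, while a non-data descriptor loses to it.

```python
class DataDescr:
    """Data descriptor: defines both __get__ and __set__."""
    def __get__(self, obj, objtype=None):
        return 'data descriptor'
    def __set__(self, obj, value):
        pass  # ignore assignments

class NonDataDescr:
    """Non-data descriptor: defines only __get__."""
    def __get__(self, obj, objtype=None):
        return 'non-data descriptor'

class A:
    d = DataDescr()
    n = NonDataDescr()

a = A()
a.__dict__['d'] = 'instance value'
a.__dict__['n'] = 'instance value'

print(a.d)  # 'data descriptor': the data descriptor shadows the instance dict
print(a.n)  # 'instance value': the instance dict shadows the non-data descriptor
```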
## General attribute assignment algorithm
To set attribute attrname of instance a to value:
- If the method `a.__class__.__setattr__()` exists, it is called.
- `a.__class__.__dict__` is checked: if it contains an entry named attrname and it is a data descriptor, the descriptor's `__set__()` method is called. Base classes are checked as well.
- An entry with key attrname and value value is added to `a.__dict__`.
## Exercise
The pandas library is designed for working with tabular data. It has the entities DataFrame (essentially the table itself) and Series (a column or row of the table). Columns inside a table have names, and a column can be retrieved in two ways:
- `dataframe.colname`
- `dataframe['colname']`
Exercise: implement a key-value data structure where both assigning and retrieving elements works in both of these ways.
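One possible solution sketch (illustrative names, not the only approach) combines `__getattr__`/`__setattr__` with `__getitem__`/`__setitem__` over a single backing dict:

```python
class AttrDict:
    """Key-value store where d.x and d['x'] both work for get and set."""
    def __init__(self):
        # Bypass our own __setattr__ to create the backing dict
        object.__setattr__(self, '_data', {})

    def __getattr__(self, name):
        # Called only when normal lookup fails, so _data itself is found normally
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self._data[name] = value

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

d = AttrDict()
d.colname = [1, 2, 3]
d['other'] = 'x'
print(d['colname'], d.other)  # [1, 2, 3] x
```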
# What's this TensorFlow business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you switch over to that notebook).
#### What is it?
TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropagation for its Variables. In it, we work with Tensors, which are n-dimensional arrays analogous to the numpy ndarray.
#### Why?
* Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.
* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
## How will I learn TensorFlow?
TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started).
Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
# Table of Contents
This notebook has 5 parts. We will walk through TensorFlow at three different levels of abstraction, which should help you better understand it and prepare you for working on your project.
1. Preparation: load the CIFAR-10 dataset.
2. Barebone TensorFlow: we will work directly with low-level TensorFlow graphs.
3. Keras Model API: we will use `tf.keras.Model` to define arbitrary neural network architecture.
4. Keras Sequential API: we will use `tf.keras.Sequential` to define a linear feed-forward network very conveniently.
5. CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features.
Here is a table of comparison:
| API | Flexibility | Convenience |
|---------------|-------------|-------------|
| Barebone | High | Low |
| `tf.keras.Model` | High | Medium |
| `tf.keras.Sequential` | Low | High |
# Part I: Preparation
First, we load the CIFAR-10 dataset. This might take a few minutes to download the first time you run it, but after that the files should be cached on disk and loading should be faster.
In previous parts of the assignment we used CS231N-specific code to download and read the CIFAR-10 dataset; however the `tf.keras.datasets` package in TensorFlow provides prebuilt utility functions for loading many common datasets.
For the purposes of this assignment we will still write our own code to preprocess the data and iterate through it in minibatches. The `tf.data` package in TensorFlow provides tools for automating this process, but working with this package adds extra complication and is beyond the scope of this notebook. However using `tf.data` can be much more efficient than the simple approach used in this notebook, so you should consider using it for your project.
```
import os
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
def load_cifar10(num_training=49000, num_validation=1000, num_test=10000):
"""
Fetch the CIFAR-10 dataset from the web and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 dataset and use appropriate data types and shapes
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train, y_train), (X_test, y_test) = cifar10
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32).flatten()
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32).flatten()
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean pixel and divide by std
mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean_pixel) / std_pixel
X_val = (X_val - mean_pixel) / std_pixel
X_test = (X_test - mean_pixel) / std_pixel
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
NHW = (0, 1, 2)
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape, y_train.dtype)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
```
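As a quick sanity check of the normalization step above, here is a standalone numpy sketch (using random toy data in place of CIFAR-10, purely for illustration): after subtracting the per-channel mean and dividing by the per-channel std, each channel of the training set should have approximately zero mean and unit standard deviation.

```
import numpy as np

# Toy stand-in for CIFAR-10 image batches: shape (N, H, W, C)
rng = np.random.RandomState(0)
X_train = rng.uniform(0, 255, size=(100, 32, 32, 3)).astype(np.float32)

# Same normalization as load_cifar10: per-channel statistics over N, H, W
mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)  # shape (1, 1, 1, 3)
std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
X_norm = (X_train - mean_pixel) / std_pixel

print(mean_pixel.shape)  # (1, 1, 1, 3)
print(np.allclose(X_norm.mean(axis=(0, 1, 2)), 0, atol=1e-3))  # True
print(np.allclose(X_norm.std(axis=(0, 1, 2)), 1, atol=1e-3))   # True
```

The `keepdims=True` is what lets the `(1, 1, 1, 3)` statistics broadcast against the `(N, H, W, 3)` data.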
### Preparation: Dataset object
For our own convenience we'll define a lightweight `Dataset` class which lets us iterate over data and labels. This is not the most flexible or most efficient way to iterate through data, but it will serve our purposes.
```
class Dataset(object):
def __init__(self, X, y, batch_size, shuffle=False):
"""
Construct a Dataset object to iterate over data X and labels y
Inputs:
- X: Numpy array of data, of any shape
- y: Numpy array of labels, of any shape but with y.shape[0] == X.shape[0]
- batch_size: Integer giving number of elements per minibatch
- shuffle: (optional) Boolean, whether to shuffle the data on each epoch
"""
assert X.shape[0] == y.shape[0], 'Got different numbers of data and labels'
self.X, self.y = X, y
self.batch_size, self.shuffle = batch_size, shuffle
def __iter__(self):
N, B = self.X.shape[0], self.batch_size
idxs = np.arange(N)
if self.shuffle:
np.random.shuffle(idxs)
        return iter((self.X[idxs[i:i+B]], self.y[idxs[i:i+B]]) for i in range(0, N, B))
train_dset = Dataset(X_train, y_train, batch_size=64, shuffle=True)
val_dset = Dataset(X_val, y_val, batch_size=64, shuffle=False)
test_dset = Dataset(X_test, y_test, batch_size=64)
# We can iterate through a dataset like this:
for t, (x, y) in enumerate(train_dset):
print(t, x.shape, y.shape)
if t > 5: break
```
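To see what the iterator yields, here is a standalone numpy sketch (with toy shapes chosen for illustration) mirroring the slicing logic in `Dataset.__iter__`. Note that the final minibatch is smaller than `batch_size` whenever `N` is not divisible by `B`, because Python slicing simply clips at the end of the array.

```
import numpy as np

N, B = 10, 4
X = np.arange(N * 2).reshape(N, 2)
y = np.arange(N)

# Same slicing pattern as Dataset.__iter__: the last batch holds the
# remaining N % B examples rather than raising an error.
batches = [(X[i:i+B], y[i:i+B]) for i in range(0, N, B)]
sizes = [xb.shape[0] for xb, yb in batches]
print(sizes)  # [4, 4, 2]
```

This is why code that consumes minibatches (such as `check_accuracy` below) should count samples per batch rather than assuming a fixed batch size.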
You can optionally **use GPU by setting the flag to True below**. It's not necessary to use a GPU for this assignment; if you are working on Google Cloud then we recommend that you do not use a GPU, as it will be significantly more expensive.
```
# Set up some global variables
USE_GPU = True
if USE_GPU:
device = '/device:GPU:0'
else:
device = '/cpu:0'
# Constant to control how often we print when training models
print_every = 100
print('Using device: ', device)
```
# Part II: Barebone TensorFlow
TensorFlow ships with various high-level APIs which make it very convenient to define and train neural networks; we will cover some of these constructs in Part III and Part IV of this notebook. In this section we will start by building a model with basic TensorFlow constructs to help you better understand what's going on under the hood of the higher-level APIs.
TensorFlow is primarily a framework for working with **static computational graphs**. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation.
This means that a typical TensorFlow program is written in two distinct phases:
1. Build a computational graph that describes the computation that you want to perform. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more `placeholder` objects that represent inputs to the computational graph.
2. Run the computational graph many times. Each time the graph is run you will specify which parts of the graph you want to compute, and pass a `feed_dict` dictionary that will give concrete values to any `placeholder`s in the graph.
### TensorFlow warmup: Flatten Function
We can see this in action by defining a simple `flatten` function that will reshape image data for use in a fully-connected network.
In TensorFlow, data for convolutional feature maps is typically stored in a Tensor of shape N x H x W x C where:
- N is the number of datapoints (minibatch size)
- H is the height of the feature map
- W is the width of the feature map
- C is the number of channels in the feature map
This is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "flatten" operation to collapse the `H x W x C` values per representation into a single long vector. The flatten function below first reads in the value of N from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be H x W x C, but we don't need to specify that explicitly).
**NOTE**: TensorFlow and PyTorch differ on the default Tensor layout; TensorFlow uses N x H x W x C but PyTorch uses N x C x H x W.
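The layout difference between the two frameworks is just an axis permutation; a standalone numpy sketch of the conversion (shapes chosen for illustration):

```
import numpy as np

x_nhwc = np.zeros((64, 32, 32, 3))     # TensorFlow default: N x H x W x C
x_nchw = x_nhwc.transpose(0, 3, 1, 2)  # PyTorch default:    N x C x H x W
print(x_nchw.shape)  # (64, 3, 32, 32)

# And back again:
x_back = x_nchw.transpose(0, 2, 3, 1)
print(x_back.shape)  # (64, 32, 32, 3)
```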
```
def flatten(x):
"""
Input:
- TensorFlow Tensor of shape (N, D1, ..., DM)
Output:
- TensorFlow Tensor of shape (N, D1 * ... * DM)
"""
N = tf.shape(x)[0]
return tf.reshape(x, (N, -1))
def test_flatten():
# Clear the current TensorFlow graph.
tf.reset_default_graph()
# Stage I: Define the TensorFlow graph describing our computation.
# In this case the computation is trivial: we just want to flatten
# a Tensor using the flatten function defined above.
# Our computation will have a single input, x. We don't know its
# value yet, so we define a placeholder which will hold the value
# when the graph is run. We then pass this placeholder Tensor to
# the flatten function; this gives us a new Tensor which will hold
# a flattened view of x when the graph is run. The tf.device
# context manager tells TensorFlow whether to place these Tensors
# on CPU or GPU.
with tf.device(device):
x = tf.placeholder(tf.float32)
x_flat = flatten(x)
# At this point we have just built the graph describing our computation,
# but we haven't actually computed anything yet. If we print x and x_flat
# we see that they don't hold any data; they are just TensorFlow Tensors
# representing values that will be computed when the graph is run.
print('x: ', type(x), x)
print('x_flat: ', type(x_flat), x_flat)
print()
# We need to use a TensorFlow Session object to actually run the graph.
with tf.Session() as sess:
# Construct concrete values of the input data x using numpy
x_np = np.arange(24).reshape((2, 3, 4))
print('x_np:\n', x_np, '\n')
# Run our computational graph to compute a concrete output value.
# The first argument to sess.run tells TensorFlow which Tensor
# we want it to compute the value of; the feed_dict specifies
# values to plug into all placeholder nodes in the graph. The
# resulting value of x_flat is returned from sess.run as a
# numpy array.
x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
print('x_flat_np:\n', x_flat_np, '\n')
# We can reuse the same graph to perform the same computation
# with different input data
x_np = np.arange(12).reshape((2, 3, 2))
print('x_np:\n', x_np, '\n')
x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
print('x_flat_np:\n', x_flat_np)
test_flatten()
```
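The graph-mode `flatten` above is the TensorFlow analogue of numpy's `reshape`; a standalone numpy sketch of the same `N x ??` reshape, using the same example data as the cell above:

```
import numpy as np

x = np.arange(24).reshape(2, 3, 4)
N = x.shape[0]
x_flat = x.reshape(N, -1)  # -1 lets numpy infer D1 * ... * DM = 12
print(x_flat.shape)  # (2, 12)
```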
### Barebones TensorFlow: Two-Layer Network
We will now implement our first neural network with TensorFlow: a fully-connected network with one hidden layer, a ReLU nonlinearity, and no biases, trained on the CIFAR-10 dataset. For now we will use only low-level TensorFlow operators to define the network; later we will see how to use the higher-level abstractions provided by `tf.keras` to simplify the process.
We will define the forward pass of the network in the function `two_layer_fc`; this will accept TensorFlow Tensors for the inputs and weights of the network, and return a TensorFlow Tensor for the scores. It's important to keep in mind that calling the `two_layer_fc` function **does not** perform any computation; instead it just sets up the computational graph for the forward computation. To actually run the network we need to enter a TensorFlow Session and feed data to the computational graph.
After defining the network architecture in the `two_layer_fc` function, we will test the implementation by setting up and running a computational graph, feeding zeros to the network and checking the shape of the output.
It's important that you read and understand this implementation.
```
def two_layer_fc(x, params):
"""
A fully-connected neural network; the architecture is:
fully-connected layer -> ReLU -> fully connected layer.
Note that we only need to define the forward pass here; TensorFlow will take
care of computing the gradients for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A TensorFlow Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of TensorFlow Tensors giving weights for the
network, where w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A TensorFlow Tensor of shape (N, C) giving classification scores
for the input data x.
"""
w1, w2 = params # Unpack the parameters
x = flatten(x) # Flatten the input; now x has shape (N, D)
h = tf.nn.relu(tf.matmul(x, w1)) # Hidden layer: h has shape (N, H)
scores = tf.matmul(h, w2) # Compute scores of shape (N, C)
return scores
def two_layer_fc_test():
# TensorFlow's default computational graph is essentially a hidden global
# variable. To avoid adding to this default graph when you rerun this cell,
# we clear the default graph before constructing the graph we care about.
tf.reset_default_graph()
hidden_layer_size = 42
# Scoping our computational graph setup code under a tf.device context
# manager lets us tell TensorFlow where we want these Tensors to be
# placed.
with tf.device(device):
        # Set up a placeholder for the input of the network, and constant
# zero Tensors for the network weights. Here we declare w1 and w2
# using tf.zeros instead of tf.placeholder as we've seen before - this
# means that the values of w1 and w2 will be stored in the computational
# graph itself and will persist across multiple runs of the graph; in
# particular this means that we don't have to pass values for w1 and w2
# using a feed_dict when we eventually run the graph.
x = tf.placeholder(tf.float32)
w1 = tf.zeros((32 * 32 * 3, hidden_layer_size))
w2 = tf.zeros((hidden_layer_size, 10))
# Call our two_layer_fc function to set up the computational
# graph for the forward pass of the network.
scores = two_layer_fc(x, [w1, w2])
# Use numpy to create some concrete data that we will pass to the
# computational graph for the x placeholder.
x_np = np.zeros((64, 32, 32, 3))
with tf.Session() as sess:
# The calls to tf.zeros above do not actually instantiate the values
# for w1 and w2; the following line tells TensorFlow to instantiate
# the values of all Tensors (like w1 and w2) that live in the graph.
sess.run(tf.global_variables_initializer())
# Here we actually run the graph, using the feed_dict to pass the
# value to bind to the placeholder for x; we ask TensorFlow to compute
# the value of the scores Tensor, which it returns as a numpy array.
scores_np = sess.run(scores, feed_dict={x: x_np})
print(scores_np.shape)
two_layer_fc_test()
```
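The shape bookkeeping in `two_layer_fc` can be checked with a standalone numpy version of the same forward pass (toy sizes, assumed for illustration, not part of the assignment code):

```
import numpy as np

N, D, H, C = 64, 32 * 32 * 3, 42, 10
x = np.zeros((N, 32, 32, 3))
w1 = np.zeros((D, H))
w2 = np.zeros((H, C))

x_flat = x.reshape(N, -1)          # flatten: (N, D)
h = np.maximum(x_flat.dot(w1), 0)  # ReLU hidden layer: (N, H)
scores = h.dot(w2)                 # class scores: (N, C)
print(scores.shape)  # (64, 10)
```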
### Barebones TensorFlow: Three-Layer ConvNet
Here you will complete the implementation of the function `three_layer_convnet` which will perform the forward pass of a three-layer convolutional network. The network should have the following architecture:
1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two
2. ReLU nonlinearity
3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one
4. ReLU nonlinearity
5. Fully-connected layer with bias, producing scores for `C` classes.
**HINT**: For convolutions: https://www.tensorflow.org/api_docs/python/tf/nn/conv2d; be careful with padding!
**HINT**: For biases: https://www.tensorflow.org/performance/xla/broadcasting
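To check that these padding values keep the 32 x 32 spatial size fixed, recall the output-size formula H' = (H - K + 2P) / S + 1 for filter size K, padding P, and stride S. A quick arithmetic sketch (the helper name here is just for illustration):

```
def conv_out_size(H, K, P, S=1):
    """Spatial output size of a convolution: (H - K + 2P) / S + 1."""
    return (H - K + 2 * P) // S + 1

print(conv_out_size(32, 5, 2))  # 32: 5x5 filters with padding 2 preserve size
print(conv_out_size(32, 3, 1))  # 32: 3x3 filters with padding 1 preserve size
```

Since both conv layers preserve the 32 x 32 spatial size, the fully-connected layer sees `32 * 32 * channel_2` features per image.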
```
def three_layer_convnet(x, params):
"""
A three-layer convolutional network with the architecture described above.
Inputs:
- x: A TensorFlow Tensor of shape (N, H, W, 3) giving a minibatch of images
- params: A list of TensorFlow Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: TensorFlow Tensor of shape (KH1, KW1, 3, channel_1) giving
weights for the first convolutional layer.
- conv_b1: TensorFlow Tensor of shape (channel_1,) giving biases for the
first convolutional layer.
- conv_w2: TensorFlow Tensor of shape (KH2, KW2, channel_1, channel_2)
giving weights for the second convolutional layer
- conv_b2: TensorFlow Tensor of shape (channel_2,) giving biases for the
second convolutional layer.
- fc_w: TensorFlow Tensor giving weights for the fully-connected layer.
Can you figure out what the shape should be?
- fc_b: TensorFlow Tensor giving biases for the fully-connected layer.
Can you figure out what the shape should be?
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
############################################################################
paddings = tf.constant([[0,0], [2,2], [2,2], [0,0]])
x = tf.pad(x, paddings, 'CONSTANT')
conv1 = tf.nn.conv2d(x, conv_w1, strides=[1,1,1,1], padding="VALID")+conv_b1
relu1 = tf.nn.relu(conv1)
    paddings = tf.constant([[0,0], [1,1], [1,1], [0,0]])
    relu1 = tf.pad(relu1, paddings, 'CONSTANT')
    conv2 = tf.nn.conv2d(relu1, conv_w2, strides=[1,1,1,1], padding="VALID")+conv_b2
relu2 = tf.nn.relu(conv2)
relu2 = flatten(relu2)
scores = tf.matmul(relu2, fc_w) + fc_b
############################################################################
# END OF YOUR CODE #
############################################################################
return scores
```
After defining the forward pass of the three-layer ConvNet above, run the following cell to test your implementation. Like the two-layer network, we use the `three_layer_convnet` function to set up the computational graph, then run the graph on a batch of zeros just to make sure the function doesn't crash and produces outputs of the correct shape.
When you run this function, `scores_np` should have shape `(64, 10)`.
```
def three_layer_convnet_test():
tf.reset_default_graph()
with tf.device(device):
x = tf.placeholder(tf.float32)
conv_w1 = tf.zeros((5, 5, 3, 6))
conv_b1 = tf.zeros((6,))
conv_w2 = tf.zeros((3, 3, 6, 9))
conv_b2 = tf.zeros((9,))
fc_w = tf.zeros((32 * 32 * 9, 10))
fc_b = tf.zeros((10,))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
scores = three_layer_convnet(x, params)
# Inputs to convolutional layers are 4-dimensional arrays with shape
# [batch_size, height, width, channels]
x_np = np.zeros((64, 32, 32, 3))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores, feed_dict={x: x_np})
print('scores_np has shape: ', scores_np.shape)
with tf.device('/cpu:0'):
three_layer_convnet_test()
```
### Barebones TensorFlow: Training Step
We now define the `training_step` function which sets up the part of the computational graph that performs a single training step. This will take three basic steps:
1. Compute the loss
2. Compute the gradient of the loss with respect to all network weights
3. Make a weight update step using (stochastic) gradient descent.
Note that the step of updating the weights is itself an operation in the computational graph - the calls to `tf.assign_sub` in `training_step` return TensorFlow operations that mutate the weights when they are executed. There is an important bit of subtlety here - when we call `sess.run`, TensorFlow does not execute all operations in the computational graph; it only executes the minimal subset of the graph necessary to compute the outputs that we ask TensorFlow to produce. As a result, naively computing the loss would not cause the weight update operations to execute, since the operations needed to compute the loss do not depend on the output of the weight update. To fix this problem, we insert a **control dependency** into the graph, adding a duplicate `loss` node to the graph that does depend on the outputs of the weight update operations; this is the object that we actually return from the `training_step` function. As a result, asking TensorFlow to evaluate the value of the `loss` returned from `training_step` will also implicitly update the weights of the network using that minibatch of data.
We need to use a few new TensorFlow functions to do all of this:
- For computing the cross-entropy loss we'll use `tf.nn.sparse_softmax_cross_entropy_with_logits`: https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits
- For averaging the loss across a minibatch of data we'll use `tf.reduce_mean`:
https://www.tensorflow.org/api_docs/python/tf/reduce_mean
- For computing gradients of the loss with respect to the weights we'll use `tf.gradients`: https://www.tensorflow.org/api_docs/python/tf/gradients
- We'll mutate the weight values stored in a TensorFlow Tensor using `tf.assign_sub`: https://www.tensorflow.org/api_docs/python/tf/assign_sub
- We'll add a control dependency to the graph using `tf.control_dependencies`: https://www.tensorflow.org/api_docs/python/tf/control_dependencies
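The `tf.assign_sub` update is ordinary gradient descent, `w <- w - learning_rate * grad_w`. Here is a standalone numpy sketch of that update rule on a toy quadratic loss (illustrative only, not the assignment model):

```
import numpy as np

# Minimize f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([4.0, -2.0])
learning_rate = 0.5
for _ in range(10):
    grad_w = w                      # analytic gradient of 0.5 * ||w||^2
    w = w - learning_rate * grad_w  # the numpy analogue of tf.assign_sub
print(np.abs(w).max() < 1e-2)  # True: w has shrunk toward the minimum at 0
```

In the graph-mode code below, the same subtraction is expressed as an operation in the graph, which is why the control dependency is needed to force it to run.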
```
def training_step(scores, y, params, learning_rate):
"""
Set up the part of the computational graph which makes a training step.
Inputs:
- scores: TensorFlow Tensor of shape (N, C) giving classification scores for
the model.
- y: TensorFlow Tensor of shape (N,) giving ground-truth labels for scores;
y[i] == c means that c is the correct class for scores[i].
- params: List of TensorFlow Tensors giving the weights of the model
- learning_rate: Python scalar giving the learning rate to use for gradient
descent step.
Returns:
- loss: A TensorFlow Tensor of shape () (scalar) giving the loss for this
batch of data; evaluating the loss also performs a gradient descent step
on params (see above).
"""
# First compute the loss; the first line gives losses for each example in
    # the minibatch, and the second averages the losses across the batch
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
loss = tf.reduce_mean(losses)
    # Compute the gradient of the loss with respect to each parameter of the
# network. This is a very magical function call: TensorFlow internally
# traverses the computational graph starting at loss backward to each element
# of params, and uses backpropagation to figure out how to compute gradients;
# it then adds new operations to the computational graph which compute the
# requested gradients, and returns a list of TensorFlow Tensors that will
# contain the requested gradients when evaluated.
grad_params = tf.gradients(loss, params)
# Make a gradient descent step on all of the model parameters.
new_weights = []
for w, grad_w in zip(params, grad_params):
new_w = tf.assign_sub(w, learning_rate * grad_w)
new_weights.append(new_w)
    # Insert a control dependency so that evaluating the loss causes a weight
# update to happen; see the discussion above.
with tf.control_dependencies(new_weights):
return tf.identity(loss)
```
### Barebones TensorFlow: Training Loop
Now we set up a basic training loop using low-level TensorFlow operations. We will train the model using stochastic gradient descent without momentum. The `training_step` function sets up the part of the computational graph that performs the training step, and the function `train_part2` iterates through the training data, making training steps on each minibatch, and periodically evaluates accuracy on the validation set.
```
def train_part2(model_fn, init_fn, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model
using TensorFlow; it should have the following signature:
scores = model_fn(x, params) where x is a TensorFlow Tensor giving a
minibatch of image data, params is a list of TensorFlow Tensors holding
the model weights, and scores is a TensorFlow Tensor of shape (N, C)
giving scores for all elements of x.
- init_fn: A Python function that initializes the parameters of the model.
It should have the signature params = init_fn() where params is a list
of TensorFlow Tensors holding the (randomly initialized) weights of the
model.
- learning_rate: Python float giving the learning rate to use for SGD.
"""
# First clear the default graph
tf.reset_default_graph()
is_training = tf.placeholder(tf.bool, name='is_training')
# Set up the computational graph for performing forward and backward passes,
# and weight updates.
with tf.device(device):
# Set up placeholders for the data and labels
x = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int32, [None])
params = init_fn() # Initialize the model parameters
scores = model_fn(x, params) # Forward pass of the model
loss = training_step(scores, y, params, learning_rate)
# Now we actually run the graph many times using the training data
with tf.Session() as sess:
# Initialize variables that will live in the graph
sess.run(tf.global_variables_initializer())
for t, (x_np, y_np) in enumerate(train_dset):
# Run the graph on a batch of training data; recall that asking
# TensorFlow to evaluate loss will cause an SGD step to happen.
feed_dict = {x: x_np, y: y_np}
loss_np = sess.run(loss, feed_dict=feed_dict)
# Periodically print the loss and check accuracy on the val set
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss_np))
check_accuracy(sess, val_dset, x, scores, is_training)
```
### Barebones TensorFlow: Check Accuracy
When training the model we will use the following function to check the accuracy of our model on the training or validation sets. Note that this function accepts a TensorFlow Session object as one of its arguments; this is needed since the function must actually run the computational graph many times on the data that it loads from the dataset `dset`.
Also note that we reuse the same computational graph both for taking training steps and for evaluating the model; however, since the `check_accuracy` function never evaluates the `loss` value in the computational graph, the parts of the graph that update the weights do not execute on the validation data.
```
def check_accuracy(sess, dset, x, scores, is_training=None):
"""
Check accuracy on a classification model.
Inputs:
- sess: A TensorFlow Session that will be used to run the graph
- dset: A Dataset object on which to check accuracy
- x: A TensorFlow placeholder Tensor where input images should be fed
- scores: A TensorFlow Tensor representing the scores output from the
model; this is the Tensor we will ask TensorFlow to evaluate.
Returns: Nothing, but prints the accuracy of the model
"""
num_correct, num_samples = 0, 0
for x_batch, y_batch in dset:
feed_dict = {x: x_batch, is_training: 0}
scores_np = sess.run(scores, feed_dict=feed_dict)
y_pred = scores_np.argmax(axis=1)
num_samples += x_batch.shape[0]
num_correct += (y_pred == y_batch).sum()
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
```
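The core of `check_accuracy` is just an argmax comparison between scores and labels; a standalone numpy sketch with made-up scores:

```
import numpy as np

scores_np = np.array([[0.1, 0.9],   # predicts class 1
                      [0.8, 0.2],   # predicts class 0
                      [0.3, 0.7]])  # predicts class 1
y_batch = np.array([1, 0, 0])

y_pred = scores_np.argmax(axis=1)          # predicted class per example
num_correct = (y_pred == y_batch).sum()    # count matching predictions
acc = float(num_correct) / y_batch.shape[0]
print('Got %d / %d correct (%.2f%%)' % (num_correct, y_batch.shape[0], 100 * acc))
```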
### Barebones TensorFlow: Initialization
We'll use the following utility method to initialize the weight matrices for our models using Kaiming's normalization method.
[1] He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
```
def kaiming_normal(shape):
if len(shape) == 2:
fan_in, fan_out = shape[0], shape[1]
elif len(shape) == 4:
fan_in, fan_out = np.prod(shape[:3]), shape[3]
return tf.random_normal(shape) * np.sqrt(2.0 / fan_in)
```
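The `fan_in` computed above differs by tensor rank: for a `(D, H)` weight matrix it is `D`, while for a `(KH, KW, C_in, C_out)` conv kernel it is `KH * KW * C_in`. A standalone numpy sketch of just the scale computation (the `tf.random_normal` sampling is omitted, and the helper name is illustrative):

```
import numpy as np

def kaiming_scale(shape):
    """Std-dev multiplier sqrt(2 / fan_in) used by kaiming_normal above."""
    if len(shape) == 2:            # fully-connected: (fan_in, fan_out)
        fan_in = shape[0]
    elif len(shape) == 4:          # conv: (KH, KW, C_in, C_out)
        fan_in = np.prod(shape[:3])
    return np.sqrt(2.0 / fan_in)

print(kaiming_scale((3 * 32 * 32, 4000)))  # fan_in = 3072
print(kaiming_scale((5, 5, 3, 32)))        # fan_in = 5 * 5 * 3 = 75
```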
### Barebones TensorFlow: Train a Two-Layer Network
We are finally ready to use all of the pieces defined above to train a two-layer fully-connected network on CIFAR-10.
We just need to define a function to initialize the weights of the model, and call `train_part2`.
Defining the weights of the network introduces another important piece of TensorFlow API: `tf.Variable`. A TensorFlow Variable is a Tensor whose value is stored in the graph and persists across runs of the computational graph; however unlike constants defined with `tf.zeros` or `tf.random_normal`, the values of a Variable can be mutated as the graph runs; these mutations will persist across graph runs. Learnable parameters of the network are usually stored in Variables.
You don't need to tune any hyperparameters, but you should achieve accuracies above 40% after one epoch of training.
```
def two_layer_fc_init():
"""
Initialize the weights of a two-layer network, for use with the
two_layer_network function defined above.
Inputs: None
Returns: A list of:
- w1: TensorFlow Variable giving the weights for the first layer
- w2: TensorFlow Variable giving the weights for the second layer
"""
hidden_layer_size = 4000
    w1 = tf.Variable(kaiming_normal((3 * 32 * 32, hidden_layer_size)))
    w2 = tf.Variable(kaiming_normal((hidden_layer_size, 10)))
return [w1, w2]
learning_rate = 1e-2
train_part2(two_layer_fc, two_layer_fc_init, learning_rate)
```
### Barebones TensorFlow: Train a three-layer ConvNet
We will now use TensorFlow to train a three-layer ConvNet on CIFAR-10.
You need to implement the `three_layer_convnet_init` function. Recall that the architecture of the network is:
1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes
You don't need to do any hyperparameter tuning, but you should see accuracies above 43% after one epoch of training.
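As a sanity check on the shapes you are about to initialize, here is a quick parameter-count sketch for this architecture (since both padded conv layers preserve the 32 x 32 spatial size, the fully-connected layer sees a flattened 32 x 32 x 16 input):

```
# Weights plus biases for each layer of the three-layer ConvNet.
conv1 = 5 * 5 * 3 * 32 + 32     # 5x5x3 kernels, 32 filters, + biases
conv2 = 3 * 3 * 32 * 16 + 16    # 3x3x32 kernels, 16 filters, + biases
fc = 32 * 32 * 16 * 10 + 10     # flattened 32x32x16 input -> 10 classes
print(conv1, conv2, fc)  # 2432 4624 163850
```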
```
def three_layer_convnet_init():
"""
Initialize the weights of a Three-Layer ConvNet, for use with the
three_layer_convnet function defined above.
Inputs: None
Returns a list containing:
- conv_w1: TensorFlow Variable giving weights for the first conv layer
- conv_b1: TensorFlow Variable giving biases for the first conv layer
- conv_w2: TensorFlow Variable giving weights for the second conv layer
- conv_b2: TensorFlow Variable giving biases for the second conv layer
- fc_w: TensorFlow Variable giving weights for the fully-connected layer
- fc_b: TensorFlow Variable giving biases for the fully-connected layer
"""
params = None
############################################################################
# TODO: Initialize the parameters of the three-layer network. #
############################################################################
conv_w1 = tf.Variable(kaiming_normal([5, 5, 3, 32]))
conv_b1 = tf.Variable(np.zeros([32]), dtype=tf.float32)
conv_w2 = tf.Variable(kaiming_normal([3, 3, 32, 16]))
conv_b2 = tf.Variable(np.zeros([16]), dtype=tf.float32)
fc_w = tf.Variable(kaiming_normal([32*32*16,10]))
fc_b = tf.Variable(np.zeros([10]), dtype=tf.float32)
params = (conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b)
############################################################################
# END OF YOUR CODE #
############################################################################
return params
learning_rate = 3e-3
train_part2(three_layer_convnet, three_layer_convnet_init, learning_rate)
```
# Part III: Keras Model API
Implementing a neural network using the low-level TensorFlow API is a good way to understand how TensorFlow works, but it's a little inconvenient - we had to manually keep track of all Tensors holding learnable parameters, and we had to use a control dependency to implement the gradient descent update step. This was fine for a small network, but could quickly become unwieldy for a large complex model.
Fortunately TensorFlow provides higher-level packages such as `tf.keras` and `tf.layers` which make it easy to build models out of modular, object-oriented layers; `tf.train` allows you to easily train these models using a variety of different optimization algorithms.
In this part of the notebook we will define neural network models using the `tf.keras.Model` API. To implement your own model, you need to do the following:
1. Define a new class which subclasses `tf.keras.Model`. Give your class an intuitive name that describes it, like `TwoLayerFC` or `ThreeLayerConvNet`.
2. In the initializer `__init__()` for your new class, define all the layers you need as class attributes. The `tf.layers` package provides many common neural-network layers, like `tf.layers.Dense` for fully-connected layers and `tf.layers.Conv2D` for convolutional layers. Under the hood, these layers will construct `Variable` Tensors for any learnable parameters. **Warning**: Don't forget to call `super().__init__()` as the first line in your initializer!
3. Implement the `call()` method for your class; this implements the forward pass of your model, and defines the *connectivity* of your network. Layers defined in `__init__()` implement `__call__()` so they can be used as function objects that transform input Tensors into output Tensors. Don't define any new layers in `call()`; any layers you want to use in the forward pass should be defined in `__init__()`.
After you define your `tf.keras.Model` subclass, you can instantiate it and use it like the model functions from Part II.
### Module API: Two-Layer Network
Here is a concrete example of using the `tf.keras.Model` API to define a two-layer network. There are a few new bits of API to be aware of here:
We use an `Initializer` object to set up the initial values of the learnable parameters of the layers; in particular `tf.variance_scaling_initializer` gives behavior similar to the Kaiming initialization method we used in Part II. You can read more about it here: https://www.tensorflow.org/api_docs/python/tf/variance_scaling_initializer
We construct `tf.layers.Dense` objects to represent the two fully-connected layers of the model. In addition to multiplying their input by a weight matrix and adding a bias vector, these layers can also apply a nonlinearity for you. For the first layer we specify a ReLU activation function by passing `activation=tf.nn.relu` to the constructor; the second layer does not apply any activation function.
Unfortunately the `flatten` function we defined in Part II is not compatible with the `tf.keras.Model` API; fortunately we can use `tf.layers.flatten` to perform the same operation. The issue with our `flatten` function from Part II has to do with static vs dynamic shapes for Tensors, which is beyond the scope of this notebook; you can read more about the distinction [in the documentation](https://www.tensorflow.org/programmers_guide/faq#tensor_shapes).
```
class TwoLayerFC(tf.keras.Model):
def __init__(self, hidden_size, num_classes):
super().__init__()
initializer = tf.variance_scaling_initializer(scale=2.0)
self.fc1 = tf.layers.Dense(hidden_size, activation=tf.nn.relu,
kernel_initializer=initializer)
self.fc2 = tf.layers.Dense(num_classes,
kernel_initializer=initializer)
def call(self, x, training=None):
x = tf.layers.flatten(x)
x = self.fc1(x)
x = self.fc2(x)
return x
def test_TwoLayerFC():
""" A small unit test to exercise the TwoLayerFC model above. """
tf.reset_default_graph()
input_size, hidden_size, num_classes = 50, 42, 10
# As usual in TensorFlow, we first need to define our computational graph.
# To this end we first construct a TwoLayerFC object, then use it to construct
# the scores Tensor.
model = TwoLayerFC(hidden_size, num_classes)
with tf.device(device):
x = tf.zeros((64, input_size))
scores = model(x)
# Now that our computational graph has been defined we can run the graph
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
test_TwoLayerFC()
```
### Functional API: Two-Layer Network
The `tf.layers` package provides two different higher-level APIs for defining neural network models. In the example above we used the **object-oriented API**, where each layer of the neural network is represented as a Python object (like `tf.layers.Dense`). Here we showcase the **functional API**, where each layer is a Python function (like `tf.layers.dense`) that takes Tensors as input and returns Tensors as output, and which internally sets up Tensors in the computational graph to hold any learnable weights.
To construct a network, one needs to pass the input tensor to the first layer, and construct the subsequent layers sequentially. Here's an example of how to construct the same two-layer network with the functional API.
```
def two_layer_fc_functional(inputs, hidden_size, num_classes):
initializer = tf.variance_scaling_initializer(scale=2.0)
flattened_inputs = tf.layers.flatten(inputs)
fc1_output = tf.layers.dense(flattened_inputs, hidden_size, activation=tf.nn.relu,
kernel_initializer=initializer)
scores = tf.layers.dense(fc1_output, num_classes,
kernel_initializer=initializer)
return scores
def test_two_layer_fc_functional():
""" A small unit test to exercise the TwoLayerFC model above. """
tf.reset_default_graph()
input_size, hidden_size, num_classes = 50, 42, 10
# As usual in TensorFlow, we first need to define our computational graph.
# To this end we first construct a two layer network graph by calling the
# two_layer_network() function. This function constructs the computation
# graph and outputs the score tensor.
with tf.device(device):
x = tf.zeros((64, input_size))
scores = two_layer_fc_functional(x, hidden_size, num_classes)
# Now that our computational graph has been defined we can run the graph
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
test_two_layer_fc_functional()
```
### Keras Model API: Three-Layer ConvNet
Now it's your turn to implement a three-layer ConvNet using the `tf.keras.Model` API. Your model should have the same architecture used in Part II:
1. Convolutional layer with 5 x 5 kernels, with zero-padding of 2
2. ReLU nonlinearity
3. Convolutional layer with 3 x 3 kernels, with zero-padding of 1
4. ReLU nonlinearity
5. Fully-connected layer to give class scores
You should initialize the weights of your network using the same initialization method as was used in the two-layer network above.
**Hint**: Refer to the documentation for `tf.layers.Conv2D` and `tf.layers.Dense`:
https://www.tensorflow.org/api_docs/python/tf/layers/Conv2D
https://www.tensorflow.org/api_docs/python/tf/layers/Dense
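For reference, the standard output-size formula explains why these padding choices preserve the 32 x 32 spatial size; a quick sanity check in plain Python:

```python
def conv_out_size(size, kernel, pad, stride=1):
    # standard convolution output-size formula
    return (size + 2 * pad - kernel) // stride + 1

# A 32x32 input keeps its spatial size through both conv layers:
h = conv_out_size(32, kernel=5, pad=2)  # 5x5 kernel, zero-padding of 2
h = conv_out_size(h, kernel=3, pad=1)   # 3x3 kernel, zero-padding of 1
print(h)  # 32
```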
```
class ThreeLayerConvNet(tf.keras.Model):
def __init__(self, channel_1, channel_2, num_classes):
super().__init__()
########################################################################
# TODO: Implement the __init__ method for a three-layer ConvNet. You #
# should instantiate layer objects to be used in the forward pass. #
########################################################################
initializer = tf.variance_scaling_initializer(scale=2.0)
self.conv1 = tf.layers.Conv2D(channel_1, [5,5], [1,1], padding='valid',
kernel_initializer=initializer,
activation=tf.nn.relu)
self.conv2 = tf.layers.Conv2D(channel_2, [3,3], [1,1], padding='valid',
kernel_initializer=initializer,
activation=tf.nn.relu)
self.fc = tf.layers.Dense(num_classes, kernel_initializer=initializer)
########################################################################
# END OF YOUR CODE #
########################################################################
def call(self, x, training=None):
scores = None
########################################################################
# TODO: Implement the forward pass for a three-layer ConvNet. You #
# should use the layer objects defined in the __init__ method. #
########################################################################
padding = tf.constant([[0,0],[2,2],[2,2],[0,0]])
x = tf.pad(x, padding, 'CONSTANT')
x = self.conv1(x)
padding = tf.constant([[0,0],[1,1],[1,1],[0,0]])
x = tf.pad(x, padding, 'CONSTANT')
x = self.conv2(x)
x = tf.layers.flatten(x)
scores = self.fc(x)
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
```
Once you complete the implementation of the `ThreeLayerConvNet` above you can run the following to ensure that your implementation does not crash and produces outputs of the expected shape.
```
def test_ThreeLayerConvNet():
tf.reset_default_graph()
channel_1, channel_2, num_classes = 12, 8, 10
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
scores = model(x)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
test_ThreeLayerConvNet()
```
### Keras Model API: Training Loop
We need to implement a slightly different training loop when using the `tf.keras.Model` API. Instead of computing gradients and updating the weights of the model manually, we use an `Optimizer` object from the `tf.train` package which takes care of these details for us. You can read more about `Optimizer`s here: https://www.tensorflow.org/api_docs/python/tf/train/Optimizer
```
def train_part34(model_init_fn, optimizer_init_fn, num_epochs=1):
"""
Simple training loop for use with models defined using tf.keras. It trains
a model for one epoch on the CIFAR-10 training set and periodically checks
accuracy on the CIFAR-10 validation set.
Inputs:
- model_init_fn: A function that takes no parameters; when called it
constructs the model we want to train: model = model_init_fn()
- optimizer_init_fn: A function which takes no parameters; when called it
constructs the Optimizer object we will use to optimize the model:
optimizer = optimizer_init_fn()
- num_epochs: The number of epochs to train for
Returns: Nothing, but prints progress during training
"""
tf.reset_default_graph()
with tf.device(device):
# Construct the computational graph we will use to train the model. We
# use the model_init_fn to construct the model, declare placeholders for
# the data and labels
x = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int32, [None])
# We need a placeholder to explicitly specify if the model is in the training
# phase or not. This is because a number of layers behave differently in
# training and in testing, e.g., dropout and batch normalization.
# We pass this variable to the computation graph through feed_dict as shown below.
is_training = tf.placeholder(tf.bool, name='is_training')
# Use the model function to build the forward pass.
scores = model_init_fn(x, is_training)
# Compute the loss like we did in Part II
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
loss = tf.reduce_mean(loss)
# Use the optimizer_fn to construct an Optimizer, then use the optimizer
# to set up the training step. Asking TensorFlow to evaluate the
# train_op returned by optimizer.minimize(loss) will cause us to make a
# single update step using the current minibatch of data.
# Note that we use tf.control_dependencies to force the model to run
# the tf.GraphKeys.UPDATE_OPS at each training step. tf.GraphKeys.UPDATE_OPS
# holds the operators that update the states of the network.
# For example, the tf.layers.batch_normalization function adds the running mean
# and variance update operators to tf.GraphKeys.UPDATE_OPS.
optimizer = optimizer_init_fn()
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
train_op = optimizer.minimize(loss)
# Now we can run the computational graph many times to train the model.
# When we call sess.run we ask it to evaluate train_op, which causes the
# model to update.
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
t = 0
for epoch in range(num_epochs):
print('Starting epoch %d' % epoch)
for x_np, y_np in train_dset:
feed_dict = {x: x_np, y: y_np, is_training:1}
loss_np, _ = sess.run([loss, train_op], feed_dict=feed_dict)
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss_np))
check_accuracy(sess, val_dset, x, scores, is_training=is_training)
print()
t += 1
```
### Keras Model API: Train a Two-Layer Network
We can now use the tools defined above to train a two-layer network on CIFAR-10. We define the `model_init_fn` and `optimizer_init_fn` that construct the model and optimizer respectively when called. Here we want to train the model using stochastic gradient descent with no momentum, so we construct a `tf.train.GradientDescentOptimizer` function; you can [read about it here](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer).
You don't need to tune any hyperparameters here, but you should achieve accuracies above 40% after one epoch of training.
```
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn(inputs, is_training):
return TwoLayerFC(hidden_size, num_classes)(inputs)
def optimizer_init_fn():
return tf.train.GradientDescentOptimizer(learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
```
### Keras Model API: Train a Two-Layer Network (functional API)
Similarly, we train the two-layer network constructed using the functional API.
```
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn(inputs, is_training):
return two_layer_fc_functional(inputs, hidden_size, num_classes)
def optimizer_init_fn():
return tf.train.GradientDescentOptimizer(learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
```
### Keras Model API: Train a Three-Layer ConvNet
Here you should use the tools we've defined above to train a three-layer ConvNet on CIFAR-10. Your ConvNet should use 32 filters in the first convolutional layer and 16 filters in the second layer.
To train the model you should use gradient descent with Nesterov momentum 0.9.
**HINT**: https://www.tensorflow.org/api_docs/python/tf/train/MomentumOptimizer
You don't need to perform any hyperparameter tuning, but you should achieve accuracies above 45% after training for one epoch.
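As a rough sketch of the update rule `MomentumOptimizer` applies when `use_nesterov=True` (one common formulation of Nesterov momentum, shown in plain NumPy on a toy quadratic rather than the real training graph):

```python
import numpy as np

def nesterov_step(w, v, grad_fn, lr, mu=0.9):
    """One Nesterov momentum update: evaluate the gradient
    at the look-ahead point w + mu * v, then update velocity and weights."""
    g = grad_fn(w + mu * v)
    v = mu * v - lr * g
    return w + v, v

# minimize f(w) = w^2 (gradient 2w) starting from w = 5.0
grad = lambda w: 2.0 * w
w, v = 5.0, 0.0
for _ in range(100):
    w, v = nesterov_step(w, v, grad, lr=0.1)
print(abs(w) < 1e-3)  # True: the iterate converges toward the minimum at 0
```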
```
learning_rate = 3e-3
channel_1, channel_2, num_classes = 32, 16, 10
def model_init_fn(inputs, is_training):
model = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
############################################################################
# END OF YOUR CODE #
############################################################################
return model(inputs)
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of optimizer_init_fn. #
############################################################################
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9,
use_nesterov=True)
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
```
# Part IV: Keras Sequential API
In Part III we introduced the `tf.keras.Model` API, which allows you to define models with any number of learnable layers and with arbitrary connectivity between layers.
However for many models you don't need such flexibility - a lot of models can be expressed as a sequential stack of layers, with the output of each layer fed to the next layer as input. If your model fits this pattern, then there is an even easier way to define your model: using `tf.keras.Sequential`. You don't need to write any custom classes; you simply call the `tf.keras.Sequential` constructor with a list containing a sequence of layer objects.
One complication with `tf.keras.Sequential` is that you must define the shape of the input to the model by passing a value to the `input_shape` of the first layer in your model.
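Conceptually, a Sequential container is just function composition: each layer's output feeds the next layer's input. A minimal plain-Python sketch of the idea, with toy callables standing in for layers:

```python
from functools import reduce

def sequential(layers):
    """Minimal sketch of what a Sequential container does:
    thread the input through each layer in order."""
    return lambda x: reduce(lambda out, layer: layer(out), layers, x)

# toy "layers": scale, shift, then clamp at zero (ReLU-like)
model = sequential([lambda x: 2 * x, lambda x: x - 3, lambda x: max(x, 0)])
print(model(5))  # 5*2 - 3 = 7, unchanged by the clamp
print(model(1))  # 1*2 - 3 = -1, clamped to 0
```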
### Keras Sequential API: Two-Layer Network
Here we rewrite the two-layer fully-connected network using `tf.keras.Sequential`, and train it using the training loop defined above.
You don't need to perform any hyperparameter tuning here, but you should see accuracies above 40% after training for one epoch.
```
learning_rate = 1e-2
def model_init_fn(inputs, is_training):
input_shape = (32, 32, 3)
hidden_layer_size, num_classes = 4000, 10
initializer = tf.variance_scaling_initializer(scale=2.0)
layers = [
tf.layers.Flatten(input_shape=input_shape),
tf.layers.Dense(hidden_layer_size, activation=tf.nn.relu,
kernel_initializer=initializer),
tf.layers.Dense(num_classes, kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
return model(inputs)
def optimizer_init_fn():
return tf.train.GradientDescentOptimizer(learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
```
### Keras Sequential API: Three-Layer ConvNet
Here you should use `tf.keras.Sequential` to reimplement the same three-layer ConvNet architecture used in Part II and Part III. As a reminder, your model should have the following architecture:
1. Convolutional layer with 32 5x5 kernels, using zero padding of 2
2. ReLU nonlinearity
3. Convolutional layer with 16 3x3 kernels, using zero padding of 1
4. ReLU nonlinearity
5. Fully-connected layer giving class scores
You should initialize the weights of the model using a `tf.variance_scaling_initializer` as above.
You should train the model using Nesterov momentum 0.9.
You don't need to perform any hyperparameter search, but you should achieve accuracy above 45% after training for one epoch.
```
def model_init_fn(inputs, is_training):
model = None
############################################################################
# TODO: Construct a three-layer ConvNet using tf.keras.Sequential. #
############################################################################
input_shape = (32,32,3)
channel_1, channel_2, num_classes = 32, 16, 10
initializer = tf.variance_scaling_initializer(scale=2.0)
layers = [
tf.keras.layers.InputLayer(input_shape=input_shape),
tf.keras.layers.Conv2D(channel_1, [5,5], [1,1], padding='same',
kernel_initializer=initializer,
activation=tf.nn.relu),
tf.keras.layers.Conv2D(channel_2, [3,3], [1,1], padding='same',
kernel_initializer=initializer,
activation=tf.nn.relu),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(num_classes, kernel_initializer=initializer)
]
model = tf.keras.Sequential(layers)
############################################################################
# END OF YOUR CODE #
############################################################################
return model(inputs)
learning_rate = 5e-4
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of optimizer_init_fn. #
############################################################################
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9,
use_nesterov=True)
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
```
# Part V: CIFAR-10 open-ended challenge
In this section you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.
You should experiment with architectures, hyperparameters, loss functions, regularization, or anything else you can think of to train a model that achieves **at least 70%** accuracy on the **validation** set within 10 epochs. You can use the `check_accuracy` and `train` functions from above, or you can implement your own training loop.
Describe what you did at the end of the notebook.
### Some things you can try:
- **Filter size**: Above we used 5x5 and 3x3; is this optimal?
- **Number of filters**: Above we used 16 and 32 filters. Would more or fewer do better?
- **Pooling**: We didn't use any pooling above. Would this improve the model?
- **Normalization**: Would your model be improved with batch normalization, layer normalization, group normalization, or some other normalization strategy?
- **Network architecture**: The ConvNet above has only three layers of trainable parameters. Would a deeper model do better?
- **Global average pooling**: Instead of flattening after the final convolutional layer, would global average pooling do better? This strategy is used for example in Google's Inception network and in Residual Networks.
- **Regularization**: Would some kind of regularization improve performance? Maybe weight decay or dropout?
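To make the global-average-pooling suggestion concrete, here is a quick NumPy comparison of the two reduction strategies (shapes chosen arbitrarily for illustration):

```python
import numpy as np

x = np.random.rand(64, 8, 8, 128)  # (N, H, W, C) final conv feature maps
gap = x.mean(axis=(1, 2))          # average over the spatial dims only
print(gap.shape)                   # (64, 128): one value per channel
# versus flattening, which keeps every spatial position:
flat = x.reshape(x.shape[0], -1)
print(flat.shape)                  # (64, 8192): a much larger FC input
```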
### WARNING: Batch Normalization / Dropout
Batch Normalization and Dropout **WILL NOT WORK CORRECTLY** if you use the `train_part34()` function with the object-oriented `tf.keras.Model` or `tf.keras.Sequential` APIs; if you want to use these layers with this training loop then you **must use the tf.layers functional API**.
We wrote `train_part34()` to explicitly demonstrate how TensorFlow works; however there are some subtleties that make it tough to handle the object-oriented batch normalization layer in a simple training loop. In practice both `tf.keras` and `tf` provide higher-level APIs which handle the training loop for you, such as [keras.fit](https://keras.io/models/sequential/) and [tf.Estimator](https://www.tensorflow.org/programmers_guide/estimators), both of which will properly handle batch normalization when using the object-oriented API.
### Tips for training
For each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple important things to keep in mind:
- If the parameters are working well, you should see improvement within a few hundred iterations
- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.
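One common way to implement the coarse-to-fine search is to sample learning rates log-uniformly, narrowing the range between stages; a small NumPy sketch (the ranges are illustrative, not tuned values):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_uniform(low, high, n):
    """Sample values uniformly in log-space between low and high."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size=n)

# coarse stage: wide range, train each candidate for only a few iterations
coarse = sample_log_uniform(1e-5, 1e-1, n=10)
# fine stage: narrow the range around the best coarse result
fine = sample_log_uniform(1e-3, 1e-2, n=10)
print(coarse.min() >= 1e-5 and coarse.max() <= 1e-1)  # True
```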
### Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!
- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.
- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
- Model ensembles
- Data augmentation
- New Architectures
- [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.
- [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)
### Have fun and happy training!
```
def test_model(model_init_fn):
tf.reset_default_graph()
with tf.device(device):
x = tf.zeros((50, 32, 32, 3))
scores = model_init_fn(x, True)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
def model_init_fn(inputs, is_training):
model = None
############################################################################
# TODO: Construct a ConvNet of your own design. Note: batchnorm and #
# dropout require the tf.layers functional API with train_part34(). #
############################################################################
num_classes = 10
initializer = tf.variance_scaling_initializer(scale=2.0)
conv1 = tf.layers.conv2d(inputs, 32, [3,3], [1,1], padding='same',
kernel_initializer=initializer)
bn1 = tf.layers.batch_normalization(conv1, training=is_training)
relu1 = tf.nn.elu(bn1)
drop1 = tf.layers.dropout(relu1)
pool1 = tf.layers.max_pooling2d(drop1, [2,2], [2,2])
conv2 = tf.layers.conv2d(pool1, 64, [3,3], [1,1], padding='valid',
kernel_initializer=initializer)
bn2 = tf.layers.batch_normalization(conv2, training=is_training)
relu2 = tf.nn.elu(bn2)
drop2 = tf.layers.dropout(relu2)
pool2 = tf.layers.max_pooling2d(drop2, [2,2], [2,2])
conv3 = tf.layers.conv2d(pool2, 128, [3,3], [1,1], padding='valid',
kernel_initializer=initializer)
bn3 = tf.layers.batch_normalization(conv3, training=is_training)
relu3 = tf.nn.elu(bn3)
drop3 = tf.layers.dropout(relu3)
avg_pool = tf.layers.average_pooling2d(drop3, [5,5], [1,1])
avg_pool = tf.layers.flatten(avg_pool)
fc1 = tf.layers.dense(avg_pool, 50)
fc2 = tf.layers.dense(fc1, 50)
scores = tf.layers.dense(fc2, num_classes)
############################################################################
# END OF YOUR CODE #
############################################################################
return scores
learning_rate = 1e-3
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of optimizer_init_fn. #
############################################################################
optimizer = tf.train.AdamOptimizer(learning_rate)
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
device = '/device:GPU:0'
print_every = 700
num_epochs = 10
#test_model(model_init_fn)
train_part34(model_init_fn, optimizer_init_fn, num_epochs)
```
## Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and/or any graphs that you made in the process of training and evaluating your network.
**Filter Size:** 3x3
**Number of Filters:** 32, 64, 128
**Pooling:** 2x2 max-pooling
**Normalization:** Batch normalization
**Network Architecture:** (conv - batchnorm - relu - dropout - max pool) \* 3 - avg pool - fc\* 3
**Global average pooling:** used instead of flattening after the final convolutional layer
**Regularization:** Dropout
**Optimizer:** Adam
# Lightweight Python components
Lightweight Python components do not require you to build a new container image for every code change.
They are intended for fast iteration in a notebook environment.
#### Building a lightweight python component
To build a component, just define a stand-alone Python function and then call `kfp.components.func_to_container_op(func)` to convert it to a component that can be used in a pipeline.
There are several requirements for the function:
* The function should be stand-alone. It should not use any code declared outside of the function definition. Any imports should be added inside the main function. Any helper functions should also be defined inside the main function.
* The function can only import packages that are available in the base image. If you need to import a package that's not available you can try to find a container image that already includes the required packages. (As a workaround you can use the module subprocess to run pip install for the required package. There is an example below in my_divmod function.)
* If the function operates on numbers, the parameters need to have type hints. Supported types are ```[int, float, bool]```. Everything else is passed as string.
* To build a component with multiple output values, use the typing.NamedTuple type hint syntax: ```NamedTuple('MyFunctionOutputs', [('output_name_1', type), ('output_name_2', float)])```
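The multiple-output pattern in the last requirement is plain Python typing; here is a minimal stand-alone sketch of the shape such a function takes (the `split_stats` example is hypothetical and not part of the KFP SDK):

```python
from typing import NamedTuple

def split_stats(total: float, count: int) -> NamedTuple(
        'StatsOutput', [('mean', float), ('count', int)]):
    """A stand-alone function with two named outputs, in the shape
    func_to_container_op expects. Note the import lives inside the body."""
    from collections import namedtuple
    output = namedtuple('StatsOutput', ['mean', 'count'])
    return output(total / count, count)

result = split_stats(10.0, 4)
print(result.mean, result.count)  # 2.5 4
```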
```
# Install the SDK
!pip3 install 'kfp>=0.1.31.2' --quiet
```
Restart the kernel for changes to take effect
```
import kfp
import kfp.components as comp
```
A simple function that just adds two numbers:
```
#Define a Python function
def add(a: float, b: float) -> float:
'''Calculates sum of two arguments'''
return a + b
```
Convert the function to a pipeline operation
```
add_op = comp.func_to_container_op(add)
```
A bit more advanced function which demonstrates how to use imports, helper functions and produce multiple outputs.
```
#Advanced function
#Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple
def my_divmod(dividend: float, divisor:float) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float), ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
'''Divides two numbers and calculates the quotient and remainder'''
#Imports inside a component function:
import numpy as np
#This function demonstrates how to use nested functions inside a component function:
def divmod_helper(dividend, divisor):
return np.divmod(dividend, divisor)
(quotient, remainder) = divmod_helper(dividend, divisor)
from tensorflow.python.lib.io import file_io
import json
# Exports a sample tensorboard:
metadata = {
'outputs' : [{
'type': 'tensorboard',
'source': 'gs://ml-pipeline-dataset/tensorboard-train',
}]
}
# Exports two sample metrics:
metrics = {
'metrics': [{
'name': 'quotient',
'numberValue': float(quotient),
},{
'name': 'remainder',
'numberValue': float(remainder),
}]}
from collections import namedtuple
divmod_output = namedtuple('MyDivmodOutput', ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
```
Test running the python function directly
```
my_divmod(100, 7)
```
#### Convert the function to a pipeline operation
You can specify an alternative base container image (the image needs to have Python 3.5+ installed).
```
divmod_op = comp.func_to_container_op(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
```
#### Define the pipeline
Pipeline function has to be decorated with the `@dsl.pipeline` decorator
```
import kfp.dsl as dsl
@dsl.pipeline(
name='Calculation pipeline',
description='A toy pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
a='a',
b='7',
c='17',
):
#Passing pipeline parameter and a constant value as operation arguments
add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance.
#Passing a task output reference as operation arguments
#For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax
divmod_task = divmod_op(add_task.output, b)
#For an operation with a multiple return values, the output references can be accessed using `task.outputs['output_name']` syntax
result_task = add_op(divmod_task.outputs['quotient'], c)
```
#### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
#Submit a pipeline run
kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.g. if using AI Platform Notebooks)
# kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)
```
```
from zipline.data import bundles
from zipline.pipeline import Pipeline
from zipline.pipeline.data import USEquityPricing
from zipline.pipeline.data import Column
from zipline.pipeline.data import DataSet
from zipline.pipeline.engine import SimplePipelineEngine
from zipline.pipeline.filters import StaticAssets
from zipline.pipeline.loaders import USEquityPricingLoader
from zipline.pipeline.loaders.frame import DataFrameLoader
from zipline.utils.calendars import get_calendar
import numpy as np
import pandas as pd
trading_calendar = get_calendar('NYSE')
bundle_data = bundles.load('sharadar-prices')
# Set up Custom Data Source for two sids for DataFrameLoader
class MyDataSet(DataSet):
column_A = Column(dtype=float)
column_B = Column(dtype=bool)
dates = pd.date_range('2014-01-01', '2017-01-01', tz='UTC')
assets = bundle_data.asset_finder.lookup_symbols(['A', 'AAL'], as_of_date=None)
sids = pd.Int64Index([asset.sid for asset in assets])
# The values for Column A will just be a 2D array of numbers ranging from 1 -> N.
column_A_frame = pd.DataFrame(
data=np.arange(len(dates)*len(assets), dtype=float).reshape(len(dates), len(assets)),
index=dates,
columns=sids,
)
# Column B will always provide True for 0 and False for 1.
column_B_frame = pd.DataFrame(data={sids[0]: True, sids[1]: False}, index=dates)
loaders = {
MyDataSet.column_A: DataFrameLoader(MyDataSet.column_A, column_A_frame),
MyDataSet.column_B: DataFrameLoader(MyDataSet.column_B, column_B_frame),
}
def my_dispatcher(column):
return loaders[column]
# Set up pipeline engine
# Loader for pricing
pipeline_loader = USEquityPricingLoader(
bundle_data.equity_daily_bar_reader,
bundle_data.adjustment_reader,
)
def choose_loader(column):
if column in USEquityPricing.columns:
return pipeline_loader
return my_dispatcher(column)
engine = SimplePipelineEngine(
get_loader=choose_loader,
calendar=trading_calendar.all_sessions,
asset_finder=bundle_data.asset_finder,
)
p = Pipeline(
columns={
'price': USEquityPricing.close.latest,
'col_A': MyDataSet.column_A.latest,
'col_B': MyDataSet.column_B.latest
},
screen=StaticAssets(assets)
)
df = engine.run_pipeline(
p,
pd.Timestamp('2016-01-05', tz='utc'),
pd.Timestamp('2016-01-07', tz='utc')
)
print(df)
import zipline as zl
zl.extensions
print(np.__version__)
loader = my_dispatcher(MyDataSet.column_A)
adj_array = loader.load_adjusted_array(
[MyDataSet.column_A],
dates,
sids,
np.ones((len(dates), len(sids)), dtype=bool)
)
print(list(adj_array.values())[0].inspect())
type(MyDataSet.column_A)
column_A_frame.columns
```
```
import numpy as np
import pandas as pd
import cv2
from matplotlib import pyplot as plt
import os
```
### Dataset
The data I used in this notebook can be found here, on Kaggle: https://www.kaggle.com/shayanfazeli/heartbeat
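Each row of these csv files holds one fixed-length heartbeat: 187 signal samples followed by the class label in column 187 (which is why the code below queries `df[187]`). A tiny NumPy sketch of splitting a fake row into signal and label:

```python
import numpy as np

# fabricate one row: 187 signal samples plus a trailing class label (4)
row = np.concatenate([np.sin(np.linspace(0, 6, 187)), [4.0]])
signal, label = row[:-1], int(row[-1])
print(signal.shape, label)  # (187,) 4
```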
```
path = 'dataset/mitbih_train.csv'
df = pd.read_csv(path, header=None)
df.head()
df.info()
# Let's see the classes in which we have to classify our heartbeats
df[187].value_counts()
'''This piece of code takes the numerical values from the dataframe and generates an image for each row, which we'll
use later as input to the CNN.'''
'''I changed the folder names (0,1,2,3,4) manually to split the data into train, validation and test sets. So I didn't
use the test csv file; I created my own version (images, of course) from the train data, because there were enough samples.'''
for count, i in enumerate(df.values[81123:]):
fig = plt.figure(frameon=False)
plt.plot(i)
plt.xticks([]), plt.yticks([])
for spine in plt.gca().spines.values():
spine.set_visible(False)
filename = 'img_dataset/4/' + '4' + '-' + str(count) + '.png'
fig.savefig(filename)
im_gray = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
im_gray = cv2.resize(im_gray, (512, 512), interpolation = cv2.INTER_LANCZOS4)
cv2.imwrite(filename, im_gray)
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras import backend as K
def create_model():
K.set_image_data_format('channels_last')  # set_image_dim_ordering('tf') was removed in later Keras versions
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(512, 512, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(5))
model.add(Activation('softmax'))
return model
model = create_model()
batch_size = 64
# This is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
horizontal_flip=True)
# This is the augmentation configuration we will use for testing:
test_datagen = ImageDataGenerator(rescale=1./255)
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
train_generator = train_datagen.flow_from_directory(
'img_dataset/train',
target_size=(512, 512),
batch_size=batch_size,
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
'img_dataset/validation',
target_size=(512, 512),
batch_size=batch_size,
class_mode='categorical')
# Let's start training!
model.fit_generator(
train_generator,
steps_per_epoch=2000 // batch_size,
epochs=50,
validation_data=validation_generator,
validation_steps=800 // batch_size)
model.save_weights('first_try.h5')
'''I trained the model on FloydHub, reaching an accuracy on the validation set of 0.9948'''
def load_trained_model(weights_path):
final_model = create_model()
final_model.load_weights(weights_path)
return final_model
# You can find the trained model here in GitHub and test it!
load_model = load_trained_model("first_try.h5")
load_model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
def load_images_from_folder(folder):
images = []
for filename in os.listdir(folder):
img = cv2.imread(os.path.join(folder,filename))
if img is not None:
images.append(img)
return images
from random import shuffle
'''Loading and shuffling images from test directory with 4 as label'''
images = load_images_from_folder('img_dataset/test/4')
shuffle(images)
for i in range(10):
img = images[i]
if (img is not None):
img = img / 255.0  # rescale to match the training-time preprocessing (rescale=1./255)
img = np.expand_dims(img, axis=0)
result_class = load_model.predict_classes(img)
print(result_class)
```
| github_jupyter |
# Confidence intervals for two proportions
```
import numpy as np
import pandas as pd
import scipy
from statsmodels.stats.weightstats import *
from statsmodels.stats.proportion import proportion_confint
```
## Loading the data
```
data = pd.read_csv('banner_click_stat.txt', header = None, sep = '\t')
data.columns = ['banner_a', 'banner_b']
data.head()
data.describe()
```
## Interval estimates of proportions
$$\frac1{ 1 + \frac{z^2}{n} } \left( \hat{p} + \frac{z^2}{2n} \pm z \sqrt{ \frac{ \hat{p}\left(1-\hat{p}\right)}{n} + \frac{z^2}{4n^2} } \right), \;\; z \equiv z_{1-\frac{\alpha}{2}}$$
```
conf_interval_banner_a = proportion_confint(sum(data.banner_a),
data.shape[0],
method = 'wilson')
conf_interval_banner_b = proportion_confint(sum(data.banner_b),
data.shape[0],
method = 'wilson')
print('interval for banner a [%f, %f]' % conf_interval_banner_a)
print('interval for banner b [%f, %f]' % conf_interval_banner_b)
```
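As a cross-check, the Wilson formula above can be transcribed directly; this standalone sketch (not part of the original notebook) should agree with `proportion_confint(..., method='wilson')`:

```python
import numpy as np
from scipy import stats

def wilson_interval(successes, n, alpha=0.05):
    # Direct transcription of the Wilson formula above
    z = stats.norm.ppf(1 - alpha / 2.)
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

print(wilson_interval(50, 100))  # roughly (0.404, 0.596)
```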
### How do we compare them?
## Confidence interval for the difference of proportions (independent samples)

|   | $X_1$ | $X_2$ |
| ------------- | ------------- | ------------- |
| 1 | a | b |
| 0 | c | d |
| $\sum$ | $n_1$ | $n_2$ |
$$ \hat{p}_1 = \frac{a}{n_1}$$
$$ \hat{p}_2 = \frac{b}{n_2}$$
$$\text{Confidence interval for }p_1 - p_2\colon \;\; \hat{p}_1 - \hat{p}_2 \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}$$
```
def proportions_confint_diff_ind(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
p1 = float(sum(sample1)) / len(sample1)
p2 = float(sum(sample2)) / len(sample2)
left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
return (left_boundary, right_boundary)
print("confidence interval: [%f, %f]" % proportions_confint_diff_ind(data.banner_a, data.banner_b))
```
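A quick numeric sanity check for the independent-samples interval (a standalone sketch restating the function above, not part of the original notebook): two samples with the same click proportion give an interval symmetric around zero.

```python
import numpy as np
from scipy import stats

def confint_diff_ind(sample1, sample2, alpha=0.05):
    # Standalone restatement of proportions_confint_diff_ind above
    z = stats.norm.ppf(1 - alpha / 2.)
    p1 = sum(sample1) / len(sample1)
    p2 = sum(sample2) / len(sample2)
    half = z * np.sqrt(p1 * (1 - p1) / len(sample1) + p2 * (1 - p2) / len(sample2))
    return (p1 - p2) - half, (p1 - p2) + half

lo, hi = confint_diff_ind([1, 0] * 50, [1, 0] * 50)
print(lo, hi)  # symmetric around 0, roughly (-0.139, 0.139)
```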
## Confidence interval for the difference of proportions (paired samples)

| $X_1$ \ $X_2$ | 1 | 0 | $\sum$ |
| ------------- | ------------- | ------------- | ------------- |
| 1 | e | f | e + f |
| 0 | g | h | g + h |
| $\sum$ | e + g | f + h | n |
$$ \hat{p}_1 = \frac{e + f}{n}$$
$$ \hat{p}_2 = \frac{e + g}{n}$$
$$ \hat{p}_1 - \hat{p}_2 = \frac{f - g}{n}$$
$$\text{Confidence interval for }p_1 - p_2\colon \;\; \frac{f - g}{n} \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{f + g}{n^2} - \frac{(f - g)^2}{n^3}}$$
```
def proportions_confint_diff_rel(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
sample = list(zip(sample1, sample2))
n = len(sample)
f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
left_boundary = float(f - g) / n - z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
right_boundary = float(f - g) / n + z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
return (left_boundary, right_boundary)
print("confidence interval: [%f, %f]" % proportions_confint_diff_rel(data.banner_a, data.banner_b))
```
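And a sanity check for the paired-samples interval (again a standalone sketch restating the function above, not part of the original notebook): identical samples have $f = g = 0$, so the interval collapses to a single point at zero.

```python
import numpy as np
from scipy import stats

def confint_diff_rel(sample1, sample2, alpha=0.05):
    # Standalone restatement of proportions_confint_diff_rel above
    z = stats.norm.ppf(1 - alpha / 2.)
    pairs = list(zip(sample1, sample2))
    n = len(pairs)
    f = sum(1 for a, b in pairs if a == 1 and b == 0)  # discordant (1, 0) pairs
    g = sum(1 for a, b in pairs if a == 0 and b == 1)  # discordant (0, 1) pairs
    half = z * np.sqrt((f + g) / n ** 2 - (f - g) ** 2 / n ** 3)
    return (f - g) / n - half, (f - g) / n + half

print(confint_diff_rel([1, 0, 1, 0], [1, 0, 1, 0]))  # (0.0, 0.0)
```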
| github_jupyter |
## Lab 5: Buy and Hold
This is a preview of the simulation
```
#By: Cristian Camilo Vargas Morales
class AlertOrangeSalmon(QCAlgorithm):
def Initialize(self):
self.SetStartDate(2019, 3, 30) #Last two Years
self.SetEndDate(2021, 3, 30)
#NVIDIA
self.nvda = self.AddEquity("NVDA", Resolution.Daily)
self.nvda.SetDataNormalizationMode(DataNormalizationMode.Raw)
self.nvda.SetLeverage(1) # Set standard leverage (no margin)
#Aramco
self.aramco = self.AddEquity("2222.SR", Resolution.Daily)
self.aramco.SetDataNormalizationMode(DataNormalizationMode.Raw)
self.aramco.SetLeverage(1)
#JPMorgan Chase & Co.
self.JPM = self.AddEquity("JPM", Resolution.Daily)
self.JPM.SetDataNormalizationMode(DataNormalizationMode.Raw)
self.JPM.SetLeverage(1)
#Alibaba Group Holding Limited
self.BABA = self.AddEquity("BABA", Resolution.Daily)
self.BABA.SetDataNormalizationMode(DataNormalizationMode.Raw)
self.BABA.SetLeverage(1)
#Alphabet Inc. (Google)
self.GOOG = self.AddEquity("GOOG", Resolution.Daily)
self.GOOG.SetDataNormalizationMode(DataNormalizationMode.Raw)
self.GOOG.SetLeverage(1)
# 1. Set Starting Cash
self.SetCash(1000000) # Initial investment portfolio of 1,000,000
def OnData(self, data):
if not self.Portfolio.Invested:
self.MarketOrder("NVDA", 80) # Buy 80 shares of NVDA
self.Debug(str(self.Portfolio["NVDA"].AveragePrice))
self.MarketOrder("2222.SR", 100) # Buy 100 shares of Aramco
self.Debug(str(self.Portfolio["2222.SR"].AveragePrice))
self.MarketOrder("JPM", 200) # Buy 200 shares of JPM
self.Debug(str(self.Portfolio["JPM"].AveragePrice))
self.MarketOrder("BABA", 50) # Buy 50 shares of BABA
self.Debug(str(self.Portfolio["BABA"].AveragePrice))
self.MarketOrder("GOOG", 200) # Buy 200 shares of GOOG
self.Debug(str(self.Portfolio["GOOG"].AveragePrice))
```

| github_jupyter |
```
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
mu1 = np.array([3,3,3,3,0])
sigma1 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu2 = np.array([4,4,4,4,0])
sigma2 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu3 = np.array([10,5,5,10,0])
sigma3 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu4 = np.array([-10,-10,-10,-10,0])
sigma4 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu5 = np.array([-21,4,4,-21,0])
sigma5 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu6 = np.array([-10,18,18,-10,0])
sigma6 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu7 = np.array([4,20,4,20,0])
sigma7 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu8 = np.array([4,-20,-20,4,0])
sigma8 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu9 = np.array([20,20,20,20,0])
sigma9 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu10 = np.array([20,-10,-10,20,0])
sigma10 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
sample1 = np.random.multivariate_normal(mean=mu1,cov= sigma1,size=500)
sample2 = np.random.multivariate_normal(mean=mu2,cov= sigma2,size=500)
sample3 = np.random.multivariate_normal(mean=mu3,cov= sigma3,size=500)
sample4 = np.random.multivariate_normal(mean=mu4,cov= sigma4,size=500)
sample5 = np.random.multivariate_normal(mean=mu5,cov= sigma5,size=500)
sample6 = np.random.multivariate_normal(mean=mu6,cov= sigma6,size=500)
sample7 = np.random.multivariate_normal(mean=mu7,cov= sigma7,size=500)
sample8 = np.random.multivariate_normal(mean=mu8,cov= sigma8,size=500)
sample9 = np.random.multivariate_normal(mean=mu9,cov= sigma9,size=500)
sample10 = np.random.multivariate_normal(mean=mu10,cov= sigma10,size=500)
# mu1 = np.array([3,3,0,0,0])
# sigma1 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu2 = np.array([4,4,0,0,0])
# sigma2 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu3 = np.array([10,5,0,0,0])
# sigma3 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu4 = np.array([-10,-10,0,0,0])
# sigma4 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu5 = np.array([-21,4,0,0,0])
# sigma5 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu6 = np.array([-10,18,0,0,0])
# sigma6 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu7 = np.array([4,20,0,0,0])
# sigma7 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu8 = np.array([4,-20,0,0,0])
# sigma8 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu9 = np.array([20,20,0,0,0])
# sigma9 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu10 = np.array([20,-10,0,0,0])
# sigma10 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# sample1 = np.random.multivariate_normal(mean=mu1,cov= sigma1,size=500)
# sample2 = np.random.multivariate_normal(mean=mu2,cov= sigma2,size=500)
# sample3 = np.random.multivariate_normal(mean=mu3,cov= sigma3,size=500)
# sample4 = np.random.multivariate_normal(mean=mu4,cov= sigma4,size=500)
# sample5 = np.random.multivariate_normal(mean=mu5,cov= sigma5,size=500)
# sample6 = np.random.multivariate_normal(mean=mu6,cov= sigma6,size=500)
# sample7 = np.random.multivariate_normal(mean=mu7,cov= sigma7,size=500)
# sample8 = np.random.multivariate_normal(mean=mu8,cov= sigma8,size=500)
# sample9 = np.random.multivariate_normal(mean=mu9,cov= sigma9,size=500)
# sample10 = np.random.multivariate_normal(mean=mu10,cov= sigma10,size=500)
X = np.concatenate((sample1,sample2,sample3,sample4,sample5,sample6,sample7,sample8,sample9,sample10),axis=0)
Y = np.concatenate((np.zeros((500,1)),np.ones((500,1)),2*np.ones((500,1)),3*np.ones((500,1)),4*np.ones((500,1)),
5*np.ones((500,1)),6*np.ones((500,1)),7*np.ones((500,1)),8*np.ones((500,1)),9*np.ones((500,1))),axis=0).astype(int)
print(X.shape,Y.shape)
# plt.scatter(sample1[:,0],sample1[:,1],label="class_0")
# plt.scatter(sample2[:,0],sample2[:,1],label="class_1")
# plt.scatter(sample3[:,0],sample3[:,1],label="class_2")
# plt.scatter(sample4[:,0],sample4[:,1],label="class_3")
# plt.scatter(sample5[:,0],sample5[:,1],label="class_4")
# plt.scatter(sample6[:,0],sample6[:,1],label="class_5")
# plt.scatter(sample7[:,0],sample7[:,1],label="class_6")
# plt.scatter(sample8[:,0],sample8[:,1],label="class_7")
# plt.scatter(sample9[:,0],sample9[:,1],label="class_8")
# plt.scatter(sample10[:,0],sample10[:,1],label="class_9")
# plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
class SyntheticDataset(Dataset):
"""Dataset wrapping the synthetic Gaussian samples."""
def __init__(self, x, y):
"""
Args:
x (array-like): data points, one 5-d vector per row.
y (array-like): integer class label for each row.
"""
self.x = x
self.y = y
#self.fore_idx = fore_idx
def __len__(self):
return len(self.y)
def __getitem__(self, idx):
return self.x[idx] , self.y[idx] #, self.fore_idx[idx]
trainset = SyntheticDataset(X,Y)
# testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
classes = ('zero','one','two','three','four','five','six','seven','eight','nine')
foreground_classes = {'zero','one','two'}
fg_used = '012'
fg1, fg2, fg3 = 0,1,2
all_classes = {'zero','one','two','three','four','five','six','seven','eight','nine'}
background_classes = all_classes - foreground_classes
background_classes
trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True)
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=100
for i in range(50):
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def create_mosaic_img(bg_idx,fg_idx,fg):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]])
j+=1
else:
image_list.append(foreground_data[fg_idx])
label = foreground_label[fg_idx] - fg1 # minus fg1 because our fore ground classes are fg1,fg2,fg3 but we have to store it as 0,1,2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
desired_num = 3000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
list_set_labels = []
for i in range(desired_num):
set_idx = set()
np.random.seed(i)
bg_idx = np.random.randint(0,3500,8)
set_idx = set(background_label[bg_idx].tolist())
fg_idx = np.random.randint(0,1500)
set_idx.add(foreground_label[fg_idx].item())
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
list_set_labels.append(set_idx)
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number):
"""
mosaic_dataset : mosaic_dataset contains 9 images 32 x 32 each as 1 data point
labels : mosaic_dataset labels
foreground_index : contains list of indexes where foreground image is present so that using this we can take weighted average
dataset_number : will help us to tell what ratio of foreground image to be taken. for eg: if it is "j" then fg_image_ratio = j/9 , bg_image_ratio = (9-j)/8*9
"""
avg_image_dataset = []
for i in range(len(mosaic_dataset)):
img = torch.zeros([5], dtype=torch.float64)
for j in range(9):
if j == foreground_index[i]:
img = img + mosaic_dataset[i][j]*dataset_number/9
else :
img = img + mosaic_dataset[i][j]*(9-dataset_number)/(8*9)
avg_image_dataset.append(img)
return torch.stack(avg_image_dataset) , torch.stack(labels) , foreground_index
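# Added sanity check (sketch, not part of the original notebook): for every
# dataset_number j the foreground weight j/9 plus the eight background weights
# (9-j)/(8*9) sum to 1, so each averaged point is a convex combination.
for j in range(1, 10):
    assert abs(j / 9 + 8 * (9 - j) / (8 * 9) - 1.0) < 1e-12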
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images, mosaic_label, fore_idx, 1)
avg_image_dataset_2 , labels_2, fg_index_2 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images, mosaic_label, fore_idx, 2)
avg_image_dataset_3 , labels_3, fg_index_3 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images, mosaic_label, fore_idx, 3)
avg_image_dataset_4 , labels_4, fg_index_4 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images, mosaic_label, fore_idx, 4)
avg_image_dataset_5 , labels_5, fg_index_5 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images, mosaic_label, fore_idx, 5)
avg_image_dataset_6 , labels_6, fg_index_6 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images, mosaic_label, fore_idx, 6)
avg_image_dataset_7 , labels_7, fg_index_7 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images, mosaic_label, fore_idx, 7)
avg_image_dataset_8 , labels_8, fg_index_8 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images, mosaic_label, fore_idx, 8)
avg_image_dataset_9 , labels_9, fg_index_9 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images, mosaic_label, fore_idx, 9)
# avg_test_1 , labels_test_1, fg_index_test_1 = create_avg_image_from_mosaic_dataset(test_images, test_label, fore_idx_test , 1)
# avg_test_2 , labels_test_2, fg_index_test_2 = create_avg_image_from_mosaic_dataset(test_images, test_label, fore_idx_test , 2)
# avg_test_3 , labels_test_3, fg_index_test_3 = create_avg_image_from_mosaic_dataset(test_images, test_label, fore_idx_test , 3)
# avg_test_4 , labels_test_4, fg_index_test_4 = create_avg_image_from_mosaic_dataset(test_images, test_label, fore_idx_test , 4)
# avg_test_5 , labels_test_5, fg_index_test_5 = create_avg_image_from_mosaic_dataset(test_images, test_label, fore_idx_test , 5)
# avg_test_6 , labels_test_6, fg_index_test_6 = create_avg_image_from_mosaic_dataset(test_images, test_label, fore_idx_test , 6)
# avg_test_7 , labels_test_7, fg_index_test_7 = create_avg_image_from_mosaic_dataset(test_images, test_label, fore_idx_test , 7)
# avg_test_8 , labels_test_8, fg_index_test_8 = create_avg_image_from_mosaic_dataset(test_images, test_label, fore_idx_test , 8)
# avg_test_9 , labels_test_9, fg_index_test_9 = create_avg_image_from_mosaic_dataset(test_images, test_label, fore_idx_test , 9)
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label):
"""
Args:
mosaic_list: list of mosaic images, each a stack of 9 elemental data points.
mosaic_label: foreground-class label of each mosaic image.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
batch = 256
epochs = 65
# training_data = avg_image_dataset_5 #just change this and training_label to desired dataset for training
# training_label = labels_5
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
traindata_2 = MosaicDataset(avg_image_dataset_2, labels_2 )
trainloader_2 = DataLoader( traindata_2 , batch_size= batch ,shuffle=True)
traindata_3 = MosaicDataset(avg_image_dataset_3, labels_3 )
trainloader_3 = DataLoader( traindata_3 , batch_size= batch ,shuffle=True)
traindata_4 = MosaicDataset(avg_image_dataset_4, labels_4 )
trainloader_4 = DataLoader( traindata_4 , batch_size= batch ,shuffle=True)
traindata_5 = MosaicDataset(avg_image_dataset_5, labels_5 )
trainloader_5 = DataLoader( traindata_5 , batch_size= batch ,shuffle=True)
traindata_6 = MosaicDataset(avg_image_dataset_6, labels_6 )
trainloader_6 = DataLoader( traindata_6 , batch_size= batch ,shuffle=True)
traindata_7 = MosaicDataset(avg_image_dataset_7, labels_7 )
trainloader_7 = DataLoader( traindata_7 , batch_size= batch ,shuffle=True)
traindata_8 = MosaicDataset(avg_image_dataset_8, labels_8 )
trainloader_8 = DataLoader( traindata_8 , batch_size= batch ,shuffle=True)
traindata_9 = MosaicDataset(avg_image_dataset_9, labels_9 )
trainloader_9 = DataLoader( traindata_9 , batch_size= batch ,shuffle=True)
# testdata_1 = MosaicDataset(avg_test_1, labels_test_1 )
# testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
# testdata_2 = MosaicDataset(avg_test_2, labels_test_2 )
# testloader_2 = DataLoader( testdata_2 , batch_size= batch ,shuffle=False)
# testdata_3 = MosaicDataset(avg_test_3, labels_test_3 )
# testloader_3 = DataLoader( testdata_3 , batch_size= batch ,shuffle=False)
# testdata_4 = MosaicDataset(avg_test_4, labels_test_4 )
# testloader_4 = DataLoader( testdata_4 , batch_size= batch ,shuffle=False)
# testdata_5 = MosaicDataset(avg_test_5, labels_test_5 )
# testloader_5 = DataLoader( testdata_5 , batch_size= batch ,shuffle=False)
# testdata_6 = MosaicDataset(avg_test_6, labels_test_6 )
# testloader_6 = DataLoader( testdata_6 , batch_size= batch ,shuffle=False)
# testdata_7 = MosaicDataset(avg_test_7, labels_test_7 )
# testloader_7 = DataLoader( testdata_7 , batch_size= batch ,shuffle=False)
# testdata_8 = MosaicDataset(avg_test_8, labels_test_8 )
# testloader_8 = DataLoader( testdata_8 , batch_size= batch ,shuffle=False)
# testdata_9 = MosaicDataset(avg_test_9, labels_test_9 )
# testloader_9 = DataLoader( testdata_9 , batch_size= batch ,shuffle=False)
class Net(nn.Module):
def __init__(self):
super(Net,self).__init__()
self.linear1 = nn.Linear(5,100)
self.linear2 = nn.Linear(100,3)
# self.linear3 = nn.Linear(8,3)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
loss = criter(outputs, labels)
r_loss += loss.item()
return r_loss/(i+1)  # average over batches; enumerate starts at 0
def test_all(number, testloader,inc):
correct = 0
total = 0
out = []
pred = []
inc.eval()
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= inc(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 test dataset %d: %d %%' % (number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list):
print("--"*40)
print("training on data set ", ds_number)
net = Net().double()
net = net.to("cuda")
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
acti = []
loss_curi = []
epochs = 500
running_loss = calculate_loss(trainloader,net,criterion)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer.step()
running_loss = calculate_loss(trainloader,net,criterion)
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
# if (epoch%5 == 0):
# _,actis= inc(inputs)
# acti.append(actis)
print('Finished Training')
#torch.save(net.state_dict(),"train_dataset_"+str(ds_number)+"_"+str(epochs)+".pt")
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
return loss_curi
train_loss_all=[]
testloader_list= [ trainloader_1, trainloader_2, trainloader_3, trainloader_4, trainloader_5, trainloader_6,
trainloader_7, trainloader_8, trainloader_9]
train_loss_all.append(train_all(trainloader_1, 1, testloader_list))
train_loss_all.append(train_all(trainloader_2, 2, testloader_list))
train_loss_all.append(train_all(trainloader_3, 3, testloader_list))
train_loss_all.append(train_all(trainloader_4, 4, testloader_list))
train_loss_all.append(train_all(trainloader_5, 5, testloader_list))
train_loss_all.append(train_all(trainloader_6, 6, testloader_list))
train_loss_all.append(train_all(trainloader_7, 7, testloader_list))
train_loss_all.append(train_all(trainloader_8, 8, testloader_list))
train_loss_all.append(train_all(trainloader_9, 9, testloader_list))
for i,j in enumerate(train_loss_all):
plt.plot(j,label ="dataset "+str(i+1))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
class MosaicDataset1(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
mosaic_list: list of mosaic images, each a stack of 9 elemental data points.
mosaic_label: foreground-class label of each mosaic image.
fore_idx: position (0-8) of the foreground point in each mosaic.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
batch = 250
msd = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,300) #,self.output)
self.linear2 = nn.Linear(300,self.output)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,self.d], dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
for i in range(self.K):
x[:,i] = self.helper(z[:,i] )[:,0] # self.d*i:self.d*i+self.d
x = F.softmax(x,dim=1) # alphas
for i in range(self.K):
y = y + torch.mul(x[:,i][:,None], z[:,i]) # weighted sum of the K elemental vectors by their alphas
return y , x
def helper(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,300)
self.linear2 = nn.Linear(300,self.output)
def forward(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
where = Focus_deep(5,1,9,5).double()
what = Classification_deep(5,3).double()
where = where.to("cuda")
what = what.to("cuda")
def calculate_attn_loss(dataloader,what,where,criter):
what.eval()
where.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
loss = criter(outputs, labels)
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/(i+1), analysis  # average over batches; enumerate starts at 0
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
print("--"*40)
criterion = nn.CrossEntropyLoss()
optimizer_where = optim.Adam(where.parameters(),lr =0.001)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)
acti = []
loss_curi = []
analysis_data = []
epochs = 1000
running_loss,anlys_data = calculate_attn_loss(train_loader,what,where,criterion)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha = where(inputs)
outputs = what(avg)
loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
analysis_data = np.array(analysis_data)
plt.figure(figsize=(6,6))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig("trends_synthetic_300_300.png",bbox_inches="tight")
plt.savefig("trends_synthetic_300_300.pdf",bbox_inches="tight")
analysis_data[-1,:2]/3000
1972/3000
```
| github_jupyter |
[xptree/NetMF: Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec](https://github.com/xptree/NetMF)
```
import scipy.io
import scipy.sparse as sparse
from scipy.sparse import csgraph
import numpy as np
import argparse
import logging
import theano
from theano import tensor as T
theano.config.exception_verbosity='high'
```
## load matrix
```
def load_adjacency_matrix(file, variable_name="network"):
    data = scipy.io.loadmat(file)
    return data[variable_name]
file = "./data/POS.mat"
# file = "./data/blogcatalog.mat"
A = load_adjacency_matrix(file)
A.shape
A.toarray()[-1]
A.toarray()[0]
```
## netmf small
```
def direct_compute_deepwalk_matrix(A, window, b):
    n = A.shape[0]
    vol = float(A.sum())
    L, d_rt = csgraph.laplacian(A, normed=True, return_diag=True)
    # X = D^{-1/2} A D^{-1/2}
    X = sparse.identity(n) - L
    S = np.zeros_like(X)
    X_power = sparse.identity(n)
    for i in range(window):
        X_power = X_power.dot(X)
        S += X_power
    S *= vol / window / b
    D_rt_inv = sparse.diags(d_rt ** -1)
    M = D_rt_inv.dot(D_rt_inv.dot(S).T)
    m = T.matrix()
    f = theano.function([m], T.log(T.maximum(m, 1)))
    Y = f(M.todense().astype(theano.config.floatX))
    return sparse.csr_matrix(Y)
deepwalk_matrix = direct_compute_deepwalk_matrix(A, window=10, b=1.0)
n = A.shape[0]
vol = float(A.sum())
vol
L, d_rt = csgraph.laplacian(A, normed=True, return_diag=True)
d_rt
A.toarray()[0]
L.toarray()[0]
X = sparse.identity(n) - L
X.shape
S = np.zeros_like(X)
X_power = sparse.identity(n)
for i in range(10):
    X_power = X_power.dot(X)
    S += X_power
S.toarray()[0]
S = S * vol / 10 / 1.0
S.toarray()[0]
d_rt
D_rt_inv = sparse.diags(d_rt ** -1)
D_rt_inv
D_rt_inv.toarray()[0]
M = D_rt_inv.dot(D_rt_inv.dot(S).T)
M.toarray()[0]
M.todense()[0]
m = T.matrix()
f = theano.function([m], T.log(T.maximum(m, 1)))
Y = f(M.todense().astype(theano.config.floatX))
res = sparse.csr_matrix(Y)
res.toarray()[0]
M2 = M.todense()
M2[M2 <= 1] = 1
Y2 = np.log(M2)
res2 = sparse.csr_matrix(Y2)
res2.toarray()[0]
def svd_deepwalk_matrix(X, dim):
    u, s, v = sparse.linalg.svds(X, dim, return_singular_vectors="u")
    # return U \Sigma^{1/2}
    return sparse.diags(np.sqrt(s)).dot(u.T).T
deepwalk_embedding = svd_deepwalk_matrix(res, dim=128)
deepwalk_embedding.shape
```
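The Theano graph built in `direct_compute_deepwalk_matrix` only computes an element-wise `log(max(M, 1))` (the truncated logarithm of the DeepWalk matrix); the manual `M2[M2 <= 1] = 1` cell above checks exactly this equivalence. The same step can be sketched in plain NumPy, without Theano:

```python
import numpy as np

def truncated_log(M):
    """Element-wise log(max(M, 1)): entries <= 1 map to 0, as in NetMF."""
    return np.log(np.maximum(M, 1.0))

M = np.array([[0.5, 1.0],
              [np.e, np.e ** 2]])
print(truncated_log(M))  # [[0. 0.] [1. 2.]]
```

This mirrors `res2` in the cells above; Theano is only used here for its compiled element-wise evaluation, not for anything symbolic.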
## netmf large
```
def approximate_normalized_graph_laplacian(A, rank, which="LA"):
    n = A.shape[0]
    L, d_rt = csgraph.laplacian(A, normed=True, return_diag=True)
    # X = D^{-1/2} W D^{-1/2}
    X = sparse.identity(n) - L
    evals, evecs = sparse.linalg.eigsh(X, rank, which=which)
    D_rt_inv = sparse.diags(d_rt ** -1)
    D_rt_invU = D_rt_inv.dot(evecs)
    return evals, D_rt_invU
evals, D_rt_invU = approximate_normalized_graph_laplacian(A, rank=256, which="LA")
evals.shape, D_rt_invU.shape
def deepwalk_filter(evals, window):
    for i in range(len(evals)):
        x = evals[i]
        evals[i] = 1. if x >= 1 else x * (1 - x**window) / (1 - x) / window
    evals = np.maximum(evals, 0)
    return evals
evals = deepwalk_filter(evals, window=10)
evals.shape
def approximate_deepwalk_matrix(evals, D_rt_invU, window, vol, b):
    evals = deepwalk_filter(evals, window=window)
    X = sparse.diags(np.sqrt(evals)).dot(D_rt_invU.T).T
    m = T.matrix()
    mmT = T.dot(m, m.T) * (vol / b)
    f = theano.function([m], T.log(T.maximum(mmT, 1)))
    Y = f(X.astype(theano.config.floatX))
    return sparse.csr_matrix(Y)
deepwalk_matrix = approximate_deepwalk_matrix(evals, D_rt_invU, window=10, vol=vol, b=1.0)
deepwalk_matrix.shape
deepwalk_matrix.toarray()[0]
deepwalk_embedding = svd_deepwalk_matrix(deepwalk_matrix, dim=128)
deepwalk_embedding.shape
```
```
from time import sleep
from vnpy.app.script_trader import init_cli_trading
from vnpy.gateway.ctp import CtpGateway
from vnpy.trader.constant import Direction
# Connect to the trading server
#setting = {
# "用户名": "",
# "密码": "",
# "经纪商代码": "66666",
# "交易服务器": "ctpfz1-front1.citicsf.com:51305",
# "行情服务器": "ctpfz1-front1.citicsf.com:51313",
# "产品名称": "vntech_vnpy_2.0",
# "授权编码": "WGEN56HLB6CYCVEG",
# "产品信息": ""
#}
from vnpy.trader.utility import load_json
setting= load_json("connect_ctp.json")
engine = init_cli_trading([CtpGateway])
engine.connect_gateway(setting, "CTP")
# Define the net-position query function
def get_net_pos(engine, vt_symbol):
    net_pos = 0
    long_position = engine.get_position(vt_symbol + "." + Direction.LONG.value)
    if long_position:
        net_pos += long_position.volume
    short_position = engine.get_position(vt_symbol + "." + Direction.SHORT.value)
    if short_position:
        net_pos -= short_position.volume
    return net_pos
# Set parameters
leg1_symbol = "TF2006.CFFEX"
leg2_symbol = "T2006.CFFEX"
entry_level = 5  # entry threshold (spread level)
tick_add = 1  # price offset in ticks when placing orders
trading_size = 100000  # trading volume
vt_symbols = [leg1_symbol, leg2_symbol]
# Subscribe to market data
engine.subscribe(vt_symbols)
# Initialize variables
pos_data = {}
target_data = {}
for vt_symbol in vt_symbols:
    pos_data[vt_symbol] = 0
    target_data[vt_symbol] = 0
# Run continuously
while True:
    # Get market ticks
    leg1_tick = engine.get_tick(leg1_symbol)
    leg2_tick = engine.get_tick(leg2_symbol)
    # Compute trading targets
    # Opening positions
    if not target_data[leg1_symbol]:
        if leg1_tick.bid_price_1 >= leg2_tick.ask_price_1 + entry_level:
            print(f"Entry condition met: sell {leg1_symbol}, buy {leg2_symbol}")
            target_data[leg1_symbol] = -trading_size
            target_data[leg2_symbol] = trading_size
        elif leg1_tick.ask_price_1 <= leg2_tick.bid_price_1 - entry_level:
            print(f"Entry condition met: buy {leg1_symbol}, sell {leg2_symbol}")
            target_data[leg1_symbol] = trading_size
            target_data[leg2_symbol] = -trading_size
    # Closing positions
    else:
        if target_data[leg1_symbol] > 0:
            if leg1_tick.ask_price_1 <= leg2_tick.bid_price_1:
                print("Exit condition met")
                target_data[leg1_symbol] = 0
                target_data[leg2_symbol] = 0
        else:
            if leg1_tick.bid_price_1 >= leg2_tick.ask_price_1:
                print("Exit condition met")
                target_data[leg1_symbol] = 0
                target_data[leg2_symbol] = 0
    # Check working orders
    active_orders = engine.get_all_active_orders()
    if active_orders:
        print("Active orders found, cancelling them")
        for order in active_orders:
            engine.cancel_order(order.vt_orderid)
        continue
    # Execute trades
    for vt_symbol in vt_symbols:
        pos = pos_data[vt_symbol]
        target = target_data[vt_symbol]
        diff = target - pos
        contract = engine.get_contract(vt_symbol)
        price_add = tick_add * contract.pricetick
        tick = engine.get_tick(vt_symbol)
        # Position already matches target: nothing to do
        if not diff:
            continue
        # Target above position: buy
        elif diff > 0:
            # With a short position, buy to cover
            if pos < 0:
                engine.cover(
                    vt_symbol, tick.ask_price_1 + price_add, abs(diff)
                )
                print(f"cover {vt_symbol}")
            # Otherwise buy to open
            else:
                engine.buy(
                    vt_symbol, tick.ask_price_1 + price_add, abs(diff)
                )
                print(f"buy {vt_symbol}")
        # Target below position: sell
        elif diff < 0:
            # With a long position, sell to close
            if pos > 0:
                engine.sell(
                    vt_symbol, tick.bid_price_1 - price_add, abs(diff)
                )
                print(f"sell {vt_symbol}")
            # Otherwise sell to open (short)
            else:
                engine.short(
                    vt_symbol, tick.bid_price_1 - price_add, abs(diff)
                )
                print(f"short {vt_symbol}")
    # Wait for the next round
    sleep(10)
```
<< [Table of Contents](../index.ipynb)
# Introduction
## Objectives
* Get familiar with Python and Jupyter Notebooks
* Understand the examples presented throughout the course, in data processing, spatial data, statistical analysis and data mining
* Start with some examples
  * of datasets and their attributes
  * of algorithmic notions: data structures and control flow
## Good practices
Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, et al. (2014) Best Practices for Scientific Computing. PLoS Biol 12(1): e1001745. https://doi.org/10.1371/journal.pbio.1001745
* Write Programs for People, Not Computers
* Let the Computer Do the Work
* Make Incremental Changes
* Don't Repeat Yourself (or Others)
* Plan for Mistakes
* Optimize Software Only after It Works Correctly
* Document Design and Purpose, Not Mechanics
* Collaborate
## Principles
* Automate, "be lazy!": a computer is dumb and ideal for repetitive tasks, which makes it possible to process large datasets
* Reuse your code
* Become autonomous: solve another problem, learn another language
* Learn to control your computer, instead of being limited to programs designed and written by others
* Program or be programmed (Douglas Rushkoff, http://rushkoff.com/program/)
* Understand what can be programmed, and how: the goal is not to learn Python as such, but a modern, powerful programming language
* Ensure the repeatability and traceability of your data processing, to produce good-quality work
## Method
* Iterative development: "do the simplest thing that could possibly work" and refine (refactor)
  * agile software development methods
  * "premature optimization is the root of all evil" (Don Knuth http://en.wikipedia.org/wiki/Program_optimization)
  * test code interactively in the interpreter, manipulate variables directly
* Don't repeat yourself: avoid code duplication
* Programming style
  * choose a writing style and stick to it
  * use explicit names, avoid comments: if the comments get too long, it means the code is complicated and could be simplified
* Attention to detail, and patience
  * recognize differences
  * stay rational
  * the computer is dumb and follows your instructions one after the other: if there is a bug, you created it
# Python
* Advantages
  * Object-oriented: code reuse
  * Free ("open source"): free to use and distribute
  * Flexibility
  * Widely used and well documented: can automate VISSIM, Aimsun, QGIS and ArcGIS
  * Easier to read and learn than other languages: rereading and picking up your code in 6 months is like writing to be reread by another human being
  * Cross-platform "glue" language, with bindings for many languages and libraries
  * Scientific, visualization and GIS libraries: scipy, numpy, matplotlib, pandas, scikit-image, scikit-learn, shapely, geopandas
* Weaknesses: speed (interpreted language), lack of some numerical and statistical tools (matlab)
* Description
  * Interpreted language and interpreter
  * Version 3 vs 2
  * Code = text file, `.py` extension
  * Comments start with `#`
  * Scripts and modules (libraries)
  * Large standard library ("batteries included") http://docs.python.org/library/
```
# the spirit of Python
import this
```
## Command line and interpreter
* Don't be afraid to type
* The interpreter is a calculator
* Made easier by auto-completion and command history (+ syntactic conveniences)
* Advanced command line: IPython
* Evaluate expressions and variables
* `help`, `?`
```
print("Hello World")
s = "Hello World"
print(s)
reponse = input('Hello, what is your name? ')
print('Hello ' + reponse)
%run ./script01.py # ipython magic command
```
## Jupyter Notebook
* Interactive web computing environment for creating notebooks
  * a JSON document containing an ordered list of input/output cells that can hold code, text, mathematical formulas, plots, etc.
* Python, R and Julia versions
* Notebooks can be converted to several open standard formats (HTML, slides, LaTeX, PDF, ReStructuredText, Markdown, Python)
* Workflow similar to the interpreter
* A communication tool
## Basic data types
* boolean (binary)
* numeric: integer, float
* string
* list, tuple, set
* dictionary
```
type('Hello')
type(4)
type(4.5)
type(True)
type([])
type([2,3])
type({})
type({1: 'sdfasd', 'g': [1,2]})
type((2,3))
# empty record
class A:
    pass
a = A()
a.x = 10
a.y = -2.3
type(a)
a = 2
a = 2.3
a = 'hello' # variables have no type
#b = a + 3 # values have a type
a = 4
b = a + 3
b
a
# conversions between types
int('3')
str(3)
# operations
i=1
i == 1
i+=1
i == 1
a
i == 2
```
## Exercise
Write a program that asks for a speed and computes the time and distance needed to stop. The perception-reaction time is 1.5 s and the vehicle deceleration is -1 m/s².
```
print('How fast is the vehicle going (km/h)?')
vitesse = None
temps = None
distance = None
print('The braking time is', temps, 'and the braking distance is', distance)
# lists
a = list(range(10))
print(a)
print(a[0]) # indexing starts at 0
print(a[-1])
print(a[-2])
a[2] = -10
print(a)
# methods
a.sort()
print(a)
a.append(-100)
print(a)
del a[0]
print(a)
a[3] = 'hello'
print(a)
# membership
1 in a
2 in a
# references
b = list(range(10))
c = b
c[3] = 'bug'
print(b)
##
a = A()
b = a
a.x = 10
print(b.x)
print(a == b) # reference to the same object
# "list comprehensions"
a = list(range(10))
doubles = [2*x for x in a]
print(doubles)
carres = [x*x for x in a]
print(carres)
```
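A possible solution to the braking exercise above, sketched under the stated assumptions (1.5 s perception-reaction time, -1 m/s² deceleration); the function name and structure are one choice among many:

```python
def stopping_time_and_distance(speed_kmh, reaction_time=1.5, deceleration=-1.0):
    """Return (total stopping time in s, stopping distance in m)."""
    v = speed_kmh / 3.6                      # convert km/h to m/s
    braking_time = v / abs(deceleration)     # time to brake from v down to 0
    total_time = reaction_time + braking_time
    # distance travelled during reaction, plus braking distance v^2 / (2|a|)
    distance = v * reaction_time + v ** 2 / (2 * abs(deceleration))
    return total_time, distance

t, d = stopping_time_and_distance(36.0)      # 36 km/h = 10 m/s
print(t, d)  # 11.5 65.0
```

With `input()` wired in for the speed prompt, this fills the `None` placeholders in the exercise skeleton above.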
## Control structures
* Sequences of operations
* Conditionals: if [condition] then [operation1] (else [operation2])
* Loops:
  * while [condition] [operation]
  * iterator (counter) over [set] [operation]
```
# loops
a = list(range(5))
for x in a:
    print(x)
for i in range(len(a)):
    print(a[i])
i = 0
while i < len(a):
    print(a[i])
    i += 1
# test
from numpy.random import random_sample
b = random_sample(10)
print(b)
for x in b:
    if x > 0.5:
        print(x)
    else:
        print('Number smaller than 0.5', x)
# list comprehensions with a test
c = [x for x in b if x>0.5]
print(c)
```
## Functions
Functions are fundamental to avoid repeating code. Functions take arguments (possibly none) and return something (None if there is no explicit return). Function arguments can have default values (for the last arguments).
```
def test():
    x = 0
test() == None
def vitesseMetreSecEnKmH(v):
    return v*3.6
x = 1
print("A pedestrian walking at {} m/s walks at {} km/h".format(x, vitesseMetreSecEnKmH(x)))
```
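The last point above (default argument values) is not demonstrated in the cell; a small illustration with a hypothetical variant of the conversion function:

```python
def convert_speed(v, factor=3.6):
    """Convert a speed; the default factor goes from m/s to km/h."""
    return v * factor

print(convert_speed(10))         # 36.0  -- default factor used
print(convert_speed(10, 2.237))  # 22.37 -- explicit factor, m/s to mph
```

The default is used when the caller omits the argument; passing a value overrides it.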
## Exercise
1. Modify the braking time and distance program to give the answers for several deceleration values, between -0.5 and -6 m/s² (with a -0.5 m/s² increment).
1. Plot the braking time (or distance) as a function of speed for different deceleration values
1. Transform the program to ask the user whether they want to continue with other speed values, and repeat the question and the computations (as long as the user wants to continue).
1. Count the numbers smaller than 0.5 in the list `b`.
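One way to approach the first item, sweeping decelerations from -0.5 to -6 m/s²; the helper function is a hypothetical sketch, not the only possible structure:

```python
def stopping_distance(speed_kmh, deceleration, reaction_time=1.5):
    """Stopping distance in m: reaction distance plus braking distance."""
    v = speed_kmh / 3.6
    return v * reaction_time + v ** 2 / (2 * abs(deceleration))

decelerations = [-0.5 * k for k in range(1, 13)]   # -0.5 to -6.0 m/s^2
for a in decelerations:
    print(a, round(stopping_distance(50.0, a), 1))
```

The second item then only requires collecting these values for a range of speeds and passing them to `matplotlib`.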
## Scientific libraries
* numpy: vectors, matrices, etc.
* scipy: scientific functions, statistics in particular
* matplotlib: plots and visualization
* pandas: data structures (interface and similarity with SQL)
* statsmodels: statistical models
* scikit-learn: machine learning
All libraries must be imported with the `import` command. A library can be renamed with `as` (e.g. in `import numpy as np`). A specific class or function can be imported with the `from` command (e.g. `from numpy.random import random_sample`), and several classes or functions can be imported at once by listing them separated by commas.
```
import urllib.request
import zipfile
import io
import matplotlib.mlab as pylab
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
# numpy examples
a = np.arange(10) # similar to range(10), returns an array
b = np.zeros((4,5))
c = np.ones((4,5))
a = np.random.random_sample(10)
# avoid loops, extract sub-vectors (as in matlab)
b = a>0.5
print(b)
c = a[b]
print(a)
print(c)
# load matrices
data = np.loadtxt('../donnees/vitesse-debit.txt')
plt.plot(data[:,0], data[:,1], 'o')
data.mean(0)
```
## Exercise
Given that the first column of the `vitesse-debit.txt` file is the average speed and the second the flow, compute the density (equal to the flow divided by the speed) and plot speed as a function of density.
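A sketch of the density computation, with made-up speed/flow values standing in for the contents of `vitesse-debit.txt`:

```python
import numpy as np

# hypothetical columns: speed (km/h), flow (veh/h), as in vitesse-debit.txt
data = np.array([[60.0, 1200.0],
                 [40.0, 1600.0],
                 [20.0, 1400.0]])
speed = data[:, 0]
flow = data[:, 1]
density = flow / speed          # fundamental relation: k = q / v, in veh/km
print(density)  # [20. 40. 70.]
# plt.plot(density, speed, 'o') would then give the speed-density diagram
```

The real exercise would load `data` with `np.loadtxt` as in the cell above.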
## Example of a CSV file with a header and different data types
This example uses the `DataFrame` data structure from the `pandas` library.
```
# car dataset http://lib.stat.cmu.edu/DASL/Datafiles/Cars.html
text = '''Country Car MPG Weight Drive_Ratio Horsepower Displacement Cylinders
U.S. Buick Estate Wagon 16.9 4.360 2.73 155 350 8
U.S. Ford Country Squire Wagon 15.5 4.054 2.26 142 351 8
U.S. Chevy Malibu Wagon 19.2 3.605 2.56 125 267 8
U.S. Chrysler LeBaron Wagon 18.5 3.940 2.45 150 360 8
U.S. Chevette 30.0 2.155 3.70 68 98 4
Japan Toyota Corona 27.5 2.560 3.05 95 134 4
Japan Datsun 510 27.2 2.300 3.54 97 119 4
U.S. Dodge Omni 30.9 2.230 3.37 75 105 4
Germany Audi 5000 20.3 2.830 3.90 103 131 5
Sweden Volvo 240 GL 17.0 3.140 3.50 125 163 6
Sweden Saab 99 GLE 21.6 2.795 3.77 115 121 4
France Peugeot 694 SL 16.2 3.410 3.58 133 163 6
U.S. Buick Century Special 20.6 3.380 2.73 105 231 6
U.S. Mercury Zephyr 20.8 3.070 3.08 85 200 6
U.S. Dodge Aspen 18.6 3.620 2.71 110 225 6
U.S. AMC Concord D/L 18.1 3.410 2.73 120 258 6
U.S. Chevy Caprice Classic 17.0 3.840 2.41 130 305 8
U.S. Ford LTD 17.6 3.725 2.26 129 302 8
U.S. Mercury Grand Marquis 16.5 3.955 2.26 138 351 8
U.S. Dodge St Regis 18.2 3.830 2.45 135 318 8
U.S. Ford Mustang 4 26.5 2.585 3.08 88 140 4
U.S. Ford Mustang Ghia 21.9 2.910 3.08 109 171 6
Japan Mazda GLC 34.1 1.975 3.73 65 86 4
Japan Dodge Colt 35.1 1.915 2.97 80 98 4
U.S. AMC Spirit 27.4 2.670 3.08 80 121 4
Germany VW Scirocco 31.5 1.990 3.78 71 89 4
Japan Honda Accord LX 29.5 2.135 3.05 68 98 4
U.S. Buick Skylark 28.4 2.670 2.53 90 151 4
U.S. Chevy Citation 28.8 2.595 2.69 115 173 6
U.S. Olds Omega 26.8 2.700 2.84 115 173 6
U.S. Pontiac Phoenix 33.5 2.556 2.69 90 151 4
U.S. Plymouth Horizon 34.2 2.200 3.37 70 105 4
Japan Datsun 210 31.8 2.020 3.70 65 85 4
Italy Fiat Strada 37.3 2.130 3.10 69 91 4
Germany VW Dasher 30.5 2.190 3.70 78 97 4
Japan Datsun 810 22.0 2.815 3.70 97 146 6
Germany BMW 320i 21.5 2.600 3.64 110 121 4
Germany VW Rabbit 31.9 1.925 3.78 71 89 4
'''
s = io.StringIO(text)
data = pd.read_csv(s, delimiter = '\t')
#data.to_csv('cars.txt', index=False)
print(data.info())
data
#data.describe(include = 'all')
data.describe()
data['Country'].value_counts().plot(kind='bar')
data[['Car', 'Country']].describe()
# example of a vector or record
data.loc[0]
```
## Exercise
1. What is the type of each attribute?
1. Propose a method to determine the country with the most fuel-efficient vehicles
```
plt.scatter(data.Weight, data.MPG)
```
## Example of a bike count file
Source http://donnees.ville.montreal.qc.ca/dataset/velos-comptage
```
# bike counts
filename, message = urllib.request.urlretrieve('http://donnees.ville.montreal.qc.ca/dataset/f170fecc-18db-44bc-b4fe-5b0b6d2c7297/resource/6caecdd0-e5ac-48c1-a0cc-5b537936d5f6/download/comptagevelo20162.csv')
data = pd.read_csv(filename)
print(data.info())
plt.plot(data['CSC (Côte Sainte-Catherine)'])
# 01/01/16 was a Friday; the 4th was a Monday
cscComptage = np.array(data['CSC (Côte Sainte-Catherine)'].tolist()[4:4+51*7]).reshape(51,7)
for r in cscComptage:
    plt.plot(r)
plt.xticks(range(7),['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'])
plt.ylabel('Number of cyclists')
plt.imshow(cscComptage, interpolation = 'none', aspect = 'auto')
plt.colorbar()
```
## Exercise
1. Are the weeks well represented?
1. What do the white bands in the image correspond to?
## Bixi data
Source https://bixi.com/fr/page-27
```
# bixi data
filename, message = urllib.request.urlretrieve('https://sitewebbixi.s3.amazonaws.com/uploads/docs/biximontrealrentals2018-96034e.zip')
zip=zipfile.ZipFile(filename)
data = pd.read_csv(zip.open(zip.namelist()[0]))
print(data.info())
print(data.describe())
# think about the types: does averaging station codes make sense?
```
## Environment Canada weather data
```
# Environment Canada weather data
#filename, message = urllib.request.urlretrieve('http://climate.weather.gc.ca/climate_data/bulk_data_f.html?format=csv&stationID=10761&Year=2017&Month=1&Day=1&timeframe=2&submit=Download+Data')
filename, message = urllib.request.urlretrieve('http://climate.weather.gc.ca/climate_data/bulk_data_f.html?format=csv&stationID=10761&Year=2017&Month=7&Day=1&timeframe=1&submit=Download+Data')
data = pd.read_csv(filename, delimiter = ',')
print(data.info())
plt.plot(data['Hum. rel (%)'])
plt.show()
#data.describe()
# plt.plot
```
# Drug Sentiment Analysis
## Problem Statement
The dataset provides patient reviews on specific drugs along with related conditions and a 10 star patient rating reflecting overall patient satisfaction. We have to create a target feature out of ratings and predict the sentiment of the reviews.
### Data Description :
The data is split into a train (75%) and a test (25%) partition.
* drugName (categorical): name of drug
* condition (categorical): name of condition
* review (text): patient review
* rating (numerical): 10 star patient rating
* date (date): date of review entry
* usefulCount (numerical): number of users who found review useful
The structure of the data is that a patient with a unique ID purchases a drug that matches their condition and writes a review and a rating for that drug on a given date. Afterwards, if other users read the review and find it helpful, they click useful, which adds 1 to the usefulCount variable.
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
```
### Import all the necessary packages
Here we have imported the basic packages that are required to do basic processing. Feel free to use any library that you think can be useful here.
```
#import all the necessary packages
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
from matplotlib import style
style.use('ggplot')
```
### Load Data
```
#read the train and test data
test = pd.read_csv('/kaggle/input/kuc-hackathon-winter-2018/drugsComTest_raw.csv') #test data
train = pd.read_csv('/kaggle/input/kuc-hackathon-winter-2018/drugsComTrain_raw.csv') #train data
```
### Checking Out The Data
```
#check the head of train data
train.head(10)
#check the head of test data
test.head(10)
```
By looking at the head of the train and test data we see that there are 7 features in our dataset, but no sentiment feature that can serve as our target variable. We will make a target feature out of rating: if the rating is greater than 5 we will label it positive, else negative.
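The labelling rule just described can be sketched on a toy frame (the `rating` column name comes from the dataset; `sentiment` is the name we choose for the new target):

```python
import pandas as pd

# toy frame standing in for the review data
df = pd.DataFrame({'rating': [10, 7, 5, 2]})
# rating > 5 -> positive (1), otherwise negative (0)
df['sentiment'] = (df['rating'] > 5).astype(int)
print(df['sentiment'].tolist())  # [1, 1, 0, 0]
```

Note that a rating of exactly 5 falls in the negative class under this rule.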
```
#check the shape of the given dataset
print(f'train has {train.shape[0]} number of rows and {train.shape[1]} number of columns')
print(f'test has {test.shape[0]} number of rows and {test.shape[1]} number of columns')
#check the columns in train
train.columns
```
## Exploratory Data Analysis
The purpose of EDA is to find out interesting insights and irregularities in our Dataset. We will look at Each feature and try to find out interesting facts and patterns from them. And see whether there is any relationship between the variables or not.
Merge the train and test data, as there are no target labels. We will perform our EDA and pre-processing on the merged data, then divide it in a 70:30 ratio for training and testing.
```
#merge train and test data
merge = [train,test]
merged_data = pd.concat(merge,ignore_index=True)
merged_data.shape #check the shape of merged_data
```
### Check number of uniqueIds to see if there's any duplicate record in our dataset
```
#check uniqueID
merged_data['uniqueID'].nunique()
```
There are 215063 uniqueIDs, meaning that every record is unique.
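That uniqueness claim can be checked programmatically rather than by eye; a sketch on a toy frame (the `uniqueID` column name comes from the dataset):

```python
import pandas as pd

df = pd.DataFrame({'uniqueID': [101, 102, 103]})
# if every ID is distinct, the number of unique values equals the row count
assert df['uniqueID'].nunique() == len(df)
print('all records unique')
```

On the merged data, the same comparison against `merged_data.shape[0]` confirms there are no duplicate records.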
### Check information of the merged data
```
merged_data.info()
```
### Check the Description
```
merged_data.describe(include='all')
```
**The following things can be noticed from the description:**
* The top **drugName** is **Levonorgestrel**; it will be interesting to see what condition it is used for.
* The top **condition** is **Birth Control**.
* The top **review** is just a single word, "Good", but it has a very small count (39). Maybe lazy people like me have written that comment.
* The most reviews in a single day came on 1-Mar-16; it will be interesting to investigate this date and see which drug names and conditions these reviews were for.
### Check the percentage of null values in each column
```
merged_data.isnull().sum()/merged_data.shape[0]
```
We have null values in only one column, **condition**. We will leave them for now, as their share is very small.
### Check number of unique values in drugName and condition
```
#check number of unique values in drugName
print(merged_data['drugName'].nunique())
#check number of unique values in condition
print(merged_data['condition'].nunique())
```
We can see that there are 3671 drug names and only 916 conditions, so there are conditions with multiple drugs.
Now it is time to plot some graphs and find interesting insights from our data. **Here your detective skills are needed, so be ready and interrogate the data as much as you can.**
### Check the top 20 conditions
```
#plot a bargraph to check top 20 conditions
plt.figure(figsize=(12,6))
conditions = merged_data['condition'].value_counts(ascending = False).head(20)
plt.bar(conditions.index,conditions.values)
plt.title('Top-20 Conditions',fontsize = 20)
plt.xticks(rotation=90)
plt.ylabel('count')
plt.show()
```
**From the above graph we can see that:**
* Birth control is by far the largest, at around 38,000 reviews, roughly twice any other condition.
* Most of the other top-20 conditions have between 5,000 and 10,000 reviews.
### Plot the bottom 20 conditions
```
#plot a bargraph to check bottom 20 conditions
plt.figure(figsize=(12,6))
conditions_bottom = merged_data['condition'].value_counts(ascending = False).tail(20)
plt.bar(conditions_bottom.index,conditions_bottom.values)
plt.title('Bottom-20 Conditions',fontsize = 20)
plt.xticks(rotation=90)
plt.ylabel('count')
plt.show()
```
* The bottom 20 conditions have just a single count each in our dataset. They may be rare conditions.
* Looking at the plot, we also see conditions with strange names such as **`61</span> users found this comment helpful`**; this is noise present in our data, which we will deal with later.
### Check top 20 drugName
```
#plot a bargraph to check top 20 drugName
plt.figure(figsize=(12,6))
drugName_top = merged_data['drugName'].value_counts(ascending = False).head(20)
plt.bar(drugName_top.index,drugName_top.values,color='blue')
plt.title('drugName Top-20',fontsize = 20)
plt.xticks(rotation=90)
plt.ylabel('count')
plt.show()
```
* The top drug name is Levonorgestrel, which we had also seen in the description.
* The top 3 drug names have counts of around 4,000 and above.
* Looking at the top 20, most drug-name counts are around 1,500.
### Check bottom 20 drugName
```
#plot a bargraph to check bottom 20 drugName
plt.figure(figsize=(12,6))
drugName_bottom = merged_data['drugName'].value_counts(ascending = False).tail(20)
plt.bar(drugName_bottom.index,drugName_bottom.values,color='blue')
plt.title('drugName Bottom-20',fontsize = 20)
plt.xticks(rotation=90)
plt.ylabel('count')
plt.show()
```
* The bottom 20 drug names have a count of 1. These might be drugs used for rare conditions, or drugs new to the market.
### Checking Ratings Distribution
```
ratings_ = merged_data['rating'].value_counts().sort_values(ascending=False).reset_index().\
rename(columns = {'index' :'rating', 'rating' : 'counts'})
ratings_['percent'] = 100 * (ratings_['counts']/merged_data.shape[0])
print(ratings_)
# Setting the Parameter
sns.set(font_scale = 1.2, style = 'darkgrid')
plt.rcParams['figure.figsize'] = [12, 6]
#let's plot and check
sns.barplot(x = ratings_['rating'], y = ratings_['percent'],order = ratings_['rating'])
plt.title('Ratings Percent',fontsize=20)
plt.show()
```
We notice that most of the ratings are high, with 10 and 9 dominating.
Rating 1 is also frequent, which shows the extreme ratings users give. We can say that users mostly prefer to rate when a drug is either very useful to them or fails, or when there are side effects. About 70% of the values have a rating greater than 7.
### Check the distribution of usefulCount
```
#plot a distplot of usefulCount
sns.distplot(merged_data['usefulCount'])
plt.show()
```
* usefulCount is positively skewed.
* Most of the usefulCounts are distributed between 0 and 200.
* There are extreme outliers present in usefulCount. We either have to remove them or transform them.
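A common alternative to dropping the outliers is a log transform, which compresses the long right tail while keeping zero counts valid; a sketch with made-up counts:

```python
import numpy as np

useful = np.array([0, 5, 40, 200, 1200])   # hypothetical usefulCount values
log_useful = np.log1p(useful)              # log(1 + x) keeps 0 -> 0
print(np.round(log_useful, 2))
```

After this transform the values span only a few units instead of several orders of magnitude, which is friendlier to most models.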
```
#check the descriptive summary
sns.boxplot(y = merged_data['usefulCount'])
plt.show()
```
We can see that there are huge outliers present in our dataset. Some drugs have extreme useful counts.
### Check number of Drugs per condition
```
#lets check the number of drugs/condition
merged_data.groupby('condition')['drugName'].nunique().sort_values(ascending=False).head(20)
```
If we look above, the top value is 'not listed / othe'.
* It might be that users didn't mention their condition, as people sometimes don't want to reveal their disorders. We could look up the drug names and fill in the conditions those drugs are used for.
* Another point to note is that there are condition values like **`3 </span> users found this comment helpful`**; this is noise present in our dataset. The dataset appears to have been extracted through web scraping, and these values were fed in incorrectly.
##### Let's look at the `</span>` values
```
span_data = merged_data[merged_data['condition'].str.contains('</span>',case=False,regex=True) == True]
print('Number of rows with </span> values : ', len(span_data))
noisy_data_ = 100 * (len(span_data)/merged_data.shape[0])
print('Total percent of noisy data {} % '.format(noisy_data_))
```
Only 0.54% of values contain `</span>`-type noise. We can remove these rows from our dataset, as we won't lose much information by doing so.
```
#drop the noise
merged_data.drop(span_data.index, axis = 0, inplace=True)
```
### Now let's look at the not listed/other
```
#check the percentage of 'not listed / othe' conditions
not_listed = merged_data[merged_data['condition'] == 'not listed / othe']
print('Number of not_listed values : ', len(not_listed))
percent_not_listed = 100 * len(not_listed)/merged_data.shape[0]
print('Total percent of noisy data {} % '.format(percent_not_listed))
```
There are 592 records with the 'not listed / othe' value. There are 2 options to deal with them:
1. Check the condition associated with each drug and replace the values.
2. Drop the values, as they account for only 0.27% of the data. To save time, we will drop this noisy data.
```
# drop noisy data
merged_data.drop(not_listed.index, axis = 0, inplace=True)
# after removing the noise, let's check the shape
merged_data.shape[0]
```
### Now Check number of drugs present per condition after removing noise
```
#lets check the number of drugs present in our dataset condition wise
conditions_gp = merged_data.groupby('condition')['drugName'].nunique().sort_values(ascending=False)
#plot the top 20
# Setting the Parameter
condition_gp_top_20 = conditions_gp.head(20)
sns.set(font_scale = 1.2, style = 'darkgrid')
plt.rcParams['figure.figsize'] = [12, 6]
sns.barplot(x = condition_gp_top_20.index, y = condition_gp_top_20.values)
plt.title('Top-20 Number of drugs per condition',fontsize=20)
plt.xticks(rotation=90)
plt.ylabel('count',fontsize=10)
plt.show()
```
* Most of the drugs are for pain, birth control and high blood pressure, which are common conditions.
* In the top 20, each condition has more than 50 drugs.
### Check bottom 20 drugs per conditions
```
#bottom-20
condition_gp_bottom_20 = conditions_gp.tail(20)
#plot the bottom 20
sns.barplot(x = condition_gp_bottom_20.index, y = condition_gp_bottom_20.values,color='blue')
plt.title('Bottom-20 Number of drugs per condition',fontsize=20)
plt.xticks(rotation=90)
plt.ylabel('count',fontsize=10)
plt.show()
```
The bottom-20 conditions have only a single drug each. These are the rare conditions.
### Now let's check if a single drug can be used for Multiple conditions
```
#let's check if a single drug is used for multiple conditions
drug_multiple_cond = merged_data.groupby('drugName')['condition'].nunique().sort_values(ascending=False)
print(drug_multiple_cond.head(10))
```
There are many drugs which can be used for multiple conditions.
### Check the number of drugs with rating 10
```
#Let's check the Number of drugs with rating 10.
merged_data[merged_data['rating'] == 10]['drugName'].nunique()
```
We have 2907 drugs with rating 10.
### Plot top-20 drugs with rating 10
```
#Check top 20 drugs with rating=10/10
top_20_ratings = merged_data[merged_data['rating'] == 10]['drugName'].value_counts().head(20)
sns.barplot(x = top_20_ratings.index, y = top_20_ratings.values )
plt.xticks(rotation=90)
plt.title('Top-20 Drugs with Rating - 10/10', fontsize=20)
plt.ylabel('count')
plt.show()
```
* Levonorgestrel has the most 10/10 ratings. It seems to be used for common conditions and to be one of the most effective drugs.
* The other drugs in the top 20 have between 500 and 1000 ratings of 10/10.
### Check for what condition Levonorgestrel is used for
```
merged_data[merged_data['drugName'] == 'Levonorgestrel']['condition'].unique()
```
Levonorgestrel is used for 3 different conditions.
* emergency contraception
* birth control
* abnormal uterine bleeding
### Top 20 drugs with 1/10 Rating
```
#check top 20 drugs with 1/10 rating
top_20_ratings_1 = merged_data[merged_data['rating'] == 1]['drugName'].value_counts().head(20)
sns.barplot(x = top_20_ratings_1.index, y = top_20_ratings_1.values )
plt.xticks(rotation=90)
plt.title('Top-20 Drugs with Rating - 1/10', fontsize=20)
plt.ylabel('count')
plt.show()
```
The top 3 drugs by number of 1/10 ratings have almost 700 such ratings each, which suggests they are not very effective drugs.
### Now we will look at the Date column
```
# convert date to datetime and create year and month features
merged_data['date'] = pd.to_datetime(merged_data['date'])
merged_data['year'] = merged_data['date'].dt.year #create year
merged_data['month'] = merged_data['date'].dt.month #create month
```
### Check Number of reviews per year
```
#plot number of reviews year wise
count_reviews = merged_data['year'].value_counts().sort_index()
sns.barplot(x=count_reviews.index, y=count_reviews.values, color='blue')
plt.title('Number of reviews Year wise')
plt.show()
```
The years 2015, 2016 and 2017 account for the most reviews; almost 60% of the reviews come from these years.
### Check average rating per year
```
#check average rating per year
yearly_mean_rating = merged_data.groupby('year')['rating'].mean()
sns.barplot(x=yearly_mean_rating.index, y=yearly_mean_rating.values, color='green')
plt.title('Mean Rating Yearly')
plt.show()
```
* The rating was almost constant from 2009 to 2014, but after 2014 the ratings started to decrease.
* As the number of reviews increased over the last 3 years, the average rating decreased.
### Per year drug count and Condition count
```
#check year wise drug counts and year wise conditions counts
year_wise_condition = merged_data.groupby('year')['condition'].nunique()
sns.barplot(x=year_wise_condition.index, y=year_wise_condition.values, color='green')
plt.title('Conditions Year wise',fontsize=20)
plt.show()
```
* The number of conditions has increased over the last 3 years, which means new conditions have been appearing.
* The starting year, 2008, had the lowest number of conditions.
**We expect that as the number of conditions has increased, the number of drugs should have increased too. Let's check that.**
```
#check drugs year wise
year_wise_drug = merged_data.groupby('year')['drugName'].nunique()
sns.barplot(x=year_wise_drug.index, y=year_wise_drug.values, color='green')
plt.title('Drugs Year Wise',fontsize=20)
plt.show()
```
As expected, the number of drugs has also increased over the last three years.
## Data Pre-Processing
Data pre-processing is a vital part of model building. We have all heard the statement **"Garbage In, Garbage Out"**, but what does it mean? If we feed garbage into our model, such as missing values or features that have no predictive power and merely duplicate information, the model will effectively make random guesses and won't be reliable enough for any predictions.
```
# check the null values
merged_data.isnull().sum()
```
We only have null values in condition. We will drop the records with null values as it only accounts for 0.5 % of total data.
```
# drop the null values
merged_data.dropna(inplace=True, axis=0)
```
### Pre-Processing Reviews
**Check the first few reviews**
```
#check first three reviews
for i in merged_data['review'][0:3]:
print(i,'\n')
```
### Steps for reviews pre-processing.
* **Remove HTML tags**
  * Use BeautifulSoup from the bs4 module to remove HTML tags. We have already removed the rows matching the "64</_span_>..." pattern; get_text() will strip any HTML tags that remain.
* **Remove Stop Words**
* Remove the stopwords like "a", "the", "I" etc.
* **Remove symbols and special characters**
* We will remove the special characters from our reviews like '#' ,'&' ,'@' etc.
* **Tokenize**
  * We will tokenize the reviews, splitting sentences on spaces, e.g. "I might come" --> "I", "might", "come".
* **Stemming**
  * Remove the suffixes from words to get their root forms, e.g. "Wording" --> "Word".
```
#import the libraries for pre-processing
from bs4 import BeautifulSoup
import nltk
import re
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer
stops = set(stopwords.words('english')) #english stopwords
stemmer = SnowballStemmer('english') #SnowballStemmer
def review_to_words(raw_review):
    # 1. Remove HTML tags
    review_text = BeautifulSoup(raw_review, 'html.parser').get_text()
    # 2. Keep letters only, replacing everything else with a space
    letters_only = re.sub('[^a-zA-Z]', ' ', review_text)
    # 3. Lowercase and split into words
    words = letters_only.lower().split()
    # 4. Remove stopwords
    meaningful_words = [w for w in words if w not in stops]
    # 5. Stem each word
    stemming_words = [stemmer.stem(w) for w in meaningful_words]
    # 6. Join the words back with spaces
    return ' '.join(stemming_words)
#apply review_to_words function on reviews
merged_data['review'] = merged_data['review'].apply(review_to_words)
```
### Now we will create our target variable "Sentiment" from rating
```
#create sentiment feature from ratings
#if rating > 5 sentiment = 1 (positive)
#if rating <= 5 sentiment = 0 (negative)
merged_data['sentiment'] = merged_data["rating"].apply(lambda x: 1 if x > 5 else 0)
```
We will predict the sentiment using the reviews only. So let's start building our model.
## Building Model
```
#import all the necessary packages
from sklearn.model_selection import train_test_split #import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer #import TfidfVectorizer
from sklearn.metrics import confusion_matrix #import confusion_matrix
from sklearn.naive_bayes import MultinomialNB #import MultinomialNB
from sklearn.ensemble import RandomForestClassifier #import RandomForestClassifier
```
We know that we cannot pass raw text features into our model; we have to convert them into numeric values. We will use TfidfVectorizer to convert our reviews into numeric features.
### TfidfVectorizer (Term frequency - Inverse document frequency)
**TF - Term Frequency** :-
How often a term t occurs in a document d.
TF = (_Number of occurrences of a word in a document_) / (_Number of words in that document_)
**Inverse Document Frequency**
IDF = log(_Number of documents_ / _Number of documents containing the word_)
**Tf - Idf = Tf * Idf**
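A tiny worked example of these formulas (the two toy "documents" are chosen purely for illustration):

```python
import math

# Two toy "documents", already tokenized.
docs = [['the', 'drug', 'works'], ['the', 'drug', 'the', 'best']]

for word in ['the', 'works']:
    # TF: occurrences of the word in document 0 / words in document 0.
    tf = docs[0].count(word) / len(docs[0])
    # IDF: log(number of documents / number of documents containing the word).
    n_containing = sum(word in d for d in docs)
    idf = math.log(len(docs) / n_containing)
    print(word, round(tf * idf, 3))
# the 0.0     (appears in every document, so it carries no weight)
# works 0.231 (appears in only one document, so it is weighted higher)
```

Note that sklearn's TfidfVectorizer uses a smoothed IDF and L2 normalization by default, so its numbers will differ slightly from this hand computation, but the idea is the same.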
```
# Creates TF-IDF vectorizer and transforms the corpus
vectorizer = TfidfVectorizer()
reviews_corpus = vectorizer.fit_transform(merged_data.review)
reviews_corpus.shape
```
We have built reviews_corpus which are the independent feature in our model.
### **Store Dependent feature in sentiment and split the Data into train and test**
```
#dependent feature
sentiment = merged_data['sentiment']
sentiment.shape
#split the data in train and test
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test = train_test_split(reviews_corpus,sentiment,test_size=0.33,random_state=42)
print('Train data shape ',X_train.shape,Y_train.shape)
print('Test data shape ',X_test.shape,Y_test.shape)
```
### Apply Multinomial Naive Bayes
```
#fit the model and predict the output
clf = MultinomialNB().fit(X_train, Y_train) #fit the training data
pred = clf.predict(X_test) #predict the sentiment for test data
print("Accuracy: %s" % str(clf.score(X_test, Y_test))) #check accuracy
print("Confusion Matrix")
print(confusion_matrix(pred, Y_test)) #print confusion matrix
```
We got an accuracy score of 75.8% using Naive Bayes.
### Apply RandomForestClassifier
```
#fit the model and predict the output
clf = RandomForestClassifier().fit(X_train, Y_train)
pred = clf.predict(X_test)
print("Accuracy: %s" % str(clf.score(X_test, Y_test)))
print("Confusion Matrix")
print(confusion_matrix(pred, Y_test))
```
## Parameter Tuning
Try different sets of parameters for RandomForestClassifier using RandomSearchCV and check which sets of parameters gives the best accuracy. *A task for you to try*
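A minimal sketch of what that could look like (toy data stands in for the TF-IDF features from above, and the parameter ranges are illustrative assumptions, not recommendations):

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Toy data in place of reviews_corpus / sentiment from above.
X, y = make_classification(n_samples=200, n_features=20, random_state=42)

# Hypothetical parameter distributions to sample from.
param_dist = {
    'n_estimators': randint(50, 300),
    'max_depth': [None, 10, 20, 40],
    'min_samples_split': randint(2, 11),
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=42),
                            param_distributions=param_dist,
                            n_iter=5, cv=3, random_state=42)
search.fit(X, y)
print(search.best_params_)           # best sampled parameter set
print(round(search.best_score_, 3))  # mean cross-validated accuracy
```

Increasing `n_iter` explores more parameter combinations at the cost of training time.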
## Conclusion
After applying the TfidfVectorizer to transform our reviews into vectors, and applying Naive Bayes and RandomForestClassifier, we see that RandomForestClassifier outperforms MultinomialNB. We achieved an accuracy of 89.7% with RandomForestClassifier without any parameter tuning. We can tune the parameters of our classifier to improve the accuracy further.
# Lecture 2 - Introduction to Probability Theory
> Probability theory is nothing but common sense reduced to calculation. P. Laplace (1812)
## Objectives
+ To use probability theory to represent states of knowledge.
+ To use probability theory to extend Aristotelian logic to reason under uncertainty.
+ To learn about the **product rule** of probability theory.
+ To learn about the **sum rule** of probability theory.
+ What is a **random variable**?
+ What is a **discrete random variable**?
+ When are two random variables **independent**?
+ What is a **continuous random variable**?
+ What is the **cumulative distribution function**?
+ What is the **probability density function**?
## Readings
Before coming to class, please read the following:
+ [Chapter 1 of Probabilistic Programming and Bayesian Methods for Hackers](http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Chapter1.ipynb)
+ [Chapter 1](http://home.fnal.gov/~paterno/images/jaynesbook/cc01p.pdf) of (Jaynes, 2003).
+ [Chapter 2](http://home.fnal.gov/~paterno/images/jaynesbook/cc02p.pdf) of (Jaynes, 2003) (skim through).
## The basic desiderata of probability theory
It is actually possible to derive the rules of probability based on a system of common sense requirements.
Paraphrasing
[Chapter 1](http://home.fnal.gov/~paterno/images/jaynesbook/cc01p.pdf) of \cite{jaynes2003},
we would like our system to satisfy the following desiderata:
1) *Degrees of plausibility are represented by real numbers.*
2) *The system should have a qualitative correspondence with common sense.*
3) *The system should be consistent in the sense that:*
+ *If a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result.*
+ *All the evidence relevant to a question should be taken into account.*
+ *Equivalent states of knowledge must be represented by equivalent plausibility assignments.*
## How to speak about probabilities?
Let
+ A be a logical sentence,
+ B be another logical sentence, and
+ and I be all other information we know.
There is no restriction on what A and B may be, as long as neither of them is a contradiction.
We write as a shortcut:
$$
\mbox{not A} \equiv \neg A,
$$
$$
A\;\mbox{and}\;B \equiv A,B \equiv AB,
$$
$$
A\;\mbox{or}\;B \equiv A + B.
$$
We **write**:
$$
p(A|BI),
$$
and we **read**:
> the probability of A being true given that we know that B and I is true
or (assuming knowledge I is implied)
> the probability of A being true given that we know that B is true
or (making it even shorter)
> the probability of A given B.
$$
p(\mbox{something} | \mbox{everything known}) = \mbox{probability something is true conditioned on what is known}.
$$
$p(A|B,I)$ is just a number between 0 and 1 that corresponds to the degree of plausibility of A conditioned on B and I.
0 and 1 are special.
+ If
$$
p(A|BI) = 0,
$$
we say that we are certain that A is false if B is true.
+ If
$$
p(A|BI) = 1,
$$
we say that we are certain that A is true if B is true.
+ If
$$
p(A|BI) \in (0, 1),
$$
we say that we are uncertain about A given that B is true.
Depending on whether $p(A|B,I)$ is closer to 0 or 1, we believe more in one possibility or the other.
Complete ignorance corresponds to a probability of 0.5.
## The rules of probability theory
According to
[Chapter 2](http://home.fnal.gov/~paterno/images/jaynesbook/cc02m.pdf) of \cite{jaynes2003} the desiderata are enough
to derive the rules of probability.
These rules are:
+ The **obvious rule** (in lack of a better name):
$$
p(A | I) + p(\neg A | I) = 1.
$$
+ The **product rule** (from which Bayes' rule, or Bayes' theorem, follows):
$$
p(AB|I) = p(A|BI)p(B|I).
$$
or
$$
p(AB|I) = p(B|AI)p(A|I).
$$
These two rules are enough to compute any probability we want. Let us demonstrate this by a very simple example.
### Example: Drawing balls from a box without replacement
Consider the following example of prior information I:
> We are given a box with 10 balls 6 of which are red and 4 of which are blue.
The box is sufficiently mixed, so that when we draw a ball from it, we don't know in advance which one we will pick.
When we take a ball out of the box, we do not put it back.
Let A be the sentence:
> The first ball we draw is blue.
Intuitively, we would set the probability of A equal to:
$$
p(A|I) = \frac{4}{10}.
$$
This choice can actually be justified, but we will come to this later in this course.
From the "obvious rule", we get that the probability of not drawing a blue ball, i.e.,
the probability of drawing a red ball in the first draw is:
$$
p(\neg A|I) = 1 - p(A|I) = 1 - \frac{4}{10} = \frac{6}{10}.
$$
Now, let B be the sentence:
> The second ball we draw is red.
What is the probability that we draw a red ball in the second draw given that we drew a blue ball in the first draw?
Just before our second draw, there remain 9 balls in the box, 3 of which are blue and 6 of which are red.
Therefore:
$$
p(B|AI) = \frac{6}{9}.
$$
We have not used the product rule just yet. What if we wanted to find the probability that we draw a blue during the first draw and a red during the second draw? Then,
$$
p(AB|I) = p(A|I)p(B|AI) = \frac{4}{10}\frac{6}{9} = \frac{24}{90}.
$$
What about the probability of a red followed by a blue? Then,
$$
p(\neg AB|I) = p(\neg A|I)p(B|\neg AI) = \left[1 - p(A|I) \right]p(B|\neg AI) = \frac{6}{10}\frac{5}{9} = \frac{30}{90}.
$$
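These numbers are easy to sanity-check with a short simulation (a sketch; the trial count is arbitrary):

```python
import random

random.seed(0)
trials = 100_000
blue_then_red = 0
for _ in range(trials):
    box = ['blue'] * 4 + ['red'] * 6
    first = box.pop(random.randrange(len(box)))   # draw without replacement
    second = box.pop(random.randrange(len(box)))  # from the remaining 9 balls
    if first == 'blue' and second == 'red':
        blue_then_red += 1
print(round(blue_then_red / trials, 3))  # expected to be close to 24/90 ~ 0.267
```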
### Other rules of probability theory
All the other rules of probability theory can be derived from these two rules.
To demonstrate this, let's prove that:
$$
p(A + B|I) = p(A|I) + p(B|I) - p(AB|I).
$$
Here we go:
\begin{eqnarray*}
p(A+B|I) &=& 1 - p(\neg A \neg B|I)\;\mbox{(obvious rule)}\\
&=& 1 - p(\neg A|\neg BI)p(\neg B|I)\;\mbox{(product rule)}\\
&=& 1 - [1 - p(A |\neg BI)]p(\neg B|I)\;\mbox{(obvious rule)}\\
&=& 1 - p(\neg B|I) + p(A|\neg B I)p(\neg B|I)\\
&=& 1 - [1 - p(B|I)] + p(A|\neg B I)p(\neg B|I)\\
&=& p(B|I) + p(A|\neg B I)p(\neg B|I)\\
&=& p(B|I) + p(A\neg B|I)\\
&=& p(B|I) + p(\neg B|AI)p(A|I)\\
&=& p(B|I) + [1 - p(B|AI)] p(A|I)\\
&=& p(B|I) + p(A|I) - p(B|AI)p(A|I)\\
&=& p(A|I) + p(B|I) - p(AB|I).
\end{eqnarray*}
### The sum rule
Now consider a finite set of logical sentences, $B_1,\dots,B_n$ such that:
1. One of them is definitely true:
$$
p(B_1+\dots+B_n|I) = 1.
$$
2. They are mutually exclusive:
$$
p(B_iB_j|I) = 0,\;\mbox{if}\;i\not=j.
$$
The **sum rule** states that:
$$
p(A|I) = \sum_i p(AB_i|I) = \sum_i p(A|B_i I)p(B_i|I).
$$
We can prove this by induction, but let's just prove it for $n=2$:
$$
\begin{array}{ccc}
p(A|I) &=& p[A(B_1+B_2)|I]\\
&=& p(AB_1 + AB_2|I)\\
&=& p(AB_1|I) + p(AB_2|I) - p(AB_1B_2|I)\\
&=& p(AB_1|I) + p(AB_2|I),
\end{array}
$$
since
$$
p(AB_1B_2|I) = p(A|B_1B_2I)p(B_1B_2|I) = 0.
$$
Let's go back to our example. We can use the sum rule to compute the probability of getting a red ball on the second draw independently of what we drew first. This is how it goes:
$$
\begin{array}{ccc}
p(B|I) &=& p(AB|I) + p(\neg AB|I)\\
&=& p(B|AI)p(A|I) + p(B|\neg AI) p(\neg A|I)\\
&=& \frac{6}{9}\frac{4}{10} + \frac{5}{9}\frac{6}{10}\\
&=& \dots
\end{array}
$$
### Example: Medical Diagnosis
This example is a modified version of the one found in [Lecture 1](http://www.zabaras.com/Courses/BayesianComputing/IntroToProbabilityAndStatistics.pdf) of the Bayesian Scientific Computing course offered during Spring 2013 by Prof. N. Zabaras at Cornell University.
We are going to examine the usefulness of a new tuberculosis test. Let the prior information, I, be:
> The percentage of the population infected by tuberculosis is 0.4%. We have run several experiments and determined that:
+ If a tested patient has the disease, then 80% of the time the test comes out positive.
+ If a tested patient does not have the disease, then 90% of the time, the test comes out negative.
Suppose now that you administer this test to a patient and that the result is positive. How confident are you that the patient does indeed have the disease?
Let's use probability theory to answer this question. Let A be the event:
> The patient's test is positive.
Let B be the event:
> The patient has tuberculosis.
According to the prior information, we have:
$$
p(B|I) = p(\mbox{has tuberculosis}|I) = 0.004,
$$
and
$$
p(A|B,I) = p(\mbox{test is positive}|\mbox{has tuberculosis},I) = 0.8.
$$
Similarly,
$$
p(A|\neg B, I) = p(\mbox{test is positive}|\mbox{does not have tuberculosis}, I) = 0.1.
$$
We are looking for:
$$
\begin{array}{ccc}
p(\mbox{has tuberculosis}|\mbox{test is positive},I) &=& p(B|A,I)\\
&=& \frac{p(AB|I)}{p(A|I)} \\
&=& \frac{p(A|B,I)p(B|I)}{p(A|B,I)p(B|I) + p(A|\neg B, I)p(\neg B|I)}\\
&=& \frac{0.8\times 0.004}{0.8\times 0.004 + 0.1 \times 0.996}\\
&\approx& 0.031.
\end{array}
$$
How much would you pay for such a test?
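The arithmetic above is compact enough to check directly; this is just the numbers from the prior information plugged into the sum and product rules:

```python
p_B = 0.004            # prior: patient has tuberculosis
p_A_given_B = 0.8      # test positive given disease
p_A_given_not_B = 0.1  # test positive given no disease

# Sum rule: p(A|I) = p(A|B,I)p(B|I) + p(A|not B,I)p(not B|I)
p_A = p_A_given_B * p_B + p_A_given_not_B * (1 - p_B)
# Product rule rearranged: p(B|A,I) = p(A|B,I)p(B|I) / p(A|I)
posterior = p_A_given_B * p_B / p_A
print(round(posterior, 3))  # 0.031
```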
## Conditional Independence
We say that $A$ and $B$ are **independent** (conditional on I), and write,
$$
A\perp B|I,
$$
if knowledge of one does not yield any information about the other. Mathematically, by $A\perp B|I$, we mean that:
$$
p(A|B,I) = p(A|I).
$$
Using the product rule, we can easily show that:
$$
A\perp B|I \iff p(AB|I) = p(A|I)p(B|I).
$$
### Question
+ Give an example of $I, A$ and $B$ so that $A\perp B|I$.
Now, let $C$ be another event. We say that $A$ and $B$ are **independent** conditional on $C$ (and I), and write:
$$
A\perp B|C,I,
$$
if knowledge of $C$ makes information about $A$ irrelevant to $B$ (and vice versa). Mathematically, we mean that:
$$
p(A|B,C,I) = p(A|C,I).
$$
### Question
+ Give an example of $I,A,B,C$ so that $A\perp B|C,I$.
## Random Variables
The formal mathematical definition of a random variable involves measure theory and is well beyond the scope of this course.
Fortunately, we do not have to go through that route to get a theory that is useful in applications.
For us, a **random variable** $X$ will just be a variable of our problem whose value is unknown to us.
Note that you should not take the word "random" too literally.
If we could, we would change the name to **uncertain** or **unknown** variable.
A random variable could correspond to something fixed but unknown, e.g., the number of balls in a box,
or it could correspond to something truly random, e.g., the number of particles that hit a [Geiger counter](https://en.wikipedia.org/wiki/Geiger_counter) in a specific time interval.
### Discrete Random Variables
We say that a random variable $X$ is discrete, if the possible values it can take are discrete (possibly countably infinite).
We write:
$$
p(X = x|I)
$$
and we read "the probability of $X$ being $x$".
If it does not cause any ambiguity, sometimes we will simplify the notation to:
$$
p(x) \equiv p(X=x|I).
$$
Note that $p(X=x)$ is actually a discrete function of $x$ which depends on our beliefs about $X$.
The function $p(x) = p(X=x|I)$ is known as the probability mass function of $X$.
Now let $Y$ be another random variable.
The **sum rule** becomes:
$$
p(X=x|I) = \sum_{y}p(X=x,Y=y|I) = \sum_y p(X=x|Y=y,I)p(Y=y|I),
$$
or in simpler notation:
$$
p(x) = \sum_y p(x,y) = \sum_y p(x|y)p(y).
$$
The function $p(X=x, Y=y|I) \equiv p(x, y)$ is known as the joint *probability mass function* of $X$ and $Y$.
The **product rule** becomes:
$$
p(X=x,Y=y|I) = p(X=x|Y=y,I)p(Y=y|I),
$$
or in simpler notation:
$$
p(x,y) = p(x|y)p(y).
$$
We say that $X$ and $Y$ are **independent** and write:
$$
X\perp Y|I,
$$
if knowledge of one does not yield any information about the other.
Mathematically, $Y$ gives no information about $X$ if:
$$
p(x|y) = p(x).
$$
From the product rule, however, we get that:
$$
p(x) = p(x|y) = \frac{p(x,y)}{p(y)},
$$
from which we see that the joint distribution of $X$ and $Y$ must factorize as:
$$
p(x, y) = p(x) p(y).
$$
It is trivial to show that if this factorization holds, then
$$
p(y|x) = p(y),
$$
and thus $X$ yields no information about $Y$ either.
### Continuous Random Variables
A random variable $X$ is continuous if the possible values it can take are continuous. The probability of a continuous variable getting a specific value is always zero. Therefore, we cannot work directly with probability mass functions as we did for discrete random variables. We would have to introduce the concepts of the **cumulative distribution function** and the **probability density function**. Fortunately, with the right choice of mathematical symbols, the theory will look exactly the same.
Let us start with a real continuous random variable $X$, i.e., a random variable taking values in the real line $\mathbb{R}$. Let $x \in\mathbb{R}$ and consider the probability of $X$ being less than or equal to $x$:
$$
F(x) := p(X\le x|I).
$$
$F(x)$ is known as the **cumulative distribution function** (CDF). Here are some properties of the CDF whose proof is
left as an exercise:
+ The CDF starts at zero and goes up to one:
$$
F(-\infty) = 0\;\mbox{and}\;F(+\infty) = 1.
$$
+ $F(x)$ is an increasing function of $x$, i.e.,
$$
x_1 \le x_2 \implies F(x_1)\le F(x_2).
$$
+ The probability of $X$ being in the interval $[x_1,x_2]$ is:
$$
p(x_1 \le X \le x_2|I) = F(x_2) - F(x_1).
$$
Now, assume that the derivative of $F(x)$ with respect to $x$ exists.
Let us call it $f(x)$:
$$
f(x) = \frac{dF(x)}{dx}.
$$
Using the fundamental theorem of calculus, it is trivial to show that the interval probability formula for the CDF above implies:
\begin{equation}
p(x_1 \le X \le x_2|I) = \int_{x_1}^{x_2}f(x)dx.
\end{equation}
$f(x)$ is known as the **probability density function** (PDF) and it is measured in probability per unit of $X$.
To see this note that:
$$
p(x \le X \le x + \delta x|I) = \int_{x}^{x+\delta x}f(x')dx' \approx f(x)\delta x,
$$
so that:
$$
f(x) \approx \frac{p(x \le X \le x + \delta x|I)}{\delta x}.
$$
The PDF should satisfy the following properties:
+ It should be positive
$$
f(x) \ge 0,
$$
+ It should integrate to one:
$$
\int_{-\infty}^{\infty} f(x) dx = 1.
$$
#### Notation about the PDF of continuous random variables
In order to make all the formulas of probability theory the same, we define for a continuous random variable $X$:
$$
p(x) := f(x) = \frac{dF(x)}{dx} = \frac{d}{dx}p(X \le x|I).
$$
But keep in mind, that if $X$ is continuous $p(x)$ is not a probability but a probability density.
That is, it needs a $dx$ to become a probability.
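As a concrete check of the CDF/PDF relationship, consider an assumed example, the exponential distribution with unit rate, where $F(x) = 1 - e^{-x}$ and $f(x) = e^{-x}$:

```python
import math

F = lambda x: 1.0 - math.exp(-x)   # CDF of the unit-rate exponential
f = lambda x: math.exp(-x)         # its PDF, f = dF/dx

# p(1 <= X <= 2) two ways: CDF difference vs. numerically integrating f.
a, b, n = 1.0, 2.0, 10_000
dx = (b - a) / n
integral = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx  # midpoint rule
print(round(F(b) - F(a), 6))  # 0.232544
print(round(integral, 6))     # agrees to many decimal places
```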
Let the PDF $p(x)$ of $X$ and the PDF $p(y)$ of $Y$ ($Y$ is another continuous random variable).
We can find the PDF of the random variable $X$ conditioned on $Y$, i.e., the PDF of $X$ if $Y$ is directly observed.
This is the **product rule** for continuous random variables:
\begin{equation}
\label{eq:continuous_bayes}
p(x|y) = \frac{p(x, y)}{p(y)},
\end{equation}
where $p(x,y)$ is the **joint PDF** of $X$ and $Y$.
The **sum rule** for continous random variables is:
\begin{equation}
\label{eq:continuous_sum}
p(x) = \int p(x, y) dy = \int p(x | y) p(y) dy.
\end{equation}
The similarity between these rules and the discrete ones is obvious.
We have prepared a table to help you remember it.
| Concept | Discrete Random Variables | Continuous Random Variables |
|---|---------------|-----------------|
|$p(x)$| in units of probability | in units of probability per unit of $X$|
|sum rule| $\sum_y p(x,y) = \sum_y p(x|y)p(y)$ | $\int p(x,y) dy = \int p(x|y) p(y) dy$|
|product rule| $p(x,y) = p(x|y)p(y)$ | $p(x,y) = p(x|y)p(y)$|
## Expectations
Let $X$ be a random variable. The expectation of $X$ is defined to be:
$$
\mathbb{E}[X] := \mathbb{E}[X | I] = \int x p(x) dx.
$$
Now let $g(x)$ be any function. The expectation of $g(X)$, i.e., the random variable defined after passing $X$ through $g(\cdot)$, is:
$$
\mathbb{E}[g(X)] := \mathbb{E}[g(X)|I] = \int g(x)p(x)dx.
$$
As usual, "expectation" is not a very good name for $\mathbb{E}[\cdot]$.
You may think of $\mathbb{E}[g(X)]$ as the expected value of $g(X)$, but do not take the name too literally.
Can you think of an example in which the expected value is never actually observed?
### Conditional Expectation
Let $X$ and $Y$ be two random variables. The conditional expectation of $X$ given $Y=y$ is defined to be:
$$
\mathbb{E}[X|Y=y] := \mathbb{E}[X|Y=y,I] = \int xp(x|y)dx.
$$
### Properties of Expectations
The following properties of expectations of random variables are extremely helpful. In what follows, $X$ and $Y$ are random variables and $c$ is a constant:
+ Sum of random variable with a constant:
$$
\mathbb{E}[X+c] = \mathbb{E}[X] + c.
$$
+ Sum of two random variables:
$$
\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y].
$$
+ Product of random variable with constant:
$$
\mathbb{E}[cX] = c\mathbb{E}[X].
$$
+ If $X\perp Y$, then:
$$
\mathbb{E}[XY] = \mathbb{E}[X]\mathbb{E}[Y].
$$
**NOTE**: This property does not hold if $X$ and $Y$ are not independent!
+ If $f(\cdot)$ is a convex function, then:
$$
f(\mathbb{E}[X]) \le \mathbb{E}[f(X)].
$$
**NOTE**: The equality holds only if $f(\cdot)$ is linear!
### Variance of a Random Variable
The variance of $X$ is defined to be:
$$
\mathbb{V}[X] = \mathbb{E}\left[\left(X - \mathbb{E}[X]\right)^2\right].
$$
It is easy to prove (and a very useful formula to remember) that:
$$
\mathbb{V}[X] = \mathbb{E}[X^2] - \left(\mathbb{E}[X]\right)^2.
$$
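A quick numeric check of this identity for a fair six-sided die:

```python
xs = [1, 2, 3, 4, 5, 6]
p = 1 / 6                                      # uniform probability mass
EX = sum(x * p for x in xs)                    # E[X] = 3.5
EX2 = sum(x ** 2 * p for x in xs)              # E[X^2] = 91/6
var_def = sum((x - EX) ** 2 * p for x in xs)   # definition of variance
var_formula = EX2 - EX ** 2                    # the identity above
print(round(var_def, 4), round(var_formula, 4))  # 2.9167 2.9167
```

Note that the expected value 3.5 is itself a value the die can never show, which answers the earlier question about expectations.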
### Covariance of Two Random Variables
Let $X$ and $Y$ be two random variables.
The covariance between $X$ and $Y$ is defined to be:
$$
\mathbb{C}[X, Y] = \mathbb{E}\left[\left(X - \mathbb{E}[X]\right)
\left(Y-\mathbb{E}[Y]\right)\right]
$$
### Properties of the Variance
Let $X$ and $Y$ be random variables and $c$ be a constant.
Then:
+ Sum of random variable with a constant:
$$
\mathbb{V}[X + c] = \mathbb{V}[X].
$$
+ Product of random variable with a constant:
$$
\mathbb{V}[cX] = c^2\mathbb{V}[X].
$$
+ Sum of two random variables:
$$
\mathbb{V}[X+Y] = \mathbb{V}[X] + \mathbb{V}[Y] + 2\mathbb{C}[X,Y].
$$
+ Sum of two independent random variables:
$$
\mathbb{V}[X+Y] = \mathbb{V}[X] + \mathbb{V}[Y].
$$
# References
(<a id="cit-jaynes2003" href="#call-jaynes2003">Jaynes, 2003</a>) E T Jaynes, ``_Probability Theory: The Logic of Science_'', 2003. [online](http://bayes.wustl.edu/etj/prob/book.pdf)
# Chapter 3: Variational Autoencoders
An _autoencoder_ is an artificial neural network with two parts:
- An _encoder_ network which finds a representation of high dimensional data in a lower dimensional, or _latent_, space.
- A _decoder_ network which reconstructs samples of the original data from elements of the latent space.
## Your First Autoencoder
Below is an example of an autoencoder which uses convolutional layers for the encoder network and _convolutional transpose layers_ for the decoder. The model uses MSE for the loss function.
```
# Mount Google drive to save training checkpoints
from google.colab import drive
drive.mount('/content/gdrive/')
# Setup the model
from tensorflow.keras.layers import (Input, Conv2D, LeakyReLU, Flatten, Dense,
Reshape, Conv2DTranspose, Activation)
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K
import numpy as np
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint
class Autoencoder():
"""Implements an autoencoder in Keras with an API for using the model."""
def __init__(self, input_dim, encoder_conv_filters, encoder_conv_kernel_size,
encoder_conv_strides, z_dim, decoder_conv_filters,
decoder_conv_kernel_size, decoder_conv_strides):
encoder_input = Input(input_dim, name='encoder_input')
x = encoder_input
for i in range(len(encoder_conv_filters)):
x = Conv2D(filters=encoder_conv_filters[i],
kernel_size=encoder_conv_kernel_size[i],
strides=encoder_conv_strides[i],
padding='same', name='encoder_conv_{}'.format(i))(x)
x = LeakyReLU()(x)
shape_before_flattening = K.int_shape(x)[1:]
x = Flatten()(x)
encoder_output = Dense(z_dim)(x)
encoder = Model(encoder_input, encoder_output)
self.encoder = encoder
decoder_input = Input(shape=(z_dim,), name='decoder_input')
x = Dense(np.prod(shape_before_flattening))(decoder_input)
x = Reshape(shape_before_flattening)(x)
for i in range(len(decoder_conv_filters)):
x = Conv2DTranspose(filters=decoder_conv_filters[i],
kernel_size=decoder_conv_kernel_size[i],
strides=decoder_conv_strides[i],
padding='same', name='decoder_conv_{}'.format(i))(x)
if i < len(decoder_conv_filters) - 1:
x = LeakyReLU()(x)
else:
x = Activation('sigmoid')(x)
decoder_output = x
decoder = Model(decoder_input, decoder_output)
self.decoder = decoder
# Joining the models.
self.model = Model(encoder_input, decoder(encoder_output))
self.compiled = False
def compile(self, learning_rate):
"""Compile the model."""
if self.compiled:
return
    opt = Adam(learning_rate=learning_rate)
mse = lambda y_act, y_pred: K.mean(K.square(y_act - y_pred), axis=(1, 2, 3))
self.model.compile(opt, loss=mse)
self.compiled = True
def fit(self, X, y, batch_size, epochs, checkpoint_path=None):
"""Train the model."""
if not self.compiled:
raise Exception('Model not compiled')
callbacks = []
if checkpoint_path:
callbacks.append(ModelCheckpoint(filepath=checkpoint_path, verbose=1,
save_weights_only=True))
self.model.fit(X, y, batch_size=batch_size, epochs=epochs, shuffle=True,
callbacks=callbacks)
def load(self, checkpoint_path):
"""Load the model weights from the saved checkpoint file."""
self.model.load_weights(checkpoint_path)
# Load the MNIST dataset.
from tensorflow.keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Reshape these to be 4D tensors and scale the pixel values to [0, 1].
X_train = X_train.reshape(X_train.shape + (1,)) / 255.0
X_test = X_test.reshape(X_test.shape + (1,)) / 255.0
autoencoder = Autoencoder(input_dim=X_train.shape[1:],
encoder_conv_filters=(32, 64, 64, 64),
encoder_conv_kernel_size=(3, 3, 3, 3),
encoder_conv_strides=(1, 2, 2, 1),
z_dim=2,
decoder_conv_filters=(64, 64, 32, 1),
decoder_conv_kernel_size=(3, 3, 3, 3),
decoder_conv_strides=(1, 2, 2, 1))
autoencoder.model.summary()
```
Now we will train the model for 200 training epochs, saving the model as we go using the `ModelCheckpoint` class. This will let us load the model later without having to go through the long training process.
```
# The path to save the model weights to in Drive.
checkpoint_path = '/content/gdrive/My Drive/gdl_models/autoencoder.hdf5'
autoencoder.compile(learning_rate=0.0005)
# Train the model. I removed the logs to remove visual noise.
autoencoder.fit(X_train, X_train, epochs=200, batch_size=32,
checkpoint_path=checkpoint_path)
```
## Analysis of the Autoencoder
```
# Load the model from the checkpoint.
autoencoder.load(checkpoint_path)
```
### Reconstructing Digits
The following code plots a row of digits from the test set, with the model's reconstruction of each image on the second row.
```
import matplotlib.pyplot as plt
n_to_show = 10
example_idx = np.random.choice(range(len(X_test)), n_to_show)
X_pred = autoencoder.model.predict(X_test[example_idx])
fig = plt.figure(figsize=(15, 3))
fig.subplots_adjust(hspace=0.4, wspace=0.4)
for i, x in enumerate(X_test[example_idx]):
ax = fig.add_subplot(2, n_to_show, i + 1)
ax.axis('off')
ax.imshow(x.squeeze(), cmap='gray_r')
for i, x in enumerate(X_pred):
ax = fig.add_subplot(2, n_to_show, n_to_show + i + 1)
ax.axis('off')
ax.imshow(x.squeeze(), cmap='gray_r')
```
### Plotting the Latent Space
The following plots a 2D scatter of the first 5,000 test images encoded as points in the latent space. Each point's color indicates the digit's actual label.
```
X_enc = autoencoder.encoder.predict(X_test[:5000])
plt.figure(figsize=(14, 12))
plt.scatter(X_enc[:,0], X_enc[:,1], cmap='rainbow', s=2, alpha=0.6,
c=y_test[:5000])
plt.colorbar()
```
### Trying to Generate New Images
Below is an example that illustrates the problem with sampling from an autoencoder's latent space: drawing random points from it may not yield good results.
```
from random import random
min_x = min(X_enc[:, 0])
max_x = max(X_enc[:, 0])
min_y = min(X_enc[:, 1])
max_y = max(X_enc[:, 1])
n_to_show = 10
fig = plt.figure(figsize=(15, 3))
fig.subplots_adjust(hspace=0.4, wspace=0.4)
for i in range(n_to_show):
ax = fig.add_subplot(1, n_to_show, i + 1)
ax.axis('off')
x = min_x + (random() * (max_x - min_x))
y = min_y + (random() * (max_y - min_y))
img = autoencoder.decoder.predict([[x, y]]).squeeze()
ax.imshow(img, cmap='gray_r')
```
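The custom loss used to compile the autoencoder above averages the squared error over each image's height, width, and channel axes, yielding one loss value per sample. A minimal NumPy sketch of that same computation on toy 4D tensors (not the trained model):

```python
import numpy as np

# Per-image mean squared error, mirroring
# K.mean(K.square(y_act - y_pred), axis=(1, 2, 3)):
# average over height, width, and channels, keeping the batch axis.
def per_image_mse(y_act, y_pred):
    return np.mean(np.square(y_act - y_pred), axis=(1, 2, 3))

# Two 2x2 single-channel "images", all-zeros vs. all-ones.
y_act = np.zeros((2, 2, 2, 1), dtype=np.float32)
y_pred = np.ones((2, 2, 2, 1), dtype=np.float32)

losses = per_image_mse(y_act, y_pred)
print(losses)  # one loss value per image: [1. 1.]
```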
```
!pip install mmcv-full==1.5.0 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11.0/index.html
!pip install mmdet==2.24.1
!pip install mmpose==0.25.1
!git clone https://github.com/hysts/anime-face-detector
from IPython.display import clear_output
clear_output()
```
If you encounter the following error in Colab, restart the runtime and then re-run the cells.
```
xtcocotools/_mask.pyx in init xtcocotools._mask()
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
```
```
%cd anime-face-detector
#@title import packages
import cv2
import matplotlib.pyplot as plt
import numpy as np
import anime_face_detector
#@title Contour Definition
# https://github.com/hysts/anime-face-detector/blob/main/assets/landmarks.jpg
FACE_BOTTOM_OUTLINE = np.arange(0, 5)
LEFT_EYEBROW = np.arange(5, 8)
RIGHT_EYEBROW = np.arange(8, 11)
LEFT_EYE_TOP = np.arange(11, 14)
LEFT_EYE_BOTTOM = np.arange(14, 17)
RIGHT_EYE_TOP = np.arange(17, 20)
RIGHT_EYE_BOTTOM = np.arange(20, 23)
NOSE = np.array([23])
MOUTH_OUTLINE = np.arange(24, 28)
FACE_OUTLINE_LIST = [FACE_BOTTOM_OUTLINE, LEFT_EYEBROW, RIGHT_EYEBROW]
LEFT_EYE_LIST = [LEFT_EYE_TOP, LEFT_EYE_BOTTOM]
RIGHT_EYE_LIST = [RIGHT_EYE_TOP, RIGHT_EYE_BOTTOM]
NOSE_LIST = [NOSE]
MOUTH_OUTLINE_LIST = [MOUTH_OUTLINE]
# (indices, BGR color, is_closed)
CONTOURS = [
(FACE_OUTLINE_LIST, (0, 170, 255), False),
(LEFT_EYE_LIST, (50, 220, 255), False),
(RIGHT_EYE_LIST, (50, 220, 255), False),
(NOSE_LIST, (255, 30, 30), False),
(MOUTH_OUTLINE_LIST, (255, 30, 30), True),
]
#@title Visualization Function
def visualize_box(image,
box,
score,
lt,
box_color=(0, 255, 0),
text_color=(255, 255, 255),
show_box_score=True):
cv2.rectangle(image, tuple(box[:2]), tuple(box[2:]), box_color, lt)
if not show_box_score:
return
cv2.putText(image,
f'{round(score * 100, 2)}%', (box[0], box[1] - 2),
0,
lt / 2,
text_color,
thickness=max(lt, 1),
lineType=cv2.LINE_AA)
def visualize_landmarks(image, pts, lt, landmark_score_threshold):
for *pt, score in pts:
pt = tuple(np.round(pt).astype(int))
if score < landmark_score_threshold:
color = (0, 255, 255)
else:
color = (0, 0, 255)
cv2.circle(image, pt, lt, color, cv2.FILLED)
def draw_polyline(image, pts, color, closed, lt, skip_contour_with_low_score,
score_threshold):
if skip_contour_with_low_score and (pts[:, 2] < score_threshold).any():
return
pts = np.round(pts[:, :2]).astype(int)
cv2.polylines(image, np.array([pts], dtype=np.int32), closed, color, lt)
def visualize_contour(image, pts, lt, skip_contour_with_low_score,
score_threshold):
for indices_list, color, closed in CONTOURS:
for indices in indices_list:
draw_polyline(image, pts[indices], color, closed, lt,
skip_contour_with_low_score, score_threshold)
def visualize(image: np.ndarray,
preds: np.ndarray,
face_score_threshold: float,
landmark_score_threshold: float,
show_box_score: bool = True,
draw_contour: bool = True,
skip_contour_with_low_score=False):
res = image.copy()
for pred in preds:
box = pred['bbox']
box, score = box[:4], box[4]
box = np.round(box).astype(int)
pred_pts = pred['keypoints']
# line_thickness
lt = max(2, int(3 * (box[2:] - box[:2]).max() / 256))
visualize_box(res, box, score, lt, show_box_score=show_box_score)
if draw_contour:
visualize_contour(
res,
pred_pts,
lt,
skip_contour_with_low_score=skip_contour_with_low_score,
score_threshold=landmark_score_threshold)
visualize_landmarks(res, pred_pts, lt, landmark_score_threshold)
return res
#@title Detector
device = 'cuda:0' #@param ['cuda:0', 'cpu']
model = 'yolov3' #@param ['yolov3', 'faster-rcnn']
detector = anime_face_detector.create_detector(model, device=device)
#@title Visualization Arguments
face_score_threshold = 0.5 #@param {type: 'slider', min: 0, max: 1, step:0.1}
landmark_score_threshold = 0.3 #@param {type: 'slider', min: 0, max: 1, step:0.1}
show_box_score = True #@param {'type': 'boolean'}
draw_contour = True #@param {'type': 'boolean'}
skip_contour_with_low_score = True #@param {'type': 'boolean'}
```
# image test
```
image = cv2.imread('assets/input.jpg')
preds = detector(image)
res = visualize(image, preds, face_score_threshold, landmark_score_threshold,
show_box_score, draw_contour, skip_contour_with_low_score)
plt.figure(figsize=(30, 30))
plt.imshow(res[:, :, ::-1])
plt.axis('off')
plt.show()
```
# video test
```
# https://www.sakugabooru.com/post/show/43401
!wget -q https://www.sakugabooru.com/data/f47f699b9c5afc5a849be4b974f40975.mp4 -O input_vid.mp4
from moviepy.editor import VideoFileClip
# skip frame
speedx = 2
clip = VideoFileClip('input_vid.mp4').subfx(lambda c: c.speedx(speedx))
clip.write_videofile('input_vid_clip.mp4')
clip.close()
from tqdm.auto import tqdm
cap = cv2.VideoCapture('input_vid_clip.mp4')
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
frames_per_second = cap.get(cv2.CAP_PROP_FPS) / speedx
num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
writer = cv2.VideoWriter(
filename='/content/anime-face-detector/output_vid.mp4',
# some installation of opencv may not support x264 (due to its license),
# you can try other format (e.g. MPEG)
fourcc=cv2.VideoWriter_fourcc(*'MPEG'),
fps=frames_per_second,
frameSize=(width, height),
isColor=True)
# Colab CPU 3.27s/it, Colab GPU 2.75it/s
with tqdm(total=num_frames) as pbar:
while True:
ok, frame = cap.read()
if not ok:
break
pbar.update()
preds = detector(frame)
vis_frame = visualize(
frame,
preds,
face_score_threshold=face_score_threshold,
landmark_score_threshold=landmark_score_threshold,
show_box_score=show_box_score,
draw_contour=draw_contour,
skip_contour_with_low_score=skip_contour_with_low_score)
writer.write(vis_frame)
cap.release()
writer.release()
!ffmpeg -i output_vid.mp4 -c:v libx264 -hide_banner -loglevel error -y out.mp4
from base64 import b64encode
from IPython.display import HTML
HTML(f"""
<video height=400 controls loop>
<source src="data:video/mp4;base64,{b64encode(open('out.mp4','rb').read()).decode()}" type="video/mp4">
</video>
""")
```
```
import os
import urllib.request
import tarfile
import shutil
import glob
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Download the 17flowers dataset
urllib.request.urlretrieve(
'http://www.robots.ox.ac.uk/~vgg/data/flowers/17/17flowers.tgz',
'17flowers.tgz'
)
# Extract the 17flowers dataset
with tarfile.open('17flowers.tgz') as tar:
tar.extractall()
os.rename('jpg', '17flowers')
# Define the 17flowers dataset labels
labels = ['Tulip', 'Snowdrop', 'LilyValley', 'Bluebell', 'Crocus',
'Iris', 'Tigerlily', 'Daffodil', 'Fritillary', 'Sunflower',
'Daisy', 'ColtsFoot', 'Dandelion', 'Cowslip', 'Buttercup',
'Windflower', 'Pansy']
# Prepare the train/validation directory paths
train_dir = os.path.join(os.getcwd(), '17flowers', 'train')
validation_dir = os.path.join(os.getcwd(), '17flowers', 'validation')
# Create a directory for each label under train/validation
os.mkdir(train_dir)
os.mkdir(validation_dir)
for directory_name in labels:
os.mkdir(os.path.join(os.getcwd(), '17flowers', 'train', directory_name))
os.mkdir(os.path.join(os.getcwd(), '17flowers', 'validation', directory_name))
# Place the dataset files into train/validation
dataset_number = 80
train_ratio = 0.9
train_number = int(dataset_number * train_ratio)
jpg_files = [f for f in sorted(os.listdir('17flowers')) if f.endswith('.jpg')]
for index, jpg_file in enumerate(jpg_files):
if (index % dataset_number) < train_number:
destination_directory = 'train'
else:
destination_directory = 'validation'
src = os.path.join(os.getcwd(), '17flowers', jpg_file)
dst = os.path.join(os.getcwd(), '17flowers', destination_directory, labels[index // dataset_number])
shutil.move(src, dst)
# Get the list of jpg files under train/validation
train_files = glob.glob(os.path.join(train_dir, '*', '*.jpg'))
validation_files = glob.glob(os.path.join(validation_dir, '*', '*.jpg'))
# Load the Xception model (include_top=False: exclude the fully connected layers on the output side of the network)
xception_base_model = tf.keras.applications.xception.Xception(include_top=False, weights='imagenet', input_shape=(299, 299, 3))
# Show the model summary
xception_base_model.summary()
# Freeze some layers so they are not retrained
layer_names = [l.name for l in xception_base_model.layers]
layer_index = layer_names.index('block3_sepconv1')
for layer in xception_base_model.layers[:layer_index]:
layer.trainable = False
for layer in xception_base_model.layers:
print(layer, layer.trainable)
# Redefine the fully connected output layers and build the model
x = tf.keras.layers.Flatten()(xception_base_model.output)
x = tf.keras.layers.Dense(512, activation='relu')(x)
output = tf.keras.layers.Dense(17, activation='softmax', name='last_output')(x)
xception_model = tf.keras.Model(inputs=xception_base_model.inputs, outputs=output, name='model')
# Compile the model (for transfer learning / fine-tuning, SGD often works better than Adam)
xception_model.compile(
optimizer='sgd',
loss='categorical_crossentropy',
metrics=['accuracy']
)
# Show the model summary
xception_model.summary()
# Define ImageDataGenerators for train/validation (uncomment the options to enable data augmentation)
train_image_generator = ImageDataGenerator(
width_shift_range=2,
height_shift_range=2,
brightness_range=(0.8, 1.2),
channel_shift_range=0.2,
zoom_range=0.02,
rotation_range=2,
)
validation_image_generator = ImageDataGenerator()
# Prepare to load data from directories using ImageDataGenerator
batch_size = 32
epochs = 10
IMG_HEIGHT = 299
IMG_WIDTH = 299
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
classes=labels,
class_mode='categorical')
validation_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
classes=labels,
class_mode='categorical')
# Model checkpoint callback (every epoch)
checkpoint_path = os.path.join(os.getcwd(), 'checkpoints', 'weights.{epoch:03d}-{val_loss:.3f}-{val_accuracy:.3f}.hdf5')
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
verbose=1,
save_best_only=True,
mode='auto',
save_weights_only=False,
save_freq='epoch')
# Define a callback that reduces the learning rate when the validation metric stops improving
lr_callback = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5,
verbose=1, mode='auto', min_delta=0.0001,
cooldown=3, min_lr=0)
!mkdir checkpoints
# Train
history = xception_model.fit(
train_data_gen,
steps_per_epoch=len(train_files) // batch_size,
epochs=epochs,
validation_data=validation_data_gen,
validation_steps=len(validation_files) // batch_size,
callbacks=[cp_callback, lr_callback]
)
!ls checkpoints
# Load the saved model
# load_model = tf.keras.models.load_model("checkpoints/weights.***-****-****.hdf5") # specify the generated checkpoint file
load_model = tf.keras.models.load_model("checkpoints/weights.007-0.619-0.836.hdf5") # specify the generated checkpoint file
# Load one test image
from IPython.display import Image, display_png
from tensorflow.keras.preprocessing.image import img_to_array, load_img
img = tf.keras.preprocessing.image.load_img('17flowers/validation/Sunflower/image_0793.jpg', False, target_size=(299, 299))
display_png(img)
# Reshape and normalize the input image
x = img_to_array(img)
x = x.reshape(-1, 299, 299, 3)
x = x.astype('float32')
# Run inference
with tf.device("CPU:0"):  # remove the with block if CUDA and cuDNN are installed correctly
predict_result = load_model.predict(x)
# Show the inference results
print(predict_result)
print(np.squeeze(predict_result))
print(np.argmax(np.squeeze(predict_result)))
print(labels[np.argmax(np.squeeze(predict_result))])
```
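The train/validation split above relies on the 17flowers file ordering: the sorted jpg files contain 80 consecutive images per class, and the first 90% of each class goes to train. The index arithmetic can be sketched in pure Python:

```python
# Sketch of the split arithmetic used above: files are sorted, with
# 80 consecutive images per class; the first 90% of each class goes to train.
dataset_number = 80  # images per class
train_ratio = 0.9
train_number = int(dataset_number * train_ratio)  # 72

def destination(index):
    """Return ('train'|'validation', class_index) for the index-th sorted file."""
    split = "train" if (index % dataset_number) < train_number else "validation"
    return split, index // dataset_number

print(destination(0))   # ('train', 0)
print(destination(79))  # ('validation', 0)
print(destination(80))  # ('train', 1)
```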
```
import pandas as pd
import numpy as np
import requests
import bs4 as bs
import urllib.request
```
## Extracting features of 2020 movies from Wikipedia
```
link = "https://en.wikipedia.org/wiki/List_of_American_films_of_2020"
source = urllib.request.urlopen(link).read()
soup = bs.BeautifulSoup(source,'lxml')
tables = soup.find_all('table',class_='wikitable sortable')
len(tables)
type(tables[0])
df1 = pd.read_html(str(tables[0]))[0]
df2 = pd.read_html(str(tables[1]))[0]
df3 = pd.read_html(str(tables[2]))[0]
df4 = pd.read_html(str(tables[3]).replace("'1\"\'",'"1"'))[0] # avoided "ValueError: invalid literal for int() with base 10: '1"'
df = pd.concat([df1, df2, df3, df4], ignore_index=True)
df
df_2020 = df[['Title','Cast and crew']]
df_2020
!pip install tmdbv3api
from tmdbv3api import TMDb
import json
import requests
tmdb = TMDb()
tmdb.api_key = 'eba1feb73d94c9d6c0b134aec310d3ea'
from tmdbv3api import Movie
tmdb_movie = Movie()
def get_genre(x):
genres = []
result = tmdb_movie.search(x)
if not result:
return np.NaN
else:
movie_id = result[0].id
        response = requests.get('https://api.themoviedb.org/3/movie/{}?api_key={}'.format(movie_id, tmdb.api_key))
data_json = response.json()
if data_json['genres']:
genre_str = " "
for i in range(0,len(data_json['genres'])):
genres.append(data_json['genres'][i]['name'])
return genre_str.join(genres)
else:
return np.NaN
df_2020['genres'] = df_2020['Title'].map(lambda x: get_genre(str(x)))
df_2020
def get_director(x):
if " (director)" in x:
return x.split(" (director)")[0]
elif " (directors)" in x:
return x.split(" (directors)")[0]
else:
return x.split(" (director/screenplay)")[0]
df_2020['director_name'] = df_2020['Cast and crew'].map(lambda x: get_director(str(x)))
def get_actor1(x):
return ((x.split("screenplay); ")[-1]).split(", ")[0])
df_2020['actor_1_name'] = df_2020['Cast and crew'].map(lambda x: get_actor1(str(x)))
def get_actor2(x):
if len((x.split("screenplay); ")[-1]).split(", ")) < 2:
return np.NaN
else:
return ((x.split("screenplay); ")[-1]).split(", ")[1])
df_2020['actor_2_name'] = df_2020['Cast and crew'].map(lambda x: get_actor2(str(x)))
def get_actor3(x):
if len((x.split("screenplay); ")[-1]).split(", ")) < 3:
return np.NaN
else:
return ((x.split("screenplay); ")[-1]).split(", ")[2])
df_2020['actor_3_name'] = df_2020['Cast and crew'].map(lambda x: get_actor3(str(x)))
df_2020
df_2020 = df_2020.rename(columns={'Title':'movie_title'})
new_df20 = df_2020.loc[:,['director_name','actor_1_name','actor_2_name','actor_3_name','genres','movie_title']]
new_df20
new_df20['comb'] = new_df20['actor_1_name'] + ' ' + new_df20['actor_2_name'] + ' '+ new_df20['actor_3_name'] + ' '+ new_df20['director_name'] +' ' + new_df20['genres']
new_df20.isna().sum()
new_df20 = new_df20.dropna(how='any')
new_df20.isna().sum()
new_df20['movie_title'] = new_df20['movie_title'].str.lower()
new_df20
old_df = pd.read_csv('./datasets/final_data.csv')
old_df
final_df = pd.concat([old_df, new_df20], ignore_index=True)
final_df
final_df.to_csv('./datasets/main_data.csv',index=False)
```
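The `get_director`/`get_actor*` helpers above all key off the Wikipedia "Cast and crew" cell format, roughly `Director (director/screenplay); Actor 1, Actor 2, ...`. A self-contained sketch of the same split logic on a made-up credit string (the names are hypothetical, not from the scraped data):

```python
# Hypothetical "Cast and crew" cell in the Wikipedia format the helpers expect.
crew = "Jane Doe (director/screenplay); Alice Smith, Bob Jones, Carol White"

def get_director(x):
    # Try the same role markers the notebook splits on, in order.
    for marker in (" (director)", " (directors)", " (director/screenplay)"):
        if marker in x:
            return x.split(marker)[0]
    return x

def get_actor(x, position):
    # Everything after the role markers is a comma-separated actor list.
    actors = x.split("screenplay); ")[-1].split(", ")
    return actors[position] if len(actors) > position else None

print(get_director(crew))  # Jane Doe
print(get_actor(crew, 0))  # Alice Smith
print(get_actor(crew, 2))  # Carol White
```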
# MNIST Softmax Estimation
Note: This notebook is designed to run with a Python3 and CPU (no GPU) runtime.

This notebook uses TensorFlow2.x.
```
%tensorflow_version 2.x
```
####[MSE-01]
Import modules.
```
import numpy as np
from pandas import DataFrame
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
```
####[MSE-02]
Download the MNIST dataset and store into NumPy arrays.
```
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape(
(len(train_images), 784)).astype('float32') / 255
test_images = test_images.reshape(
(len(test_images), 784)).astype('float32') / 255
train_labels = tf.keras.utils.to_categorical(train_labels, 10)
test_labels = tf.keras.utils.to_categorical(test_labels, 10)
```
####[MSE-03]
Define a model for the softmax estimation.
```
model = models.Sequential()
model.add(layers.Dense(10, activation='softmax', input_shape=(28*28,),
name='softmax'))
model.summary()
```
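For intuition, the softmax activation in the layer above just exponentiates each logit and normalizes so the outputs form a probability distribution over the ten classes. A pure-Python sketch on toy logits (not the trained weights):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)                    # probabilities summing to 1
print(probs.index(max(probs)))  # index of the predicted class: 0
```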
####[MSE-04]
Compile the model using the Adam optimizer and cross entropy as the loss function.
```
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['acc'])
```
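Categorical cross entropy compares the predicted distribution with the one-hot label; it reduces to the negative log probability the model assigned to the true class. A minimal pure-Python sketch:

```python
import math

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # y_true is one-hot, so the sum keeps only -log(p) of the true class.
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))

# A confident correct prediction gives a small loss; a wrong one, a large loss.
good = categorical_crossentropy([0, 1, 0], [0.05, 0.90, 0.05])
bad = categorical_crossentropy([0, 1, 0], [0.90, 0.05, 0.05])
print(good, bad)  # roughly 0.105 vs 3.0
```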
####[MSE-05]
Train the model. It achieves about 92.7% accuracy on the test dataset.
```
history = model.fit(train_images, train_labels,
validation_data=(test_images, test_labels),
batch_size=128, epochs=10)
```
####[MSE-06]
Show examples of the prediction result: three for correct predictions and three for incorrect predictions.
```
p_val = model.predict(np.array(test_images))
df = DataFrame({'pred': list(map(np.argmax, p_val)),
'label': list(map(np.argmax, test_labels))})
correct = df[df['pred']==df['label']]
incorrect = df[df['pred']!=df['label']]
fig = plt.figure(figsize=(8, 15))
for i in range(10):
indices = list(correct[correct['pred']==i].index[:3]) \
+ list(incorrect[incorrect['pred']==i].index[:3])
for c, image in enumerate(test_images[indices]):
subplot = fig.add_subplot(10, 6, i*6+c+1)
subplot.set_xticks([])
subplot.set_yticks([])
subplot.set_title('%d / %d' % (i, df['label'][indices[c]]))
subplot.imshow(image.reshape((28, 28)),
vmin=0, vmax=1, cmap=plt.cm.gray_r)
```
<img width="50" src="https://carbonplan-assets.s3.amazonaws.com/monogram/dark-small.png" style="margin-left:0px;margin-top:20px"/>
# Monthly MTBS to Zarr
_by Jeremy Freeman (CarbonPlan), November 16, 2020_
This notebook converts monthly MTBS 30m rasters stored as Cloud Optimized
GeoTIFFs and stages them in a single Zarr archive.
**Inputs:**
- COG outputs from `04_mtbs_perims_to_raster.ipynb`
**Outputs:**
- 1 Zarr archive: `gs://carbonplan-data/processed/mtbs/conus/4000m/monthly.zarr`
**Notes:**
- In the process of processing this dataset, we found that the behavior in
rasterio's `reproject` function was sensitive to the package version for
rasterio and/or gdal. Versions we found to work were
`rasterio=1.0.25,gdal=2.4.2`. Versions that we found to fail were
`rasterio=1.1.5,gdal=3.1.0`
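Before the code, it helps to recall how the affine transforms below map pixel indices to projected coordinates: `Affine(a, b, c, d, e, f)` sends a column/row pair `(col, row)` to `(a*col + b*row + c, d*col + e*row + f)`. A pure-Python illustration using the same 30 m constants as `make_dst_band` (a sketch for intuition, not part of the pipeline):

```python
# Affine(30.0, 0.0, left, 0.0, -30.0, top) maps (col, row) pixel indices to
# projected x/y: x = 30*col + left, y = -30*row + top.
def apply_affine(transform, col, row):
    a, b, c, d, e, f = transform
    return (a * col + b * row + c, d * col + e * row + f)

left, top = -2493045.0, 3310005.0
transform = (30.0, 0.0, left, 0.0, -30.0, top)

# The upper-left pixel *corner* maps to (left, top); pixel centers are offset
# by half a pixel, which is why calc_coords adds 0.5 to the indices.
print(apply_affine(transform, 0, 0))      # (-2493045.0, 3310005.0)
print(apply_affine(transform, 0.5, 0.5))  # center of the first pixel
```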
```
import os
import gcsfs
import numpy as np
import rasterio
import rioxarray
import xarray as xr
import zarr
from numcodecs.zlib import Zlib
from rasterio import Affine
from rasterio.crs import CRS
from rasterio.warp import Resampling, reproject, transform
def base_crs():
return (
'PROJCS["Albers_Conical_Equal_Area",'
'GEOGCS["WGS 84",DATUM["WGS_1984",'
'SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],'
"TOWGS84[0,0,0,-0,-0,-0,0],"
'AUTHORITY["EPSG","6326"]],'
'PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],'
'UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],'
'AUTHORITY["EPSG","4326"]],'
'PROJECTION["Albers_Conic_Equal_Area"],'
'PARAMETER["standard_parallel_1",29.5],'
'PARAMETER["standard_parallel_2",45.5],'
'PARAMETER["latitude_of_center",23],'
'PARAMETER["longitude_of_center",-96],'
'PARAMETER["false_easting",0],'
'PARAMETER["false_northing",0],'
'UNIT["meters",1]]'
)
def make_dst_band(src_band, src_resolution):
left = -2493045.0
right = 2342655.0
top = 3310005.0
bottom = 177285.0
dst_transform = Affine(30.0, 0.0, left, 0.0, -30.0, top)
dst_resolution = dst_transform[0]
dst_transform = dst_transform * Affine.scale(
src_resolution / dst_resolution, src_resolution / dst_resolution
)
dst_crs = CRS.from_wkt(base_crs())
dst_shape = [
round((top - bottom) / src_resolution),
round((right - left) / src_resolution),
]
dst_band = np.zeros(dst_shape, np.float32)
return dst_band, dst_transform, dst_crs, dst_shape
def calc_coords(shape, trans, crs):
ny, nx = shape
# crs coords
x, _ = trans * (np.arange(nx) + 0.5, np.zeros(nx) + 0.5)
_, y = trans * (np.zeros(ny) + 0.5, np.arange(ny) + 0.5)
# convert to lat/lon
xs, ys = np.meshgrid(x, y)
lon, lat = transform(crs, {"init": "EPSG:4326"}, xs.flatten(), ys.flatten())
return {
"x": xr.DataArray(x, dims=("x",)),
"y": xr.DataArray(y, dims=("y",)),
"lat": xr.DataArray(np.asarray(lat).reshape((ny, nx)), dims=("y", "x")),
"lon": xr.DataArray(np.asarray(lon).reshape((ny, nx)), dims=("y", "x")),
}
def prepare_mtbs(year, resolution, return_ds=True):
    src_path_year = f"/Users/freeman/workdir/carbonplan-data/raw/mtbs/conus/30m/severity/{year}.tif"
    with rasterio.open(src_path_year, "r") as src_raster_year:
        src_transform = src_raster_year.meta["transform"]
        src_crs = src_raster_year.meta["crs"]
        src_band = src_raster_year.read(1)
        src_meta = src_raster_year.meta
    src_resolution = resolution
    dst_band, dst_transform, dst_crs, dst_shape = make_dst_band(
        src_band, src_resolution
    )
    print("calc_coords")
    coords = calc_coords(dst_shape, dst_transform, dst_crs)
    for month in range(12):
        src_path_month = f"/Users/freeman/workdir/carbonplan-data/raw/mtbs/conus/30m/area/{year}.{month+1}.tif"
        with rasterio.open(src_path_month, "r") as src_raster_month:
            src_band_month = src_raster_month.read(1)
            src_nodata = 6
            resampling = Resampling.average
            # set moderate or high burn severity to 1 and others to 0
            src_band_tmp = ((src_band_month == 3) | (src_band_month == 4)).astype("uint8")
            # set masked regions to nodata value
            src_band_tmp[src_band_month == src_nodata] = src_nodata
            src_band_month = src_band_tmp
            # convert to float for averaging
            dst_band = dst_band.astype("float32")
            print("reproject")
            # this seems to require rasterio=1.0.25 and gdal=2.4.2
            reproject(
                src_band_month,
                dst_band,
                src_transform=src_transform,
                src_crs=src_crs,
                dst_transform=dst_transform,
                dst_crs=dst_crs,
                resampling=resampling,
                src_nodata=src_nodata,
                dst_nodata=src_meta["nodata"],
            )
    meta = dict(src_meta)
    meta.update(
        # width is the number of columns, height the number of rows
        width=dst_shape[1],
        height=dst_shape[0],
        dtype=str(dst_band.dtype),
        crs=dst_crs.to_wkt(),
        transform=list(dst_transform),
        nodata=src_meta["nodata"],
    )
varname = f"{year}"
chunks = {"x": 512, "y": 512}
ds = xr.DataArray(dst_band, dims=("y", "x"), attrs=meta).to_dataset(
name=varname
)
ds = ds.assign_coords(coords).chunk(chunks)
if return_ds:
return ds
else:
fs = gcsfs.GCSFileSystem(
project="carbonplan", token="cloud", requester_pays=True
)
mapper = fs.get_mapper(scratch + f"/MTBS.{year}.{resolution}m.zarr")
ds.to_zarr(
store=mapper, mode="w", encoding={varname: {"compressor": Zlib()}}
)
years = list(range(1984, 2018))
dsets = [prepare_mtbs(y, 4000) for y in years]
varname = "burned_area"
da = xr.merge(dsets).to_array(dim="time", name=varname)
da["time"] = da.time.astype(int)
ds = da.to_dataset()
ds[varname].attrs.update(dsets[0]["1984"].attrs)
ds
fs = gcsfs.GCSFileSystem(
project="carbonplan", token="cloud", requester_pays=True
)
mapper = fs.get_mapper("carbonplan-data/processed/MTBS/raster.zarr")
ds.to_zarr(
store=mapper,
group="4000m",
mode="w",
encoding={varname: {"compressor": Zlib()}},
)
ds[varname].sum("time").plot(robust=True)
ds[varname]
```
Based on Medium [article](https://towardsdatascience.com/a-practitioners-guide-to-natural-language-processing-part-i-processing-understanding-text-9f4abfd13e72) by Dipanjan (DJ) Sarkar
# Data retrieval
```
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import os
%matplotlib inline
seed_urls = ['https://inshorts.com/en/read/technology',
'https://inshorts.com/en/read/sports',
'https://inshorts.com/en/read/world']
def build_dataset(seed_urls):
news_data = []
for url in seed_urls:
news_category = url.split('/')[-1]
data = requests.get(url)
soup = BeautifulSoup(data.content, 'html.parser')
news_articles = [{'news_headline': headline.find('span',
attrs={"itemprop": "headline"}).string,
'news_article': article.find('div',
attrs={"itemprop": "articleBody"}).string,
'news_category': news_category}
for headline, article in
zip(soup.find_all('div',
class_=["news-card-title news-right-box"]),
soup.find_all('div',
class_=["news-card-content news-right-box"]))
]
news_data.extend(news_articles)
df = pd.DataFrame(news_data)
df = df[['news_headline', 'news_article', 'news_category']]
return df
news_df = build_dataset(seed_urls)
news_df.head(10)
news_df.news_category.value_counts()
```
# Text Wrangling and Pre-processing
```
import spacy
import pandas as pd
import numpy as np
import nltk
from nltk.tokenize.toktok import ToktokTokenizer
import re
from bs4 import BeautifulSoup
from contractions import CONTRACTION_MAP
import unicodedata
nlp = spacy.load('en_core_web_sm')  # older spaCy versions used spacy.load('en_core', parse=True, tag=True, entity=True)
#nlp_vec = spacy.load('en_vecs', parse = True, tag=True, entity=True)
tokenizer = ToktokTokenizer()
stopword_list = nltk.corpus.stopwords.words('english')
stopword_list.remove('no')
stopword_list.remove('not')
```
## Remove HTML tags
```
def strip_html_tags(text):
soup = BeautifulSoup(text, "html.parser")
stripped_text = soup.get_text()
return stripped_text
strip_html_tags('<html><h2>Some important text</h2></html>')
```
## Remove accented characters
```
def remove_accented_chars(text):
text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8', 'ignore')
return text
remove_accented_chars('Sómě Áccěntěd těxt')
```
## Expand contractions
```
def expand_contractions(text, contraction_mapping=CONTRACTION_MAP):
contractions_pattern = re.compile('({})'.format('|'.join(contraction_mapping.keys())),
flags=re.IGNORECASE|re.DOTALL)
def expand_match(contraction):
match = contraction.group(0)
first_char = match[0]
expanded_contraction = contraction_mapping.get(match)\
if contraction_mapping.get(match)\
else contraction_mapping.get(match.lower())
expanded_contraction = first_char+expanded_contraction[1:]
return expanded_contraction
expanded_text = contractions_pattern.sub(expand_match, text)
expanded_text = re.sub("'", "", expanded_text)
return expanded_text
expand_contractions("Y'all can't expand contractions I'd think")
```
## Remove special characters
```
def remove_special_characters(text, remove_digits=False):
    pattern = r'[^a-zA-Z0-9\s]' if not remove_digits else r'[^a-zA-Z\s]'
text = re.sub(pattern, '', text)
return text
remove_special_characters("Well this was fun! What do you think? 123#@!",
remove_digits=True)
```
## Text lemmatization
```
def lemmatize_text(text):
text = nlp(text)
text = ' '.join([word.lemma_ if word.lemma_ != '-PRON-' else word.text for word in text])
return text
lemmatize_text("My system keeps crashing! his crashed yesterday, ours crashes daily")
```
## Text stemming
```
def simple_stemmer(text):
ps = nltk.porter.PorterStemmer()
text = ' '.join([ps.stem(word) for word in text.split()])
return text
simple_stemmer("My system keeps crashing his crashed yesterday, ours crashes daily")
```
## Remove stopwords
```
def remove_stopwords(text, is_lower_case=False):
tokens = tokenizer.tokenize(text)
tokens = [token.strip() for token in tokens]
if is_lower_case:
filtered_tokens = [token for token in tokens if token not in stopword_list]
else:
filtered_tokens = [token for token in tokens if token.lower() not in stopword_list]
filtered_text = ' '.join(filtered_tokens)
return filtered_text
remove_stopwords("The, and, if are stopwords, computer is not")
```
## Building a text normalizer
```
def normalize_corpus(corpus, html_stripping=True, contraction_expansion=True,
accented_char_removal=True, text_lower_case=True,
text_lemmatization=True, special_char_removal=True,
stopword_removal=True, remove_digits=True):
normalized_corpus = []
# normalize each document in the corpus
for doc in corpus:
# strip HTML
if html_stripping:
doc = strip_html_tags(doc)
# remove accented characters
if accented_char_removal:
doc = remove_accented_chars(doc)
# expand contractions
if contraction_expansion:
doc = expand_contractions(doc)
# lowercase the text
if text_lower_case:
doc = doc.lower()
# remove extra newlines
doc = re.sub(r'[\r|\n|\r\n]+', ' ',doc)
# lemmatize text
if text_lemmatization:
doc = lemmatize_text(doc)
# remove special characters and\or digits
if special_char_removal:
# insert spaces between special characters to isolate them
special_char_pattern = re.compile(r'([{.(-)!}])')
doc = special_char_pattern.sub(" \\1 ", doc)
doc = remove_special_characters(doc, remove_digits=remove_digits)
# remove extra whitespace
doc = re.sub(' +', ' ', doc)
# remove stopwords
if stopword_removal:
doc = remove_stopwords(doc, is_lower_case=text_lower_case)
normalized_corpus.append(doc)
return normalized_corpus
```
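As a sanity check on the ordering of those steps, here is a stripped-down, standard-library-only version of the same pipeline: lowercasing, special-character removal, whitespace collapsing, then stopword removal. The stopword list here is a tiny hypothetical stand-in for NLTK's, and HTML stripping and lemmatization are omitted:

```python
import re

TINY_STOPWORDS = {"the", "and", "if", "is", "a"}  # toy list, not NLTK's

def mini_normalize(doc):
    doc = doc.lower()                          # lowercase the text
    doc = re.sub(r"[^a-zA-Z0-9\s]", "", doc)   # drop special characters
    doc = re.sub(r"\s+", " ", doc).strip()     # collapse extra whitespace
    tokens = [t for t in doc.split() if t not in TINY_STOPWORDS]
    return " ".join(tokens)                    # remove stopwords and rejoin

print(mini_normalize("The computer, if anything, is NOT broken!"))
# -> "computer anything not broken"
```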
## Pre-process and normalize news articles
```
news_df['full_text'] = news_df["news_headline"].map(str)+ '. ' + news_df["news_article"]
news_df['clean_text'] = normalize_corpus(news_df['full_text'])
norm_corpus = list(news_df['clean_text'])
news_df.iloc[1][['full_text', 'clean_text']].to_dict()
```
# Save the news articles
```
news_df.to_csv('news.csv', index=False, encoding='utf-8')
```
# Tagging Parts of Speech
```
news_df = pd.read_csv('news.csv')
corpus = normalize_corpus(news_df['full_text'], text_lower_case=False,
text_lemmatization=False, special_char_removal=False)
sentence = str(news_df.iloc[1].news_headline)
sentence_nlp = nlp(sentence)
spacy_pos_tagged = [(word, word.tag_, word.pos_) for word in sentence_nlp]
pd.DataFrame(spacy_pos_tagged, columns=['Word', 'POS tag', 'Tag type'])
nltk_pos_tagged = nltk.pos_tag(sentence.split())
pd.DataFrame(nltk_pos_tagged, columns=['Word', 'POS tag'])
```
# Shallow Parsing or Chunking Text
```
from nltk.corpus import conll2000
data = conll2000.chunked_sents()
train_data = data[:10900]
test_data = data[10900:]
print(len(train_data), len(test_data))
print(train_data[1])
from nltk.chunk.util import tree2conlltags, conlltags2tree
wtc = tree2conlltags(train_data[1])
wtc
tree = conlltags2tree(wtc)
print(tree)
def conll_tag_chunks(chunk_sents):
tagged_sents = [tree2conlltags(tree) for tree in chunk_sents]
return [[(t, c) for (w, t, c) in sent] for sent in tagged_sents]
def combined_tagger(train_data, taggers, backoff=None):
for tagger in taggers:
backoff = tagger(train_data, backoff=backoff)
return backoff
from nltk.tag import UnigramTagger, BigramTagger
from nltk.chunk import ChunkParserI
class NGramTagChunker(ChunkParserI):
def __init__(self, train_sentences,
tagger_classes=[UnigramTagger, BigramTagger]):
train_sent_tags = conll_tag_chunks(train_sentences)
self.chunk_tagger = combined_tagger(train_sent_tags, tagger_classes)
def parse(self, tagged_sentence):
if not tagged_sentence:
return None
pos_tags = [tag for word, tag in tagged_sentence]
chunk_pos_tags = self.chunk_tagger.tag(pos_tags)
chunk_tags = [chunk_tag for (pos_tag, chunk_tag) in chunk_pos_tags]
wpc_tags = [(word, pos_tag, chunk_tag) for ((word, pos_tag), chunk_tag)
in zip(tagged_sentence, chunk_tags)]
return conlltags2tree(wpc_tags)
ntc = NGramTagChunker(train_data)
print(ntc.evaluate(test_data))
chunk_tree = ntc.parse(nltk_pos_tagged)
print(chunk_tree)
from IPython.display import display
os.environ['PATH'] = os.environ['PATH']+";C:\\Program Files\\gs\\gs9.09\\bin\\"
display(chunk_tree)
```
# Constituency parsing
```
# set java path
import os
java_path = r'C:\Program Files\Java\jdk1.8.0_102\bin\java.exe'
os.environ['JAVAHOME'] = java_path
from nltk.parse.stanford import StanfordParser
scp = StanfordParser(path_to_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser.jar',
path_to_models_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser-3.5.2-models.jar')
result = list(scp.raw_parse(sentence))
print(result[0])
from IPython.display import display
os.environ['PATH'] = os.environ['PATH']+";C:\\Program Files\\gs\\gs9.09\\bin\\"
display(result[0])
```
# Dependency parsing
```
dependency_pattern = '{left}<---{word}[{w_type}]--->{right}\n--------'
for token in sentence_nlp:
    print(dependency_pattern.format(word=token.orth_,
                                    w_type=token.dep_,
                                    left=[t.orth_ for t in token.lefts],
                                    right=[t.orth_ for t in token.rights]))
from spacy import displacy
displacy.render(sentence_nlp, jupyter=True,
options={'distance': 110,
'arrow_stroke': 2,
'arrow_width': 8})
from nltk.parse.stanford import StanfordDependencyParser
sdp = StanfordDependencyParser(path_to_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser.jar',
path_to_models_jar='E:/stanford/stanford-parser-full-2015-04-20/stanford-parser-3.5.2-models.jar')
result = list(sdp.raw_parse(sentence))
dep_tree = [parse.tree() for parse in result][0]
print(dep_tree)
from IPython.display import display
os.environ['PATH'] = os.environ['PATH']+";C:\\Program Files\\gs\\gs9.09\\bin\\"
display(dep_tree)
from graphviz import Source
dep_tree_dot_repr = [parse for parse in result][0].to_dot()
source = Source(dep_tree_dot_repr, filename="dep_tree", format="png")
source
```
# Named Entity Recognition
```
sentence = str(news_df.iloc[1].full_text)
sentence_nlp = nlp(sentence)
print([(word, word.ent_type_) for word in sentence_nlp if word.ent_type_])
displacy.render(sentence_nlp, style='ent', jupyter=True)
named_entities = []
for sentence in corpus:
    temp_entity_name = ''
    temp_named_entity = None
    sentence = nlp(sentence)
    for word in sentence:
        term = word.text
        tag = word.ent_type_
        if tag:
            temp_entity_name = ' '.join([temp_entity_name, term]).strip()
            temp_named_entity = (temp_entity_name, tag)
        else:
            if temp_named_entity:
                named_entities.append(temp_named_entity)
                temp_entity_name = ''
                temp_named_entity = None
entity_frame = pd.DataFrame(named_entities,
columns=['Entity Name', 'Entity Type'])
top_entities = (entity_frame.groupby(by=['Entity Name', 'Entity Type'])
.size()
.sort_values(ascending=False)
.reset_index().rename(columns={0 : 'Frequency'}))
top_entities.T.iloc[:,:15]
top_entities = (entity_frame.groupby(by=['Entity Type'])
.size()
.sort_values(ascending=False)
.reset_index().rename(columns={0 : 'Frequency'}))
top_entities.T.iloc[:,:15]
from nltk.tag import StanfordNERTagger
import os
java_path = r'C:\Program Files\Java\jdk1.8.0_102\bin\java.exe'
os.environ['JAVAHOME'] = java_path
sn = StanfordNERTagger('E:/stanford/stanford-ner-2014-08-27/classifiers/english.all.3class.distsim.crf.ser.gz',
path_to_jar='E:/stanford/stanford-ner-2014-08-27/stanford-ner.jar')
ner_tagged_sentences = [sn.tag(sent.split()) for sent in corpus]
named_entities = []
for sentence in ner_tagged_sentences:
    temp_entity_name = ''
    temp_named_entity = None
    for term, tag in sentence:
        if tag != 'O':
            temp_entity_name = ' '.join([temp_entity_name, term]).strip()
            temp_named_entity = (temp_entity_name, tag)
        else:
            if temp_named_entity:
                named_entities.append(temp_named_entity)
                temp_entity_name = ''
                temp_named_entity = None
#named_entities = list(set(named_entities))
entity_frame = pd.DataFrame(named_entities,
columns=['Entity Name', 'Entity Type'])
top_entities = (entity_frame.groupby(by=['Entity Name', 'Entity Type'])
.size()
.sort_values(ascending=False)
.reset_index().rename(columns={0 : 'Frequency'}))
top_entities.head(15)
top_entities = (entity_frame.groupby(by=['Entity Type'])
.size()
.sort_values(ascending=False)
.reset_index().rename(columns={0 : 'Frequency'}))
top_entities.head()
```
# Emotion and Sentiment Analysis
```
from afinn import Afinn
af = Afinn()
sentiment_scores = [af.score(article) for article in corpus]
sentiment_category = ['positive' if score > 0
else 'negative' if score < 0
else 'neutral'
for score in sentiment_scores]
df = pd.DataFrame([list(news_df['news_category']), sentiment_scores, sentiment_category]).T
df.columns = ['news_category', 'sentiment_score', 'sentiment_category']
df['sentiment_score'] = df.sentiment_score.astype('float')
df.groupby(by=['news_category']).describe()
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))
sp = sns.stripplot(x='news_category', y="sentiment_score",
hue='news_category', data=df, ax=ax1)
bp = sns.boxplot(x='news_category', y="sentiment_score",
hue='news_category', data=df, palette="Set2", ax=ax2)
t = f.suptitle('Visualizing News Sentiment', fontsize=14)
fc = sns.factorplot(x="news_category", hue="sentiment_category",
data=df, kind="count",
palette={"negative": "#FE2020",
"positive": "#BADD07",
"neutral": "#68BFF5"})
pos_idx = df[(df.news_category=='technology') & (df.sentiment_score == 6)].index[0]
neg_idx = df[(df.news_category=='technology') & (df.sentiment_score == -15)].index[0]
print('Most Negative Tech News Article:', news_df.iloc[neg_idx][['news_article']][0])
print()
print('Most Positive Tech News Article:', news_df.iloc[pos_idx][['news_article']][0])
pos_idx = df[(df.news_category=='world') & (df.sentiment_score == 16)].index[0]
neg_idx = df[(df.news_category=='world') & (df.sentiment_score == -12)].index[0]
print('Most Negative World News Article:', news_df.iloc[neg_idx][['news_article']][0])
print()
print('Most Positive World News Article:', news_df.iloc[pos_idx][['news_article']][0])
from textblob import TextBlob
sentiment_scores_tb = [round(TextBlob(article).sentiment.polarity, 3) for article in news_df['clean_text']]
sentiment_category_tb = ['positive' if score > 0
else 'negative' if score < 0
else 'neutral'
for score in sentiment_scores_tb]
df = pd.DataFrame([list(news_df['news_category']), sentiment_scores_tb, sentiment_category_tb]).T
df.columns = ['news_category', 'sentiment_score', 'sentiment_category']
df['sentiment_score'] = df.sentiment_score.astype('float')
df.groupby(by=['news_category']).describe()
df.head()
fc = sns.factorplot(x="news_category", hue="sentiment_category",
data=df, kind="count",
palette={"negative": "#FE2020",
"positive": "#BADD07",
"neutral": "#68BFF5"})
pos_idx = df[(df.news_category=='world') & (df.sentiment_score == 0.7)].index[0]
neg_idx = df[(df.news_category=='world') & (df.sentiment_score == -0.296)].index[0]
print('Most Negative World News Article:', news_df.iloc[neg_idx][['news_article']][0])
print()
print('Most Positive World News Article:', news_df.iloc[pos_idx][['news_article']][0])
import model_evaluation_utils as meu
meu.display_confusion_matrix_pretty(true_labels=sentiment_category,
predicted_labels=sentiment_category_tb,
classes=['negative', 'neutral', 'positive'])
```
<!--NAVIGATION-->
< [Pivot Tables](03.09-Pivot-Tables.ipynb) | [Contents](Index.ipynb) | [Working with Time Series](03.11-Working-with-Time-Series.ipynb) >
# Vectorized String Operations
One strength of Python is its relative ease in handling and manipulating string data.
Pandas builds on this and provides a comprehensive set of *vectorized string operations* that become an essential piece of the type of munging required when working with (read: cleaning up) real-world data.
In this section, we'll walk through some of the Pandas string operations, and then take a look at using them to partially clean up a very messy dataset of recipes collected from the Internet.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Introducing Pandas String Operations
We saw in previous sections how tools like NumPy and Pandas generalize arithmetic operations so that we can easily and quickly perform the same operation on many array elements. For example:
```
import numpy as np
x = np.array([2, 3, 5, 7, 11, 13])
x * 2
```
This *vectorization* of operations simplifies the syntax of operating on arrays of data: we no longer have to worry about the size or shape of the array, but just about what operation we want done.
For arrays of strings, NumPy does not provide such simple access, and thus you're stuck using a more verbose loop syntax:
```
data = ['peter', 'Paul', 'MARY', 'gUIDO', 'HelLo']
[s.capitalize() for s in data]
```
This is perhaps sufficient to work with some data, but it will break if there are any missing values.
For example:
```
data = ['peter', 'Paul', None, 'MARY', 'gUIDO']
[s.capitalize() for s in data]
```
Pandas includes features to address both this need for vectorized string operations and for correctly handling missing data via the ``str`` attribute of Pandas Series and Index objects containing strings.
So, for example, suppose we create a Pandas Series with this data:
```
import pandas as pd
names = pd.Series(data)
names
```
We can now call a single method that will capitalize all the entries, while skipping over any missing values:
```
names.str.capitalize()
```
Using tab completion on this ``str`` attribute will list all the vectorized string methods available to Pandas.
## Tables of Pandas String Methods
If you have a good understanding of string manipulation in Python, most of Pandas string syntax is intuitive enough that it's probably sufficient to just list a table of available methods; we will start with that here, before diving deeper into a few of the subtleties.
The examples in this section use the following series of names:
```
monte = pd.Series(['Graham Chapman', 'John Cleese', 'Terry Gilliam',
'Eric Idle', 'Terry Jones', 'Michael Palin'])
```
### Methods similar to Python string methods
Nearly all Python's built-in string methods are mirrored by a Pandas vectorized string method. Here is a list of Pandas ``str`` methods that mirror Python string methods:
| Methods | Methods Names | Methods |Methods |
|-------------|------------------|------------------|------------------|
|``len()`` | ``lower()`` | ``translate()`` | ``islower()`` |
|``ljust()`` | ``upper()`` | ``startswith()`` | ``isupper()`` |
|``rjust()`` | ``find()`` | ``endswith()`` | ``isnumeric()`` |
|``center()`` | ``rfind()`` | ``isalnum()`` | ``isdecimal()`` |
|``zfill()`` | ``index()`` | ``isalpha()`` | ``split()`` |
|``strip()`` | ``rindex()`` | ``isdigit()`` | ``rsplit()`` |
|``rstrip()`` | ``capitalize()`` | ``isspace()`` | ``partition()`` |
|``lstrip()`` | ``swapcase()`` | ``istitle()`` | ``rpartition()`` |
Notice that these have various return values. Some, like ``lower()``, return a series of strings:
```
monte.str.lower()
```
But some others return numbers:
```
monte.str.len()
```
Or Boolean values:
```
monte.str.startswith('T')
```
Still others return lists or other compound values for each element:
```
monte.str.split()
monte.str.center(30)
monte.str.count('C')
monte.str.startswith('G')
monte.str.find('l')
```
We'll see further manipulations of this kind of series-of-lists object as we continue our discussion.
### Methods using regular expressions
In addition, there are several methods that accept regular expressions to examine the content of each string element, and follow some of the API conventions of Python's built-in ``re`` module:
| Method | Description |
|--------|-------------|
| ``match()`` | Call ``re.match()`` on each element, returning a boolean. |
| ``extract()`` | Call ``re.match()`` on each element, returning matched groups as strings.|
| ``findall()`` | Call ``re.findall()`` on each element |
| ``replace()`` | Replace occurrences of pattern with some other string|
| ``contains()`` | Call ``re.search()`` on each element, returning a boolean |
| ``count()`` | Count occurrences of pattern|
| ``split()`` | Equivalent to ``str.split()``, but accepts regexps |
| ``rsplit()`` | Equivalent to ``str.rsplit()``, but accepts regexps |
```
monte
```
With these, you can do a wide range of interesting operations.
For example, we can extract the first name from each by asking for a contiguous group of characters at the beginning of each element:
```
monte.str.extract('([A-Za-z]+)', expand=False)
```
Or we can do something more complicated, like finding all names that start and end with a consonant, making use of the start-of-string (``^``) and end-of-string (``$``) regular expression characters:
```
monte.str.findall(r'^[^AEIOU].*[^aeiou]$')
```
The ability to concisely apply regular expressions across ``Series`` or ``Dataframe`` entries opens up many possibilities for analysis and cleaning of data.
### Miscellaneous methods
Finally, there are some miscellaneous methods that enable other convenient operations:
| Method | Description |
|--------|-------------|
| ``get()`` | Index each element |
| ``slice()`` | Slice each element|
| ``slice_replace()`` | Replace slice in each element with passed value|
| ``cat()`` | Concatenate strings|
| ``repeat()`` | Repeat values |
| ``normalize()`` | Return Unicode form of string |
| ``pad()`` | Add whitespace to left, right, or both sides of strings|
| ``wrap()`` | Split long strings into lines with length less than a given width|
| ``join()`` | Join strings in each element of the Series with passed separator|
| ``get_dummies()`` | extract dummy variables as a dataframe |
#### Vectorized item access and slicing
The ``get()`` and ``slice()`` operations, in particular, enable vectorized element access from each array.
For example, we can get a slice of the first three characters of each array using ``str.slice(0, 3)``.
Note that this behavior is also available through Python's normal indexing syntax–for example, ``df.str.slice(0, 3)`` is equivalent to ``df.str[0:3]``:
```
monte.str[0:3]
```
Indexing via ``df.str.get(i)`` and ``df.str[i]`` is likewise similar.
These ``get()`` and ``slice()`` methods also let you access elements of arrays returned by ``split()``.
For example, to extract the last name of each entry, we can combine ``split()`` and ``get()``:
```
monte.str.split().str.get(-1)
```
#### Indicator variables
Another method that requires a bit of extra explanation is the ``get_dummies()`` method.
This is useful when your data has a column containing some sort of coded indicator.
For example, we might have a dataset that contains information in the form of codes, such as A="born in America," B="born in the United Kingdom," C="likes cheese," D="likes spam":
```
full_monte = pd.DataFrame({'name': monte,
'info': ['B|C|D', 'B|D', 'A|C',
'B|D', 'B|C', 'B|C|D']})
full_monte
```
The ``get_dummies()`` routine lets you quickly split-out these indicator variables into a ``DataFrame``:
```
full_monte['info'].str.get_dummies('|')
```
With these operations as building blocks, you can construct an endless range of string processing procedures when cleaning your data.
We won't dive further into these methods here, but I encourage you to read through ["Working with Text Data"](http://pandas.pydata.org/pandas-docs/stable/text.html) in the Pandas online documentation, or to refer to the resources listed in [Further Resources](03.13-Further-Resources.ipynb).
## Example: Recipe Database
These vectorized string operations become most useful in the process of cleaning up messy, real-world data.
Here I'll walk through an example of that, using an open recipe database compiled from various sources on the Web.
Our goal will be to parse the recipe data into ingredient lists, so we can quickly find a recipe based on some ingredients we have on hand.
As of Spring 2016, this database is about 30 MB, and can be downloaded and unzipped with these commands:
The database is in JSON format, so we will try ``pd.read_json`` to read it:
```
try:
    recipes = pd.read_json('data/recipeitems-latest.json')
except ValueError as e:
    print("ValueError:", e)
recipes.head()
```
Oops! We get a ``ValueError`` mentioning that there is "trailing data."
Searching for the text of this error on the Internet, it seems that it's due to using a file in which *each line* is itself a valid JSON, but the full file is not.
Let's check if this interpretation is true:
```
with open('data/recipeitems-latest.json') as f:
    line = f.readline()
pd.read_json(line).shape
```
Yes, apparently each line is a valid JSON, so we'll need to string them together.
One way to handle this is to pass ``lines=True`` to ``pd.read_json``, which reads line-delimited JSON directly:
```
recipes = pd.read_json('data/recipeitems-latest.json', encoding='utf8', lines=True)
recipes.shape
```
We see there are nearly 200,000 recipes, and 17 columns.
Let's take a look at one row to see what we have:
```
recipes.iloc[0]
```
There is a lot of information there, but much of it is in a very messy form, as is typical of data scraped from the Web.
In particular, the ingredient list is in string format; we're going to have to carefully extract the information we're interested in.
Let's start by taking a closer look at the ingredients:
```
recipes.ingredients.str.len().describe()
```
The ingredient lists average 250 characters long, with a minimum of 0 and a maximum of nearly 10,000 characters!
Just out of curiosity, let's see which recipe has the longest ingredient list:
```
recipes.name[np.argmax(recipes.ingredients.str.len())]
```
That certainly looks like an involved recipe.
We can do other aggregate explorations; for example, let's see how many of the recipes are for breakfast food:
```
recipes.description.str.contains('[Bb]reakfast').sum()
```
Or how many of the recipes list cinnamon as an ingredient:
```
recipes.ingredients.str.contains('[Cc]innamon').sum()
```
We could even look to see whether any recipes misspell the ingredient as "cinamon":
```
recipes.ingredients.str.contains('[Cc]inamon').sum()
```
This is the type of essential data exploration that is possible with Pandas string tools.
It is data munging like this that Python really excels at.
### A simple recipe recommender
Let's go a bit further, and start working on a simple recipe recommendation system: given a list of ingredients, find a recipe that uses all those ingredients.
While conceptually straightforward, the task is complicated by the heterogeneity of the data: there is no easy operation, for example, to extract a clean list of ingredients from each row.
So we will cheat a bit: we'll start with a list of common ingredients, and simply search to see whether they are in each recipe's ingredient list.
For simplicity, let's just stick with herbs and spices for the time being:
```
spice_list = ['salt', 'pepper', 'oregano', 'sage', 'parsley',
'rosemary', 'tarragon', 'thyme', 'paprika', 'cumin']
```
We can then build a Boolean ``DataFrame`` consisting of True and False values, indicating whether this ingredient appears in the list:
```
import re
spice_df = pd.DataFrame(dict((spice, recipes.ingredients.str.contains(spice, re.IGNORECASE))
for spice in spice_list))
spice_df.head()
```
Now, as an example, let's say we'd like to find a recipe that uses parsley, paprika, and tarragon.
We can compute this very quickly using the ``query()`` method of ``DataFrame``s, discussed in [High-Performance Pandas: ``eval()`` and ``query()``](03.12-Performance-Eval-and-Query.ipynb):
```
selection = spice_df.query('parsley & paprika & tarragon')
len(selection)
```
We find only 10 recipes with this combination; let's use the index returned by this selection to discover the names of the recipes that have this combination:
```
recipes.name[selection.index]
```
Now that we have narrowed down our recipe selection by a factor of almost 20,000, we are in a position to make a more informed decision about what we'd like to cook for dinner.
### Going further with recipes
Hopefully this example has given you a bit of a flavor (ba-dum!) for the types of data cleaning operations that are efficiently enabled by Pandas string methods.
Of course, building a very robust recipe recommendation system would require a *lot* more work!
Extracting full ingredient lists from each recipe would be an important piece of the task; unfortunately, the wide variety of formats used makes this a relatively time-consuming process.
This points to the truism that in data science, cleaning and munging of real-world data often comprises the majority of the work, and Pandas provides the tools that can help you do this efficiently.
# Sequence classification with Neural Networks
## Split-window RNN model
Now we're going to try a split-window RNN model. We are doing this because feeding the whole sequence of your data might be impractical for a number of reasons:
* one sample of high-frequency data (like acceleration) might not even fit into GPU memory, and the training will crash
* the model might not be able to learn properly on long sequences
* it avoids the need for padding/masking since all windows have equal size
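Before the model code, here is a minimal plain-Python sketch of the windowing idea itself. This is an illustrative assumption of non-overlapping, equal-size windows — the `to_split_window_tfds` helper used below also handles labels, scaling, and TF dataset batching, and may window differently:

```python
def split_windows(sequence, window_size):
    """Split a sequence into equal-size, non-overlapping windows,
    dropping any leftover tail shorter than window_size."""
    return [
        sequence[i:i + window_size]
        for i in range(0, len(sequence) - window_size + 1, window_size)
    ]

windows = split_windows(list(range(10)), window_size=4)
print(windows)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Each window then becomes an independent training sample, which is why no padding or masking is needed.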
```
# Load the TensorBoard notebook extension
%load_ext tensorboard
import altair as alt
import numpy as np
import pandas as pd
import os
import sys
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
    sys.path.append(module_path)
from tmdprimer.datagen import generate_sample, Dataset, Sample
```
We're going to use the same network as for the per-sample RNN:
```
import tensorflow as tf
def get_rnn_model():
    rnn_model = tf.keras.Sequential(
        [
            tf.keras.layers.GRU(8, return_sequences=True),
            tf.keras.layers.Dense(1, activation="sigmoid")
        ]
    )
    rnn_model.compile(
        loss="binary_crossentropy",
        optimizer=tf.keras.optimizers.Nadam(),
        metrics=[tf.keras.metrics.BinaryAccuracy()]
    )
    return rnn_model
data_rnn = []
for outlier_prob in (0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    print(outlier_prob)
    dataset = Dataset.generate(train_outlier_prob=outlier_prob, n_samples=100)
    model = get_rnn_model()
    model.fit(
        x=dataset.to_split_window_tfds(window_size=50).batch(32),
        epochs=10,
        verbose=0
    )
    test_dataset = Dataset.generate(train_outlier_prob=outlier_prob, n_samples=20)
    res = model.evaluate(test_dataset.to_split_window_tfds(window_size=50).batch(32), verbose=0)
    data_rnn.append({'outlier_prob': outlier_prob, 'accuracy': res[1]})
df_rnn = pd.DataFrame(data_rnn)
alt.Chart(df_rnn).mark_line().encode(x='outlier_prob', y='accuracy')
```
Looks quite similar to the per-sample RNN, but the accuracy at low noise levels doesn't reach 0.99 for some reason. I don't know why this is happening, so let me know in the comments if you have an idea.
Let's see again how the TensorBoard graphs look for this RNN:
```
# Clear any logs from previous runs
from datetime import datetime
!rm -rf ./logs/
log_dir = "logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
dataset = Dataset.generate(train_outlier_prob=0, n_samples=200)
model = get_rnn_model()
model.fit(
    x=dataset.to_split_window_tfds(window_size=50).batch(20),
    epochs=10,
    callbacks=[tensorboard_callback]
)
#%tensorboard --logdir logs/fit
```
It's interesting that this model doesn't reach 99% accuracy as easily as the others. Let's look at its predictions in detail.
```
test_dataset = Dataset.generate(train_outlier_prob=0, n_samples=20)
df = pd.DataFrame(data=({"time step": i, "speed": lf.features[0]/100, "label": lf.label} for i, lf in enumerate(test_dataset.samples[0].features)))
base = alt.Chart(df).encode(x="time step")
x, _ = test_dataset.samples[0].to_numpy_split_windows(window_size=50, scaler=dataset.std_scaler)
pred_y = model.predict(x)
df.loc[:, "pred_label"] = pd.Series(pred_y.flatten())
df.fillna(1, inplace=True)
alt.layer(
base.mark_line(color="cornflowerblue").encode(y="speed"),
base.mark_line(color="orange").encode(y="label"),
base.mark_line(color="red").encode(y="pred_label"),
)
```
Somehow it's less confident in the first prediction it makes for each window. And the same situation is amplified with the introduction of outliers. Write me a message if you know how to explain this.
Take a look at the stateful split-window RNN to see how to improve the results.
# Data Profiler - What's in your data?
The library is designed to easily detect sensitive data and gather statistics on your datasets with just a few lines of code.
This demo covers the followings:
- Basic usage of the Data Profiler
- The data reader class
- Updating and merging profiles
- Profile differences
- Graphing a profile
- Saving profiles
- Data labeling
First, let's import the libraries needed for this example.
```
import os
import sys
import json
import pandas as pd
import tensorflow as tf
try:
    sys.path.insert(0, '..')
    import dataprofiler as dp
except ImportError:
    import dataprofiler as dp
data_folder = "../dataprofiler/tests/data"
# remove extra tf logging
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
```
## Basic Usage of the Data Profiler
This section shows the basic example of the Data Profiler. A CSV dataset is read using the data reader, then the Data object is given to the Data Profiler to detect sensitive data and obtain the statistics.
```
# read, profile, and get the report in 3 lines
# get the data
data = dp.Data(os.path.join(data_folder, "csv/diamonds.csv"))
# profile the data
profile = dp.Profiler(data)
# generate the report
report = profile.report(report_options={"output_format": "compact"})
data.head() # data.data provides access to a pandas.DataFrame
# print the report
print('\nREPORT:\n' + '='*80)
print(json.dumps(report, indent=4))
```
## Data reader class -- Automatic Detection
Within the Data Profiler, there are 5 data reader classes:
* CSVData (delimited data: CSV, TSV, etc.)
* JSONData
* ParquetData
* AVROData
* TextData
```
# use data reader to read input data with different file types
data_folder = "../dataprofiler/tests/data"
csv_files = [
"csv/aws_honeypot_marx_geo.csv",
"csv/all-strings-skip-header-author.csv", # csv files with the author/description on the first line
"csv/sparse-first-and-last-column-empty-first-row.txt", # csv file with the .txt extension
]
json_files = [
"json/complex_nested.json",
"json/honeypot_intentially_mislabeled_file.csv", # json file with the .csv extension
]
parquet_files = [
"parquet/nation.dict.parquet",
"parquet/nation.plain.intentionally_mislabled_file.csv", # parquet file with the .csv extension
]
avro_files = [
"avro/userdata1.avro",
"avro/userdata1_intentionally_mislabled_file.json", # avro file with the .json extension
]
text_files = [
"txt/discussion_reddit.txt",
]
all_files = csv_files + json_files + parquet_files + avro_files + text_files
print('filepath' + ' ' * 58 + 'data type')
print('='*80)
for file in all_files:
    filepath = os.path.join(data_folder, file)
    ############################
    ##### READING THE DATA #####
    data = dp.Data(filepath)
    ############################
    print("{:<65} {:<15}".format(file, data.data_type))
# importing from a url
data = dp.Data('https://raw.githubusercontent.com/capitalone/DataProfiler/main/dataprofiler/tests/data/csv/diamonds.csv')
data.head()
```
## Data Profiling
As we saw above, profiling is as simple as:
```python
import dataprofiler as dp
data = dp.Data('my_data.csv')
profiler = dp.Profiler(data)
report = profiler.report(report_options={"output_format": "compact"})
```
### Update profiles -- the case for batching / streaming data
The profiler allows users to send the data to the profile in batches.
```
# divide dataset in half
data = dp.Data(os.path.join(data_folder, "csv/diamonds.csv"))
df = data.data
df1 = df.iloc[:int(len(df)/2)]
df2 = df.iloc[int(len(df)/2):]
# Update the profile with the first half
profile = dp.Profiler(df1)
############################
####### BATCH UPDATE #######
profile.update_profile(df2)
############################
report_batch = profile.report(report_options={"output_format": "compact"})
# print('\nREPORT:\n' + '='*80)
print(json.dumps(report_batch, indent=4))
```
### Merge profiles -- the case for parallelization
Two profiles can be added together to create a combined profile.
```
# create two profiles and merge
profile1 = dp.Profiler(df1)
profile2 = dp.Profiler(df2)
profile_merge = profile1 + profile2
# check results of the merged profile
report_merge = profile_merge.report(report_options={"output_format": "compact"})
# # print the report
# print('\nREPORT:\n' + '='*80)
# print(json.dumps(report_merge, indent=4))
```
## Differences in Data
Profile differences can be applied to both structured and unstructured datasets.
Such reports can provide details on the differences between training and validation data, as in this pseudo example:
```python
profiler_training = dp.Profiler(training_data)
profiler_testing = dp.Profiler(testing_data)
validation_report = profiler_training.diff(profiler_testing)
```
```
from pprint import pprint
# structured differences example
data_split_differences = profile1.diff(profile2)
pprint(data_split_differences)
```
## Graphing a Profile
We've also added the ability to generating visual reports from a profile.
The following plots are currently available to work directly with your profilers:
* missing values matrix
* histogram (numeric columns only)
```
import matplotlib.pyplot as plt
# get the data
data = dp.Data(os.path.join(data_folder, "csv/aws_honeypot_marx_geo.csv"))
# profile the data
profile = dp.Profiler(data)
# generate a missing values matrix
fig = plt.figure(figsize=(8, 6), dpi=100)
fig = dp.graphs.plot_missing_values_matrix(profile, ax=fig.gca(), title="Missing Values Matrix")
# generate histogram of all int/float columns
fig = dp.graphs.plot_histograms(profile)
fig.set_size_inches(8, 6)
fig.set_dpi(100)
```
## Saving and Loading a Profile
Not only can the Profiler create and update profiles, it can also save and load them for later manipulation.
```
# Load data
data = dp.Data(os.path.join(data_folder, "csv/diamonds.csv"))
# Generate a profile
profile = dp.Profiler(data)
# Save a profile to disk for later (saves as pickle file)
profile.save(filepath="my_profile.pkl")
# Load a profile from disk
loaded_profile = dp.Profiler.load("my_profile.pkl")
# Report the compact version of the profile
# report = profile.report(report_options={"output_format":"compact"})
# print(json.dumps(report, indent=4))
```
## Unstructured Profiling
Similar to structured datasets, text data can also be profiled with the unstructured profiler.
It currently provides an easy overview of information in the text such as:
* memory size
* char stats
* word stats
* data labeling entity stats
```
profiler_string = dp.Profiler("This is my random text: 332-23-2123")
print(json.dumps(profiler_string.report(), indent=4))
email_data = ["Message-ID: <11111111.1111111111111.JavaMail.evans@thyme>\n" + \
"Date: Fri, 10 Aug 2005 11:31:37 -0700 (PDT)\n" + \
"From: w..smith@company.com\n" + \
"To: john.smith@company.com\n" + \
"Subject: RE: ABC\n" + \
"Mime-Version: 1.0\n" + \
"Content-Type: text/plain; charset=us-ascii\n" + \
"Content-Transfer-Encoding: 7bit\n" + \
"X-From: Smith, Mary W. </O=ENRON/OU=NA/CN=RECIPIENTS/CN=SSMITH>\n" + \
"X-To: Smith, John </O=ENRON/OU=NA/CN=RECIPIENTS/CN=JSMITH>\n" + \
"X-cc: \n" + \
"X-bcc: \n" + \
"X-Folder: \SSMITH (Non-Privileged)\Sent Items\n" + \
"X-Origin: Smith-S\n" + \
"X-FileName: SSMITH (Non-Privileged).pst\n\n" + \
"All I ever saw was the e-mail from the office.\n\n" + \
"Mary\n\n" + \
"-----Original Message-----\n" + \
"From: Smith, John \n" + \
"Sent: Friday, August 10, 2005 13:07 PM\n" + \
"To: Smith, Mary W.\n" + \
"Subject: ABC\n\n" + \
"Have you heard any more regarding the ABC sale? I guess that means that " + \
"it's no big deal here, but you think they would have send something.\n\n\n" + \
"John Smith\n" + \
"123-456-7890\n"]
profiler_email = dp.Profiler(email_data, profiler_type='unstructured')
print(json.dumps(profiler_email.report(), indent=4))
```
## Merging Unstructured Data
```
merged_profile = profiler_string + profiler_email
print(json.dumps(merged_profile.report(), indent=4))
```
## Differences in Unstructured Data
```
# unstructured differences example
validation_report = profiler_email.diff(profiler_string)
print(json.dumps(validation_report, indent=4))
```
## Data Labeling
The Labeler is a pipeline designed to make building, training, and predicting with ML models quick and easy. There are 3 major components to the Labeler: the preprocessor, the model, and the postprocessor.

Default labels:
* UNKNOWN
* ADDRESS
* BAN (bank account number, 10-18 digits)
* CREDIT_CARD
* EMAIL_ADDRESS
* UUID
* HASH_OR_KEY (md5, sha1, sha256, random hash, etc.)
* IPV4
* IPV6
* MAC_ADDRESS
* PERSON
* PHONE_NUMBER
* SSN
* URL
* US_STATE
* DRIVERS_LICENSE
* DATE
* TIME
* DATETIME
* INTEGER
* FLOAT
* QUANTITY
* ORDINAL
```
# helper functions for printing results
def get_structured_results(results):
"""Helper function to get data labels for each column."""
columns = []
predictions = []
samples = []
for col in results['data_stats']:
columns.append(col['column_name'])
predictions.append(col['data_label'])
samples.append(col['samples'])
df_results = pd.DataFrame({'Column': columns, 'Prediction': predictions, 'Sample': samples})
return df_results
def get_unstructured_results(data, results):
"""Helper function to get data labels for each labeled piece of text."""
labeled_data = []
for pred in results['pred'][0]:
labeled_data.append([data[0][pred[0]:pred[1]], pred[2]])
label_df = pd.DataFrame(labeled_data, columns=['Text', 'Labels'])
return label_df
pd.set_option('display.width', 100)
```
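The unstructured helper above slices the original text with NER-format `(start, end, label)` tuples. Here is a minimal standalone sketch of that mapping; the text and predictions are made up for illustration:

```python
# Hypothetical NER-format predictions: (start, end, label) character spans.
text = "Call me at 123-456-7890"
preds = [(0, 10, "UNKNOWN"), (11, 23, "PHONE_NUMBER")]

# Slice each span out of the original text, exactly as get_unstructured_results does.
labeled = [(text[start:end], label) for start, end, label in preds]
print(labeled)  # [('Call me at', 'UNKNOWN'), ('123-456-7890', 'PHONE_NUMBER')]
```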
### Structured Labeling
Each column within your profile is given a suggested data label.
```
# profile data and get labels for each column
data = dp.Data(os.path.join(data_folder, "csv/SchoolDataSmall.csv"))
profiler = dp.Profiler(data)
report = profiler.report()
print('\nLabel Predictions:\n' + '=' * 85)
print(get_structured_results(report))
```
### Unstructured Labeling
```
# load data
email_data = ["Message-ID: <11111111.1111111111111.JavaMail.evans@thyme>\n" + \
"Date: Fri, 10 Aug 2005 11:31:37 -0700 (PDT)\n" + \
"From: w..smith@company.com\n" + \
"To: john.smith@company.com\n" + \
"Subject: RE: ABC\n" + \
"Mime-Version: 1.0\n" + \
"Content-Type: text/plain; charset=us-ascii\n" + \
"Content-Transfer-Encoding: 7bit\n" + \
"X-From: Smith, Mary W. </O=ENRON/OU=NA/CN=RECIPIENTS/CN=SSMITH>\n" + \
"X-To: Smith, John </O=ENRON/OU=NA/CN=RECIPIENTS/CN=JSMITH>\n" + \
"X-cc: \n" + \
"X-bcc: \n" + \
              "X-Folder: \\SSMITH (Non-Privileged)\\Sent Items\n" + \
"X-Origin: Smith-S\n" + \
"X-FileName: SSMITH (Non-Privileged).pst\n\n" + \
"All I ever saw was the e-mail from the office.\n\n" + \
"Mary\n\n" + \
"-----Original Message-----\n" + \
"From: Smith, John \n" + \
"Sent: Friday, August 10, 2005 13:07 PM\n" + \
"To: Smith, Mary W.\n" + \
"Subject: ABC\n\n" + \
"Have you heard any more regarding the ABC sale? I guess that means that " + \
"it's no big deal here, but you think they would have send something.\n\n\n" + \
"John Smith\n" + \
"123-456-7890\n"]
labeler = dp.DataLabeler(labeler_type='unstructured')
# convert prediction to word format and ner format
# Set the output to the NER format (start position, end position, label)
labeler.set_params(
{ 'postprocessor': { 'output_format': 'ner', 'use_word_level_argmax': True } }
)
# make predictions and get labels per character
predictions = labeler.predict(email_data)
# display results
print('=========================Prediction========================')
print(get_unstructured_results(email_data, predictions))
```
# Importing packages
```
import json
from multiprocessing.dummy import Pool
import multiprocessing as mp
import torch
from transformers import AutoModel, AutoTokenizer, BertTokenizer
from tqdm import tqdm
import tensorflow_federated as tff
import numpy as np
from tensorflow import strings
import time
import pickle
torch.set_grad_enabled(False)
MODEL_NAME = "bert-base-uncased"
DEVICE = 'cuda'
INPUT_SIZE = 512
model = AutoModel.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model.eval();
model.to(DEVICE);
stackoverflow_train, _, stackoverflow_test = tff.simulation.datasets.stackoverflow.load_data()
def client_caption(client_id):
return strings.join(inputs=[x['tokens'] for x in stackoverflow_train.create_tf_dataset_for_client(client_id)], separator=' ')
CLIENT_NUM = 5000
start_time = time.time()
pool = Pool(20)
res = pool.map(client_caption, stackoverflow_train.client_ids[:CLIENT_NUM])
pool.close()
pool.join()
print(time.time() - start_time)
client_ids = stackoverflow_train.client_ids[:CLIENT_NUM]
with open('tf_strings_concatenated.data', 'wb') as file:
pickle.dump(res, file)
with open('datasets/tf_strings_concatenated.data', 'rb') as file:
res = pickle.load(file)
INPUT_SIZE = 500 # restricted by tokenizer
batch_size = 256 # For batch_size=256 it takes about 3.5Gb of memory
all_embeddings = {}
captions = [x.numpy().decode("utf-8").lower()[:INPUT_SIZE] for x in res]
embeddings = []
for start_idx in tqdm(range(0, len(captions), batch_size)):
curr_captions = captions[start_idx : start_idx + batch_size]
tokens = tokenizer(curr_captions, return_tensors="pt", padding=True)
tokens = tokens.to(DEVICE)
with torch.no_grad():
outputs = model(**tokens)
curr_embeddings = outputs.pooler_output.cpu()
embeddings.extend(curr_embeddings)
# Now, we would like to match client ids with the embeddings
# so it will be easier for us to use them at test time
result = {client_id: [] for client_id in sorted(list(set(client_ids)))}
for client_id, emb in zip(client_ids, embeddings):
result[client_id].append(emb)
# Now, let's save the resulted dict using pytorch save
torch.save(result, 'datasets/embeddings_{}.pt'.format(CLIENT_NUM))
result = torch.load('datasets/embeddings_5000.pt')
embeddings = [y[0] for _, y in result.items()]
```
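The embedding loop above batches the captions by slicing with a stride of `batch_size`. The same pattern in isolation, as a small generator:

```python
def batched(items, batch_size):
    # Yield consecutive slices, mirroring the start_idx loop above;
    # the final batch may be shorter than batch_size.
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

batches = list(batched(list(range(10)), 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```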
# Two clusters of clients
## Set of clients
```
embeddings_ = [x.numpy() for x in embeddings]
numpy_embeddings = np.array(embeddings_)
numpy_embeddings.shape
import nest_asyncio
from utils_federated.datasets import stackoverflow_tag_prediction
nest_asyncio.apply()
train_fl, test_fl = stackoverflow_tag_prediction.get_federated_datasets(train_client_batch_size=500)
test_client_ids = set(test_fl.client_ids)
participating_client_numbers = []
for i in range(CLIENT_NUM):
curr_client_id = client_ids[i]
if not curr_client_id in test_client_ids:
continue
client_dataset = test_fl.create_tf_dataset_for_client(curr_client_id)
for element in client_dataset:
if element[0].shape[0] >= 50:
participating_client_numbers.append(i)
break
participating_client_ids = [client_ids[i] for i in participating_client_numbers]
numpy_embeddings = numpy_embeddings[participating_client_numbers]
```
## KMeans
```
from sklearn.cluster import KMeans
number_of_clusters = 10
kmeans = KMeans(n_clusters=number_of_clusters, n_init=100, max_iter=1000, tol=1e-50, random_state=0, algorithm='full')
kmeans.fit(X=numpy_embeddings)
```
## Clusters
```
import matplotlib.pyplot as plt
plt.hist(kmeans.labels_, bins=number_of_clusters);
label_count = np.zeros(shape=(number_of_clusters,))
for label in kmeans.labels_:
label_count[label] += 1
label_count
over_40_labels = [i for i in range(number_of_clusters) if label_count[i] >= 40 ]
label_to_client = {label: [] for label in range(number_of_clusters)}
for idx in range(len(participating_client_numbers)):
label_to_client[kmeans.labels_[idx]].append(participating_client_ids[idx])
for key in label_to_client:
if len(label_to_client[key]) != label_count[key]:
print('Error')
# intersecting_label_to_client = {label: [] for label in over_40_labels}
# test_clients_list = sorted(list(set(stackoverflow_test.client_ids)))
# for label in over_40_labels:
# for client in label_to_client[label]:
# if client in test_clients_list:
# intersecting_label_to_client[label].append(client)
# for label in intersecting_label_to_client:
# print(label, len(intersecting_label_to_client[label]))
# selected_labels = [label for label in intersecting_label_to_client if len(intersecting_label_to_client[label]) > 40]
selected_labels = over_40_labels
selected_labels
```
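The manual `label_count` loop above can be written in one line with `np.bincount`, which counts occurrences of each label index:

```python
import numpy as np

# Toy label array; in the notebook this would be kmeans.labels_.
labels = np.array([0, 2, 2, 1, 0, 2])
label_count = np.bincount(labels, minlength=4)
print(label_count.tolist())  # [2, 1, 3, 0]
```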
## Saving clients
```
for counter, label in enumerate(selected_labels):
with open('datasets/clients_cluster_{}.data'.format(counter), 'wb') as file:
pickle.dump(label_to_client[label], file)
```
<a data-flickr-embed="true" href="https://www.flickr.com/photos/157237655@N08/46489714642/in/datetaken-public/" title="YOLO model training in progress"><img src="https://farm8.staticflickr.com/7840/46489714642_d69661a409_b.jpg" width="1024" height="797" alt="YOLO model training in progress"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
This is the fifth blog post of [Object Detection with YOLO blog series](https://fairyonice.github.io/tag/object-detection-using-yolov2-on-pascal-voc2012-series.html). This blog finally trains the model using the scripts developed in the [previous blog posts](https://fairyonice.github.io/tag/object-detection-using-yolov2-on-pascal-voc2012-series.html).
I will use PASCAL VOC2012 data.
This blog assumes that the readers have read the previous blog posts - [Part 1](https://fairyonice.github.io/Part_1_Object_Detection_with_Yolo_for_VOC_2014_data_anchor_box_clustering.html), [Part 2](https://fairyonice.github.io/Part%202_Object_Detection_with_Yolo_using_VOC_2014_data_input_and_output_encoding.html), [Part 3](https://fairyonice.github.io/Part_3_Object_Detection_with_Yolo_using_VOC_2012_data_model.html), [Part 4](https://fairyonice.github.io/Part_4_Object_Detection_with_Yolo_using_VOC_2012_data_loss.html).
## Andrew Ng's YOLO lecture
- [Neural Networks - Bounding Box Predictions](https://www.youtube.com/watch?v=gKreZOUi-O0&t=0s&index=7&list=PL_IHmaMAvkVxdDOBRg2CbcJBq9SY7ZUvs)
- [C4W3L06 Intersection Over Union](https://www.youtube.com/watch?v=ANIzQ5G-XPE&t=7s)
- [C4W3L07 Nonmax Suppression](https://www.youtube.com/watch?v=VAo84c1hQX8&t=192s)
- [C4W3L08 Anchor Boxes](https://www.youtube.com/watch?v=RTlwl2bv0Tg&t=28s)
- [C4W3L09 YOLO Algorithm](https://www.youtube.com/watch?v=9s_FpMpdYW8&t=34s)
## Reference
- [You Only Look Once:Unified, Real-Time Object Detection](https://arxiv.org/pdf/1506.02640.pdf)
- [YOLO9000:Better, Faster, Stronger](https://arxiv.org/pdf/1612.08242.pdf)
- [experiencor/keras-yolo2](https://github.com/experiencor/keras-yolo2)
## Reference in my blog
- [Part 1 Object Detection using YOLOv2 on Pascal VOC2012 - anchor box clustering](https://fairyonice.github.io/Part_1_Object_Detection_with_Yolo_for_VOC_2014_data_anchor_box_clustering.html)
- [Part 2 Object Detection using YOLOv2 on Pascal VOC2012 - input and output encoding](https://fairyonice.github.io/Part%202_Object_Detection_with_Yolo_using_VOC_2014_data_input_and_output_encoding.html)
- [Part 3 Object Detection using YOLOv2 on Pascal VOC2012 - model](https://fairyonice.github.io/Part_3_Object_Detection_with_Yolo_using_VOC_2012_data_model.html)
- [Part 4 Object Detection using YOLOv2 on Pascal VOC2012 - loss](https://fairyonice.github.io/Part_4_Object_Detection_with_Yolo_using_VOC_2012_data_loss.html)
- [Part 5 Object Detection using YOLOv2 on Pascal VOC2012 - training](https://fairyonice.github.io/Part_5_Object_Detection_with_Yolo_using_VOC_2012_data_training.html)
- [Part 6 Object Detection using YOLOv2 on Pascal VOC 2012 data - inference on image](https://fairyonice.github.io/Part_6_Object_Detection_with_Yolo_using_VOC_2012_data_inference_image.html)
- [Part 7 Object Detection using YOLOv2 on Pascal VOC 2012 data - inference on video](https://fairyonice.github.io/Part_7_Object_Detection_with_Yolo_using_VOC_2012_data_inference_video.html)
## My GitHub repository
This repository contains all the IPython notebooks in this blog series and the functions (see backend.py).
- [FairyOnIce/ObjectDetectionYolo](https://github.com/FairyOnIce/ObjectDetectionYolo)
```
import matplotlib.pyplot as plt
import numpy as np
import os, sys
import tensorflow as tf
print(sys.version)
%matplotlib inline
```
## Define anchor box
<code>ANCHORS</code> defines the number of anchor boxes and the shape of each anchor box.
The choice of the anchor box specialization is already discussed in [Part 1 Object Detection using YOLOv2 on Pascal VOC2012 - anchor box clustering](https://fairyonice.github.io/Part_1_Object_Detection_with_Yolo_for_VOC_2014_data_anchor_box_clustering.html).
Based on the K-means analysis in the previous blog post, I will select 4 anchor boxes with the following widths and heights. The widths and heights are rescaled to the grid-cell scale (assuming a 13-by-13 grid). See [Part 2 Object Detection using YOLOv2 on Pascal VOC2012 - input and output encoding](https://fairyonice.github.io/Part%202_Object_Detection_with_Yolo_using_VOC_2014_data_input_and_output_encoding.html) to learn how I rescale the anchor box shapes into the grid-cell scale.
Here I choose 4 anchor boxes. With 13 by 13 grids, every frame gets 4 x 13 x 13 = 676 bounding box predictions.
```
ANCHORS = np.array([1.07709888, 1.78171903, # anchor box 1, width , height
2.71054693, 5.12469308, # anchor box 2, width, height
10.47181473, 10.09646365, # anchor box 3, width, height
5.48531347, 8.11011331]) # anchor box 4, width, height
```
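The flat `ANCHORS` array packs `(width, height)` pairs back to back. A quick sketch of recovering the per-box shapes and the anchor count (the `BOX` value computed later in the notebook as `len(ANCHORS)/2`):

```python
import numpy as np

ANCHORS = np.array([1.07709888, 1.78171903,    # anchor box 1
                    2.71054693, 5.12469308,    # anchor box 2
                    10.47181473, 10.09646365,  # anchor box 3
                    5.48531347, 8.11011331])   # anchor box 4

anchor_wh = ANCHORS.reshape(-1, 2)  # one (width, height) row per anchor box
BOX = anchor_wh.shape[0]            # number of anchor boxes
print(BOX)  # 4
```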
## Define label vector containing the 20 object class names.
```
LABELS = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
'bus', 'car', 'cat', 'chair', 'cow',
'diningtable','dog', 'horse', 'motorbike', 'person',
'pottedplant','sheep', 'sofa', 'train', 'tvmonitor']
```
## Read images and annotations into memory
Use the pre-processing code for parsing annotation at [experiencor/keras-yolo2](https://github.com/experiencor/keras-yolo2).
This <code>parse_annotation</code> function is already used in [Part 1 Object Detection using YOLOv2 on Pascal VOC2012 - anchor box clustering](https://fairyonice.github.io/Part_1_Object_Detection_with_Yolo_for_VOC_2014_data_anchor_box_clustering.html) and is saved in my Python script.
This script can be downloaded at [my Github repository, FairyOnIce/ObjectDetectionYolo/backend](https://github.com/FairyOnIce/ObjectDetectionYolo/blob/master/backend.py).
```
### The location where the VOC2012 data is saved.
train_image_folder = "../ObjectDetectionRCNN/VOCdevkit/VOC2012/JPEGImages/"
train_annot_folder = "../ObjectDetectionRCNN/VOCdevkit/VOC2012/Annotations/"
np.random.seed(1)
from backend import parse_annotation
train_image, seen_train_labels = parse_annotation(train_annot_folder,
train_image_folder,
labels=LABELS)
print("N train = {}".format(len(train_image)))
```
## Instantiate batch generator object
<code>SimpleBatchGenerator</code> is discussed and used in
[Part 2 Object Detection using YOLOv2 on Pascal VOC2012 - input and output encoding](https://fairyonice.github.io/Part%202_Object_Detection_with_Yolo_using_VOC_2014_data_input_and_output_encoding.html).
This script can be downloaded at [my Github repository, FairyOnIce/ObjectDetectionYolo/backend](https://github.com/FairyOnIce/ObjectDetectionYolo/blob/master/backend.py).
```
from backend import SimpleBatchGenerator
BATCH_SIZE = 200
IMAGE_H, IMAGE_W = 416, 416
GRID_H, GRID_W = 13 , 13
TRUE_BOX_BUFFER = 50
BOX = int(len(ANCHORS)/2)
generator_config = {
'IMAGE_H' : IMAGE_H,
'IMAGE_W' : IMAGE_W,
'GRID_H' : GRID_H,
'GRID_W' : GRID_W,
'LABELS' : LABELS,
'ANCHORS' : ANCHORS,
'BATCH_SIZE' : BATCH_SIZE,
'TRUE_BOX_BUFFER' : TRUE_BOX_BUFFER,
}
def normalize(image):
return image / 255.
train_batch_generator = SimpleBatchGenerator(train_image, generator_config,
norm=normalize, shuffle=True)
```
## Define model
We define a YOLO model.
The model definition function is already discussed in [Part 3 Object Detection using YOLOv2 on Pascal VOC2012 - model](https://fairyonice.github.io/Part_3_Object_Detection_with_Yolo_using_VOC_2012_data_model.html), and all the code is available at [my Github](https://github.com/FairyOnIce/ObjectDetectionYolo/blob/master/backend.py).
```
from backend import define_YOLOv2, set_pretrained_weight, initialize_weight
CLASS = len(LABELS)
model, true_boxes = define_YOLOv2(IMAGE_H,IMAGE_W,GRID_H,GRID_W,TRUE_BOX_BUFFER,BOX,CLASS,
trainable=False)
model.summary()
```
## Initialize the weights
The initialization of the weights is already discussed in [Part 3 Object Detection using YOLOv2 on Pascal VOC2012 - model](https://fairyonice.github.io/Part_3_Object_Detection_with_Yolo_using_VOC_2012_data_model.html).
All the code from [Part 3](https://fairyonice.github.io/Part_3_Object_Detection_with_Yolo_using_VOC_2012_data_model.html) is stored at [my Github](https://github.com/FairyOnIce/ObjectDetectionYolo/blob/master/backend.py).
```
path_to_weight = "./yolov2.weights"
nb_conv = 22
model = set_pretrained_weight(model,nb_conv, path_to_weight)
layer = model.layers[-4] # the last convolutional layer
initialize_weight(layer,sd=1/(GRID_H*GRID_W))
```
## Loss function
We already discussed the loss function of YOLOv2 implemented by [experiencor/keras-yolo2](https://github.com/experiencor/keras-yolo2) in [Part 4 Object Detection using YOLOv2 on Pascal VOC2012 - loss](https://fairyonice.github.io/Part_4_Object_Detection_with_Yolo_using_VOC_2012_data_loss.html).
I modified the codes and the codes are available at [my Github](https://github.com/FairyOnIce/ObjectDetectionYolo/blob/master/backend.py).
```
from backend import custom_loss_core
help(custom_loss_core)
```
Notice that this custom function <code>custom_loss_core</code> depends not only on <code>y_true</code> and <code>y_pred</code> but also on various hyperparameters.
Unfortunately, Keras's loss function API does not accept any parameters except <code>y_true</code> and <code>y_pred</code>, so these hyperparameters need to be defined globally.
To do this, I will define a wrapper function <code>custom_loss</code>.
```
GRID_W = 13
GRID_H = 13
BATCH_SIZE = 34
LAMBDA_NO_OBJECT = 1.0
LAMBDA_OBJECT = 5.0
LAMBDA_COORD = 1.0
LAMBDA_CLASS = 1.0
def custom_loss(y_true, y_pred):
return(custom_loss_core(
y_true,
y_pred,
true_boxes,
GRID_W,
GRID_H,
BATCH_SIZE,
ANCHORS,
LAMBDA_COORD,
LAMBDA_CLASS,
LAMBDA_NO_OBJECT,
LAMBDA_OBJECT))
```
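The wrapper works because Python closures capture the hyperparameters from the enclosing scope, so Keras only ever sees a `(y_true, y_pred)` signature. A minimal, framework-free sketch of the same pattern (toy loss, not the YOLO loss):

```python
def make_loss(lambda_coord):
    # lambda_coord is captured by the closure, not passed in by Keras.
    def loss(y_true, y_pred):
        return lambda_coord * abs(y_true - y_pred)
    return loss

custom = make_loss(5.0)
print(custom(3.0, 1.0))  # 10.0
```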
## Training starts here!
Finally, we start the training here.
We train only the final (23rd) layer and freeze the other weights.
This is because I am, unfortunately, using a CPU environment.
```
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import SGD, Adam, RMSprop
dir_log = "logs/"
try:
os.makedirs(dir_log)
except:
pass
BATCH_SIZE = 32
generator_config['BATCH_SIZE'] = BATCH_SIZE
early_stop = EarlyStopping(monitor='loss',
min_delta=0.001,
patience=3,
mode='min',
verbose=1)
checkpoint = ModelCheckpoint('weights_yolo_on_voc2012.h5',
monitor='loss',
verbose=1,
save_best_only=True,
mode='min',
period=1)
optimizer = Adam(lr=0.5e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
#optimizer = SGD(lr=1e-4, decay=0.0005, momentum=0.9)
#optimizer = RMSprop(lr=1e-4, rho=0.9, epsilon=1e-08, decay=0.0)
model.compile(loss=custom_loss, optimizer=optimizer)
model.fit_generator(generator = train_batch_generator,
steps_per_epoch = len(train_batch_generator),
epochs = 50,
verbose = 1,
#validation_data = valid_batch,
#validation_steps = len(valid_batch),
callbacks = [early_stop, checkpoint],
max_queue_size = 3)
```
[FairyOnIce/ObjectDetectionYolo](https://github.com/FairyOnIce/ObjectDetectionYolo)
contains this ipython notebook and all the functions that I defined in this notebook.
By accident, I stopped the notebook.
Let's resume the training from the saved checkpoint.
```
model.load_weights('weights_yolo_on_voc2012.h5')
model.fit_generator(generator = train_batch_generator,
steps_per_epoch = len(train_batch_generator),
epochs = 50,
verbose = 1,
#validation_data = valid_batch,
#validation_steps = len(valid_batch),
callbacks = [early_stop, checkpoint],
max_queue_size = 3)
```
## Experimental set-up: ##
This code will generate experimental files that can either be independently hosted on a website and run with recruited participants, or via our [MTurk iPython notebook](https://github.com/a-newman/mturk-api-notebook) be used for launching Amazon Mechanical Turk (MTurk) HITs.
An experiment is composed of different sets of images:
* **target images** are the images you want to collect attention data on - those are images that you provide (in directory `sourcedir` below)
* **tutorial images** are images that will be shown to participants at the beginning of the experiment to get them familiarized with the codecharts set-up (you can reuse the tutorial image we provide, or provide your own in directory `tutorial_source_dir` below)
* *hint: if your images are very different in content from the images in our set, you may want to train your participants on your own images, to avoid a context switch between the tutorial and main experiment*
* **sentinel images** are images interspersed throughout the experiment where participant attention is guided to a very specific point on the screen, used as validation/calibration images to ensure participants are actually moving their eyes and looking where they're supposed to; the code below will intersperse images from the `sentinel_target_images` directory we provide throughout your experimental sequence
* sentinel images can be interspersed throughout both the tutorial and target images, or excluded from the tutorial (via `add_sentinels_to_tutorial` flag below); we recommend having sentinel images as part of the tutorial to familiarize participants with such images as well
The code below will populate the `rootdir` task directory with #`num_subject_files` subject files for you, where each subject file corresponds to an experiment you can run on a single participant. For each subject file, a set of #`num_images_per_sf` images will be randomly sampled from the `sourcedir` image directory. A set of #`num_sentinels_per_sf` sentinel images will also be sampled from the `sentinel_imdir` image directory and distributed throughout the experiment. A tutorial will be generated at the beginning of the experiment with #`num_imgs_per_tutorial` images randomly sampled from the `tutorial_source_dir` image directory, along with an additional #`num_sentinels_per_tutorial` sentinel files distributed throughout the tutorial (if the `add_sentinels_to_tutorial` flag is set to true).
```
import os
import string
import random
import json
import matplotlib.pyplot as plt
import numpy as np
import base64
import glob
sourcedir = '../demo_experiment_images/' # replace this with your own directory of experiment images
tutorial_source_dir = 'tutorial_images' # you can reuse the tutorial images we provide, or provide your own directory
# PARAMETERS for generating subject files
num_subject_files = 3 # number of subject files to generate (i.e., # of mturk assignments that will be put up)
num_images_per_sf = 35 # number of target images per subject file
num_imgs_per_tutorial = 3 # number of tutorial images per subject file
num_sentinels_per_sf = 5 # number of sentinel images to distribute throughout the experiment (excluding the tutorial)
add_sentinels_to_tutorial = True # whether to have sentinel images as part of the tutorial
num_sentinels_per_tutorial = 3 # number of sentinel images to distribute throughout the tutorial
```
Another bit of terminology and experimental logistics involves **buckets**, which are a way to distribute experiment stimuli so that multiple experiments can be run in parallel (and participants can be reused for different subsets of images). If you have a lot of images that you want to collect data on, and for each participant you are sampling a set of only #`num_images_per_sf`, then you might have to generate a large `num_subject_files` in order to have enough data points per image. A way to speed up data collection is to split all the target images into #`num_buckets` disjoint buckets, and then to generate subject files per bucket. Given that subject files generated per bucket are guaranteed to have a disjoint set of images, the same participant can be run on multiple subject files from different buckets. MTurk HITs corresponding to different buckets can be launched all at once. In summary, in MTurk terms, you can generate as many HITs as `num_buckets` specified below, and as many assignments per HIT as `num_subject_files`.
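Under the disjointness requirement just described, a minimal sketch of splitting images into buckets follows; the actual `distribute_images` helper may use a different scheme, so treat this as an illustration only:

```python
def split_into_buckets(items, num_buckets):
    # Round-robin assignment guarantees disjoint, near-equal buckets.
    buckets = [[] for _ in range(num_buckets)]
    for i, item in enumerate(items):
        buckets[i % num_buckets].append(item)
    return buckets

print(split_into_buckets(list(range(7)), 3))  # [[0, 3, 6], [1, 4], [2, 5]]
```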
In the codecharts methodology, a jittered grid of alphanumeric triplets appears after every image presentation (whether it is a target, sentinel, or tutorial image), since a participant will need to indicate where on the preceding image s/he looked, by reporting a triplet. To avoid generating an excessive number of codecharts (which bulks up all the subject files), we can reuse some codecharts across buckets. The way we do this is by pre-generating #`ncodecharts` codecharts, and then randomly sampling from these when generating the individual subject files.
```
# we pre-generate some codecharts and sentinel images so that we can reuse these across participants and buckets
# and potentially not have to generate as many files; these can be set to any number, and the corresponding code
# will just sample as many images as need per subject file
ncodecharts = num_subject_files*num_images_per_sf # number of codecharts to generate; can be changed
sentinel_images_per_bucket = num_subject_files*num_sentinels_per_sf # can be changed
# set these parameters
num_buckets = 1 # number of disjoint sets of subject files to create (for running multiple parallel HITs)
start_bucket_at = 0 # you can use this and the next parameter to generate more buckets if running the code later
which_buckets = [0] # a list of specific buckets e.g., [4,5,6] to generate experiment data for
rootdir = '../assets/task_data' # where all the experiment data will be stored
if not os.path.exists(rootdir):
print('Creating directory %s'%(rootdir))
os.makedirs(rootdir)
real_image_dir = os.path.join(rootdir,'real_images') # target images, distributed by buckets
real_CC_dir = os.path.join(rootdir,'real_CC') # codecharts corresponding to the target images
# (shared across buckets)
sentinel_image_dir = os.path.join(rootdir,'sentinel_images') # sentinel images, distributed by buckets
sentinel_CC_dir = os.path.join(rootdir,'sentinel_CC') # codecharts corresponding to the sentinel images
# (shared across buckets)
#sentinel_targetim_dir = os.path.join(rootdir, 'sentinel_target')
# this cell creates an `all_images` directory, copies images from sourcedir, and pads them to the required dimensions
import create_padded_image_dir
all_image_dir = os.path.join(rootdir,'all_images')
if not os.path.exists(all_image_dir):
print('Creating directory %s'%(all_image_dir))
os.makedirs(all_image_dir)
allfiles = []
for ext in ('*.jpeg', '*.png', '*.jpg'):
allfiles.extend(glob.glob(os.path.join(sourcedir, ext)))
print("%d files copied from %s to %s"%(len(allfiles),sourcedir,all_image_dir))
image_width,image_height = create_padded_image_dir.save_padded_images(all_image_dir,allfiles)
# this cell generates a central fixation cross the size of the required image dimensions
# it is a gray image with a white cross in the middle that is supposed to re-center participant gaze, and provide a
# momentary break, between consecutive images
from generate_central_fixation_cross import save_fixation_cross
save_fixation_cross(rootdir,image_width,image_height);
# this cell creates the requested number of buckets and distributes images from `all_image_dir` to the corresponding
# bucket directories inside `real_image_dir`
from distribute_image_files_by_buckets import distribute_images
distribute_images(all_image_dir,real_image_dir,num_buckets,start_bucket_at)
# this cell generates #ncodecharts "codecharts" (jittered grids of triplets) of the required image dimensions
import generate_codecharts
from create_codecharts_dir import create_codecharts
create_codecharts(real_CC_dir,ncodecharts,image_width,image_height)
```
We create sentinel images by taking a small object (one of a: fixation cross, red dot, or image of a face) and choosing a random location for it on a blank image (away from the image boundaries by at least `border_padding` pixels). The code below creates #`sentinel_images_per_bucket` such sentinel images in each bucket.
```
# this cell generates #sentinel_images_per_bucket sentinel images per bucket, along with the corresponding codecharts
import generate_sentinels
# settings for generating sentinels
sentinel_type = "img" # one of 'fix_cross', 'red_dot', or 'img'
sentinel_imdir = 'sentinel_target_images' # directory where to find face images to use for generating sentinel images
# only relevant if sentinel_type="img"
border_padding = 100 # used to guarantee that chosen sentinel location is not too close to border to be hard to spot
generate_sentinels.generate_sentinels(sentinel_image_dir,sentinel_CC_dir,num_buckets,start_bucket_at,sentinel_images_per_bucket,\
image_width,image_height,border_padding,sentinel_type,sentinel_imdir)
# this cell generates codecharts corresponding to tutorial images, as well as sentinel images for the tutorial
from generate_tutorials import generate_tutorials
# inherit border_padding and sentinel type from above cell
tutorial_image_dir = os.path.join(rootdir,'tutorial_images') # where processed tutorial images will be saved
if not os.path.exists(tutorial_image_dir):
print('Creating directory %s'%(tutorial_image_dir))
os.makedirs(tutorial_image_dir)
allfiles = []
for ext in ('*.jpeg', '*.png', '*.jpg'):
allfiles.extend(glob.glob(os.path.join(tutorial_source_dir, ext)))
create_padded_image_dir.save_padded_images(tutorial_image_dir,allfiles,toplot=False,maxwidth=image_width,maxheight=image_height)
# TODO: or pick a random set of images to serve as tutorial images
N = 6 # number of images to use for tutorials (these will be sampled from to generate subject files below)
# note: make this larger than num_imgs_per_tutorial so not all subject files have the same tutorials
N_sent = 50 # number of sentinels to use for tutorials
# note: if equal to num_sentinels_per_tutorial, all subject files will have the same tutorial sentinels
generate_tutorials(tutorial_image_dir,rootdir,image_width,image_height,border_padding,N,sentinel_type,sentinel_imdir,N_sent)
```
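As a library-free sketch of the random sentinel placement described earlier, where `border_padding` keeps the target away from the image edges (the real `generate_sentinels` code may differ in details):

```python
import random

def pick_sentinel_location(width, height, border_padding, rng=None):
    rng = rng or random.Random(0)
    # Keep the sentinel target at least border_padding pixels from every edge.
    x = rng.randint(border_padding, width - border_padding)
    y = rng.randint(border_padding, height - border_padding)
    return x, y

x, y = pick_sentinel_location(1024, 768, 100)
assert 100 <= x <= 924 and 100 <= y <= 668
```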
Now that all the previous cells have generated the requisite image, codechart, sentinel, and tutorial files, the following code will generate `num_subject_files` individual subject files by sampling from the appropriate image directories and creating an experimental sequence.
```
start_subjects_at = 0 # where to start creating subject files at (if had created other subject files previously)
#if os.path.exists(os.path.join(rootdir,'subject_files/bucket0')):
# subjfiles = glob.glob(os.path.join(rootdir,'subject_files/bucket0/*.json'))
# start_subjects_at = len(subjfiles)
real_codecharts = glob.glob(os.path.join(real_CC_dir,'*.jpg'))
sentinel_codecharts = glob.glob(os.path.join(sentinel_CC_dir,'*.jpg'))
with open(os.path.join(real_CC_dir,'CC_codes_full.json')) as f:
real_codes_data = json.load(f) # contains mapping of image path to valid codes
## GENERATING SUBJECT FILES
subjdir = os.path.join(rootdir,'subject_files')
if not os.path.exists(subjdir):
os.makedirs(subjdir)
os.makedirs(os.path.join(rootdir,'full_subject_files'))
with open(os.path.join(rootdir,'tutorial_full.json')) as f:
tutorial_data = json.load(f)
tutorial_real_filenames = [fn for fn in tutorial_data.keys() if tutorial_data[fn]['flag']=='tutorial_real']
tutorial_sentinel_filenames = [fn for fn in tutorial_data.keys() if tutorial_data[fn]['flag']=='tutorial_sentinel']
# iterate over all buckets
for b in range(len(which_buckets)):
bucket = 'bucket%d'%(which_buckets[b])
img_bucket_dir = os.path.join(real_image_dir,bucket)
img_files = []
for ext in ('*.jpeg', '*.png', '*.jpg'):
img_files.extend(glob.glob(os.path.join(img_bucket_dir, ext)))
sentinel_bucket_dir = os.path.join(sentinel_image_dir,bucket)
sentinel_files = glob.glob(os.path.join(sentinel_bucket_dir,'*.jpg'))
with open(os.path.join(sentinel_bucket_dir,'sentinel_codes_full.json')) as f:
sentinel_codes_data = json.load(f) # contains mapping of image path to valid codes
subjdir = os.path.join(rootdir,'subject_files',bucket)
if not os.path.exists(subjdir):
os.makedirs(subjdir)
os.makedirs(os.path.join(rootdir,'full_subject_files',bucket))
print('Generating %d subject files in bucket %d'%(num_subject_files,which_buckets[b]))
# for each bucket, generate subject files
for i in range(num_subject_files):
random.shuffle(img_files)
random.shuffle(sentinel_files)
random.shuffle(real_codecharts)
# for each subject files, add real images
sf_data = []
full_sf_data = []
# ADDING TUTORIALS
random.shuffle(tutorial_real_filenames)
random.shuffle(tutorial_sentinel_filenames)
# initialize temporary arrays, because will shuffle real & sentinel tutorial images before adding to
# final subject files
sf_data_temp = []
full_sf_data_temp = []
for j in range(num_imgs_per_tutorial):
image_data = {}
fn = tutorial_real_filenames[j]
image_data["image"] = fn
image_data["codechart"] = tutorial_data[fn]['codechart_file'] # stores codechart path
image_data["codes"] = tutorial_data[fn]['valid_codes'] # stores valid codes
image_data["flag"] = 'tutorial_real' # stores flag of whether we have real or sentinel image
full_image_data = image_data.copy() # identical to image_data but includes a key for coordinates
full_image_data["coordinates"] = tutorial_data[fn]['coordinates'] # store (x, y) coordinate of each triplet
sf_data_temp.append(image_data)
full_sf_data_temp.append(full_image_data)
if add_sentinels_to_tutorial and num_sentinels_per_tutorial>0:
for j in range(num_sentinels_per_tutorial):
image_data2 = {}
fn = tutorial_sentinel_filenames[j]
image_data2["image"] = fn
image_data2["codechart"] = tutorial_data[fn]['codechart_file'] # stores codechart path
image_data2["correct_code"] = tutorial_data[fn]['correct_code']
image_data2["correct_codes"] = tutorial_data[fn]['correct_codes']
image_data2["codes"] = tutorial_data[fn]['valid_codes'] # stores valid codes
image_data2["flag"] = 'tutorial_sentinel' # stores flag of whether we have real or sentinel image
full_image_data2 = image_data2.copy() # identical to image_data but includes a key for coordinates
full_image_data2["coordinate"] = tutorial_data[fn]['coordinate'] # stores coordinate for correct code
full_image_data2["codes"] = tutorial_data[fn]['valid_codes'] # stores valid codes
full_image_data2["coordinates"] = tutorial_data[fn]['coordinates'] # store (x, y) coordinate of each triplet
sf_data_temp.append(image_data2)
full_sf_data_temp.append(full_image_data2)
# up to here, have sequentially added real images and then sentinel images to tutorial
# now want to shuffle them
perm = np.random.permutation(len(sf_data_temp))
for j in range(len(perm)): # note need to make sure sf_data and full_sf_data correspond
sf_data.append(sf_data_temp[perm[j]])
full_sf_data.append(full_sf_data_temp[perm[j]])
# ADDING REAL IMAGES
for j in range(num_images_per_sf):
image_data = {}
image_data["image"] = img_files[j] # stores image path
# select a code chart
pathname = real_codecharts[j] # since shuffled, will pick up first set of random codecharts
image_data["codechart"] = pathname # stores codechart path
image_data["codes"] = real_codes_data[pathname]['valid_codes'] # stores valid codes
image_data["flag"] = 'real' # stores flag of whether we have real or sentinel image
full_image_data = image_data.copy() # identical to image_data but includes a key for coordinates
full_image_data["coordinates"] = real_codes_data[pathname]['coordinates'] # store locations - (x, y) coordinate of each triplet
sf_data.append(image_data)
full_sf_data.append(full_image_data)
## ADDING SENTINEL IMAGES
sentinel_spacing = int(num_images_per_sf/float(num_sentinels_per_sf))
insertat = num_imgs_per_tutorial+num_sentinels_per_tutorial + 1  # don't insert before all the tutorial images are done
for j in range(num_sentinels_per_sf):
sentinel_image_data = {}
sentinel_pathname = sentinel_files[j]
sentinel_image_data["image"] = sentinel_pathname # stores image path
sentinel_image_data["codechart"] = sentinel_codes_data[sentinel_pathname]['codechart_file']
sentinel_image_data["correct_code"] = sentinel_codes_data[sentinel_pathname]['correct_code']
sentinel_image_data["correct_codes"] = sentinel_codes_data[sentinel_pathname]['correct_codes']
sentinel_image_data["codes"] = sentinel_codes_data[sentinel_pathname]["valid_codes"]
sentinel_image_data["flag"] = 'sentinel' # stores flag of whether we have real or sentinel image
# for analysis, save other attributes too
full_sentinel_image_data = sentinel_image_data.copy() # identical to sentinel_image_data but includes coordinate key
full_sentinel_image_data["coordinate"] = sentinel_codes_data[sentinel_pathname]["coordinate"] # stores the coordinate of the correct code
full_sentinel_image_data["codes"] = sentinel_codes_data[sentinel_pathname]["valid_codes"] # stores other valid codes
full_sentinel_image_data["coordinates"] = sentinel_codes_data[sentinel_pathname]["coordinates"] # stores the coordinate of the valid code
insertat = insertat + random.choice(range(sentinel_spacing-1,sentinel_spacing+2))
insertat = min(insertat,len(sf_data)-1)
sf_data.insert(insertat, sentinel_image_data)
full_sf_data.insert(insertat, full_sentinel_image_data)
# Add an image_id to each subject file entry
image_id = 0 # represents the index of the image in the subject file
for d in range(len(sf_data)):
sf_data[d]['index'] = image_id
full_sf_data[d]['index'] = image_id
image_id+=1
subj_num = start_subjects_at+i
with open(os.path.join(rootdir,'subject_files',bucket,'subject_file_%d.json'%(subj_num)), 'w') as outfile:
json.dump(sf_data, outfile)
print('Subject file %s DONE'%(outfile.name))
with open(os.path.join(rootdir,'full_subject_files',bucket,'subject_file_%d.json'%(subj_num)), 'w') as outfile:
json.dump(full_sf_data, outfile)
```
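The sentinel-insertion step above places one sentinel roughly every `num_images_per_sf / num_sentinels_per_sf` entries, jittered by ±1, and never before the tutorial block ends. A standalone sketch of just that placement rule (the function name is illustrative, not part of the pipeline):

```python
import random

def sentinel_positions(num_images_per_sf, num_sentinels_per_sf, tutorial_len):
    """Mimic the jittered even spacing used above: each sentinel lands about
    one `spacing` further along, +/- 1, starting after the tutorial block."""
    spacing = int(num_images_per_sf / float(num_sentinels_per_sf))
    insertat = tutorial_len + 1
    positions = []
    for _ in range(num_sentinels_per_sf):
        insertat += random.choice(range(spacing - 1, spacing + 2))
        positions.append(insertat)
    return positions

print(sentinel_positions(50, 5, 12))
```

Note that the real loop additionally clamps `insertat` to the current list length before inserting.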
################################################################################
**Author**: _Pradip Kumar Das_
**License:** https://github.com/PradipKumarDas/Competitions/blob/main/LICENSE
**Profile & Contact:** [LinkedIn](https://www.linkedin.com/in/daspradipkumar/) | [GitHub](https://github.com/PradipKumarDas) | [Kaggle](https://www.kaggle.com/pradipkumardas) | pradipkumardas@hotmail.com (Email)
################################################################################
# **CommonLit Readability Assessment**
## Determining Performance with BERT
**_Sections:_**
- _Required Packages & Helpers_
- _Configuration_
- _Data Preparation_
- _Modeling_
- _Evaluation_
- _Submission_
**_References (My Earlier Related Work):_**
1. [*Exploratory Data Analysis (EDA)*](https://www.kaggle.com/pradipkumardas/1-commonlit-readability-eda)
2. [*Baselining Model Performance with 1D ConvNet*](https://www.kaggle.com/pradipkumardas/2-commonlit-readability-baseline-perf-1dconvnet)
_**Note:** This notebook fine-tunes a pretrained BERT (base, cased) model without cross-validation to establish an initial prediction-performance baseline. Other advanced options and techniques are being explored and will be shared soon._
## Required Packages & Helpers
```
# Imports required packages
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from tensorflow.keras.metrics import RootMeanSquaredError
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer
import matplotlib.pyplot as plt
import seaborn as sns
```
## Configuration
```
# Sets data configurations
data_config ={
"n_bins": 20,
"n_splits": 5
}
# Sets model specific configurations
model_config = {
"model_name": "../input/huggingface-bert/bert-base-cased",
"model_path": "model.h5",
"num_labels": 1,
"learning_rate": 5e-5,
"batch_size": 32,
"max_length": 256,
"steps_per_epoch": 70, # ~(<training size>/<batch size>)
"epochs": 30,
}
# Sets the theme of the plots
sns.set_theme()
```
## Data Preparation
```
# Loads data
train = pd.read_csv("../input/commonlitreadabilityprize/train.csv")
test = pd.read_csv("../input/commonlitreadabilityprize/test.csv")
submission = pd.read_csv("../input/commonlitreadabilityprize/sample_submission.csv")
# Converts all excerpts to lowercase
train.excerpt = train.excerpt.str.lower()
```
**Segmenting Labels (Distributing Labels into Discrete Intervals)**: As the target is a continuous variable, it is segmented into discrete bins so that a nearly equal number of samples from each bin can be selected when splitting the data for model training.
```
# Segments discrete interval of label by marking each sample with a bin number
train["bin"] = pd.cut(
x=train.target, bins=data_config["n_bins"],
labels=[i for i in range(data_config["n_bins"])])
x_train, x_val, y_train, y_val = train_test_split(train.excerpt, train.target, test_size=0.2, shuffle=True, stratify=train.bin)
```
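To make the binning concrete, here is a toy illustration (synthetic values, not the competition data) of how `pd.cut` turns a continuous target into integer bin labels suitable for `stratify`:

```python
import numpy as np
import pandas as pd

# 100 evenly spaced "targets" cut into 5 equal-width bins labelled 0..4
targets = pd.Series(np.linspace(-3.0, 2.0, 100))
bins = pd.cut(targets, bins=5, labels=list(range(5)))
print(bins.value_counts().sort_index())
```

Because the toy values are uniform, each bin receives the same number of samples; on the real, roughly bell-shaped target the bin counts differ, which is exactly why stratifying on them keeps the train/validation split balanced.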
## Modeling
### Prepares Data for Model Training
```
# Creates tokenizer to prepare data for model training
tokenizer = AutoTokenizer.from_pretrained(model_config["model_name"])
# Encodes all training, validation and test data
train_encodings = tokenizer(
x_train.tolist(),
max_length=model_config["max_length"],
truncation=True,
padding="max_length",
return_tensors="tf")
val_encodings = tokenizer(
x_val.tolist(),
max_length=model_config["max_length"],
truncation=True,
padding="max_length",
return_tensors="tf")
test_encodings = tokenizer(
test.excerpt.tolist(),
max_length=model_config["max_length"],
truncation=True,
padding="max_length",
return_tensors="tf")
# Prepares training, validation and test data in a form (from encodings to dataset) the model expects to get trained on
train_dataset = tf.data.Dataset.from_tensor_slices((
{"input_ids": train_encodings["input_ids"], "attention_mask": train_encodings["attention_mask"]},
y_train))
train_dataset = train_dataset.repeat()
train_dataset = train_dataset.shuffle(1024)
train_dataset = train_dataset.batch(model_config["batch_size"])
train_dataset = train_dataset.prefetch(tf.data.AUTOTUNE)
val_dataset = tf.data.Dataset.from_tensor_slices((
{"input_ids": val_encodings["input_ids"], "attention_mask": val_encodings["attention_mask"]},
y_val))
val_dataset = val_dataset.batch(model_config["batch_size"])
val_dataset = val_dataset.prefetch(tf.data.AUTOTUNE)
test_dataset = tf.data.Dataset.from_tensor_slices(
{"input_ids": test_encodings["input_ids"], "attention_mask": test_encodings["attention_mask"]})
test_dataset = test_dataset.batch(model_config["batch_size"])
test_dataset = test_dataset.prefetch(tf.data.AUTOTUNE)
```
### Creates Model
```
# Creates primary model
encoder = TFAutoModelForSequenceClassification.from_pretrained(
model_config["model_name"], num_labels = model_config["num_labels"])
# Creates multi inputs for model
input_ids = layers.Input(shape=(model_config["max_length"], ), dtype=tf.int32, name="input_ids")
attention_mask = layers.Input(shape=(model_config["max_length"]), dtype=tf.int32, name="attention_mask")
# Sets model output
outputs = encoder({"input_ids": input_ids, "attention_mask": attention_mask})
# Creates a wrapper model
model = Model(inputs=[input_ids, attention_mask], outputs=outputs)
# Compiles and shows the summary to check
model.compile(
optimizer=keras.optimizers.Adam(model_config["learning_rate"]),
loss=keras.losses.MeanSquaredError(),
metrics=keras.metrics.RootMeanSquaredError())
model.summary()
```
### Sets Model Rules and Trains
```
# Configures monitor with rules for model training to stop if criterion match
early_stopping_monitor = EarlyStopping(
monitor="val_root_mean_squared_error", mode="min", patience=5, restore_best_weights=True, verbose=1)
# Configures rules to store model parameters (only weights) at its best during training
checkpoint = ModelCheckpoint(
model_config["model_path"], monitor="val_root_mean_squared_error", mode="min", save_best_only=True, save_weights_only=True)
# Fits (trains) the model
history = model.fit(
x=train_dataset,
validation_data=val_dataset,
steps_per_epoch=model_config["steps_per_epoch"],
callbacks=[early_stopping_monitor, checkpoint],
epochs=model_config["epochs"],
verbose=2).history
```
## Evaluation
```
plt.title("Training vs. Validation RMSE")
plt.plot(range(1, len(history["root_mean_squared_error"]) + 1), history["root_mean_squared_error"], "bo", label="Training RMSE")
plt.plot(range(1, len(history["val_root_mean_squared_error"]) + 1), history["val_root_mean_squared_error"], "b", label="Validation RMSE")
plt.xlabel("Epochs")
plt.ylabel("Loss (RMSE)")
plt.legend()
plt.show()
best_epoch = np.argmin(history["val_root_mean_squared_error"])
print(f"Best Validation Performance: {history['val_root_mean_squared_error'][best_epoch]} (RMSE) at epoch {best_epoch + 1}")
```
## Submission
```
# Predicts on test data
test_predictions = model.predict(test_dataset)["logits"]
# Stores predictions in the submission dataframe
submission.target = test_predictions
# Saves predictions into the submission file
submission.to_csv("submission.csv", index=False)
```
Template for test
```
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
```
Controlling for random negatives vs. sans-random sampling in imbalance techniques, using S, T, and Y phosphorylation. N phosphorylation is included as well, although no benchmarks are available for it yet.
Training data is from Phospho.ELM and benchmarks are from dbPTM.
```
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_s_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "S")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_s_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "S")
del x
```
Y Phosphorylation
```
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_Y_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "Y")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_Y_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "Y")
del x
```
T Phosphorylation
```
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_t_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "T")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_t_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "T")
del x
```
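The three cells above differ only in the training file, benchmark residue, and printed tag. Under the same assumed `Predictor` interface (`load_data`, `process_data`, `supervised_training`, `benchmark`), they could be collapsed into a single helper — a sketch, not part of the original code:

```python
def run_residue_comparison(predictor_factory, training_file, benchmark_file,
                           residue, imbalance_functions, classifier="mlp_adam"):
    """For each imbalance technique, benchmark once without random negatives
    (random_data=0) and once with them (random_data=1)."""
    for technique in imbalance_functions:
        for random_data in (0, 1):
            print("sans-random" if random_data == 0 else "random", technique)
            p = predictor_factory()
            p.load_data(file=training_file)
            p.process_data(vector_function="sequence", amino_acid=residue,
                           imbalance_function=technique, random_data=random_data)
            p.supervised_training(classifier)
            p.benchmark(benchmark_file, residue)
```

Usage would then be, e.g., `run_residue_comparison(Predictor, "Data/Training/clean_s_filtered.csv", "Data/Benchmarks/phos_stripped.csv", "S", par)`.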
# Figure 2: The spectral plateau disrupts the 1/f power law
```
from pathlib import Path
import matplotlib as mpl
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import mne
import yaml
import numpy as np
import scipy.signal as sig
from fooof import FOOOF
from fooof.sim.gen import gen_aperiodic
from utils import detect_plateau_onset, elec_phys_signal
%load_ext autoreload
%autoreload 2
```
#### Load params and make directory
```
yaml_file = open('params.yml')
parsed_yaml_file = yaml.load(yaml_file, Loader=yaml.FullLoader)
globals().update(parsed_yaml_file)
Path(fig_path).mkdir(parents=True, exist_ok=True)
```
#### Load empirical data of dataset 1
```
# Load data
data_path = "../data/Fig2/"
fname9 = "subj9_off_R1_raw.fif"
sub9 = mne.io.read_raw_fif(data_path + fname9, preload=True)
sub9.pick_channels(['STN_R01']) # select channel
# Filter out power line noise
# Important: Keep 50Hz line noise because we fit in that range!
filter_params = {"freqs": np.arange(100, 601, 50),
"notch_widths": .5,
"method": "spectrum_fit"}
sub9.notch_filter(**filter_params)
# Convert mne to numpy
sample_rate = 2400
start = int(0.5 * sample_rate) # artefact at the beginning of recording
stop = int(185 * sample_rate) # artefact at the end of recording
sub9 = sub9.get_data(start=start, stop=stop)[0]
# Calc Welch
welch_params = dict(fs=sample_rate, nperseg=sample_rate)
freq, psd_sub9 = sig.welch(sub9, **welch_params)
# Mask above highpass and below lowpass
highpass = 1
lowpass = 600
filt = (freq >= highpass) & (freq <= lowpass)
freq = freq[filt]
psd_sub9 = psd_sub9[filt]
```
#### a) Simulate PSD with white noise plateau
```
# Make 1/f-noise
sim_exponent = 2
aperiodic_params = dict(exponent=sim_exponent, nlv=0.00005, seed=3)
signal_sim, _ = elec_phys_signal(**aperiodic_params)
# Calc PSD
freq, psd_sim = sig.welch(signal_sim, **welch_params)
# Mask above highpass and below lowpass
freq, psd_sim = freq[filt], psd_sim[filt]
# Normalize
psd_sim /= psd_sim[0]
# Detect Noise floor
plateau_onset_sim = detect_plateau_onset(freq, psd_sim, f_start=1)
signal_sim = (freq <= plateau_onset_sim)
plateau_sim = (freq >= plateau_onset_sim)
ground_truth = gen_aperiodic(freq, np.array([0, sim_exponent]))
```
#### Set plot parameters for a)
```
# Pack lines to plot
plot_sim = (freq[signal_sim], psd_sim[signal_sim], c_sim) # (x, y, color)
plot_plateau = (freq[plateau_sim], psd_sim[plateau_sim], c_noise)
plot_ground = (freq, 10**ground_truth, c_sim)
# plot limits, ticks, and labels
xlim_a = (1, 600)
y_plateau = psd_sim[freq == xlim_a[1]][0]
ylim_a = (.4*y_plateau, 1)
xticks_a = [1, 10, 50, 100, 200, 600]
ylabel_a = "PSD [a.u.]"
axes_a = dict(xlim=xlim_a, ylim=ylim_a, xticks=xticks_a, xticklabels=xticks_a,
ylabel=ylabel_a)
# fit for different upper fit limits and plot
plot_fits, dic_fits, vlines = [], [], []
upper_fit_limits = xticks_a[1:-1]
fit_alphas = [.9, 0.6, .4, .2]
fm = FOOOF(verbose=False)
for i, lim in enumerate(upper_fit_limits):
fm.fit(freq, psd_sim, [1, lim])
exp = fm.get_params('aperiodic_params', 'exponent')
fit = gen_aperiodic(fm.freqs, fm.aperiodic_params_)
label = fr"1-{lim} Hz $\beta=${exp:.2f}"
plot_fit = fm.freqs, 10**fit, "-"
dic_fit = dict(c=c_fits[i], lw=2, label=label)
# annotate x-crossing
vline = lim, ylim_a[0], 10**fit[-1]
plot_fits.append(plot_fit)
dic_fits.append(dic_fit)
vlines.append(vline)
dic_line = dict(color=c_sim, linestyle=":", lw=.3)
```
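The dependence fitted above — the recovered exponent shrinking as the upper fit limit moves into the plateau — can be reproduced with a plain least-squares fit in log-log space (a simplified stand-in for FOOOF's aperiodic fit, on an illustrative synthetic spectrum):

```python
import numpy as np

freqs = np.arange(1, 601, dtype=float)
psd = freqs**-2.0 + 1e-4  # 1/f^2 decay plus a white-noise plateau

def fitted_exponent(freqs, psd, f_max):
    """Least-squares slope of log10(PSD) vs. log10(f) up to f_max."""
    mask = freqs <= f_max
    slope = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)[0]
    return -slope

for f_max in (10, 50, 200, 600):
    print(f_max, round(fitted_exponent(freqs, psd, f_max), 2))
```

The further the fit range reaches into the plateau, the flatter the fit and the smaller the recovered exponent — the same bias panel a) shows for FOOOF.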
#### Set plot parameters for b)
```
# Detect Noise floor
plateau_onset_sub9 = detect_plateau_onset(freq, psd_sub9, f_start=1)
# Mask signal/noise
signal_sub9 = (freq <= plateau_onset_sub9)
plateau_sub9 = (freq >= plateau_onset_sub9)
# Pack lines to plot
plot_sub9 = (freq, psd_sub9, c_real) # (x, y, color)
plot_sub9_signal = (freq[signal_sub9], psd_sub9[signal_sub9], c_real)
plot_sub9_plateau = (freq[plateau_sub9], psd_sub9[plateau_sub9], c_noise)
# Get oscillation coordinates sub9
peak_center_freq1 = 23
peak_center_freq2 = 350
peak_height1 = psd_sub9[freq == peak_center_freq1]
peak_height2 = psd_sub9[freq == peak_center_freq2]
plateau_height = psd_sub9[freq == plateau_onset_sub9]
# Create lines, arrows, and text to annotate noise floor
line_peak1 = dict(x=peak_center_freq1, ymin=plateau_height*0.8,
ymax=peak_height1, color=c_sim, linestyle="--", lw=.5)
line_peak2 = dict(x=peak_center_freq2, ymin=plateau_height*0.8,
ymax=peak_height2, color=c_sim, linestyle="--", lw=.5)
arrow1 = dict(s="",
xy=(plateau_onset_sub9, plateau_height*0.8),
xytext=(peak_center_freq1, plateau_height*0.8),
arrowprops=dict(arrowstyle="->", color=c_sim, lw=1))
arrow2 = dict(s="",
xy=(plateau_onset_sub9, plateau_height*0.8),
xytext=(peak_center_freq2, plateau_height*0.8),
arrowprops=dict(arrowstyle="->", color=c_sim, lw=1))
plateau_line9 = dict(s="",
xy=(plateau_onset_sub9, plateau_height*0.86),
xytext=(plateau_onset_sub9, plateau_height*.5),
arrowprops=dict(arrowstyle="-", color=c_sim, lw=2))
plateau_txt9 = dict(s=f" {plateau_onset_sub9} Hz",
xy=(plateau_onset_sub9, plateau_height*0.7),
xytext=(plateau_onset_sub9*1.02, plateau_height*.53))
xticks_b = [1, 10, 100, 600]
xlim_b = (1, 826)
ylim_b = (0.005, 2)
ylabel_b = r"PSD [$\mu$$V^2$/Hz]"
axes_b = dict(xlabel=None, ylabel=ylabel_b, xlim=xlim_b, ylim=ylim_b,
xticks=xticks_b, xticklabels=xticks_b)
```
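`detect_plateau_onset` is imported from `utils`, whose source is not shown here. A plausible heuristic with the same signature walks the log-log spectrum and returns the first frequency at which the local slope flattens (the window and threshold below are illustrative assumptions, not the paper's actual values):

```python
import numpy as np

def detect_plateau_onset_sketch(freq, psd, f_start=1, window=5, slope_thresh=-0.25):
    """Return the first frequency >= f_start where the local log-log slope
    rises above slope_thresh, i.e. where 1/f decay gives way to the plateau."""
    log_f, log_p = np.log10(freq), np.log10(psd)
    for i in range(np.searchsorted(freq, f_start), len(freq) - window):
        slope = np.polyfit(log_f[i:i + window], log_p[i:i + window], 1)[0]
        if slope > slope_thresh:
            return freq[i]
    return freq[-1]
```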
#### Simulate a signal that reproduces the empirical signal of b)
```
# Set 3 different 1/f-exponents to reproduce psd_sub9
exponent1 = 1
exponent15 = 1.5
exponent2 = 2
peak_params1 = [(3, 0, 1),
(5, 0, 1),
(10.5, 4, 3),
(16, 2, 1.5),
(23, 15, 5),
(42, 9, 15),
(50, .0075, .001),
(360, 25, 80)]
peak_params15 = [(3, 0, 1),
(5, .1, 1),
(10.5, 4, 3),
(16, 2, 1.5),
(23, 15, 5),
(42, 15, 22),
(50, .0075, .001),
(360, 30, 80)]
peak_params2 = [(3, 0.1, .6),
(5, .3, 1),
(10.5, 4.1, 3),
(16, 2.1, 2),
(23, 14, 5),
(42, 15, 20),
(50, .0075, .001),
(360, 25, 70)]
params1 = dict(exponent=exponent1, periodic_params=peak_params1,
highpass=True, nlv=.0002)
params15 = dict(exponent=exponent15, periodic_params=peak_params15,
highpass=True, nlv=.0002)
params2 = dict(exponent=exponent2, periodic_params=peak_params2,
highpass=True, nlv=.0002)
aperiodic_signal1, full_signal1 = elec_phys_signal(**params1)
aperiodic_signal15, full_signal15 = elec_phys_signal(**params15)
aperiodic_signal2, full_signal2 = elec_phys_signal(**params2)
# Calc Welch
freq, psd_aperiodic1 = sig.welch(aperiodic_signal1, **welch_params)
freq, psd_aperiodic15 = sig.welch(aperiodic_signal15, **welch_params)
freq, psd_aperiodic2 = sig.welch(aperiodic_signal2, **welch_params)
freq, psd_signal1 = sig.welch(full_signal1, **welch_params)
freq, psd_signal15 = sig.welch(full_signal15, **welch_params)
freq, psd_signal2 = sig.welch(full_signal2, **welch_params)
# Filter above highpass and below lowpass
freq = freq[filt]
psd_aperiodic1 = psd_aperiodic1[filt]
psd_aperiodic15 = psd_aperiodic15[filt]
psd_aperiodic2 = psd_aperiodic2[filt]
psd_signal1 = psd_signal1[filt]
psd_signal15 = psd_signal15[filt]
psd_signal2 = psd_signal2[filt]
# Normalize
norm1 = psd_aperiodic1[0] / psd_sub9[0]
norm15 = psd_aperiodic15[0] / psd_sub9[0]
norm2 = psd_aperiodic2[0] / psd_sub9[0]
psd_aperiodic1 /= norm1
psd_signal1 /= norm1
psd_aperiodic15 /= norm15
psd_signal15 /= norm15
psd_aperiodic2 /= norm2
psd_signal2 /= norm2
# Fit fooof
freq_range = [1, 95] # upper border above oscillations range, below plateau
fooof_params = dict(peak_width_limits=[1, 100], verbose=False)
fm1 = FOOOF(**fooof_params)
fm15 = FOOOF(**fooof_params)
fm2 = FOOOF(**fooof_params)
fm_sub9 = FOOOF(**fooof_params)
fm1.fit(freq, psd_signal1, freq_range)
fm15.fit(freq, psd_signal15, freq_range)
fm2.fit(freq, psd_signal2, freq_range)
fm_sub9.fit(freq, psd_sub9, freq_range)
# Extract fit results
exp1 = fm1.get_params("aperiodic", "exponent")
exp15 = fm15.get_params("aperiodic", "exponent")
exp2 = fm2.get_params("aperiodic", "exponent")
exp_sub9 = fm_sub9.get_params('aperiodic_params', 'exponent')
off_sub9 = fm_sub9.get_params('aperiodic_params', 'offset')
# Simulate fitting results
ap_fit1 = gen_aperiodic(fm1.freqs, fm1.aperiodic_params_)
ap_fit15 = gen_aperiodic(fm15.freqs, fm15.aperiodic_params_)
ap_fit2 = gen_aperiodic(fm2.freqs, fm2.aperiodic_params_)
ap_fit_sub9 = gen_aperiodic(fm_sub9.freqs, fm_sub9.aperiodic_params_)
fit1 = fm1.freqs, 10**ap_fit1, c_low
fit15 = fm15.freqs, 10**ap_fit15, c_med
fit2 = fm2.freqs, 10**ap_fit2, c_high
fit_sub9 = fm_sub9.freqs, 10**ap_fit_sub9, c_real
psd_plateau_fits = [fit1, fit15, fit2]
spec9_fit_label = fr"FOOOF LFP $\beta=${exp_sub9:.2f}"
```
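`elec_phys_signal` also comes from `utils`; its implementation is not shown. A minimal way to produce 1/f^β noise of this kind is to shape white noise in the frequency domain and transform back — a sketch under that assumption, not the actual helper:

```python
import numpy as np

def powerlaw_noise(exponent, n_samples, sample_rate, seed=3):
    """White complex spectrum scaled by 1/f^(exponent/2), so the resulting
    PSD falls off as 1/f^exponent."""
    rng = np.random.default_rng(seed)
    n_freqs = n_samples // 2 + 1
    amps = rng.normal(size=n_freqs) + 1j * rng.normal(size=n_freqs)
    freqs = np.fft.rfftfreq(n_samples, d=1 / sample_rate)
    freqs[0] = 1  # avoid division by zero at DC
    amps /= freqs ** (exponent / 2)
    return np.fft.irfft(amps, n=n_samples)
```

Periodic peaks like those in `periodic_params` would presumably be added on top of the spectrum before the inverse transform.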
#### Set plot parameters for c)
```
plot_sub9 = (freq, psd_sub9, c_real)
plot_signal1 = (freq, psd_signal1, c_low)
plot_signal15 = (freq, psd_signal15, c_med)
plot_signal2 = (freq, psd_signal2, c_high)
# Summarize
psd_plateau_vary = [plot_signal1, plot_signal15, plot_signal2]
plot_plateau1 = (freq, psd_aperiodic1, c_ground)
plot_plateau15 = (freq, psd_aperiodic15, c_ground)
plot_plateau2 = (freq, psd_aperiodic2, c_ground)
xlim_c = (1, 825)
xlabel_c = "Frequency in Hz"
# Summarize
plateau_labels = [fr"FOOOF flat $\beta=${exp1:.2f}",
fr"FOOOF med $\beta=${exp15:.2f}",
fr"FOOOF steep $\beta=${exp2:.2f}"]
psd_aperiodic_vary = [plot_plateau1, plot_plateau15, plot_plateau2]
labelpad = 5
leg_c = dict(ncol=3, loc=10, bbox_to_anchor=(.54, .4))
axes_c = dict(xticks=xticks_b, xticklabels=xticks_b, xlim=xlim_c)
freq_mask_fill_area = (freq > 1) & (freq <= freq_range[1])
freqs_fill = freq[freq_mask_fill_area]
plot_delta_low = (freqs_fill, psd_aperiodic1[freq_mask_fill_area],
10**ap_fit1[fm1.freqs > 1])
plot_delta_med = (freqs_fill, psd_aperiodic15[freq_mask_fill_area],
10**ap_fit15[fm1.freqs > 1])
plot_delta_high = (freqs_fill, psd_aperiodic2[freq_mask_fill_area],
10**ap_fit2[fm1.freqs > 1])
# Summarize
delta_power = [plot_delta_low, plot_delta_med, plot_delta_high]
colors_c = [c_low, c_med, c_high]
```
#### Plot settings
```
# Mpl settings
mpl.rcParams['xtick.labelsize'] = legend_fontsize
mpl.rcParams['ytick.labelsize'] = legend_fontsize
mpl.rcParams['axes.labelsize'] = legend_fontsize
mpl.rcParams['legend.fontsize'] = legend_fontsize
mpl.rcParams["axes.spines.right"] = False
mpl.rcParams["axes.spines.top"] = False
panel_labels = dict(x=0, y=1.01, fontsize=panel_fontsize,
fontdict=dict(fontweight="bold"))
line_fit = dict(lw=2, ls=":", zorder=5)
line_ground = dict(lw=.5, ls="--", zorder=5)
psd_aperiodic_kwargs = dict(lw=0.5)
```
# Figure 2
```
# Prepare Gridspec
fig = plt.figure(figsize=[fig_width, 6.5], constrained_layout=True)
gs0 = gridspec.GridSpec(3, 1, figure=fig, height_ratios=[6, 6, 1])
# a) and b) gridspec
gs00 = gridspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs0[0])
ax1 = fig.add_subplot(gs00[0])
ax2 = fig.add_subplot(gs00[1])
# c) gridspec
gs01 = gs0[1].subgridspec(1, 3)
ax3 = fig.add_subplot(gs01[0])
ax4 = fig.add_subplot(gs01[1])
ax5 = fig.add_subplot(gs01[2])
c_axes = [ax3, ax4, ax5]
# Legend suplot
gs02 = gs0[2]
ax_leg = fig.add_subplot(gs02)
ax_leg.axis("off")
# a)
ax = ax1
# Plot simulated PSD and ground truth
ax.loglog(*plot_sim, label=fr"1/f $\beta=${sim_exponent} + noise")
ax.loglog(*plot_ground, **line_ground,
label=fr"Ground truth $\beta=${sim_exponent}")
ax.loglog(*plot_plateau, label="Plateau")
# Plot fits for different upper fitting borders
for i in range(len(upper_fit_limits)):
ax.loglog(*plot_fits[i], **dic_fits[i])
ax.vlines(*vlines[i], **dic_line)
# Set axes
ax.set(**axes_a)
ax.legend()
ax.text(s="a", **panel_labels, transform=ax.transAxes)
# b)
ax = ax2
# Plot Sub 9
ax.loglog(*plot_sub9_signal, label="LFP")
ax.loglog(*plot_sub9_plateau, label="Plateau")
# Plot Peak lines
ax.vlines(**line_peak1)
ax.vlines(**line_peak2)
# Plot Arrow left and right
ax.annotate(**arrow1)
ax.annotate(**arrow2)
# Annotate plateau start
ax.annotate(**plateau_line9)
ax.annotate(**plateau_txt9, fontsize=legend_fontsize)
# Set axes
ax.set(**axes_b)
ax.legend(loc=0)
ax.text(s="b", **panel_labels, transform=ax.transAxes)
# c)
# Make sure we have just one label for each repetitive plot
spec9_label = ["STN-LFP", None, None]
spec9_fit_labels = [spec9_fit_label, None, None]
aperiodic_label = [None, None, "1/f + noise"]
for i, ax in enumerate(c_axes):
# Plot LFP and fooof fit
ax.loglog(*plot_sub9, alpha=0.3, lw=2, label=spec9_label[i])
ax.loglog(*fit_sub9, **line_fit, label=spec9_fit_labels[i])
# Plot sim low delta power and fooof fit
ax.loglog(*psd_plateau_vary[i])
ax.loglog(*psd_plateau_fits[i], lw=2, label=plateau_labels[i])
# Plot aperiodic component of sim
ax.loglog(*psd_aperiodic_vary[i], **psd_aperiodic_kwargs,
label=aperiodic_label[i])
# Indicate delta power as fill between aperiodic component
# and full spectrum
ax.fill_between(*delta_power[i], color=colors_c[i], alpha=0.2)
# Draw arrow
if i == 1:
ax.set_xlabel(xlabel_c)
# Save legend and set axes
if i == 0:
handles, labels = ax.get_legend_handles_labels()
ax.set_ylabel(ylabel_a, labelpad=labelpad)
ax.text(s="c", **panel_labels, transform=ax.transAxes)
else:
hands, labs = ax.get_legend_handles_labels()
handles.extend(hands)
labels.extend(labs)
ax.spines["left"].set_visible(False)
ax.set_yticks([], minor=True)
ax.set_yticks([])
ax.set(**axes_c)
# Set legend between subplots
leg = ax_leg.legend(handles, labels, **leg_c)
plt.savefig(fig_path + "Fig2.pdf", bbox_inches="tight")
plt.savefig(fig_path + "Fig2.png", dpi=1000, bbox_inches="tight")
plt.show()
```
```
import pandas as pd
from tensorflow.keras.utils import get_file
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import metrics
import warnings
warnings.filterwarnings('ignore')
from sklearn.ensemble import IsolationForest
def getData(file_path):
try:
path = get_file(file_path, origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz')
except:
print('Error Downloading')
raise
df = pd.read_csv(path, header = None)
df.dropna(inplace = True, axis = 1)
df.columns = [
'duration',
'protocol_type',
'service',
'flag',
'src_bytes',
'dst_bytes',
'land',
'wrong_fragment',
'urgent',
'hot',
'num_failed_logins',
'logged_in',
'num_compromised',
'root_shell',
'su_attempted',
'num_root',
'num_file_creations',
'num_shells',
'num_access_files',
'num_outbound_cmds',
'is_host_login',
'is_guest_login',
'count',
'srv_count',
'serror_rate',
'srv_serror_rate',
'rerror_rate',
'srv_rerror_rate',
'same_srv_rate',
'diff_srv_rate',
'srv_diff_host_rate',
'dst_host_count',
'dst_host_srv_count',
'dst_host_same_srv_rate',
'dst_host_diff_srv_rate',
'dst_host_same_src_port_rate',
'dst_host_srv_diff_host_rate',
'dst_host_serror_rate',
'dst_host_srv_serror_rate',
'dst_host_rerror_rate',
'dst_host_srv_rerror_rate',
'outcome'
]
return df
def Preprocessing(df, cat_col_idx, get_dummy = True):
df_columns = df.columns.tolist()
numerical_columns = np.delete(df_columns, cat_col_idx)
std = StandardScaler()
for col in numerical_columns:
df[col] = std.fit_transform(df[[col]])
if get_dummy == True:
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name, x)
df[dummy_name] = dummies[x]
df.drop(name, axis = 1, inplace = True)
encode_text_dummy(df, 'protocol_type')
encode_text_dummy(df, 'service')
encode_text_dummy(df, 'flag')
encode_text_dummy(df, 'logged_in')
encode_text_dummy(df, 'is_host_login')
encode_text_dummy(df, 'is_guest_login')
df.dropna(inplace = True, axis = 1)
return df
else:
df.drop(columns = ['protocol_type', 'service', 'flag', 'land', 'logged_in',
'is_host_login', 'is_guest_login'], inplace = True)
df.dropna(inplace = True, axis = 1)
return df
def SplitData(df, testsize = None, seed = None, method = None):
if testsize is None:
raise AssertionError("Testsize must be defined.")
if method is None:
raise AssertionError("Method must be defined. (Outlier | Novelty)")
if method == "Novelty":
normal = df['outcome'] == 'normal.'
attack = df['outcome'] != 'normal.'
df.drop(columns = 'outcome', inplace = True)
df_normal = df[normal]
df_attack = df[attack]
x_normal = df_normal.values
x_attack = df_attack.values
x_normal_train, x_normal_test = train_test_split(x_normal, test_size = testsize, random_state = seed)
return x_normal_train, x_normal_test, x_attack
elif method == "Outlier":
train, test = train_test_split(df, test_size = testsize, random_state = seed)
x_train = train.drop(columns = 'outcome')
y_train = train['outcome']
x_test = test.drop(columns = 'outcome')
y_test = test['outcome']
return x_train, x_test, y_train, y_test
df = getData('kddcup.data_10_percent.gz')
df = Preprocessing(df, [1,2,3,6,11,20,21,41], get_dummy = False)
```
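The `encode_text_dummy` helper defined above expands each categorical column into one 0/1 indicator column per category, named `<column>-<value>`. A minimal sketch of the same idea on a toy frame (the values are invented for illustration):

```python
import pandas as pd

# Toy frame standing in for the KDD columns (illustrative values only)
df = pd.DataFrame({"protocol_type": ["tcp", "udp", "tcp"],
                   "duration": [1.0, 2.0, 3.0]})

def encode_text_dummy(df, name):
    # One indicator column per category, named "<col>-<value>"
    dummies = pd.get_dummies(df[name])
    for x in dummies.columns:
        df["{}-{}".format(name, x)] = dummies[x]
    df.drop(name, axis=1, inplace=True)

encode_text_dummy(df, "protocol_type")
print(sorted(df.columns))  # -> ['duration', 'protocol_type-tcp', 'protocol_type-udp']
```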
# Modeling
```
class SimpleIsolationForest:
    def __init__(self, df):
        self.df = df
    def Modeling(self, train_data, seed):
        self.train_data = train_data
        self.seed = seed
        model = IsolationForest(random_state = self.seed).fit(self.train_data)
        self.model = model
    def Prediction(self, test_data, data_type):
        self.test_data = test_data
        def ConvertLabel(x):
            # IsolationForest returns -1 for anomalies and 1 for inliers;
            # relabel as 1 = anomaly, 0 = normal
            if x == -1:
                return 1
            else:
                return 0
        function = np.vectorize(ConvertLabel)
        if data_type not in ('Insample', 'OutOfSample', 'Attack'):
            raise AssertionError('Data Type must be defined. (Insample | OutOfSample | Attack)')
        pred = self.model.predict(self.test_data)
        pred = list(function(pred))
        print('{} Classification Result \n'.format(data_type))
        print('Normal Value: {}'.format(pred.count(0)))
        print('Anomaly Value: {}'.format(pred.count(1)))
        self.pred = pred
        return self.pred
```
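The class above only wraps fitting and prediction from scikit-learn. A hedged usage sketch of the same workflow on synthetic 2-D data (the cluster locations and sizes are invented for illustration, not taken from the KDD data):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
x_train = rng.normal(0, 1, size=(200, 2))   # stand-in for "normal" traffic
x_attack = rng.normal(6, 1, size=(20, 2))   # far-away cluster standing in for attacks

model = IsolationForest(random_state=42).fit(x_train)
pred = model.predict(x_attack)              # -1 = anomaly, +1 = inlier
anomaly = (pred == -1).astype(int)          # same 1/0 convention as ConvertLabel
print("flagged as anomaly:", anomaly.sum(), "of", len(anomaly))
```

Points drawn far from the training cloud receive short isolation paths and are flagged `-1`, which is the behavior the `Prediction` method relabels as `1`.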
# Viz
```
x_train, x_test, y_train, y_test = SplitData(df, testsize = 0.25, seed = 42, method = "Outlier")
fig, axs = plt.subplots(5, 7, figsize = (25, 30), facecolor = 'w', edgecolor = 'k')
axs = axs.ravel()
for i, columns in enumerate(x_train.columns):
    isolation_forest = IsolationForest()
    isolation_forest.fit(x_train[columns].values.reshape(-1,1))
    # span the range of the training column (not the full df) for the score curve
    xx = np.linspace(x_train[columns].min(), x_train[columns].max(), len(x_train)).reshape(-1,1)
    anomaly_score = isolation_forest.decision_function(xx)
    outlier = isolation_forest.predict(xx)
    axs[i].plot(xx, anomaly_score, label = 'Anomaly Score')
    axs[i].fill_between(xx.T[0], np.min(anomaly_score), np.max(anomaly_score),
                        where = outlier == -1, color = 'r',
                        alpha = .4, label = 'Outlier Region')
    axs[i].legend()
    axs[i].set_title(columns)
```
# Exercise 2: Measurement Error Mitigation
Present day quantum computers are subject to noise of various kinds. The principle behind error mitigation is to reduce the effects from a specific source of error. Here we will look at mitigating measurement errors, i.e., errors in determining the correct quantum state from measurements performed on qubits.
<img src="mitigation.png" width="900"/>
<center>Measurement Error Mitigation</center>
In the above picture, you can see the outcome of applying measurement error mitigation. On the left, the histogram shows results obtained using the device `ibmq_vigo`. The ideal result should have shown 50% counts $00000$ and 50% counts $10101$. Two features are notable here:
- First, notice that the result contains a skew toward $00000$. This is because of energy relaxation of the qubit during the measurement process. The relaxation takes the $\vert1\rangle$ state to the $\vert0\rangle$ state for each qubit.
- Second, notice that the result contains other counts beyond just $00000$ and $10101$. These arise due to various errors. One example of such errors comes from the discrimination after measurement, where the signal obtained from the measurement is identified as either $\vert0\rangle$ or $\vert1\rangle$.
The picture on the right shows the outcome of performing measurement error mitigation on the results. You can see that the device counts are closer to the ideal expectation of $50\%$ results in $00000$ and $50\%$ results in $10101$, while other counts have been significantly reduced.
## How measurement error mitigation works
We start by creating a set of circuits that prepare and measure each of the $2^n$ basis states, where $n$ is the number of qubits. For example, $n = 2$ qubits would prepare the states $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$ individually and see the resulting outcomes. The outcome statistics are then captured by a matrix $M$, where the element $M_{ij}$ gives the probability to get output state $|i\rangle$ when state $|j\rangle$ was prepared. Even for a state that is in an arbitrary superposition $|\psi \rangle = \sum_j \alpha_j |j\rangle$, the linearity of quantum mechanics allows us to write the noisy output state as $|\psi_{noisy}\rangle = M |\psi\rangle$.
The goal of measurement error mitigation is not to model the noise, but rather to apply a classical correction that undoes the errors. Given a noisy outcome, measurement error mitigation seeks to recover the initial state that led to that outcome. Using linear algebra we can see that given a noisy outcome $|\psi_{noisy}\rangle$, this can be done by applying the inverse of the matrix $M$, i.e., $|\psi \rangle = M^{-1} |\psi_{noisy}\rangle$. Note that the matrix $M$ recovered from the measurements is usually non-invertible, thus requiring a generalized inverse method to solve. Additionally, the noise is not deterministic, and has fluctuations, so this will in general not give you the ideal noise-free state, but it should bring you closer to it.
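The linear-algebra picture above can be sketched with plain NumPy for a single qubit. Column $j$ of $M$ is the measured outcome distribution when basis state $|j\rangle$ was prepared; the readout error rates below are invented purely for illustration:

```python
import numpy as np

# Invented readout errors: P(read 1 | prepared 0) = 0.02, P(read 0 | prepared 1) = 0.10
M = np.array([[0.98, 0.10],
              [0.02, 0.90]])

ideal = np.array([0.5, 0.5])           # ideal 50/50 outcome distribution
noisy = M @ ideal                      # what the noisy device would report
mitigated = np.linalg.pinv(M) @ noisy  # classical correction via the pseudo-inverse
print(np.round(mitigated, 6))          # -> recovers [0.5, 0.5]
```

In practice $M$ is estimated from a finite number of shots, so the raw pseudo-inverse can yield negative quasi-probabilities; a constrained least-squares fit (which the measurement filter used later in this exercise applies by default) is preferred.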
You can find a more detailed description of measurement error mitigation in [Chapter 5.2](https://qiskit.org/textbook/ch-quantum-hardware/measurement-error-mitigation.html) of the Qiskit textbook.
**The goal of this exercise is to create a calibration matrix $M$ that you can apply to noisy results (provided by us) to infer the noise-free results.**
---
For useful tips to complete this exercise as well as pointers for communicating with other participants and asking questions, please take a look at the following [repository](https://github.com/qiskit-community/may4_challenge_exercises). You will also find a copy of these exercises, so feel free to edit and experiment with these notebooks.
---
In Qiskit, creating the circuits that test all basis states to replace the entries for the matrix is done by the following code:
```
#initialization
%matplotlib inline
# Importing standard Qiskit libraries and configuring account
from qiskit import IBMQ
from qiskit.compiler import transpile, assemble
from qiskit.providers.ibmq import least_busy
from qiskit.tools.jupyter import *
from qiskit.tools.monitor import job_monitor
from qiskit.visualization import *
from qiskit.ignis.mitigation.measurement import complete_meas_cal, CompleteMeasFitter
provider = IBMQ.load_account() # load your IBM Quantum Experience account
# If you are a member of the IBM Q Network, fill your hub, group, and project information to
# get access to your premium devices.
# provider = IBMQ.get_provider(hub='', group='', project='')
from may4_challenge.ex2 import get_counts, show_final_answer
num_qubits = 5
meas_calibs, state_labels = complete_meas_cal(range(num_qubits), circlabel='mcal')
```
Next, run these circuits on a real device! You can choose your favorite device, but we recommend choosing the least busy one to decrease your wait time in the queue. Upon executing the following cell you will be presented with a widget that displays all the information about the least busy backend that was selected. Clicking on the "Error Map" tab will reveal the latest noise information for the device. Important for this challenge is the "readout" (measurement) error located on the left (and possibly right) side of the figure. It is common to see readout errors of a few percent on each qubit. These are the errors we are mitigating in this exercise.
```
# find the least busy device that has at least 5 qubits
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= num_qubits and
not x.configuration().simulator and x.status().operational==True))
backend
```
Run the next cell to implement all of the above steps. In order to average out fluctuations as much as possible, we recommend choosing the highest number of shots, i.e., `shots=8192` as shown below.
The call to `transpile` maps the measurement calibration circuits to the topology of the backend being used. `backend.run()` sends the circuits to the IBM Quantum device returning a `job` instance, whereas `%qiskit_job_watcher` keeps track of where your submitted job is in the pipeline.
```
# run experiments on a real device
shots = 8192
experiments = transpile(meas_calibs, backend=backend, optimization_level=3)
job = backend.run(assemble(experiments, shots=shots))
print(job.job_id())
%qiskit_job_watcher
```
Note that you might be in the queue for quite a while. You can expand the 'IBMQ Jobs' window that just appeared in the top left corner to monitor your submitted jobs. Make sure to keep your job ID in case you ran other jobs in the meantime. You can then easily access the results once your job is finished by running
```python
job = backend.retrieve_job('YOUR_JOB_ID')
```
Once you have the results of your job, you can create the calibration matrix and calibration plot using the following code. However, as the counts are given in a dictionary instead of a matrix, it is more convenient to use the measurement filter object that you can directly apply to the noisy counts to receive a dictionary with the mitigated counts.
```
# get measurement filter
cal_results = job.result()
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
meas_filter = meas_fitter.filter
#print(meas_fitter.cal_matrix)
meas_fitter.plot_calibration()
```
In the calibration plot you can see the correct outcomes on the diagonal, while all incorrect outcomes are off-diagonal. Most of the latter are due to $T_1$ relaxation decaying the states from $|1\rangle$ to $|0\rangle$ during the measurement, which is why the matrix is asymmetric.
Below, we provide you with an array of noisy counts for four different circuits. Note that as measurement error mitigation is a device-specific error correction, the array you receive depends on the backend that you have used before to create the measurement filter.
**Apply the measurement filter in order to get the mitigated data. Given this mitigated data, choose which error-free outcome would be most likely.**
As there are other types of errors for which we cannot correct with this method, you will not get completely noise-free results, but you should be able to guess the correct results from the trend of the mitigated results.
## i) Consider the first set of noisy counts:
```
# get noisy counts
noisy_counts = get_counts(backend)
plot_histogram(noisy_counts[0])
# apply measurement error mitigation and plot the mitigated counts
mitigated_counts_0 = meas_filter.apply(noisy_counts[0])
plot_histogram([mitigated_counts_0, noisy_counts[0]])
```
## Which of the following histograms most likely resembles the *error-free* counts of the same circuit?
a) <img src="hist_1a.png" width="500">
b) <img src="hist_1b.png" width="500">
c) <img src="hist_1c.png" width="500">
d) <img src="hist_1d.png" width="500">
```
# uncomment whatever answer you think is correct
#answer1 = 'a'
#answer1 = 'b'
answer1 = 'c'
#answer1 = 'd'
```
## ii) Consider the second set of noisy counts:
```
# plot noisy counts
plot_histogram(noisy_counts[1])
# apply measurement error mitigation
mitigated_counts_1 = meas_filter.apply(noisy_counts[1])
plot_histogram([mitigated_counts_1, noisy_counts[1]])
```
## Which of the following histograms most likely resembles the *error-free* counts of the same circuit?
a) <img src="hist_2a.png" width="500">
b) <img src="hist_2b.png" width="500">
c) <img src="hist_2c.png" width="500">
d) <img src="hist_2d.png" width="500">
```
# uncomment whatever answer you think is correct
#answer2 = 'a'
#answer2 = 'b'
#answer2 = 'c'
answer2 = 'd'
```
## iii) Next, consider the third set of noisy counts:
```
# plot noisy counts
plot_histogram(noisy_counts[2])
# apply measurement error mitigation
mitigated_counts_2 = meas_filter.apply(noisy_counts[2])
plot_histogram([mitigated_counts_2, noisy_counts[2]])
```
## Which of the following histograms most likely resembles the *error-free* counts of the same circuit?
a) <img src="hist_3a.png" width="500">
b) <img src="hist_3b.png" width="500">
c) <img src="hist_3c.png" width="500">
d) <img src="hist_3d.png" width="500">
```
# uncomment whatever answer you think is correct
#answer3 = 'a'
answer3 = 'b'
#answer3 = 'c'
#answer3 = 'd'
```
## iv) Finally, consider the fourth set of noisy counts:
```
# plot noisy counts
plot_histogram(noisy_counts[3])
# apply measurement error mitigation
mitigated_counts_3 = meas_filter.apply(noisy_counts[3])
plot_histogram([mitigated_counts_3, noisy_counts[3]])
```
## Which of the following histograms most likely resembles the *error-free* counts of the same circuit?
a) <img src="hist_4a.png" width="500">
b) <img src="hist_4b.png" width="500">
c) <img src="hist_4c.png" width="500">
d) <img src="hist_4d.png" width="500">
```
# uncomment whatever answer you think is correct
#answer4 = 'a'
answer4 = 'b'
#answer4 = 'c'
#answer4 = 'd'
```
The answer string of this exercise is just the string of all four answers. Copy and paste the output of the next line on the IBM Quantum Challenge page to complete the exercise and track your progress.
```
# answer string
show_final_answer(answer1, answer2, answer3, answer4)
```
Now that you are done, move on to the next exercise!
# 3. FE_XGBClassifier_GSCV
Kaggle score:
`pip install jupyter notebook tqdm pillow h5py seaborn scikit-learn scikit-image xgboost`
## Run name
```
import time
project_name = 'TalkingdataAFD2018'
step_name = 'FE_XGBClassifier_GSCV'
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = '%s_%s_%s' % (project_name, step_name, time_str)
print('run_name: %s' % run_name)
t0 = time.time()
```
## Important params
```
feature_run_name = 'TalkingdataAFD2018_FeatureExtraction_20180501_185800'
date = 100
print('date: ', date)
is_debug = False
print('is_debug: %s' % is_debug)
# epoch = 3
# batch_size = 2000 * 10000
# skip_data_len = (epoch - 1) * batch_size
# data_len = batch_size
# print('Echo: %s, Data rows: [%s, %s]' % (epoch, skip_data_len, skip_data_len + data_len))
# epoch = 2
# batch_size = 4000 * 10000
# skip_data_len = 59633310 - batch_size
# data_len = batch_size
# print('batch_size: %s' % batch_size)
# print('epoch: %s, data rows: [%s, %s]' % (epoch, skip_data_len, skip_data_len + data_len))
# run_name = '%s_date%s%s' % (run_name, date, epoch)
run_name = '%s_date%s' % (run_name, date)
print(run_name)
if is_debug:
test_n_rows = 1 * 10000
else:
test_n_rows = None
# test_n_rows = 18790469
day_rows = {
0: {
'n_skiprows': 1,
'n_rows': 1 * 1000
},
1: {
'n_skiprows': 1 * 1000,
'n_rows': 2 * 1000
},
6: {
'n_skiprows': 1,
'n_rows': 2 * 1000 # 9308568
},
7: {
'n_skiprows': 1 + 9308568,
'n_rows': 2 * 1000 # 59633310
},
8: {
'n_skiprows': 1 + 9308568 + 59633310,
'n_rows': 2 * 1000 # 62945075
},
9: {
'n_skiprows': 1 + 9308568 + 59633310 + 62945075,
'n_rows': 2 * 1000 # 53016937
}
}
# n_skiprows = day_rows[date]['n_skiprows']
# n_rows = day_rows[date]['n_rows']
```
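The `day_rows` table above drives day-by-day slicing of the very large train file; the commented-out `n_skiprows`/`n_rows` values presumably feed `pd.read_csv`. A minimal sketch of that pattern on a toy in-memory CSV (the file contents and offsets here are illustrative assumptions):

```python
import io
import pandas as pd

csv = io.StringIO("a,b\n" + "\n".join("{},{}".format(i, i * i) for i in range(10)))

n_skiprows, n_rows = 4, 3  # analogous to day_rows[date]['n_skiprows'] / ['n_rows']
# skiprows=range(1, n_skiprows) keeps the header row (line 0) and drops the
# data rows before the slice; nrows then bounds the slice length
df = pd.read_csv(csv, skiprows=range(1, n_skiprows), nrows=n_rows)
print(df["a"].tolist())  # -> [3, 4, 5]
```

Passing a `range` to `skiprows` (rather than an integer) is what preserves the header row while skipping earlier data rows.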
## Import PKGs
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from IPython.display import display
import os
import sys
import gc
import time
import random
import zipfile
import h5py
import pickle
import math
from PIL import Image
import shutil
from tqdm import tqdm
import multiprocessing
from multiprocessing import cpu_count
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score
random_num = np.random.randint(10000)
cpu_amount = cpu_count()
print('cpu_amount: %s' % (cpu_amount - 2))
print('random_num: %s' % random_num)
```
## Project folders
```
cwd = os.getcwd()
input_folder = os.path.join(cwd, 'input')
output_folder = os.path.join(cwd, 'output')
model_folder = os.path.join(cwd, 'model')
feature_folder = os.path.join(cwd, 'feature')
log_folder = os.path.join(cwd, 'log')
print('input_folder: \t\t\t%s' % input_folder)
print('output_folder: \t\t\t%s' % output_folder)
print('model_folder: \t\t\t%s' % model_folder)
print('feature_folder: \t\t%s' % feature_folder)
print('log_folder: \t\t\t%s' % log_folder)
train_csv_file = os.path.join(input_folder, 'train.csv')
train_sample_csv_file = os.path.join(input_folder, 'train_sample.csv')
test_csv_file = os.path.join(input_folder, 'test.csv')
sample_submission_csv_file = os.path.join(input_folder, 'sample_submission.csv')
print('\ntrain_csv_file: \t\t%s' % train_csv_file)
print('train_sample_csv_file: \t\t%s' % train_sample_csv_file)
print('test_csv_file: \t\t\t%s' % test_csv_file)
print('sample_submission_csv_file: \t%s' % sample_submission_csv_file)
```
## Load feature
```
sample_submission_csv = pd.read_csv(sample_submission_csv_file)
print('sample_submission_csv.shape: \t', sample_submission_csv.shape)
display(sample_submission_csv.head(2))
print('sample_submission_csv: %.2f Mb' % (sys.getsizeof(sample_submission_csv)/1024./1024.))
def save_feature(x_data, y_data, file_name):
print(y_data[:5])
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: %s' % file_name)
with h5py.File(file_name) as h:
h.create_dataset('x_data', data=x_data)
h.create_dataset('y_data', data=y_data)
print('File saved: %s' % file_name)
def load_feature(file_name):
with h5py.File(file_name, 'r') as h:
x_data = np.array(h['x_data'])
y_data = np.array(h['y_data'])
print('File loaded: %s' % file_name)
print(y_data[:5])
return x_data, y_data
def save_test_feature(x_test, click_ids, file_name):
print(click_ids[:5])
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: %s' % file_name)
with h5py.File(file_name) as h:
h.create_dataset('x_test', data=x_test)
h.create_dataset('click_ids', data=click_ids)
print('File saved: %s' % file_name)
def load_test_feature(file_name):
with h5py.File(file_name, 'r') as h:
x_test = np.array(h['x_test'])
click_ids = np.array(h['click_ids'])
print('File loaded: %s' % file_name)
print(click_ids[:5])
return x_test, click_ids
def save_feature_map(feature_map, file_name):
print(feature_map[:5])
feature_map_encode = []
for item in feature_map:
feature_name_encode = item[1].encode('UTF-8')
feature_map_encode.append((item[0], feature_name_encode))
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: \t%s' % file_name)
with h5py.File(file_name) as h:
h.create_dataset('feature_map', data=feature_map_encode)
print('File saved: \t%s' % file_name)
def load_feature_map(file_name):
with h5py.File(file_name, 'r') as h:
feature_map_encode = np.array(h['feature_map'])
print('File loaded: \t%s' % file_name)
feature_map = []
for item in feature_map_encode:
feature_name = item[1].decode('UTF-8')
feature_map.append((int(item[0]), feature_name))
print(feature_map[:5])
return feature_map
def describe(data):
if isinstance(data, list):
print(len(data), '\t\t%.2f Mb' % (sys.getsizeof(data)/1024./1024.))
else:
print(data.shape, '\t%.2f Mb' % (sys.getsizeof(data)/1024./1024.))
test_np = np.ones((5000, 10))
describe(test_np)
describe(list(range(5000)))
%%time
feature_files = []
x_train = []
y_train = []
if date == 100:
for key in [7, 8, 9]:
y_proba_file = os.path.join(feature_folder, 'feature_%s_date%s.p' % (feature_run_name, key))
feature_files.append(y_proba_file)
x_train_date, y_train_date = load_feature(y_proba_file)
x_train.append(x_train_date)
y_train.append(y_train_date)
x_train = np.concatenate(x_train, axis=0)
y_train = np.concatenate(y_train, axis=0)
else:
y_proba_file = os.path.join(feature_folder, 'feature_%s_date%s.p' % (feature_run_name, date))
feature_files.append(y_proba_file)
x_train, y_train = load_feature(y_proba_file)
# Use date 6 as validation dataset
y_proba_file = os.path.join(feature_folder, 'feature_%s_date%s.p' % (feature_run_name, 6))
feature_files.append(y_proba_file)
x_val, y_val = load_feature(y_proba_file)
y_proba_file = os.path.join(feature_folder, 'feature_%s_test.p' % feature_run_name)
feature_files.append(y_proba_file)
x_test, click_ids = load_test_feature(y_proba_file)
print('*' * 80)
describe(x_train)
describe(y_train)
describe(x_val)
describe(y_val)
describe(x_test)
describe(click_ids)
# feature_files = []
# y_proba_file = os.path.join(feature_folder, 'feature_%s_date%s.p' % (feature_run_name, 6))
# feature_files.append(y_proba_file)
# x_train, y_train = load_feature(y_proba_file)
# y_proba_file = os.path.join(feature_folder, 'feature_%s_test.p' % feature_run_name)
# feature_files.append(y_proba_file)
# x_test, click_ids = load_test_feature(y_proba_file)
# print('*' * 80)
# describe(x_train)
# describe(y_train)
# describe(x_val)
# describe(y_val)
# describe(x_test)
# describe(click_ids)
# from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
# x_train, x_val, y_train, y_val = train_test_split(x_data[skip_data_len: data_len], y_data[skip_data_len: data_len], test_size=0.1, random_state=random_num, shuffle=True)
# x_train, x_val, y_train, y_val = train_test_split(x_data, y_data, test_size=0.2, random_state=random_num, shuffle=False)
# x_train = x_train[skip_data_len: skip_data_len + data_len]
# y_train = y_train[skip_data_len: skip_data_len + data_len]
# x_train, y_train = shuffle(x_train, y_train, random_state=random_num)
# describe(x_train)
# describe(y_train)
# describe(x_val)
# describe(y_val)
gc.collect()
```
## Train
```
%%time
import warnings
warnings.filterwarnings('ignore')
import xgboost as xgb
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score
clf = xgb.XGBClassifier(
max_depth=5,
learning_rate=0.1,
n_estimators=1000,
silent=False,
objective='binary:logistic',
booster='gbtree',
n_jobs=cpu_amount,
nthread=None,
gamma=0,
min_child_weight=1,
max_delta_step=0,
subsample=0.7,
colsample_bytree=1,
colsample_bylevel=1,
reg_alpha=1,
reg_lambda=2,
scale_pos_weight=97,
base_score=0.5,
random_state=random_num,
seed=None,
missing=None,
# booster params
num_boost_round=60,
early_stopping_rounds=30,
# tree_method='gpu_hist',
# predictor='gpu_predictor',
eval_metric=['auc'],
)
parameters = {
# 'max_depth': [3, 5],
# 'n_estimators': [1000, 2000]
# 'subsample': [0.5, 1],
# 'colsample_bytree': [0.5, 1],
# 'reg_alpha':[0, 1, 5],
# 'reg_lambda':[1, 2, 8],
# 'scale_pos_weight': [1, 10, 80, 100, 120, 200]
}
grid_search = GridSearchCV(clf, parameters, verbose=2, cv=3, scoring='roc_auc')
grid_search.fit(x_train, y_train)
%%time
print('*' * 80)
y_train_proba = grid_search.predict_proba(x_train)
print(y_train.shape)
print(y_train_proba.shape)
print(y_train_proba[:10])
y_train_pred = (y_train_proba[:, 1]>=0.5).astype(int)
acc_train = accuracy_score(y_train, y_train_pred)
roc_train = roc_auc_score(y_train, y_train_proba[:, 1])
print('acc_train: %.4f \t roc_train: %.4f' % (acc_train, roc_train))
# y_train_pred = grid_search.predict(x_train)
# acc_train = accuracy_score(y_train, y_train_pred)
# roc_train = roc_auc_score(y_train, y_train_proba[:, 1])
# print('acc_train: %.4f \t roc_train: %.4f' % (acc_train, roc_train))
y_val_proba = grid_search.predict_proba(x_val)
# make sure both classes appear in y_val so accuracy_score / roc_auc_score below are well-defined
y_val[0] = 0
y_val[1] = 1
print(y_val.shape)
print(y_val_proba.shape)
print(y_val_proba[:10])
y_val_pred = (y_val_proba[:, 1]>=0.5).astype(int)
acc_val = accuracy_score(y_val, y_val_pred)
roc_val = roc_auc_score(y_val, y_val_proba[:, 1])
print('acc_val: %.4f \t roc_val: %.4f' % (acc_val, roc_val))
print(grid_search.cv_results_)
print('*' * 60)
# grid_search.grid_scores_ was deprecated and removed in scikit-learn 0.20; use cv_results_ instead
print(grid_search.best_estimator_)
print(grid_search.best_score_)
print(grid_search.best_params_)
print(grid_search.scorer_)
print('*' * 60)
print(type(grid_search.best_estimator_))
print(dir(grid_search.best_estimator_))
cv_results = pd.DataFrame(grid_search.cv_results_)
display(cv_results)
from xgboost import plot_importance
fig, ax = plt.subplots(figsize=(10,int(x_train.shape[1]/2)))
xgb.plot_importance(grid_search.best_estimator_, height=0.5, ax=ax, max_num_features=300)
plt.show()
feature_map_file_name = os.path.join(feature_folder, 'feature_map_TalkingdataAFD2018_FeatureExtraction_20180501_053830_date7.p')
feature_map = load_feature_map(feature_map_file_name)
print(len(feature_map))
print(feature_map[:5])
feature_dict = {}
for item in feature_map:
feature_dict[item[0]] = item[1]
print(list(feature_dict.keys())[:5])
print(list(feature_dict.values())[:5])
# print(dir(grid_search.best_estimator_.get_booster()))
importance_score = grid_search.best_estimator_.get_booster().get_fscore()
sorted_score = []
for key in importance_score:
indx = int(key[1:])
sorted_score.append((importance_score[key], key, indx, feature_dict[indx]))
dtype = [('importance_score', int), ('key', 'S50'), ('indx', int), ('name', 'S50')]
importance_table = np.array(sorted_score, dtype=dtype)
display(importance_table[:2])
importance_table = np.sort(importance_table, axis=0, order=['importance_score'])
display(importance_table)
# del x_train; gc.collect()
# del x_val; gc.collect()
```
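The `scale_pos_weight=97` set in the classifier above encodes the heavy click-fraud class imbalance; a common way to derive a starting value is the negative-to-positive count ratio of the training labels. A sketch with made-up labels (the ~1% positive rate is illustrative, not the dataset's actual rate):

```python
import numpy as np

# toy labels with a ~1% positive rate, standing in for `is_attributed`
y = np.array([0] * 970 + [1] * 10)

neg = (y == 0).sum()
pos = (y == 1).sum()
scale_pos_weight = neg / pos
print(scale_pos_weight)  # -> 97.0
```

In XGBoost, `scale_pos_weight` multiplies the gradient contribution of positive examples, which counteracts the imbalance when optimizing AUC.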
## Predict
```
run_name_acc = run_name + '_' + str(int(roc_val*10000)).zfill(4)
print(run_name_acc)
from sklearn.model_selection import KFold

# predict the test set in 10 ordered chunks to keep memory usage manageable
kf = KFold(n_splits=10)
y_test_proba = []
for _, test_index in kf.split(x_test):
    y_test_proba_fold = grid_search.predict_proba(x_test[test_index])
    y_test_proba.append(y_test_proba_fold)
    print(y_test_proba_fold.shape)
y_test_proba = np.concatenate(y_test_proba, axis=0)
print(y_test_proba.shape)
print(y_test_proba[:20])
def save_proba(y_train_proba, y_train, y_val_proba, y_val, y_test_proba, click_ids, file_name):
print(click_ids[:5])
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: %s' % file_name)
with h5py.File(file_name) as h:
h.create_dataset('y_train_proba', data=y_train_proba)
h.create_dataset('y_train', data=y_train)
h.create_dataset('y_val_proba', data=y_val_proba)
h.create_dataset('y_val', data=y_val)
h.create_dataset('y_test_proba', data=y_test_proba)
h.create_dataset('click_ids', data=click_ids)
print('File saved: %s' % file_name)
def load_proba(file_name):
with h5py.File(file_name, 'r') as h:
y_train_proba = np.array(h['y_train_proba'])
y_train = np.array(h['y_train'])
y_val_proba = np.array(h['y_val_proba'])
y_val = np.array(h['y_val'])
y_test_proba = np.array(h['y_test_proba'])
click_ids = np.array(h['click_ids'])
print('File loaded: %s' % file_name)
print(click_ids[:5])
return y_train_proba, y_train, y_val_proba, y_val, y_test_proba, click_ids
y_proba_file = os.path.join(model_folder, 'proba_%s.p' % run_name_acc)
save_proba(
y_train_proba,
y_train,
y_val_proba,
y_val,
y_test_proba,
np.array(sample_submission_csv['click_id']),
y_proba_file
)
y_train_proba_true, y_train, y_val_proba_true, y_val, y_test_proba_true, click_ids = load_proba(y_proba_file)
print(y_train_proba_true.shape)
print(y_train.shape)
print(y_val_proba_true.shape)
print(y_val.shape)
print(y_test_proba_true.shape)
print(len(click_ids))
# %%time
submission_csv_file = os.path.join(output_folder, 'pred_%s.csv' % run_name_acc)
print(submission_csv_file)
submission_csv = pd.DataFrame({ 'click_id': click_ids , 'is_attributed': y_test_proba_true[:, 1] })
submission_csv.to_csv(submission_csv_file, index = False)
display(submission_csv.head())
print('Time cost: %.2f s' % (time.time() - t0))
print('random_num: ', random_num)
print('date: ', date)
print(run_name_acc)
print('Done!')
```
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [CBE40455-2020](https://jckantor.github.io/CBE40455-2020);
content is available [on Github](https://github.com/jckantor/CBE40455-2020.git).*
# 7.6 Portfolio Optimization using Mean Absolute Deviation
This [IPython notebook](http://ipython.org/notebook.html) demonstrates portfolio optimization using the Mean Absolute Deviation (MAD) criterion. A portion of these notes is adapted from [GLPK Wikibook tutorial on the subject](http://en.wikibooks.org/wiki/GLPK/Portfolio_Optimization) written by me.
## 7.6.1 Background
Konno and Yamazaki (1991) proposed a linear programming model for portfolio optimization in which the risk measure is mean absolute deviation (MAD). This model computes a portfolio minimizing MAD subject to a lower bound on return.
In contrast to the classical Markowitz portfolio, the MAD criterion requires a data set consisting of returns on the investment assets. The data set may be an historical record or samples from a multivariate statistical model of portfolio returns. The MAD criterion produces portfolios with properties not shared by the Markowitz portfolio, including second degree stochastic dominance.
Below we demonstrate portfolio optimization with the MAD criterion where data is generated by sampling a multivariate normal distribution. Given mean return $r$ and the Cholesky decomposition of the covariance matrix $\Sigma$ (i.e., $C$ such that $CC^T = \Sigma$), we compute $r_t = r + C z_t$ where the elements of $z_t$ are zero mean normal variates with unit variance.
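The sampling step $r_t = \bar r + C z_t$ described above can be sketched directly; the mean vector and covariance below are invented for illustration:

```python
import numpy as np

rng = np.random.RandomState(0)
rbar = np.array([0.01, 0.005])                  # invented mean returns
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])                # invented covariance matrix
C = np.linalg.cholesky(Sigma)                   # C @ C.T == Sigma

z = rng.normal(size=(2, 10000))                 # zero-mean, unit-variance normal variates
R = rbar[:, None] + C @ z                       # each column is one sampled return vector

print(np.round(np.cov(R), 2))                   # sample covariance ~ Sigma
```

With enough samples, the empirical covariance of the columns of `R` recovers `Sigma`, which is exactly what the MAD data set requires of its scenario returns.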
The rest of the formulation is adapted from "Optimization Methods in Finance" by Gérard Cornuéjols and Reha Tütüncü (2007) which, in turn, follows an implementation due to Feinstein and Thapa (1993).
[A complete tutorial](http://en.wikibooks.org/wiki/GLPK/Portfolio_Optimization) describing the implementation of this model is available on the [GLPK wikibook](http://en.wikibooks.org/wiki/GLPK/).
## 7.6.2 Mean Absolute Deviation
Portfolio optimization refers to the process of allocating capital among a set of financial assets to achieve a desired tradeoff between risk and return. The classical Markowitz approach to this problem measures risk using the expected variance of the portfolio return. This criterion yields a quadratic program for the relative weights of assets in the optimal portfolio.
In 1991, Konno and Yamazaki ("Mean-absolute deviation portfolio optimization model and its applications to Tokyo stock market", *Management Science* 37, 519–531, 1991) proposed a linear programming model for portfolio optimization whereby risk is measured by the mean absolute deviation (MAD) from the expected return. Using MAD as the risk metric produces portfolios with several desirable properties not shared by the Markowitz portfolio, including [second order stochastic dominance](https://en.wikipedia.org/wiki/Stochastic_dominance).
As originally formulated by Konno and Yamazaki, one starts with a history of returns $R_i(t_n)$ for every asset in a set $S$ of assets. The return at time $t_n$ is determined by the change in price of the asset,
$$R_i(t_n) = \frac{P_i(t_n)-P_i(t_{n-1})}{P_i(t_{n-1})}$$
For each asset, the expected return is estimated by
$$\bar{R}_i \approx \frac{1}{N}\sum_{n=1}^NR_i(t_n)$$
The investor specifies a minimum required return $R_{p}$. The portfolio optimization problem is to determine the fraction of the total investment allocated to each asset, $w_i$, that minimizes the mean absolute deviation from the mean
$$\min_{w_i} \frac{1}{N}\sum_{n=1}^N\lvert\sum_{i \in S} w_i(R_i(t_n)-\bar{R}_i)\rvert$$
subject to the required return and a fixed total investment:
$$
\begin{array}{rcl}
\sum_{i \in S} w_i\bar{R}_i & \geq & R_{p} \\
\quad \\
\sum_{i \in S} w_i & = & 1
\end{array}
$$
The value of the minimum required return, $R_p$ expresses the investor's risk tolerance. A smaller value for $R_p$ increases the feasible solution space, resulting in portfolios with lower values of the MAD risk metric. Increasing $R_p$ results in portfolios with higher risk. The relationship between risk and return is a fundamental principle of investment theory.
This formulation doesn't place individual bounds on the weights $w_i$. In particular, $w_i < 0$ corresponds to short selling of the associated asset. Constraints can be added if the investor imposes limits on short-selling, on the maximum amount that can be invested in a single asset, or other requirements for portfolio diversification. Depending on the set of available investment opportunities, additional constraints can lead to infeasible investment problems.
## 7.6.3 Reformulation of the MAD Objective
The following formulation of the objective function is adapted from "Optimization Methods in Finance" by Gerard Cornuejols and Reha Tutuncu (2007) <ref>{{Cite book | last1 = Cornuejols | first1 = Gerard | last2 = Tutuncu | first2 = Reha | title = Optimization Methods in Finance | publisher = Cambridge University Press | date = 2007}}</ref> which, in turn, follows
Feinstein and Thapa (1993) <ref>{{Cite journal | last1 = Feinstein | first1 = Charles D. | last2 = Thapa | first2 = Mukund N. | title = A Reformulation of a Mean-Absolute Deviation Portfolio Optimization Model | journal = Management Science | volume = 39 | pages = 1552-1553 | date = 1993}}</ref>.
The model is streamlined by introducing decision variables $y_n \geq 0$ and $z_n \geq 0$, $n=1,\ldots,N$ so that the objective becomes
$$\min_{w_i, y_n,z_n} \frac{1}{N}\sum_{n = 1}^{N} (y_n+z_n)$$
subject to constraints
$$
\begin{array}{rcl}
\sum_{i \in S} w_i\bar{R}_i & \geq & R_{p} \\ \\
\sum_{i \in S} w_i & = & 1 \\ \\
y_n-z_n & = & \sum_{i \in S} w_i(R_i(t_n)-\bar{R}_i) \quad \mbox{for}\quad n = 1,\ldots,N
\end{array}
$$
As discussed by Feinstein and Thapa, this version reduces the problem to $N+2$ constraints in $2N+\mbox{card}(S)$ decision variables, where $\mbox{card}(S)$ denotes the number of assets in $S$.
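The reformulated problem can be sketched as a linear program with `scipy.optimize.linprog`. The return history below is randomly generated purely for illustration, and all variable names are my own:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical return history: N periods for three assets
N, n_assets = 60, 3
R = rng.normal([0.010, 0.005, 0.002], [0.05, 0.03, 0.01], size=(N, n_assets))
Rbar = R.mean(axis=0)
R_p = 0.004                        # minimum required portfolio return

# Decision vector: [w_1..w_S, y_1..y_N, z_1..z_N]
n_vars = n_assets + 2 * N
c = np.concatenate([np.zeros(n_assets), np.full(2 * N, 1.0 / N)])

# Equalities: sum_i w_i = 1, and y_n - z_n = sum_i w_i (R_i(t_n) - Rbar_i)
A_eq = np.zeros((N + 1, n_vars))
A_eq[0, :n_assets] = 1.0
b_eq = np.zeros(N + 1)
b_eq[0] = 1.0
for n in range(N):
    A_eq[n + 1, :n_assets] = -(R[n] - Rbar)
    A_eq[n + 1, n_assets + n] = 1.0        # y_n
    A_eq[n + 1, n_assets + N + n] = -1.0   # -z_n

# Inequality: portfolio return >= R_p, written as -Rbar.w <= -R_p
A_ub = np.hstack([-Rbar.reshape(1, -1), np.zeros((1, 2 * N))])
b_ub = np.array([-R_p])

# w free (short selling allowed, as in the unconstrained formulation); y, z >= 0
bounds = [(None, None)] * n_assets + [(0, None)] * (2 * N)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w = res.x[:n_assets]
print("weights:", np.round(w, 3), " MAD:", round(res.fun, 5))
```

As in the MathProg model below, upper bounds on `w` (or a lower bound of 0 to forbid short selling) can be imposed simply by changing `bounds`.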
## 7.6.4 Seeding the GLPK Pseudo-Random Number Generator
Unfortunately, MathProg does not provide a method to seed the pseudo-random number generator in GLPK. Instead, the following GMPL code fragment uses the function gmtime() to find the number of seconds since 1970. Dropping the leading digit avoids subsequent overflow errors, and the square returns a number with big changes every second. Extracting the lowest digits produces a number between 0 and 100,000 that determines how many times to sample GLPK's pseudo-random number generator prior to subsequent use. This hack comes with no assurances regarding its statistical properties.
```
/* A workaround for the lack of a way to seed the PRNG in GMPL */
param utc := prod {1..2} (gmtime()-1000000000);
param seed := utc - 100000*floor(utc/100000);
check sum{1..seed} Uniform01() > 0;
```
## 7.6.5 Simulation of the Historical Returns
For this implementation, historical returns are simulated assuming knowledge of the means and covariances of asset returns. We begin with a vector of mean returns $\bar{R}$ and covariance matrix $\Sigma$ estimated by
$$
\Sigma_{ij} \approx \frac{1}{N-1}\sum_{n=1}^N(R_i(t_n)-\bar{R}_i)(R_j(t_n)-\bar{R}_j)
$$
Simulation of historical returns requires generation of samples from a multivariable normal distribution. For this purpose we compute the Cholesky factorization where, for a positive semi-definite $\Sigma$, $\Sigma=CC^T$ and $C$ is a lower triangular matrix. The following MathProg code fragment implements the Cholesky-Crout algorithm.
```
/* Cholesky Lower Triangular Decomposition of the Covariance Matrix */
param C{i in S, j in S : i >= j} :=
    if i = j then
        sqrt(Sigma[i,i]-(sum {k in S : k < i} (C[i,k]*C[i,k])))
    else
        (Sigma[i,j]-sum{k in S : k < j} C[i,k]*C[j,k])/C[j,j];
```
Without error checking, this code fragment fails unless $\Sigma$ is positive definite. The covariance matrix is normally positive definite for real-world data, so this is generally not an issue. However, it does become an issue if one attempts to include a risk-free asset, such as a government bond, in the set of investment assets.
Once the Cholesky factor $C$ has been computed, a vector of simulated returns $R(t_n)$ is given by $R(t_n) = \bar{R} + C Z(t_n)$ where the elements of $Z(t_n)$ are independent samples from a normal distribution with zero mean and unit variance.
```
/* Simulated returns */
param N default 5000;
set T := 1..N;
param R{i in S, t in T} := Rbar[i] + sum {j in S : j <= i} C[i,j]*Normal(0,1);
```
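The same simulation can be written in a few lines of numpy; the means and covariance matrix below are hypothetical values chosen only for illustration:

```python
import numpy as np

# Hypothetical means and covariances for three assets
Rbar = np.array([0.010, 0.005, 0.002])
Sigma = np.array([[0.0025, 0.0008, 0.0002],
                  [0.0008, 0.0016, 0.0003],
                  [0.0002, 0.0003, 0.0009]])

rng = np.random.default_rng(1)
N = 5000

# Cholesky factor: Sigma = C C^T with C lower triangular
C = np.linalg.cholesky(Sigma)

# R(t_n) = Rbar + C Z(t_n), Z ~ N(0, I); one row per simulated period
Z = rng.standard_normal((N, len(Rbar)))
R = Rbar + Z @ C.T

print("sample means: ", R.mean(axis=0).round(4))   # close to Rbar
print("sample covar:\n", np.cov(R.T).round(4))     # close to Sigma
```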
## 7.6.6 MathProg Model
```
%%script glpsol -m /dev/stdin
# Example: PortfolioMAD.mod Portfolio Optimization using Mean Absolute Deviation
/* Stock Data */
set S; # Set of stocks
param r{S}; # Means of projected returns
param cov{S,S}; # Covariance of projected returns
param r_portfolio
default (1/card(S))*sum{i in S} r[i]; # Lower bound on portfolio return
/* Generate sample data */
/* Cholesky Lower Triangular Decomposition of the Covariance Matrix */
param c{i in S, j in S : i >= j} :=
if i = j then
sqrt(cov[i,i]-(sum {k in S : k < i} (c[i,k]*c[i,k])))
else
(cov[i,j]-sum{k in S : k < j} c[i,k]*c[j,k])/c[j,j];
/* Because there is no way to seed the PRNG, a workaround */
param utc := prod {1..2} (gmtime()-1000000000);
param seed := utc - 100000*floor(utc/100000);
check sum{1..seed} Uniform01() > 0;
/* Normal random variates */
param N default 5000;
set T := 1..N;
param zn{j in S, t in T} := Normal(0,1);
param rt{i in S, t in T} := r[i] + sum {j in S : j <= i} c[i,j]*zn[j,t];
/* MAD Optimization */
var w{S} >= 0; # Portfolio Weights with Bounds
var y{T} >= 0; # Positive deviations (non-negative)
var z{T} >= 0; # Negative deviations (non-negative)
minimize MAD: (1/card(T))*sum {t in T} (y[t] + z[t]);
s.t. C1: sum {s in S} w[s]*r[s] >= r_portfolio;
s.t. C2: sum {s in S} w[s] = 1;
s.t. C3 {t in T}: (y[t] - z[t]) = sum{s in S} (rt[s,t]-r[s])*w[s];
solve;
/* Report */
/* Input Data */
printf "Stock Data\n\n";
printf " Return Variance\n";
printf {i in S} "%5s %7.4f %7.4f\n", i, r[i], cov[i,i];
printf "\nCovariance Matrix\n\n";
printf " ";
printf {j in S} " %7s ", j;
printf "\n";
for {i in S} {
printf "%5s " ,i;
printf {j in S} " %7.4f ", cov[i,j];
printf "\n";
}
/* MAD Optimal Portfolio */
printf "\nMinimum Absolute Deviation (MAD) Portfolio\n\n";
printf " Return = %7.4f\n",r_portfolio;
printf " Variance = %7.4f\n\n", sum {i in S, j in S} w[i]*w[j]*cov[i,j];
printf " Weight\n";
printf {s in S} "%5s %7.4f\n", s, w[s];
printf "\n";
table tab0 {s in S} OUT "JSON" "Optimal Portfolio" "PieChart":
s, w[s]~PortfolioWeight;
table tab1 {s in S} OUT "JSON" "Asset Return versus Volatility" "ScatterChart":
sqrt(cov[s,s])~StDev, r[s]~Return;
table tab2 {s in S} OUT "JSON" "Portfolio Weights" "ColumnChart":
s~Stock, w[s]~PortfolioWeight;
table tab3 {t in T} OUT "JSON" "Simulated Portfolio Return" "LineChart":
t~month, (y[t] - z[t])~PortfolioReturn;
/* Simulated Return data in Matlab Format */
/*
printf "\nrt = [ ... \n";
for {t in T} {
printf {s in S} "%9.4f",rt[s,t];
printf "; ...\n";
}
printf "];\n\n";
*/
data;
/* Data for monthly returns on four selected stocks for a three
year period ending December 4, 2009 */
param N := 200;
param r_portfolio := 0.01;
param : S : r :=
AAPL 0.0308
GE -0.0120
GS 0.0027
XOM 0.0018 ;
param cov :
AAPL GE GS XOM :=
AAPL 0.0158 0.0062 0.0088 0.0022
GE 0.0062 0.0136 0.0064 0.0011
GS 0.0088 0.0064 0.0135 0.0008
XOM 0.0022 0.0011 0.0008 0.0022 ;
end;
```
# Stimulator Calibration Mouse in-vivo Recordings
#### by Katrin Franke based on Franke et al. 2019 and Qiu et al. 2021
This script allows estimating photoreceptor excitation caused by a combination of light sources (e.g. LEDs) and dichroic filters used in a visual stimulator for in-vivo recordings. To this end, spectral data from an Ocean Optics spectrometer is recorded via USB (i.e. with the USB2000+ spectrometer, Ocean Optics) and converted into photoisomerization rates per cone photoreceptor type and LED, taking into account (i) the wavelength-specific transmission of the mouse optical apparatus and (ii) the ratio between pupil size and retinal area.
The spectrometer measurements rely on the open-source library [python-seabreeze](https://github.com/ap--/python-seabreeze) written by Andreas Pohlmann. This library is a wrapper for the C++ API provided by Ocean Optics. For installation instructions and further support, see that repository.
### Approach
Our approach consists of two main steps:
1. We first map electrical power ($P_{el}$, in $[W]$) to photon flux ($P_{Phi}$, in $[photons/s]$),
$$
P_{Phi}(\lambda) = \frac{P_{el}(\lambda) \cdot a \cdot \lambda \cdot 10^{-9}} {c \cdot h }\cdot \frac{1}{\mu_{lens2cam}(\lambda)}.
$$
For $\lambda$, we use the peak wavelength of the photoreceptor's spectral sensitivity curve ($\lambda_{S}=360 \: nm$, $\lambda_{M}=510 \: nm$).
The rest are constants ($a=6.242 \cdot 10^{18} \: eV/J$, $c=299{,}792{,}458 \: m/s$, and $h=4.135667 \cdot 10^{-15} \: eV \cdot s$).
2. Next, we convert the photon flux to photoisomerisation rate ($R_{Iso}$, in $[P^*/cone/s]$),
$$
R_{Iso}(\lambda) = \frac{P_{Phi}(\lambda)}{A_{Stim}} \cdot A_{Collect} \cdot T(\lambda) \cdot R_{pup2ret}
$$
where $A_{Stim}$ is the area that is illuminated on the power meter sensor, and $A_{Collect}=0.2 \: \mu m^2$ is the photoreceptor's outer segment (OS) light collection area (see below).
> Note: The OS light collection area ($[\mu m^2]$) is an experimentally determined value; e.g., for wt mouse cones that are fully dark-adapted, a value of 0.2 is assumed, and for mouse rods, a value of 0.5 is considered realistic (for details, see [Nikonov et al., 2006](http://www.ncbi.nlm.nih.gov/pubmed/16567464)).
Another factor we need to consider is the wavelength-dependent attenuation by the mouse eye optics. The relative transmission for UV ($T_{Rel}(UV)$, at $\lambda=360 \: nm$) and green ($T_{Rel}(G)$, at $\lambda=510 \: nm$) is approx. 35% and 55%, respectively ([Henriksson et al., 2010](https://pubmed.ncbi.nlm.nih.gov/19925789/)).
In addition, the light reaching the retina depends on the ratio ($R_{pup2ret}$) between pupil area and retinal area (both in $[mm^2]$) ([Rhim et al., 2020](https://www.biorxiv.org/content/10.1101/2020.11.03.366682v1)). Here, we assume pupil areas of $0.21 \: mm^2$ (stationary) and $1.91 \: mm^2$ (running) (see [Pennesi et al., 1998](https://pubmed.ncbi.nlm.nih.gov/9761294/)). To calculate the retinal area of the mouse, we assume an eye axial length of approx. $3 \: mm$ and that the retina covers about 60% of the sphere's surface ([Schmucker & Schaeffel, 2004](https://www.sciencedirect.com/science/article/pii/S0042698904001257#FIG4)).
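The two formulas above can be combined into a short sketch. All numeric inputs below (the 1 µW power reading, the 300 µm spot radius) are made-up examples, the helper names are mine, and the lens-to-camera transmission $\mu_{lens2cam}$ is assumed to be 1:

```python
import numpy as np

# Constants (same values as defined later in this notebook)
h = 4.135667E-15     # Planck's constant [eV*s]
c = 299792458        # speed of light [m/s]
a = 6.242E+18        # conversion factor [eV] per [J]

def photon_flux(P_el_W, lambda_nm):
    """Photon flux [photons/s] from electrical power [W] at a given
    wavelength [nm], assuming mu_lens2cam = 1."""
    return P_el_W * a * lambda_nm * 1E-9 / (c * h)

def iso_rate(P_phi, A_stim_um2, A_collect_um2, T_rel, R_pup2ret):
    """Photoisomerization rate [P*/cone/s]."""
    return P_phi / A_stim_um2 * A_collect_um2 * T_rel * R_pup2ret

# Hypothetical example: 1 µW of UV light (360 nm) over a 300 µm-radius spot
P_phi = photon_flux(1E-6, 360)
A_stim_um2 = np.pi * 300**2
# Pupil/retina ratio: stationary pupil 0.21 mm²; retina ~60% of a 3 mm sphere
retina_mm2 = 0.6 * 4 * np.pi * (3 / 2)**2
R_pup2ret = 0.21 / retina_mm2
rate = iso_rate(P_phi, A_stim_um2, 0.2, 0.35, R_pup2ret)
print(f"S-cone rate: {rate:.3g} P*/cone/s")
```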
```
import os
import glob
import math
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import scipy.special as sse
import pylab
from scipy.optimize import curve_fit
from IPython import display
# Helpers
#
import scripts.spectrum as spc
import scripts.progress_bar as pbar
import scripts.spectrometer_helper as spm
import scripts.fitting_funcs as fit
# Set graphics to be plotted in the notebook
#
%matplotlib inline
# Seaborn plot settings
#
sns.set()
# Paths and file names
# (Don't change unless you know what you are doing)
#
path_LightSources = "light-sources//"
path_Filters = "filters//"
path_SpectCalibData = "spectrometer-calibration-files//"
path_Data = "data//"
path_Opsins = "opsins//"
path_Transmission = "transmission-mouse-eye//"
file_GammaLUT = "defaultGammaLUT"
txtFileNewLineStr = "\r\n"
def setPlotStyle():
# Presettings for figures
#
mpl.rcParams['figure.figsize'] = [10, 5]
mpl.rc('font', size=10)
mpl.rc('axes', titlesize=12)
mpl.rc('axes', labelsize=12)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
mpl.rc('legend', fontsize=12)
mpl.rc('figure', titlesize=12)
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
```
## Define photoreceptors
Load opsin spectra from text files in the respective folder:
* Text files are expected to have one column per opsin ...
* ... plus one (the last one) for the scaling in x direction (wavelength, in 1-nm increments).
* All spectral files, including the filter and LED files loaded later, must have the same x range. In this version, 300 .. 699 nm is used.
```
x_wavelen_nm = np.loadtxt(path_Opsins +"mouse_cone_opsins.txt", usecols=(2,)) # 300 .. 699 nm
mouseMOpsin = np.loadtxt(path_Opsins +"mouse_cone_opsins.txt", usecols=(1,))
mouseSOpsin = np.loadtxt(path_Opsins +"mouse_cone_opsins.txt", usecols=(0,))
```
Define some properties as well as constants needed later to convert the data measured with the spectrometer into photoisomerizations.
* `h`: Planck's constant [eV*s]
* `c`: speed of light [m/s]
* `eV_per_J`: conversion factor ([eV] per [J])
* `ac_um2`: cone OS light collection area [µm^2], see [Nikonov et al., 2006](http://www.ncbi.nlm.nih.gov/pubmed/16567464) for details. This is an experimentally determined value; e.g. for wt mouse cones that are fully dark-adapted, a value of 0.2 can be assumed.
* `ar_um2`: rod OS light collection area, see above. A value of 0.5 is considered realistic.
```
h = 4.135667E-15 # Planck's constant [eV*s]
c = 299792458 # speed of light [m/s]
eV_per_J = 6.242E+18 # [eV] per [J]
ac_um2 = 0.2
ar_um2 = 0.5
```
Organise photoreceptors as a list of dictionaries, with:
* `name`: name of photoreceptor, used later for plots etc.
* `peak_nm`: peak wavelength of opsin spectrum in [nm]
* `collecArea_um2`: see above
* `spect`: opsin spectrum
```
MCone = {"name" : "mouse_M_cone",
"peak_nm" : 511,
"collecArea_um2" : ac_um2,
"spect" : mouseMOpsin}
SCone = {"name" : "mouse_S_cone",
"peak_nm" : 360,
"collecArea_um2" : ac_um2,
"spect" : mouseSOpsin}
Rod = {"name" : "mouse_rod",
"peak_nm" : 510,
"collecArea_um2" : ar_um2,
"spect" : []}
PRs = [MCone, SCone, Rod]
```
Plot cone photoreceptor spectra:
```
setPlotStyle()
for PR in PRs:
if len(PR["spect"]) > 0:
col = spc.wavelength_to_rgb(PR["peak_nm"], darker=0.75)
plt.plot(x_wavelen_nm, PR["spect"], color=col, label=PR["name"])
plt.gca().set(xlabel="Wavelength [nm]", ylabel="Rel. sensitivity")
plt.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
```
## Define stimulus LED/filter combinations
Load LED and filter (dichroic) spectra from text files in the respective folder. Files are expected to be organised like the photoreceptor data files, except that they contain only two columns (relative transmission for filters and relative intensity for light sources; wavelength, from 300 to 699 nm in 1-nm increments).
```
# LED spectra
#
LED_Blue_Name = "LED_Blue"
LED_Blue = np.loadtxt(path_LightSources +LED_Blue_Name +".txt", usecols=(0,))
LED_UV_Name = "LED_UV"
LED_UV = np.loadtxt(path_LightSources +LED_UV_Name +".txt", usecols=(0,))
```
#### Decide, if one filter per LED is used, if a dual-band filter is used, or if no filter is used.
```
LEDFilterType = 2 # 0=individual for each LED, 1=dual-band filter, 2=no filters used
# LED filter(s)
#
if LEDFilterType == 0:
Filter_UV_Name = "Filter_387_11"
Filter_UV = np.loadtxt(path_Filters +Filter_UV_Name +".txt", usecols=(0,))
Filter_Green_Name = "Filter_576_10"
Filter_Green = np.loadtxt(path_Filters +Filter_Green_Name +".txt", usecols=(0,))
elif LEDFilterType == 1:
Filter_UVGreen_name = "Filter_F59-003_390_575"
Filter_UV = np.loadtxt(path_Filters +Filter_UVGreen_name +".txt", usecols=(0,))
Filter_Green = np.loadtxt(path_Filters +Filter_UVGreen_name +".txt", usecols=(0,))
elif LEDFilterType == 2:
Filter_UV = np.array(list([1.0]*len(LED_UV)))
Filter_Blue = np.array(list([1.0]*len(LED_UV)))
```
#### Decide if LEDs are used, or a standard TFT screen:
```
LightSourceType = 0 # 0=LEDs, 1=TFT screen
```
Organise the LED/filter combinations and spectra as a list of dictionary, with:
* `name`: name of LED/filter combinations, used later for plots etc.
* `peak_nm`: peak wavelength of LED/filter combination in [nm]
* `LED_spect`: spectrum of LED (same x range as the opsin spectra, that is from 300 to 699 nm, and with 1-nm resolution).
* `filter_spect`: spectrum of filter
* `spect_nw`: measured LED/filter spectrum in [nW]
* `spect_nw_norm`: `spect_nw` peak-normalized
* `spect_raw_bkg`: mean background spectrum (LED off)
* `spect_raw`: temporary spectra of LED/filter (all trials) for the current intensity level
* `spect_raw_avg`: list of mean spectra of LED/filter for all intensity levels
* `spect_raw_fit`: list of fits of mean LED/filter spectra for all intensity levels
```
minWLen_nm = 300
maxWLen_nm = 699
n = maxWLen_nm -minWLen_nm +1
x_wavelen_nm = np.array([v+minWLen_nm for v in range(n)], dtype=np.float64)
LightSourceType = 0 # 0 for LEDs, 1 for TFT monitor
if LightSourceType == 0:
# In vivo setup, UV-green-blue LCr
#
BlLED = {"name" : "blue",
"peak_nm" : 460, # 0=determine from peak
"target_PR" : "mouse_M_cone",
"bandwidth_nm" : 0, # obsolete
"LED_spect" : LED_Blue, # LED spectrum (datasheet)
"filter_spect" : Filter_Blue, # filter spectrum (datasheet)
"meas_n_trials": [1, 1], # trials to measure for LED off and on
"spect_nw_norm": None,
"spect_nw" : None,
"spect_raw_bkg": None,
"spect_raw" : None,
"spect_raw_fit": None,
"spect_raw_avg": None}
UVLED = {"name" : "UV",
"peak_nm" : 395,
"target_PR" : "mouse_S_cone",
"bandwidth_nm" : 0,
"LED_spect" : LED_UV,
"filter_spect" : Filter_UV,
"meas_n_trials": [1, 1],
"spect_nw_norm": None,
"spect_nw" : None,
"spect_raw_bkg": None,
"spect_raw" : None,
"spect_raw_fit": None,
"spect_raw_avg": None}
LEDs = [BlLED, UVLED]
elif LightSourceType == 1:
# TFT screen, for comparison
#
GrLED = {"name" : "green",
"peak_nm" : 520, # 0=determine from peak
"bandwidth_nm" : 0, # obsolete
"target_PR" : "mouse_M_cone",
"LED_spect" : [], # LED spectrum (datasheet)
"filter_spect" : [], # filter spectrum (datasheet)
"meas_n_trials": [2, 2], # trials to measure for LED off and on
"spect_nw_norm": None,
"spect_nw" : None,
"spect_raw_bkg": None,
"spect_raw" : None,
"spect_raw_fit": None,
"spect_raw_avg": None}
BlLED = {"name" : "blue",
"peak_nm" : 450,
"bandwidth_nm" : 0,
"target_PR" : "mouse_S_cone",
"LED_spect" : [],
"filter_spect" : [],
"meas_n_trials": [2, 2],
"spect_nw_norm": None,
"spect_nw" : None,
"spect_raw_bkg": None,
"spect_raw" : None,
"spect_raw_fit": None,
"spect_raw_avg": None}
ReLED = {"name" : "red",
"peak_nm" : 650,
"bandwidth_nm" : 0,
"target_PR" : "mouse_M_cone",
"LED_spect" : [],
"filter_spect" : [],
"meas_n_trials": [2, 2],
"spect_nw_norm": None,
"spect_nw" : None,
"spect_raw_bkg": None,
"spect_raw" : None,
"spect_raw_fit": None,
"spect_raw_avg": None}
LEDs = [ReLED, GrLED, BlLED]
```
Plot theoretical spectra of LED/filter combinations together with cone photoreceptors:
```
for LED in LEDs:
if len(LED["LED_spect"]) == 0:
continue
col_lo = spc.wavelength_to_rgb(LED["peak_nm"], darker=0.5)
col_hi = spc.wavelength_to_rgb(LED["peak_nm"], darker=0.8)
plt.plot(x_wavelen_nm, LED["LED_spect"], color=col_lo,
label=LED["name"] +" LED", linewidth=0.5)
plt.fill(x_wavelen_nm, LED["LED_spect"], facecolor=col_hi, alpha=0.25)
if LEDFilterType != 2:
plt.plot(x_wavelen_nm, LED["filter_spect"], color="black",
label=LED["name"] +" filter", linewidth=1)
for PR in PRs:
if len(PR["spect"]) > 0:
col = spc.wavelength_to_rgb(PR["peak_nm"], darker=0.75)
plt.plot(x_wavelen_nm, PR["spect"], color=col, label=PR["name"])
plt.gca().set(xlabel="Wavelength [nm]", ylabel="")
plt.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
```
## Spectrometer measurements
This section allows measuring the light sources (e.g. LEDs) using a spectrometer. The measured data is stored in the data path (for details, see below) and can also be re-loaded instead of performing a measurement. This way, notebooks can be reproduced based on the stored spectral data.
### Import calibration file for spectrometer
For each wavelength, the file contains the corresponding [µJ/count] conversion factor, which is used to calibrate the readings from the spectrometer.
*__Note:__ These files need to be obtained for each individual device.*
```
spm_fileName = path_SpectCalibData + "Cal_F00079_20191003.txt"
spm_serialNumber = "USB2+F00079"
correction_factor = 50 # estimated using a calibrated spectrometer and a power meter
spm_calib_values = np.loadtxt(spm_fileName, usecols=(0,))*correction_factor
spm_calib_wavelengths = np.loadtxt(spm_fileName, usecols=(1,))
```
Plot spectrometer calibration curve:
```
plt.plot(spm_calib_wavelengths, spm_calib_values, color="black", label=spm_serialNumber)
plt.gca().set(xlabel="Wavelength [nm]", ylabel="[µJ/count]")
plt.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
```
### Define measurement parameters
For the measurement and later for the calculation of photoisomerization rates, the area (`A_detect_um2`) of the measured light spot -- or, if it overfills the sensor of the spectrometer, the area of the sensor (e.g. the diffusor window of the spectrometer) -- is needed. Also required is the integration time (`int_time_s`) of the spectrometer.
*__Note:__ In case of the Ocean Optics USB2000 used here, the area of the diffusor window is 0.113411 cm².*
*__Note:__ The integration time should be changed if output is saturated. To obtain [µJ/s] with the USB2000, use 1 s.*
```
if LightSourceType == 0:
# LEDs of LCr
int_time_s = 0.1
r_stimulus_um = 300
elif LightSourceType == 1:
# TFT screen, for comparison
int_time_s = 1.0
r_stimulus_um = 1900
# Calculations illuminated area
A_detect_um2 = np.pi*(r_stimulus_um)**2
print("Illuminated area is {0:.3e} µm2 (= {1:.3} cm²)".format(A_detect_um2, A_detect_um2 *1E-8))
```
Define the stimulus intensities either via a step size (in which case the full range `0 .. 255` is used), or by selecting the intensity steps directly, e.g.:
`stim_intens = [0,200,255]` (in this case, set `stim_intens_step_size = 0`)
```
stim_intens_step_size = 0
stim_intens = [255]
# Some calculations ...
#
if stim_intens_step_size > 0:
stim_n_intens_levels = int(255 /stim_intens_step_size +1)
stim_intens = [stim_intens_step_size*i for i in range(0, stim_n_intens_levels)]
else:
stim_n_intens_levels = len(stim_intens)
stim_intens = np.array(stim_intens)
use_trigger = stim_n_intens_levels > 1
print("{0} intensity step(s):".format(stim_n_intens_levels))
print(stim_intens)
```
Estimate the fraction of the screen covered by the optical fiber of the spectrometer based on the distance to the screen and the acceptance angle of the fiber:
```
angle_fiber = 25 # in degrees
radians_fiber = angle_fiber*np.pi / 180
distance = 12 # in centimeters
radius_screen_fiber = np.tan(radians_fiber)*distance
fiber_area = np.pi*(radius_screen_fiber**2)
screen_x = 43 # in centimeters
screen_y = 26 # in centimeters
screen_area = screen_x * screen_y
# estimate a factor correcting for the fraction of the screen covered by the spectrometer
coverage_correction = screen_area/fiber_area
```
### Connect and setup the spectrometer
Select the device using its serial number and set the integration time in [s].
```
spm.connect(spm.DevType.OO_USB2000, spm_serialNumber, int_time_s, use_trigger)
```
### Record (or load) spectra of light sources
Prepare the measurements and, when ready, press Enter. For each LED (light source), first the background spectrum (i.e. LED off) and then the spectra for all selected intensities are recorded.
The recorded mean spectra together with the wavelengths are written to a text file, one per LED. By choosing the name of an existing set of files (`fileNameMask`) in the folder `data//`, no measurements take place and the data of that earlier recording is loaded instead.
### Examples
`LEDs_Setup2_20180821` - Measurement of the intensity curve (128 levels, from 0 to 254 in steps of 2) for the UV and the green LED to estimate the gamma correction look-up table (LUT, `defaultGammaLUT.txt`). Note that the LED power was increased compared to the normal experimental conditions (i.e. retina recordings) to yield spectrometer readings in the low level range.
`LEDs_Setup2_20180823` - Measurement of the intensity curve (128 levels, from 0 to 254 in steps of 2) for the UV and the green LED **with gamma-correction** applied. Note that, like for `LEDs_Setup2_20180821`, the LED power was increased compared to the normal experimental conditions (i.e. retina recordings) to yield spectrometer readings in the low level range.
`LEDs_Setup2_20190130` - Measurement of maximal intensity (only level 255) of the UV and the green LED **after adjusting LED power** to the range used for retina recordings.
`TFTMonitor_Samsung_4` - Measurement of a random TFT monitor.
```
# Examples
#
if LightSourceType == 0:
fileNameMask = "LEDs_LCr_20191108_0" # LCr LEDs
elif LightSourceType == 1:
fileNameMask = "TFT_Monitor_20191003_0" # Standard TFT screen, for comparison
for iLED, LED in enumerate(LEDs):
# At first, use colors that are defined for the current LED
#
col_lo = spc.wavelength_to_rgb(LED["peak_nm"], darker=0.5)
col_hi = spc.wavelength_to_rgb(LED["peak_nm"], darker=0.8)
# Generate file name
#
fName = "{0}{1}_{2}.csv".format(path_Data, fileNameMask, LED["name"])
isLoad = os.path.isfile(fName)
if isLoad:
# File for that LED exists, load spectral data
#
wavelengths, LED["spect_raw_bkg"], LED["spect_raw_avg"], stim_intens = spc.loadSpectData(fName)
else:
# Measure LED background ...
#
input("Measure background of {0} LED. Press Enter to start ...".format(LED["name"]))
curr_background = []
for i in pbar.log_progress(range(LED["meas_n_trials"][0]), name='Background, {0} LED'.format(LED["name"])):
wavelengths, intensities = spm.grabSpectrum(spm.DevType.OO_USB2000, 0)
curr_background.append(intensities)
LED["spect_raw_bkg"] = np.mean(curr_background, axis=0)
# Plot ...
#
plt.figure(1)
plt.plot(wavelengths, LED["spect_raw_bkg"], color=col_lo, label=LED["name"] +" LED/filter, background")
plt.gca().set(xlabel="Wavelength [nm]", ylabel="Counts (raw spectrometer output)")
plt.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
display.display(plt.gcf())
display.clear_output(wait=True)
if not(isLoad):
# Measure LED at the selected intensities ...
#
LED["spect_raw_avg"] = []
nIntensFound = 0
tempText = "{0} LED/filter for intensity step".format(LED["name"])
input("Measure {0} LED . Press Enter to start ...".format(LED["name"]))
for iI in range(stim_n_intens_levels):
curr_spectrum = []
LED["spect_raw"] = []
for iT in range(LED["meas_n_trials"][1]):
wavelengths, intensities = spm.grabSpectrum(spm.DevType.OO_USB2000, 0)
curr_spectrum = intensities - LED["spect_raw_bkg"]
LED["spect_raw"].append(curr_spectrum)
LED["spect_raw_avg"].append(np.mean(LED["spect_raw"], axis=0))
nIntensFound += 1
# Correct spectrum by coverage factor
#
curr_spectrum = LED["spect_raw_avg"]
LED["spect_raw_avg"] = [i * coverage_correction for i in curr_spectrum]
# Plot ...
#
for iI, Intens in enumerate(stim_intens):
plt.figure(2)
tempLbl = LED["name"] +" LED/filter" if iI == 0 else ""
plt.plot(wavelengths, LED["spect_raw_avg"][iI], color=col_hi, label=tempLbl)
plt.fill(wavelengths, LED["spect_raw_avg"][iI], facecolor=col_hi, alpha=0.25)
plt.gca().set(xlabel="Wavelength [nm]", ylabel="Counts (raw spectrometer output)")
plt.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
display.display(plt.gcf())
display.clear_output(wait=True)
if len(glob.glob("{0}{1}_*".format(path_Data, fileNameMask))) == 0:
# Save all measured data and disconnect spectrometer
#
#spc.saveAllSpectData(fileNameMask, path_Data, wavelengths, LEDs, stim_intens)
spm.disconnect(spm.DevType.OO_USB2000)
```
### Postprocess the spectra
If the LEDs are dim and the spectral measurements are noisy, the spectra can be cleaned up by removing "shot noise" and fitting the raw spectra. The available fit functions are a normal Gaussian (`Gauss()`) and an exponentially modified Gaussian (`EMG()`; see `scripts//fitting_funcs.py`). More fit functions can be defined by the user, if needed.
The function `removeShotNoise()` replaces single data points by the trace average if they differ by more than `lim` from both of their direct neighbouring data points.
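A minimal sketch of what such a filter might look like (the actual `removeShotNoise()` lives in `scripts//fitting_funcs.py` and may differ in detail):

```python
import numpy as np

def remove_shot_noise(y, lim):
    """Replace isolated samples that differ from both direct neighbours
    by more than `lim` with the trace average (sketch, not the original)."""
    y = np.asarray(y, dtype=float).copy()
    avg = y.mean()
    for i in range(1, len(y) - 1):
        if abs(y[i] - y[i - 1]) > lim and abs(y[i] - y[i + 1]) > lim:
            y[i] = avg
    return y

trace = np.array([1.0, 1.2, 90.0, 1.1, 0.9])
cleaned = remove_shot_noise(trace, lim=50)
print(cleaned)   # the 90.0 spike is replaced by the trace mean
```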
The postprocessing includes the following steps:
1. `useFit` selects the fit function (or none), with `0`=none, `1`=Gaussian, `2`=EMG
2. `useSNRemover`, with `1`=try removing shot noise using the threshold `SN_thres`
3. `sigmaStart` is the starting width for the Gaussian or EMG fit
4. `useRange_nm`, if greater than zero, restricts the analysis to this nm range around the peak of the spectrum; this is helpful because shorter wavelengths can introduce noise when multiplied with the calibration file
Typical parameters for LED measurements are `useFit=1`, `useSNRemover=1`, `SN_thres=50`, and `sigmaStart=1`.
For an RGB TFT screen, typical parameters are `useFit=2`, `useSNRemover=0`, `SN_thres=500`, and `sigmaStart=1`.
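For reference, the two fit functions presumably look something like the following (a plain Gaussian and one common EMG parameterization; the versions in `scripts//fitting_funcs.py` may differ):

```python
import numpy as np
import scipy.special as sse

def Gauss(x, ampl, mean, sigma):
    """Plain Gaussian with peak amplitude `ampl`."""
    return ampl * np.exp(-(x - mean)**2 / (2 * sigma**2))

def EMG(x, ampl, mean, sigma, tau):
    """Exponentially modified Gaussian: a Gaussian convolved with an
    exponential decay of time constant `tau` (skewed to larger x)."""
    arg = (mean + sigma**2 / tau - x) / (np.sqrt(2) * sigma)
    return (ampl / (2 * tau)
            * np.exp(sigma**2 / (2 * tau**2) - (x - mean) / tau)
            * sse.erfc(arg))

# Evaluate on a hypothetical wavelength grid around a 395 nm peak
x = np.linspace(380, 420, 200)
y = EMG(x, ampl=100.0, mean=395.0, sigma=3.0, tau=2.0)
```

Both signatures match the `curve_fit` calls below (`p0=[ampl, mean, sigma]` and `p0=[ampl, mean, sigma, tau]`).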
```
if LightSourceType == 0:
# Setup #2 green-UV lightcrafter
# (through-the-condenser stimulation)
#
useFit = 0 # 0=no, 1=use Gaussian, 2=use exponentially modified Gaussian (EMG)
useSNRemover = 1 # 0=no, 1=yes
useLEDPeak = 0 # 0=determine peak, 1=use peak given in the LED description
useRange_nm = 0 # if > 0, considers only the spectrum the range around the peak
useBaselineToZero = 1 # 0=no, 1=yes,
Baseline_thres = 3 # std of baseline
SN_thres = 50 # threshold for shot noise detection,
sigmaStart = 1
else:
# TFT screen, for comparison
#
useFit = 2
useSNRemover = 1
useLEDPeak = 1
useRange_nm = 0
SN_thres = 500
sigmaStart = 1
for iLED, LED in enumerate(LEDs):
LED["spect_raw_fit"] = []
for iI, Intens in enumerate(stim_intens):
x = wavelengths
y = LED["spect_raw_avg"][iI]
ampl = y.max()
mean = LED["peak_nm"] if useLEDPeak else x[np.argmax(y)]
sigma = sigmaStart
tau = 1
if useSNRemover > 0:
# Try removing shot noise
#
LED["spect_raw_avg"][iI] = fit.removeShotNoise(LED["spect_raw_avg"][iI], SN_thres)
if useFit > 0:
# Fit the measured curves
#
warnings.filterwarnings(action='ignore')
try:
if useFit == 1:
popt, pcov = curve_fit(fit.Gauss, x, y, p0=[ampl, mean, sigma])
y_fit = fit.Gauss(x, popt[0], popt[1], popt[2])
elif useFit == 2:
popt, pcov = curve_fit(fit.EMG, x, y, p0=[ampl, mean, sigma, tau])
y_fit = fit.EMG(x, popt[0], popt[1], popt[2], popt[3])
except RuntimeError:
# Error, try simple Gaussian fit ...
#
try:
popt, pcov = curve_fit(fit.Gauss, x, y, p0=[ampl, mean, sigma])
y_fit = fit.Gauss(x, popt[0], popt[1], popt[2])
except RuntimeError:
# ... failed as well, just use raw mean spectrum
#
y_fit = LED["spect_raw_avg"][iI]
warnings.filterwarnings(action='default')
else:
# Use the measured spectrum
#
y_fit = LED["spect_raw_avg"][iI]
if useRange_nm > 0:
# Cut result to the defined wavelength range
#
n = len(y)
x1 = max(x[0], mean -useRange_nm/2)
x2 = min(x[-1], mean +useRange_nm/2)
i1, = np.where(wavelengths > x1)
i2, = np.where(wavelengths > x2)
i1 = i1[0]
i2 = i2[0]
temp = np.copy(y_fit)
y_fit = np.zeros(n, dtype=np.float64)
y_fit[i1:i2] = temp[i1:i2]
if useBaselineToZero == 1:
baseline = y_fit[0:100]
y_fit_baseline = np.where(y_fit<2*np.std(baseline), 0, y_fit)
else:
y_fit_baseline = y_fit
if iI == len(stim_intens) -1:
if LED["peak_nm"] <= 0:
LED["peak_nm"] = spc.getPeak_in_nm(wavelengths, y_fit)
LED["spect_raw_fit"].append(y_fit_baseline)
for iLED, LED in enumerate(LEDs):
col_lo = spc.wavelength_to_rgb(LED["peak_nm"], darker=0.5)
col_hi = spc.wavelength_to_rgb(LED["peak_nm"], darker=0.8)
for iI, Intens in enumerate(stim_intens):
tempLbl = LED["name"] +" LED/filter" if iI == 0 else ""
plt.fill(wavelengths, LED["spect_raw_avg"][iI], facecolor=col_hi, alpha=0.25)
plt.plot(wavelengths, LED["spect_raw_fit"][iI], color=col_lo, label=tempLbl)
plt.gca().set(xlabel="Wavelength [nm]", ylabel="Counts (raw spectrometer output)")
plt.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
plt.xlim((350, 650))
```
Now the measured LED/filter spectra are ...
1. corrected using the calibration data of the spectrometer (in [µJ/s]) and counts are converted into [nW] (1 µJ/s = 1000 nW),
2. interpolated to match the 300 to 699 nm x scale of the opsin data (-> `spect_nw`), ...
3. and a copy is made that is normalized to the maximal amplitude (-> `spect_nw_norm`)
```
for iLED, LED in enumerate(LEDs):
LED["spect_nw"] = []
LED["spect_nw_norm"] = []
for iI, Intens in enumerate(stim_intens):
try:
corrSpect = np.multiply(LED["spect_raw_fit"][iI], spm_calib_values)*1000 /int_time_s
corrSpectInterpol = np.interp(x_wavelen_nm, wavelengths, corrSpect)
_amax = np.amax(corrSpectInterpol)
if _amax != 0:
corrSpectIntNorm = corrSpectInterpol /_amax
else:
tempLbl = LED["name"] +" LED/filter"
print("WARNING: maximal amplitude is zero for intensity {0} of {1}.".format(Intens, tempLbl))
raise FloatingPointError
except FloatingPointError as err:
print("ERROR: for {0} LED/filter at {1}, `{2}`.".format(LED["name"], Intens, err))
corrSpectInterpol = np.zeros((len(x_wavelen_nm),), dtype=np.float64)
corrSpectIntNorm = corrSpectInterpol
LED["spect_nw"].append(corrSpectInterpol)
LED["spect_nw_norm"].append(corrSpectIntNorm)
```
Next, the LED/filter spectra need to be corrected for the wavelength-specific transmission through the eye (based on Henriksson et al. Experimental Eye Research 2010). Note that this is only necessary for in vivo recordings. First, the file is loaded and displayed.
```
# Load transmission data file
transmission_Name = "Transmission_mouse_eye"
transmission_mouse_eye = np.loadtxt(path_Transmission +transmission_Name +".txt", usecols=(0,))
transmission_wavelengths = np.loadtxt(path_Transmission +transmission_Name +".txt", usecols=(1,))
# Display file
plt.plot(transmission_wavelengths, transmission_mouse_eye, label="transmission_mouse_eye")
plt.gca().set(xlabel="Wavelength [nm]", ylabel="Relative transmission")
plt.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
display.display(plt.gcf())
display.clear_output(wait=True)
```
Provide the pupil area and estimate the ratio between pupil area and retinal area:
```
eye_axial_len_mm = 3
ret_area = 0.6 *(eye_axial_len_mm/2)**2 *np.pi *4
# pupil_area = 0.21 # stationary
pupil_area = 1.91 # running
pupil_to_retina = pupil_area/ret_area
print("mouse retinal area [mm²] = {0:.1f}".format(ret_area))
print("pupil area [mm²] = {0:.1f}".format(pupil_area))
```
Adjust intensity based on eye transmission and pupil size:
```
for iLED, LED in enumerate(LEDs):
col_hi = spc.wavelength_to_rgb(LED["peak_nm"], darker=0.8)
LED["spect_nw_corr"] = []
for iI, Intens in enumerate(stim_intens):
current_spectrum = np.multiply(LED["spect_nw"][iI], transmission_mouse_eye)
current_spectrum /= 100
LED["spect_nw_corr"].append(current_spectrum*pupil_to_retina)
```
Plot calibrated (fitted) spectra of LED/filter combinations together with the photoreceptor opsin sensitivity curves and the filter spectra from the datasheets:
```
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
for iLED, LED in enumerate(LEDs):
col_lo = spc.wavelength_to_rgb(LED["peak_nm"], darker=0.5)
col_hi = spc.wavelength_to_rgb(LED["peak_nm"], darker=0.8)
for iI, Intens in enumerate(stim_intens):
tempLbl = LED["name"] +" LED/filter" if iI == 0 else ""
ax1.plot(x_wavelen_nm, LED["spect_nw_corr"][iI], color=col_lo, label=tempLbl, linewidth=0.5)
ax1.fill(x_wavelen_nm, LED["spect_nw_corr"][iI], facecolor=col_hi, alpha=0.25)
if LEDFilterType != 2:
ax2.plot(x_wavelen_nm, LED["filter_spect"], color="black", label="Filter_" +LED["name"], linewidth=1)
for PR in PRs:
if len(PR["spect"]) > 0:
col = spc.wavelength_to_rgb(PR["peak_nm"], darker=0.75)
ax2.plot(x_wavelen_nm, PR["spect"], color=col, label=PR["name"])
ax1.set_xlabel("Wavelength [nm]")
ax1.set_ylabel("LED/filter power [nW]")
ax1.legend(bbox_to_anchor=(1.10, 1), loc="upper left")
ax1.set_ylim(bottom=0)
ax2.set_ylabel("Rel. sensitivity")
ax2.legend(bbox_to_anchor=(1.10, 0.75), loc="upper left")
ax2.set_ylim(bottom=0)
ax2.grid(False)
ax2.set_xlim((300, 650))
for iLED, LED in enumerate(LEDs):
print("{0}\tLED/filter, sum = {1:.3e}".format(LED["name"], np.sum(LED["spect_nw_corr"])))
```
## Determine effective photoreceptor stimulation
Calculate spectra for effective LED/filter combinations ...
```
for LED in LEDs:
LED["effect_on_PR"] = []
for PR in PRs:
if len(PR["spect"]) > 0:
temp = {}
temp["PR_name"] = PR["name"]
temp["spect"] = []
temp["rel_exc"] = []
for iI, Intens in enumerate(stim_intens):
tempSpect = np.array(PR["spect"] *LED["spect_nw_norm"][iI])
temp["spect"].append(tempSpect)
A_overlap = np.trapz(tempSpect)
A_LED = np.trapz(LED["spect_nw_norm"][iI])
rel_exc = A_overlap /A_LED if (A_LED > 0) else 0.0
temp["rel_exc"].append(rel_exc)
LED["effect_on_PR"].append(temp)
```
## Generate summary (maximal isomerization rates)
1. Plot (normalized) spectra of photoreceptors and LED/filter combinations and print relative co-excitation of photoreceptors by the LEDs.
2. Calculate and print photo-isomerization rates for all LED/filter and photoreceptor combinations
```
# Plot (normalized) spectra of photoreceptors and LED/filter combinations
#
for PR in PRs:
if len(PR["spect"]) > 0:
plt.plot(x_wavelen_nm, PR["spect"],
color=spc.wavelength_to_rgb(PR["peak_nm"]), label=PR["name"])
for LED in LEDs:
colLED = spc.wavelength_to_rgb(LED["peak_nm"])
if len(LED["spect_nw_norm"][-1]) > 0:
plt.plot(x_wavelen_nm, LED["spect_nw_norm"][-1], color=colLED, label=LED["name"] +" LED/filter")
for effect in LED["effect_on_PR"]:
for PR in PRs:
if PR["name"] == effect["PR_name"]:
colPR = spc.wavelength_to_rgb(PR["peak_nm"])
plt.fill(x_wavelen_nm, effect["spect"][-1], facecolor=colPR, alpha=0.25)
plt.gca().set(xlabel="Wavelength [nm]", ylabel="Rel. sensitivity or intensity")
plt.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
# Print co-excitation values for all LED/filter and photoreceptor combinations
#
print("Relative co-excitation (at intensity={0}):".format(stim_intens[-1]))
for LED in LEDs:
if len(LED["spect_nw_norm"][-1]) > 0:
for effect in LED["effect_on_PR"]:
print("{0:6.1f}% of {1} by {2}"
.format(effect["rel_exc"][-1]*100, effect["PR_name"], LED["name"]))
plt.show()
# Calculate and print photo-isomerization rates for all LED/filter and
# photoreceptor combinations
#
for iLED, LED in enumerate(LEDs):
LED["pow_eflux"] = []
LED["pow_Q"] = []
LED["pow_phi"] = []
LED["pow_E"] = []
for iI, Intens in enumerate(stim_intens):
# Convert energy flux from [nW] (=readout of spectrometer) into [eV/s]
#
pow_eflux = np.array((LED["spect_nw_corr"][iI] *1E-9 *eV_per_J), dtype=float)
# Calculate the wavelength-dependent photon energy `Q` in [eV]
#
pow_Q = np.array((c*h/(x_wavelen_nm *1E-9)), dtype=float)
# Divide energy flux by the photon energy to get the photon flux `phi`[photons/s]
# and then photon flux density `E` [photons/s /µm^2]
#
pow_phi = np.divide(pow_eflux, pow_Q)
pow_E = pow_phi /A_detect_um2
LED["pow_eflux"].append(pow_eflux)
LED["pow_Q"].append(pow_Q)
LED["pow_phi"].append(pow_phi)
LED["pow_E"].append(pow_E)
# Calculate per photoreceptor ...
#
for effect in LED["effect_on_PR"]:
for iPR, PR in enumerate(PRs):
if PR["name"] == effect["PR_name"]:
effect["photon_rate"] = []
effect["photoiso_rate"] = []
effect["photoiso_rate_total"] = []
for iI, Intens in enumerate(stim_intens):
# ... photon flux per photoreceptor `photon_rate` in [photons/s]
#
A_collect = PR["collecArea_um2"]
photon_rate = LED["pow_E"][iI] *A_collect
# ... photoisomerizations [P*/photoreceptor /s]
#
photoiso_rate = photon_rate *effect["rel_exc"][iI]
photoiso_rate_total = np.sum(photoiso_rate)
effect["photon_rate"].append(photon_rate)
effect["photoiso_rate"].append(photoiso_rate)
effect["photoiso_rate_total"].append(photoiso_rate_total)
if (iLED == 0) and (iPR == 0):
print("Maximal photoisomerization rates (at intensity={0}):".format(stim_intens[-1]))
print((effect["photoiso_rate_total"][-1], PR["name"], LED["name"]))
print("{0:7.1f} 10^3 photons/s in {1} for {2}"
.format(effect["photoiso_rate_total"][-1]/1E3, PR["name"], LED["name"]))
# Plot ...
#
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
for PR in PRs:
if len(PR["spect"]) > 0:
ax2.plot(x_wavelen_nm, PR["spect"],
color=spc.wavelength_to_rgb(PR["peak_nm"]), label=PR["name"])
for LED in LEDs:
for effect in LED["effect_on_PR"]:
for iPR, PR in enumerate(PRs):
if PR["name"] == effect["PR_name"]:
col_lo = spc.wavelength_to_rgb(PR["peak_nm"], darker=0.5)
col_hi = spc.wavelength_to_rgb(PR["peak_nm"], darker=0.8)
ax1.plot(x_wavelen_nm, effect["photoiso_rate"][-1], color=col_lo,
label=LED["name"] +" on " +PR["name"], linewidth=0.5)
ax1.fill(x_wavelen_nm, effect["photoiso_rate"][-1], facecolor=col_hi, alpha=0.25)
ax1.set_xlabel("Wavelength [nm]")
ax1.set_ylabel("Isomerisation rate")
ax1.legend(bbox_to_anchor=(1.10, 1), loc="upper left")
ax1.set_ylim(bottom=0)
ax2.set_ylabel("Rel. sensitivity")
ax2.legend(bbox_to_anchor=(1.10, 0.60), loc="upper left")
ax2.set_ylim(bottom=0)
ax2.grid(False)
```
## Day 4: Neurons and Networks
Welcome to Day 4! Today, we learn about realistic modelling of synaptic interactions for networks of Hodgkin-Huxley neurons and how to implement synapses in TensorFlow.
### How do we model Synapses?
A synapse is defined between two neurons. There can be multiple synapses between two neurons (even of the same type), but here we will consider only a single synapse between any two neurons. So, there can be at most $n^2$ synapses of the same type between $n$ different neurons. Each synapse has its own state variables, whose dynamics can be defined using differential equations.
For most networks, not all neurons will be connected by all types of synapses. The network of neurons has a certain connectivity that can be represented as an adjacency matrix in the language of graphs. Each type of synapse has its own connectivity/adjacency matrix.
#### Types of Synapses
There are two major categories of synapses: excitatory and inhibitory.
Here we will implement one synapse of each category.
#### Modelling Synapses
The current that passes through a synaptic channel depends on the difference between its reversal potential ($E_{syn}$) and the actual value of the membrane potential ($u$), and is $I_{syn}(t)=g_{syn}(t)(u(t)−E_{syn})$. We describe the synaptic conductance $g_{syn}(t)=g_{max}[O](t)$ by its maximal conductance $g_{max}$ and a gating variable $[O](t)$, the fraction of open synaptic channels. Channels open when neurotransmitter binds to the receptors, and the amount of available neurotransmitter is a function of the presynaptic activation $[T]$.
$\frac{d[O]}{dt}=\alpha[T](1−[O])−\beta[O]$
where $\alpha$ is the binding constant, $\beta$ the unbinding constant and $(1−[O])$ the fraction of closed channels where binding of neurotransmitter can occur. The functional form of T depends on the type and nature of the synapse.
##### Acetylcholine Synapses (Excitatory)
$$[T]_{ach} = A\ \Theta(t_{max}+t_{fire}+t_{delay}-t)\ \Theta(t-t_{fire}-t_{delay})$$
##### GABAa Synapses (Inhibitory)
$$[T]_{gaba} = \frac{1}{1+e^{-\frac{V(t)-V_0}{\sigma}}}$$
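Before moving to the TensorFlow implementation, the gating equation can be sanity-checked in a few lines of plain NumPy. This is only an illustrative sketch: the Euler step, the constants and the square transmitter pulse below are placeholder assumptions, not the parameters used later in this notebook.

```python
import numpy as np

alpha, beta = 10.0, 0.2   # illustrative binding/unbinding constants
dt = 0.01                 # time step in ms
t = np.arange(0, 10, dt)

# Hypothetical square transmitter pulse: [T] = 0.5 for 2 <= t < 2.3 ms, else 0
T = np.where((t >= 2.0) & (t < 2.3), 0.5, 0.0)

# Euler-integrate d[O]/dt = alpha*[T]*(1-[O]) - beta*[O]
O = np.zeros_like(t)
for i in range(1, len(t)):
    dO = alpha * T[i-1] * (1.0 - O[i-1]) - beta * O[i-1]
    O[i] = O[i-1] + dt * dO
```

As expected, [O] rises quickly while the pulse is on and then decays exponentially at the unbinding rate $\beta$, which is exactly the EPSP-like time course the full model produces.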
#### Iterating over conditionals in TensorFlow
How would you choose between two options based on conditions provided in a list/array using TensorFlow? Say you have an array of 10 random variables (say x) between 0 and 1, and you want the output of your code to be 10 if the random variable is greater than 0.5, and -10 when it is not. To perform choices based on the conditions given in a boolean list/array, we can use the TensorFlow function tf.where(). tf.where(cond,a,b) chooses elements from a/b based on the conditional array/list cond. Essentially, it performs masking between a and b.
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
# create the Tensor with the random variables
x = tf.constant(np.random.uniform(size = (10,)),dtype=tf.float64)
# a list of 10s to select from if true
if_true = tf.constant(10*np.ones((10,)),dtype=tf.float64)
# a list of -10s to select from if false
if_false = tf.constant(-10*np.ones((10,)),dtype=tf.float64)
# perform the conditional masking
selection = tf.where(tf.greater(x,0.5),if_true,if_false)
with tf.Session() as sess:
x_out = sess.run(x)
selection_out = sess.run(selection)
print(x_out)
print(selection_out)
```
#### Recalling and Redesigning the Generalized TensorFlow Integrator
Recall the RK4-based numerical integrator we created on Day 2. You might have noticed that the implementation of synapses requires an additional dynamical state variable that cannot be trivially updated by a memoryless differential equation. We use the word memoryless because, until now, all our dynamical variables have depended only on the value immediately before. The dynamical state variable in question is the time of last firing; let's call it fire\_t.
One limitation of using TensorFlow in this implementation is that, when we calculate the change in a dynamical state variable, we only have access to the values immediately before, unless we explicitly save earlier values as separate variables. Note that if we want to check whether a neuron has "fired", we need to know the value of the voltage both before and after the update, to check if it crossed the threshold. This means we have to change our implementation of the integrator so that it can update the variable fire\_t.
We do this as follows:
1. The Integrator needs to know two more properties: the number of neurons ($n$) and the firing threshold for each of these neurons. We provide this information as arguments to the Integrator class constructor.
2. Our state vector will now have $n$ additional variables representing the firing times; these will not undergo the standard differential update but will be updated by a single-bit memory method.
3. Inside our Integrator, we have access to the initial values of the state variables and the change in the state variables. We use these to check whether the voltages have crossed the firing threshold. For this, we need to define a convention for the state vector: we store the voltages of the neurons in the first $n$ elements and the fire times fire\_t in the last $n$ elements.
4. The differential update function, i.e. step\_func, will take all variables but not update the last $n$ values, i.e. $\frac{d\ fire\_t}{dt}=0$; that update is performed by the scan function itself after the $\Delta y$ has been calculated. It checks, for every neuron, whether the firing threshold lies between $V$ and $V + \Delta V$, and updates the variable fire\_t of the appropriate neurons with the current time.
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
%matplotlib inline
def tf_check_type(t, y0): # Ensure Input is Correct
if not (y0.dtype.is_floating and t.dtype.is_floating):
raise TypeError('Error in Datatype')
class _Tf_Integrator():
def __init__(self,n_,F_b):
# class constructor to get inputs for number of neurons and firing thresholds
self.n_ = n_
self.F_b = F_b
def integrate(self, func, y0, t):
time_delta_grid = t[1:] - t[:-1]
def scan_func(y, t_dt):
# recall the necessary variables
n_ = self.n_
F_b = self.F_b
t, dt = t_dt
# Differential updation
dy = self._step_func(func,t,dt,y) # Make code more modular.
dy = tf.cast(dy, dtype=y.dtype) # Failsafe
out = y + dy # the result after differential updation
# Conditional to use specialized Integrator vs Normal Integrator (n=0)
if n_>0:
# Extract the last n variables for fire times
fire_t = y[-n_:]
# Value of change in firing times if neuron didn't fire = 0
l = tf.zeros(tf.shape(fire_t),dtype=fire_t.dtype)
# Value of change in firing times if neuron fired = Current Time - Last Fire Time
l_ = t-fire_t
# Check if Voltage is initially less than Firing Threshold
z = tf.less(y[:n_],F_b)
# Check if Voltage is more than Firing Threshold after updation
z_ = tf.greater_equal(out[:n_],F_b)
df = tf.where(tf.logical_and(z,z_),l_,l)
fire_t_ = fire_t+df # Update firing time
return tf.concat([out[:-n_],fire_t_],0)
else:
return out
y = tf.scan(scan_func, (t[:-1], time_delta_grid),y0)
return tf.concat([[y0], y], axis=0)
def _step_func(self, func, t, dt, y):
k1 = func(y, t)
half_step = t + dt / 2
dt_cast = tf.cast(dt, y.dtype) # Failsafe
k2 = func(y + dt_cast * k1 / 2, half_step)
k3 = func(y + dt_cast * k2 / 2, half_step)
k4 = func(y + dt_cast * k3, t + dt)
return tf.add_n([k1, 2 * k2, 2 * k3, k4]) * (dt_cast / 6)
def odeint(func, y0, t, n_, F_b):
t = tf.convert_to_tensor(t, preferred_dtype=tf.float64, name='t')
y0 = tf.convert_to_tensor(y0, name='y0')
tf_check_type(y0,t)
return _Tf_Integrator(n_, F_b).integrate(func,y0,t)
```
#### Implementing the Dynamical Function for a Hodgkin-Huxley Neuron
Recall that each Hodgkin-Huxley neuron in an $n$-neuron network has 4 main dynamical variables V, m, n, h, which we represent as $n$-vectors. Now we need to add some more state variables to represent each synapse, i.e. the fraction of open channels. The dynamics are given by:
For each neuron:
$$C_m\frac{dV}{dt} = I_{injected} - I_{Na} - I_K - I_L - I_{ach} - I_{gaba}$$
$$\frac{dm}{dt} = - \frac{1}{\tau_m}(m-m_0)$$
$$\frac{dh}{dt} = - \frac{1}{\tau_h}(h-h_0)$$
$$\frac{dn}{dt} = - \frac{1}{\tau_n}(n-n_0)$$
For each synapse:
$$\frac{d[O]_{ach}}{dt} = \alpha (1-[O]_{ach})[T]_{ach}-\beta[O]_{ach}$$
$$[T]_{ach} = A\ \Theta(t_{max}+t_{fire}+t_{delay}-t)\ \Theta(t-t_{fire}-t_{delay})$$
$$\frac{d[O]_{gaba}}{dt} = \alpha (1-[O]_{gaba})[T]_{gaba}-\beta[O]_{gaba}$$
$$[T]_{gaba} = \frac{1}{1+e^{-\frac{V(t)-V_0}{\sigma}}}$$
where the values of $\tau_m$, $\tau_h$, $\tau_n$, $m_0$, $h_0$, $n_0$ are given from the equations mentioned earlier.
#### Synaptic Memory Management
As discussed earlier, there are at most $n^2$ synapses of each type, but unless the network is fully connected or very dense, we usually need only a small subset of them. We could, in principle, calculate the dynamics of all $n^2$ synapses, but that would be wasteful. So we devise a matrix-based sparse-dense coding system for evaluating the dynamics of these variables and for using their values. This reduces memory usage and minimizes the number of calculations, at the cost of the time needed for encoding sparse data into dense form and vice versa. This is also why we use a matrix approach: it lets TensorFlow speed up the process.
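As a rough plain-NumPy sketch of this sparse<->dense encoding (with a made-up 3-neuron connectivity, not the network defined below), the k open fractions are scattered into an n x n matrix using the flattened connectivity as a mask, and gathered back the same way:

```python
import numpy as np

n = 3
C = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])          # connectivity: k = 2 synapses
o_sparse = np.array([0.7, 0.2])    # one open fraction per existing synapse

# Encode: scatter the k values into the n*n positions where C == 1
O_dense = np.zeros(n * n)
O_dense[np.flatnonzero(C.reshape(-1) == 1)] = o_sparse
O_dense = O_dense.reshape(n, n)

# Decode: gather the k values back out with the same mask
o_back = O_dense.reshape(-1)[C.reshape(-1) == 1]
assert np.allclose(o_back, o_sparse)
```

The TensorFlow version of this scatter (with tf.boolean_mask and tf.scatter_update) appears in the synaptic current functions below.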
#### Defining the Connectivity
Let's take a very simple 3-neuron network to test how the two types of synapses work. Let $X_1$ be an excitatory neuron that forms acetylcholine synapses, $X_2$ an inhibitory neuron that forms GABAa synapses, and $X_3$ an output neuron that does not synapse onto any neuron. Take the network of the form: $X_1\rightarrow X_2\rightarrow X_3$.
We create the connectivity matrices for both types of synapses. We need to define a convention for the ordering of connectivity. We set the presynaptic neurons as the column number, and the postsynaptic neurons as the row number.
Let $X_1$,$X_2$,$X_3$ be indexed by 0, 1 and 2 respectively. The Acetylcholine connectivity matrix takes the form of:
$$Ach_{n\times n}=
\begin{bmatrix}
0&0&0\\
1&0&0\\
0&0&0\\
\end{bmatrix}
$$
Similarly, the GABAa connectivity matrix becomes:
$$GABA_{n\times n}=
\begin{bmatrix}
0&0&0\\
0&0&0\\
0&1&0\\
\end{bmatrix}
$$
```
n_n = 3 # number of simultaneous neurons to simulate
# Acetylcholine
ach_mat = np.zeros((n_n,n_n)) # Ach Synapse Connectivity Matrix
ach_mat[1,0]=1
## PARAMETERS FOR ACETYLCHLOLINE SYNAPSES ##
n_ach = int(np.sum(ach_mat)) # Number of Acetylcholine (Ach) Synapses
alp_ach = [10.0]*n_ach # Alpha for Ach Synapse
bet_ach = [0.2]*n_ach # Beta for Ach Synapse
t_max = 0.3 # Maximum Time for Synapse
t_delay = 0 # Axonal Transmission Delay
A = [0.5]*n_n # Synaptic Response Strength
g_ach = [0.35]*n_n # Ach Conductance
E_ach = [0.0]*n_n # Ach Potential
# GABAa
gaba_mat = np.zeros((n_n,n_n)) # GABAa Synapse Connectivity Matrix
gaba_mat[2,1] = 1
## PARAMETERS FOR GABAa SYNAPSES ##
n_gaba = int(np.sum(gaba_mat)) # Number of GABAa Synapses
alp_gaba = [10.0]*n_gaba # Alpha for GABAa Synapse
bet_gaba = [0.16]*n_gaba # Beta for GABAa Synapse
V0 = [-20.0]*n_n # Decay Potential
sigma = [1.5]*n_n # Decay Time Constant
g_gaba = [0.8]*n_n # GABAa Conductance
E_gaba = [-70.0]*n_n # GABAa Potential
```
#### Defining Firing Thresholds
We shall also define a list that stores the firing threshold for every neuron.
```
## Storing Firing Thresholds ##
F_b = [0.0]*n_n # Fire threshold
```
#### Defining Input Current as function of Time
We can store our input to each neuron as a $n\times timesteps$ matrix, say current_input, and extract the input at each time point during dynamical update step using a function which we shall call I_inj_t(t).
```
def I_inj_t(t):
# Turn indices to integer and extract from matrix
index = tf.cast(t/epsilon,tf.int32)
return tf.constant(current_input.T,dtype=tf.float64)[index]
```
#### Working with Sparse<->Dense Dynamics
For performing the dynamical updates of the synapses, we need only as many variables as (number of synapses) × (number of equations per synapse). Here our synapse models require only one dynamical variable, the open fraction [O], which we store as a k-vector, where k is the number of synapses.
We will need to work with this [O] vector in two separate instances, and each time we will have to go to a dense form to speed up computation.
##### A. Calculation of Synaptic Currents
The formula for $I_{syn}$ is given by $$I_{syn} = \sum_{presynaptic} g_{syn}[O](V-E_{syn})$$
The best way to represent this calculation is to use the connectivity matrix $\mathbf{C}$ for the synapses and the open fraction vector $\vec{[O]}$ to create an open fraction matrix $\mathbf{O}$ and perform the following computations.
$$\mathbf{C}=
\begin{bmatrix}
0&1&...&0\\
0&0&...&1\\
...&...&...&1\\
1&0&0&0
\end{bmatrix}\ \ \ \ \ \ \ \ \vec{[O]}=[O_1,O_2...O_k]\ \ \ \ \ \ \ \
\mathbf{O}=
\begin{bmatrix}
0&O_1&...&0\\
0&0&...&O_a\\
...&...&...&O_b\\
O_k&0&0&0
\end{bmatrix}
$$
$$\vec{[I_{syn}]}=\sum_{columns}\mathbf{O}\diamond(\vec{g}_{syn}\odot(\vec{V}-\vec{E}_{syn}))$$
where $\diamond$ is columnwise multiplication and $\odot$ is elementwise multiplication. $\vec{[I_{syn}]}$ is now the total synaptic current input to each of the neurons.
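To make the row/column convention concrete, here is a small plain-NumPy check of the computation above, using a single hypothetical synapse ($X_0 \rightarrow X_1$) and placeholder values for $g$, $E$ and $V$:

```python
import numpy as np

n = 3
# Connectivity convention from above: row = postsynaptic, column = presynaptic
C = np.array([[0., 0., 0.],
              [1., 0., 0.],    # one synapse: neuron 0 -> neuron 1
              [0., 0., 0.]])
O = C * 0.6                    # open-fraction matrix: the synapse is 60% open
g = np.full(n, 0.35)           # maximal conductances (placeholder)
E = np.zeros(n)                # reversal potentials (placeholder)
V = np.full(n, -65.0)          # membrane voltages (placeholder)

# I_syn[i] = sum_j O[i, j] * g[i] * (V[i] - E[i])  -- summing over columns
I_syn = np.sum(O * (g * (V - E))[:, None], axis=1)
```

With these numbers only the postsynaptic neuron receives current, $0.6 \cdot 0.35 \cdot (-65) = -13.65$; the TensorFlow functions below perform the same computation with transposes instead of broadcasting along a new axis.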
##### Algorithm for Synaptic Currents
1. First, we need to convert the sparse $[O]$ vector to the dense $\mathbf{O}$ matrix. TensorFlow does not allow direct in-place changes to a defined tensor, so we create an $n^{2}$-element TensorFlow variable o\_, which we will later reshape to an $n\times n$ matrix.
2. We then flatten the synaptic connectivity matrix and find the indices where there is a connection. For this we use the boolean mask function to choose the correct k (total number of synapses) indices from the range $1$ to $n^2$ and store in the variable ind.
3. Using the scatter\_update function of TensorFlow, we fill the correct indices of the variable o\_ that we created with the values of open fraction from the $[O]$ vector.
4. We now reshape the vector to an $n\times n$ matrix. Since Python stores matrices as arrays of arrays, with each row as an inner array, the easiest way to perform columnwise multiplication is to transpose the matrix so that each column becomes an inner array, perform elementwise multiplication with each inner array, and apply the transpose again.
5. Finally using reduce\_sum, we sum over the columns to get our $I_{syn}$ vector.
```
## Acetylcholine Synaptic Current ##
def I_ach(o,V):
o_ = tf.Variable([0.0]*n_n**2,dtype=tf.float64)
ind = tf.boolean_mask(tf.range(n_n**2),ach_mat.reshape(-1) == 1)
o_ = tf.scatter_update(o_,ind,o)
o_ = tf.transpose(tf.reshape(o_,(n_n,n_n)))
return tf.reduce_sum(tf.transpose((o_*(V-E_ach))*g_ach),1)
## GABAa Synaptic Current ##
def I_gaba(o,V):
o_ = tf.Variable([0.0]*n_n**2,dtype=tf.float64)
ind = tf.boolean_mask(tf.range(n_n**2),gaba_mat.reshape(-1) == 1)
o_ = tf.scatter_update(o_,ind,o)
o_ = tf.transpose(tf.reshape(o_,(n_n,n_n)))
return tf.reduce_sum(tf.transpose((o_*(V-E_gaba))*g_gaba),1)
## Other Currents ##
def I_K(V, n):
return g_K * n**4 * (V - E_K)
def I_Na(V, m, h):
return g_Na * m**3 * h * (V - E_Na)
def I_L(V):
return g_L * (V - E_L)
```
##### B. Updating the Synaptic Variables
For the update, we first need to calculate the values of the presynaptic activation [T] for both types of synapses. We essentially calculate this for each neuron and then redirect the values to the correct postsynaptic neuron. Recall:
$$[T]_{ach} = A\ \Theta(t_{max}+t_{fire}+t_{delay}-t)\ \Theta(t-t_{fire}-t_{delay})$$
$$[T]_{gaba} = \frac{1}{1+e^{-\frac{V(t)-V_0}{\sigma}}}$$
Thus $[T]_{ach}$ is a function of the last firing time, and $[T]_{gaba}$ depends on the presynaptic voltage. Once we have calculated the [T]-vector for both types of synapse, we need to redirect the values to the correct synapses in a sparse $k\times1$ vector form.
##### Algorithm for Dynamics
1. For $[T]_{ach}$, use a boolean logical_and function to check, for each neuron as a vector, whether the current time point t is greater than the last fire time (fire\_t) + delay (t\_delay) and less than the last fire time (fire\_t) + delay (t\_delay) + activation length (t\_max). Use the result of these boolean operations to choose between zero and a constant A; this serves as the Heaviside step function. For $[T]_{gaba}$, simply use the V vector to determine T.
2. To make the sparse vector, we follow a two-step process. First we multiply each row of the connectivity matrix $\mathbf{C}$ with the respective $[T]$ vector to get an activation matrix $\mathbf{T}$; then we flatten $\mathbf{T}$ and $\mathbf{C}$ and, using tf.boolean\_mask, remove all the zeros from $\mathbf{T}$ to get a $k\times1$ vector that stores the presynaptic activation for each synapse, where $k=n_{ach}$ or $n_{gaba}$.
3. Calculate the differential change in the open fractions using the $k\times1$ vector.
```
def dXdt(X, t):
V = X[:1*n_n] # First n_n values are Membrane Voltage
m = X[1*n_n:2*n_n] # Next n_n values are Sodium Activation Gating Variables
h = X[2*n_n:3*n_n] # Next n_n values are Sodium Inactivation Gating Variables
n = X[3*n_n:4*n_n] # Next n_n values are Potassium Gating Variables
o_ach = X[4*n_n : 4*n_n + n_ach] # Next n_ach values are Acetylcholine Synapse Open Fractions
o_gaba = X[4*n_n + n_ach : 4*n_n + n_ach + n_gaba] # Next n_gaba values are GABAa Synapse Open Fractions
fire_t = X[-n_n:] # Last n_n values are the last fire times as updated by the modified integrator
dVdt = (I_inj_t(t) - I_Na(V, m, h) - I_K(V, n) - I_L(V) - I_ach(o_ach,V) - I_gaba(o_gaba,V)) / C_m
## Updation for gating variables ##
m0,tm,h0,th = Na_prop(V)
n0,tn = K_prop(V)
dmdt = - (1.0/tm)*(m-m0)
dhdt = - (1.0/th)*(h-h0)
dndt = - (1.0/tn)*(n-n0)
## Updation for o_ach ##
A_ = tf.constant(A,dtype=tf.float64)
Z_ = tf.zeros(tf.shape(A_),dtype=tf.float64)
T_ach = tf.where(tf.logical_and(tf.greater(t,fire_t+t_delay),tf.less(t,fire_t+t_max+t_delay)),A_,Z_)
T_ach = tf.multiply(tf.constant(ach_mat,dtype=tf.float64),T_ach)
T_ach = tf.boolean_mask(tf.reshape(T_ach,(-1,)),ach_mat.reshape(-1) == 1)
do_achdt = alp_ach*(1.0-o_ach)*T_ach - bet_ach*o_ach
## Updation for o_gaba ##
T_gaba = 1.0/(1.0+tf.exp(-(V-V0)/sigma))
T_gaba = tf.multiply(tf.constant(gaba_mat,dtype=tf.float64),T_gaba)
T_gaba = tf.boolean_mask(tf.reshape(T_gaba,(-1,)),gaba_mat.reshape(-1) == 1)
do_gabadt = alp_gaba*(1.0-o_gaba)*T_gaba - bet_gaba*o_gaba
## Updation for fire times ##
dfdt = tf.zeros(tf.shape(fire_t),dtype=fire_t.dtype) # zero change in fire_t
out = tf.concat([dVdt,dmdt,dhdt,dndt,do_achdt,do_gabadt,dfdt],0)
return out
```
#### Defining the Gating Variable Updation Function and the Initial Conditions
Like earlier, we again define the function that returns us the values of $\tau_m$, $\tau_h$, $\tau_n$, $m_0$, $h_0$, $n_0$ and we prepare the parameters and initial conditions.
Note: If we initialize the last firing time to 0, the second neuron $X_2$ will receive an EPSP immediately after the start of the simulation. To avoid this, the last firing time should be initialized to a large negative number whose magnitude is at least the length of the simulation.
```
def K_prop(V):
T = 22
phi = 3.0**((T-36.0)/10)
V_ = V-(-50)
alpha_n = 0.02*(15.0 - V_)/(tf.exp((15.0 - V_)/5.0) - 1.0)
beta_n = 0.5*tf.exp((10.0 - V_)/40.0)
t_n = 1.0/((alpha_n+beta_n)*phi)
n_0 = alpha_n/(alpha_n+beta_n)
return n_0, t_n
def Na_prop(V):
T = 22
phi = 3.0**((T-36)/10)
V_ = V-(-50)
alpha_m = 0.32*(13.0 - V_)/(tf.exp((13.0 - V_)/4.0) - 1.0)
beta_m = 0.28*(V_ - 40.0)/(tf.exp((V_ - 40.0)/5.0) - 1.0)
alpha_h = 0.128*tf.exp((17.0 - V_)/18.0)
beta_h = 4.0/(tf.exp((40.0 - V_)/5.0) + 1.0)
t_m = 1.0/((alpha_m+beta_m)*phi)
t_h = 1.0/((alpha_h+beta_h)*phi)
m_0 = alpha_m/(alpha_m+beta_m)
h_0 = alpha_h/(alpha_h+beta_h)
return m_0, t_m, h_0, t_h
# Initializing the Parameters
C_m = [1.0]*n_n
g_K = [10.0]*n_n
E_K = [-95.0]*n_n
g_Na = [100]*n_n
E_Na = [50]*n_n
g_L = [0.15]*n_n
E_L = [-55.0]*n_n
# Initializing the State Vector
y0 = tf.constant([-71]*n_n+[0,0,0]*n_n+[0]*n_ach+[0]*n_gaba+[-9999999]*n_n,dtype=tf.float64)
```
#### Creating the Current Input and Running the Simulation
We will run a 700 ms simulation with 100 ms current injections of increasing amplitude at neuron $X_1$, separated by 100 ms gaps.
```
epsilon = 0.01
t = np.arange(0,700,epsilon)
current_input= np.zeros((n_n,t.shape[0]))
current_input[0,int(100/epsilon):int(200/epsilon)] = 2.5
current_input[0,int(300/epsilon):int(400/epsilon)] = 5.0
current_input[0,int(500/epsilon):int(600/epsilon)] = 7.5
state = odeint(dXdt,y0,t,n_n,F_b)
with tf.Session() as sess:
# Since we are using variables we have to initialize them
tf.global_variables_initializer().run()
state = sess.run(state)
```
#### Visualizing and Interpreting the Output
We overlay the voltage traces for the three neurons and observe the dynamics.
```
plt.figure(figsize=(12,3))
plt.plot(t,state[:,0],label="$X_1$")
plt.plot(t,state[:,1],label="$X_2$")
plt.plot(t,state[:,2],label="$X_3$")
plt.title("Simple Network $X_1$ --> $X_2$ --◇ $X_3$")
plt.ylim([-90,60])
plt.xlabel("Time (in ms)")
plt.ylabel("Voltage (in mV)")
plt.legend()
plt.tight_layout()
plt.show()
```
We can see that the current injection triggers action potentials whose frequency increases with increasing current. Also, as soon as the first neuron $X_1$ crosses its firing threshold, an EPSP is triggered in the next neuron $X_2$, causing it to fire with a slight delay after $X_1$. Finally, as the second neuron depolarizes, we see a corresponding hyperpolarization of the next neuron $X_3$, caused by an IPSP. We can also plot the dynamics of the channels themselves by plotting o\_ach and o\_gaba, which are the 5th-last and 4th-last elements of the state vector respectively.
```
plt.figure(figsize=(12,3))
plt.plot(t,state[:,-5],label="$[O]_{ach}$ in $X_2$")
plt.plot(t,state[:,-4],label="$[O]_{gaba}$ in $X_3$")
plt.title("Channel Dynamics in $X_1$ --> $X_2$ --◇ $X_3$")
#plt.ylim([-90,60])
plt.xlabel("Time (in ms)")
plt.ylabel("Fraction of Channels open")
plt.legend()
plt.tight_layout()
plt.show()
```
Thus we are now capable of making complex networks of neurons with both excitatory and inhibitory connections.
```
# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available), " | Proc size: " + humanize.naturalsize(process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
import tensorflow as tf
tf.test.gpu_device_name()
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir("your directory")
%ls
#ONE TIME run
#clone the source files and fairseq repository
#fairseq - https://github.com/pytorch/fairseq.git
#Install fairseq
import os
os.chdir("path to fairseq repository")
!pip install --editable .
!pip install sentencepiece sacrebleu tensorboardX
# Preprocess the data
os.chdir('............../scripts')
!bash preprocess-ensi.sh
# Train the baseline transformer model
import os
os.chdir("your directory")
!CUDA_VISIBLE_DEVICES=0 fairseq-train \
data-bin/en_si/ \
--source-lang en --target-lang si \
--arch transformer --share-all-embeddings \
--encoder-layers 5 --decoder-layers 5 \
--encoder-embed-dim 512 --decoder-embed-dim 512 \
--encoder-ffn-embed-dim 2048 --decoder-ffn-embed-dim 2048 \
--encoder-attention-heads 2 --decoder-attention-heads 2 \
--encoder-normalize-before --decoder-normalize-before \
--dropout 0.1 --attention-dropout 0.2 --relu-dropout 0.2 \
--weight-decay 0.0001 \
--label-smoothing 0.2 --criterion label_smoothed_cross_entropy \
--optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0 \
--lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-7 \
--lr 1e-3 --min-lr 1e-9 \
--batch-size 32 \
--update-freq 4 \
--max-epoch 200 --save-interval 10 \
--tensorboard-logdir logs/
# Obtain the BLEU score for validation dataset
os.chdir("your directory")
!fairseq-generate \
data-bin/en_si/ \
--source-lang en --target-lang si \
--path checkpoints/checkpoint_best.pt \
--beam 5 --lenpen 1.2 \
--gen-subset valid
# Obtain the BLEU score for test dataset
os.chdir("your directory")
!fairseq-generate \
data-bin/en_si/ \
--source-lang en --target-lang si \
--path checkpoints/checkpoint_best.pt \
--beam 5 --lenpen 1.2 \
--gen-subset test
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Visualize using tensorboard
%tensorboard --logdir logs/
```
```
import numpy as np
import pandas as pd
from IPython.display import display, Markdown
%load_ext autoreload
%autoreload 2
import pymedphys.mosaiq
start = '2019-11-01 00:00:00'
end = '2019-12-01 00:00:00'
centres = ['rccc', 'nbcc', 'sash']
servers = {
'rccc': 'msqsql',
'nbcc': 'physics-server:31433',
'sash': 'physics-server'
}
servers_list = [
item for _, item in servers.items()
]
physics_location = {
'rccc': 'Physics_Check',
'nbcc': 'Physics',
'sash': 'Physics_Check'
}
imrt_task_names = {
'nbcc': ['Physics Check IMRT'],
'rccc': ['IMRT Physics Check']
}
non_imrt_task_names = {
'nbcc': ['Physics Check 3DCRT'],
'rccc': ['3D Physics Check', 'Electron Factor']
}
sash_physics_task_name = 'Physics QA '
def get_staff_name(cursor, staff_id):
    data = pymedphys.mosaiq.execute(
        cursor,
        """
        SELECT
            Staff.Initials,
            Staff.User_Name,
            Staff.Type,
            Staff.Category,
            Staff.Last_Name,
            Staff.First_Name
        FROM Staff
        WHERE
            Staff.Staff_ID = %(staff_id)s
        """,
        {"staff_id": staff_id},
    )

    results = pd.DataFrame(
        data=data,
        columns=[
            "initials",
            "user_name",
            "type",
            "category",
            "last_name",
            "first_name",
        ],
    )

    return results
def get_qcls_by_date(cursor, location, start, end):
    data = pymedphys.mosaiq.execute(
        cursor,
        """
        SELECT
            Ident.IDA,
            Patient.Last_Name,
            Patient.First_Name,
            Chklist.Due_DtTm,
            Chklist.Act_DtTm,
            Chklist.Instructions,
            Chklist.Notes,
            QCLTask.Description
        FROM Chklist, Staff, QCLTask, Ident, Patient
        WHERE
            Chklist.Pat_ID1 = Ident.Pat_ID1 AND
            Patient.Pat_ID1 = Ident.Pat_ID1 AND
            QCLTask.TSK_ID = Chklist.TSK_ID AND
            Staff.Staff_ID = Chklist.Rsp_Staff_ID AND
            Staff.Last_Name = %(location)s AND
            Chklist.Act_DtTm >= %(start)s AND
            Chklist.Act_DtTm < %(end)s
        """,
        {"location": location, "start": start, "end": end},
    )

    results = pd.DataFrame(
        data=data,
        columns=[
            "patient_id",
            "last_name",
            "first_name",
            "due",
            "actual_completed_time",
            "instructions",
            "comment",
            "task",
        ],
    )

    results = results.sort_values(by=["actual_completed_time"])

    return results
# with multi_mosaiq_connect(servers_list) as cursors:
# for centre in centres:
# display(Markdown('### {}'.format(centre)))
# cursor = cursors[servers[centre]]
# display(get_staff_name(cursor, physics_ids[centre]))
# Working out physics_id
# with mosaiq_connect(servers['sash']) as cursor:
# display(get_qcls_by_date(cursor, start, end))
with pymedphys.mosaiq.connect(servers_list) as cursors:
    results = {
        centre: get_qcls_by_date(
            cursors[servers[centre]], physics_location[centre], start, end
        )
        for centre in centres
    }

for server in servers:
    results[server] = results[server].drop_duplicates(subset='actual_completed_time', keep='first')

for server in servers:
    display(Markdown("### {}".format(server)))
    display(results[server])
def count_results(results, imrt_task_names, non_imrt_task_names, server):
    imrt_results = 0
    non_imrt_results = 0

    for task in results[server]['task']:
        trimmed_task = task.strip()
        if trimmed_task in imrt_task_names[server]:
            imrt_results = imrt_results + 1
        elif trimmed_task in non_imrt_task_names[server]:
            non_imrt_results = non_imrt_results + 1
        else:
            print(trimmed_task)

    return {
        'imrt_results': imrt_results,
        'non_imrt_results': non_imrt_results
    }
counts = {
server: count_results(results, imrt_task_names, non_imrt_task_names, server)
for server in ['rccc', 'nbcc']
}
counts
# SASH results
len(results['sash']['task'])
```
<h1> Preprocessing using Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
```
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
<h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
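The hash-based split described above can be sketched in plain Python. This is an illustration only: BigQuery's `FARM_FINGERPRINT` is replaced here by a SHA-256 digest, which is a different hash function but demonstrates the same idea of a deterministic, repeatable bucket assignment from the year-month:

```python
import hashlib

def hash_bucket(year, month, num_buckets=4):
    # Deterministic bucket from the year-month, mimicking
    # ABS(FARM_FINGERPRINT(CONCAT(year, month))) % num_buckets
    key = '{}{}'.format(year, month).encode('utf-8')
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return digest % num_buckets

# Rows with bucket 0-2 go to training, bucket 3 to eval (~75/25 split)
split = 'train' if hash_bucket(2005, 7) < 3 else 'eval'
```

Because the bucket depends only on the year-month, all births from the same month always land on the same side of the split, no matter how often the pipeline is rerun.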
```
# Create SQL query using natality data after the year 2000
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ABS(FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING)))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
```
<h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options:
* Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!
* Read from BigQuery directly using TensorFlow.
* Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage.
<p>
However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.
Note that after you launch this, the actual processing happens in the cloud. Go to the Dataflow section of the GCP web console and monitor the running job. It took about 20 minutes for me.
<p>
If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/
</pre>
```
import apache_beam as beam
import datetime, os
def to_csv(rowdict):
    # Pull columns from BQ and create a line
    import hashlib
    import copy

    CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')

    # Create synthetic data where we assume that no ultrasound has been performed
    # and so we don't know the sex of the baby. Let's assume that we can tell the
    # difference between single and multiple births, but that determining the exact
    # number is difficult in the absence of an ultrasound.
    no_ultrasound = copy.deepcopy(rowdict)
    w_ultrasound = copy.deepcopy(rowdict)

    no_ultrasound['is_male'] = 'Unknown'
    if rowdict['plurality'] > 1:
        no_ultrasound['plurality'] = 'Multiple(2+)'
    else:
        no_ultrasound['plurality'] = 'Single(1)'

    # Change the plurality column to strings
    w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]

    # Write out two rows for each input row, one with ultrasound and one without
    for result in [no_ultrasound, w_ultrasound]:
        data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
        key = hashlib.sha224(data.encode('utf-8')).hexdigest()  # hash the columns to form a key
        yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
    import shutil, os, subprocess

    job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')

    if in_test_mode:
        print('Launching local job ... hang on')
        OUTPUT_DIR = './preproc'
        shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
        os.makedirs(OUTPUT_DIR)
    else:
        print('Launching Dataflow job {} ... hang on'.format(job_name))
        OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
        try:
            subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
        except subprocess.CalledProcessError:
            pass

    options = {
        'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
        'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
        'job_name': job_name,
        'project': PROJECT,
        'teardown_policy': 'TEARDOWN_ALWAYS',
        'no_save_main_session': True
    }
    opts = beam.pipeline.PipelineOptions(flags=[], **options)

    if in_test_mode:
        RUNNER = 'DirectRunner'
    else:
        RUNNER = 'DataflowRunner'

    p = beam.Pipeline(RUNNER, options=opts)

    query = """
    SELECT
      weight_pounds,
      is_male,
      mother_age,
      plurality,
      gestation_weeks,
      ABS(FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING)))) AS hashmonth
    FROM
      publicdata.samples.natality
    WHERE year > 2000
    AND weight_pounds > 0
    AND mother_age > 0
    AND plurality > 0
    AND gestation_weeks > 0
    AND month > 0
    """

    if in_test_mode:
        query = query + ' LIMIT 100'

    for step in ['train', 'eval']:
        if step == 'train':
            selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) < 3'.format(query)
        else:
            selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) = 3'.format(query)
        (p
         | '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
         | '{}_csv'.format(step) >> beam.FlatMap(to_csv)
         | '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
        )

    job = p.run()
    if in_test_mode:
        job.wait_until_finish()
        print("Done!")

preprocess(in_test_mode=False)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
```
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Evaluation and hyperparameter tuning
In the previous notebook, we saw two approaches to tune hyperparameters.
However, we did not present a proper framework to evaluate the tuned models.
Instead, we focused on the mechanism used to find the best set of parameters.
In this notebook, we will reuse some knowledge presented in the module
"Selecting the best model" to show how to evaluate models where
hyperparameters need to be tuned.
Thus, we will first load the dataset and create the predictive model that
we want to optimize and later on, evaluate.
## Loading the dataset
As in the previous notebook, we load the Adult census dataset. The loaded
dataframe is first divided to separate the input features and the target into
two separated variables. In addition, we drop the column `"education-num"` as
previously done.
```
import pandas as pd
target_name = "class"
adult_census = pd.read_csv("../datasets/adult-census.csv")
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name, "education-num"])
```
## Our predictive model
We now create the predictive model that we want to optimize. Note that
this pipeline is identical to the one we used in the previous notebook.
```
from sklearn import set_config
# To get a diagram visualization of the pipeline
set_config(display="diagram")
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder
from sklearn.compose import make_column_selector as selector
categorical_columns_selector = selector(dtype_include=object)
categorical_columns = categorical_columns_selector(data)
categorical_preprocessor = OrdinalEncoder(
handle_unknown="use_encoded_value", unknown_value=-1
)
preprocessor = ColumnTransformer(
[
('cat_preprocessor', categorical_preprocessor, categorical_columns),
],
remainder='passthrough',
sparse_threshold=0,
)
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.pipeline import Pipeline
model = Pipeline([
("preprocessor", preprocessor),
(
"classifier",
HistGradientBoostingClassifier(
random_state=42, max_leaf_nodes=4
)
),
])
model
```
## Evaluation
### Without hyperparameter tuning
In the module "Selecting the best model", we saw that one must use
cross-validation to evaluate such a model. Cross-validation gives us a
distribution of the scores of the model. With this distribution at
hand, we can assess the variability of our estimate of the
generalization performance of the model. Here, we recall the necessary
`scikit-learn` tools needed to obtain the mean and standard deviation of the
scores.
```
from sklearn.model_selection import cross_validate
cv_results = cross_validate(model, data, target, cv=5)
cv_results = pd.DataFrame(cv_results)
cv_results
```
The cross-validation scores are coming from a 5-fold cross-validation. So
we can compute the mean and standard deviation of the generalization score.
```
print(
"Generalization score without hyperparameters tuning:\n"
f"{cv_results['test_score'].mean():.3f} +/- {cv_results['test_score'].std():.3f}"
)
```
We now present how to evaluate the model with hyperparameter tuning,
where an extra step is required to select the best set of parameters.
### With hyperparameter tuning
As shown in the previous notebook, one can use a search strategy that uses
cross-validation to find the best set of parameters. Here, we will use a
grid-search strategy and reproduce the steps done in the previous notebook.
First, we have to embed our model into a grid-search and specify the
parameters and the parameter values that we want to explore.
```
from sklearn.model_selection import GridSearchCV
param_grid = {
'classifier__learning_rate': (0.05, 0.5),
'classifier__max_leaf_nodes': (10, 30),
}
model_grid_search = GridSearchCV(
model, param_grid=param_grid, n_jobs=2, cv=2
)
model_grid_search.fit(data, target)
```
As previously seen, when calling the `fit` method, the model embedded in the
grid-search is trained with every possible combination of parameters
resulting from the parameter grid. The best combination is selected by
keeping the combination leading to the best mean cross-validated score.
```
cv_results = pd.DataFrame(model_grid_search.cv_results_)
cv_results[[
"param_classifier__learning_rate",
"param_classifier__max_leaf_nodes",
"mean_test_score",
"std_test_score",
"rank_test_score"
]]
model_grid_search.best_params_
```
One important caveat here concerns the evaluation of the generalization
performance. Indeed, the mean and standard deviation of the scores computed
by the cross-validation in the grid-search are potentially not good estimates
of the generalization performance we would obtain by refitting a model with
the best combination of hyper-parameter values on the full dataset. Note that
scikit-learn automatically performs this refit by default when calling
`model_grid_search.fit`. This refitted model is trained with more data than
the different models trained internally during the cross-validation of the
grid-search.
We therefore used knowledge from the full dataset to both decide our model’s
hyper-parameters and to train the refitted model.
Because of the above, one must keep an external, held-out test set for the
final evaluation of the refitted model. We illustrate the process here using a
single train-test split.
```
from sklearn.model_selection import train_test_split
data_train, data_test, target_train, target_test = train_test_split(
data, target, test_size=0.2, random_state=42
)
model_grid_search.fit(data_train, target_train)
accuracy = model_grid_search.score(data_test, target_test)
print(f"Accuracy on test set: {accuracy:.3f}")
```
The score measured on the final test set is almost within the range of the
internal CV scores for the best hyper-parameter combination. This is
reassuring, as it means that the tuning procedure did not cause significant
overfitting in itself (otherwise the final test score would have been lower
than the internal CV scores). That is expected because our grid search
explored very few hyper-parameter combinations for the sake of speed. The
test score of the final model is actually a bit higher than what we could
have expected from the internal cross-validation. This is also expected
because the refitted model is trained on a larger dataset than the models
evaluated in the internal CV loop of the grid-search procedure. It is often
the case that models trained on a larger number of samples generalize better.
In the code above, the selection of the best hyperparameters was done only on
the train set from the initial train-test split. Then, we evaluated the
generalization performance of our tuned model on the left out test set. This
can be shown schematically as follows

<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p>This figure shows the particular case of <strong>K-fold</strong> cross-validation
strategy using <tt class="docutils literal">n_splits=5</tt> to further split the train set coming from a
train-test split.
For each cross-validation split, the procedure trains a model on all the red
samples, evaluates the score of a given set of hyperparameters on the green
samples. The best hyper-parameters are selected based on those intermediate
scores.</p>
<p>A final model tuned with those hyper-parameters is fitted on the
concatenation of the red and green samples and evaluated on the blue samples.</p>
<p class="last">The green samples are sometimes called a <strong>validation sets</strong> to differentiate
them from the final test set in blue.</p>
</div>
However, this evaluation only provides us with a single point estimate of the
generalization performance. As recalled at the beginning of this notebook, it
is beneficial to have a rough idea of the uncertainty of our estimated
generalization performance. Therefore, we should instead use an additional
cross-validation for this evaluation.
This pattern is called **nested cross-validation**. We use an inner
cross-validation for the selection of the hyperparameters and an outer
cross-validation for the evaluation of generalization performance of the
refitted tuned model.
In practice, we only need to embed the grid-search in the function
`cross-validate` to perform such evaluation.
```
cv_results = cross_validate(
model_grid_search, data, target, cv=5, n_jobs=2, return_estimator=True
)
cv_results = pd.DataFrame(cv_results)
cv_test_scores = cv_results['test_score']
print(
"Generalization score with hyperparameters tuning:\n"
f"{cv_test_scores.mean():.3f} +/- {cv_test_scores.std():.3f}"
)
```
This result is compatible with the test score measured with the single outer
train-test split.
However, in this case, we can assess the variability of our estimate of
the generalization performance thanks to the standard deviation of the
scores measured in the outer cross-validation.
Here is a schematic representation of the complete nested cross-validation
procedure:

<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p>This figure illustrates the nested cross-validation strategy using
<tt class="docutils literal">cv_inner = KFold(n_splits=4)</tt> and <tt class="docutils literal">cv_outer = KFold(n_splits=5)</tt>.</p>
<p>For each inner cross-validation split (indexed on the left-hand side),
the procedure trains a model on all the red samples and evaluate the quality
of the hyperparameters on the green samples.</p>
<p>For each outer cross-validation split (indexed on the right-hand side),
the best hyper-parameters are selected based on the validation scores
(computed on the green samples) and a model is refitted on the concatenation
of the red and green samples for that outer CV iteration.</p>
<p class="last">The generalization performance of the 5 refitted models from the outer CV
loop are then evaluated on the blue samples to get the final scores.</p>
</div>
In addition, passing the parameter `return_estimator=True`, we can check the
value of the best hyperparameters obtained for each fold of the outer
cross-validation.
```
for cv_fold, estimator_in_fold in enumerate(cv_results["estimator"]):
print(
f"Best hyperparameters for fold #{cv_fold + 1}:\n"
f"{estimator_in_fold.best_params_}"
)
```
It is interesting to see whether the hyper-parameter tuning procedure always
selects similar values for the hyperparameters. If that is the case, then all is
fine. It means that we can deploy a model fit with those hyperparameters and
expect that it will have an actual predictive performance close to what we
measured in the outer cross-validation.
But it is also possible that some hyperparameters do not matter at all, and,
as a result, different tuning sessions give different values. In this
case, any value will do. This can typically be confirmed by doing a parallel
coordinate plot of the results of a large hyperparameter search as seen in
the exercises.
From a deployment point of view, one could also choose to deploy all the
models found by the outer cross-validation loop and make them vote to get the
final predictions. However, this can cause operational problems because it
uses more memory and makes computing predictions slower, resulting in a higher
computational resource usage per prediction.
In this notebook, we have seen how to evaluate the predictive performance of
a model with tuned hyper-parameters using the nested cross-validation
procedure.
<h1 align=center><font size = 5> Logistic Regression with Python</font></h1>
In this notebook, you will learn Logistic Regression, and then, you'll create a model for a telecommunication company, to predict when its customers will leave for a competitor, so that they can take some action to retain the customers.
<a id="ref1"></a>
## What is different between Linear and Logistic Regression?
While Linear Regression is suited for estimating continuous values (e.g. estimating house price), it is not the best tool for predicting the class of an observed data point. In order to estimate the class of a data point, we need some sort of guidance on what would be the **most probable class** for that data point. For this, we use **Logistic Regression**.
<div class="alert alert-success alertsuccess" style="margin-top: 20px">
<font size = 3><strong>Recall linear regression:</strong></font>
<br>
<br>
As you know, __Linear regression__ finds a function that relates a continuous dependent variable, _y_, to some predictors (independent variables _x1_, _x2_, etc.). For example, Simple linear regression assumes a function of the form:
<br><br>
$$
y = B_0 + B_1 * x1 + B_2 * x2 +...
$$
<br>
and finds the values of parameters _B0_, _B1_, _B2_, etc, where the term _B0_ is the "intercept". It can be generally shown as:
<br><br>
$$
ℎ_θ(𝑥) = 𝜃^TX
$$
<p></p>
</div>
Logistic Regression is a variation of Linear Regression, useful when the observed dependent variable, _y_, is categorical. It produces a formula that predicts the probability of the class label as a function of the independent variables.
Logistic regression fits a special s-shaped curve by taking the linear regression and transforming the numeric estimate into a probability with the following function, which is called sigmoid function 𝜎:
$$
ℎ_θ(𝑥) = 𝜎({θ^TX}) = \frac {e^{(B0 + B1 * x1 + B2 * x2 +...)}}{1 + e^{(B0 + B1 * x1 + B2 * x2 +...)}}
$$
Or:
$$
ProbabilityOfaClass_1 = P(Y=1|X) = 𝜎({θ^TX}) = \frac{e^{θ^TX}}{1+e^{θ^TX}}
$$
In this equation, ${θ^TX}$ is the regression result (the sum of the variables weighted by the coefficients), `exp` is the exponential function and $𝜎(θ^TX)$ is the sigmoid or [logistic function](http://en.wikipedia.org/wiki/Logistic_function), also called logistic curve. It is a common "S" shape (sigmoid curve).
So, briefly, Logistic Regression passes the input through the logistic/sigmoid but then treats the result as a probability:
$$
𝜎({θ^TX}) = \frac{1}{1+e^{-θ^TX}}
$$
<img
src="https://ibm.box.com/shared/static/kgv9alcghmjcv97op4d6onkyxevk23b1.png" width = "1024" align = "center">
The objective of __Logistic Regression__ algorithm, is to find the best parameters θ, for $ℎ_θ(𝑥) = 𝜎({θ^TX})$, in such a way that the model best predicts the class of each case.
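The sigmoid transformation above can be sketched in a few lines of NumPy. The coefficient values `theta` and feature vector `x` below are illustrative only, not fitted parameters:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps any real-valued score to the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative (not fitted) coefficients theta and a feature vector x;
# the first entry of x is the intercept term.
theta = np.array([0.5, -1.2, 2.0])
x = np.array([1.0, 0.3, 0.8])

score = theta @ x             # the regression result, theta^T x
prob_class_1 = sigmoid(score) # treated as P(Y=1 | x)
```

A score of 0 maps to a probability of exactly 0.5, large positive scores map close to 1, and large negative scores map close to 0, giving the characteristic "S" shape.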
### Customer churn with Logistic Regression
**Problem** A telecommunications company is concerned about the number of customers leaving their land-line business for cable competitors. They need to understand who is leaving. Imagine that you’re an analyst at this company and you have to find out who is leaving and why.
Lets first import required libraries:
```
import pandas as pd
import numpy as np
import scipy.optimize as opt
from sklearn import preprocessing
%matplotlib inline
import matplotlib.pyplot as plt
```
### About dataset
We’ll use a telecommunications data for predicting customer churn. This is a historical customer data where each row represents one customer. The data is relatively easy to understand, and you may uncover insights you can use immediately. Typically it’s less expensive to keep customers than acquire new ones, so the focus of this analysis is to predict the customers who will stay with the company.
This data set provides info to help you predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs.
The data set includes information about:
- Customers who left within the last month – the column is called Churn
- Services that each customer has signed up for – phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies
- Customer account information – how long they’ve been a customer, contract, payment method, paperless billing, monthly charges, and total charges
- Demographic info about customers – gender, age range, and if they have partners and dependents
### Load the Telco Churn data
Telco Churn is a hypothetical data file that concerns a telecommunications company's efforts to reduce turnover in its customer base. Each case corresponds to a separate customer and it records various demographic and service usage information. Before you can work with the data, you must use the URL to get the ChurnData.csv.
### Load Data From CSV File
```
churn_df = pd.read_csv("Datasets/ChurnData.csv")
churn_df.head()
```
## Data pre-processing and selection
Let's select some features for the modeling. We also change the target data type to integer, as it is required by the scikit-learn algorithm:
```
churn_df.columns
churn_df = churn_df[['tenure', 'age', 'address', 'income', 'ed', 'employ', 'equip', 'callcard', 'wireless','churn']]
churn_df.head()
churn_df['churn'] = churn_df['churn'].astype('int')
churn_df
churn_df.isna().any()
```
## Practice
How many rows and columns are in this dataset in total? What are the name of columns?
```
# write your code here
```
Let's define X and y for our dataset:
```
X = np.asarray(churn_df[['tenure', 'age', 'address', 'income', 'ed', 'employ', 'equip']])
X[0:5]
y = np.asarray(churn_df['churn'])
y [0:5]
```
Also, we normalize the dataset:
```
from sklearn import preprocessing
X = preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]
```
## Train/Test dataset
Okay, we split our dataset into train and test set:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
```
# Modeling (Logistic Regression with Scikit-learn)
Let's build our model using __LogisticRegression__ from the Scikit-learn package. This function implements logistic regression and can use different numerical optimizers to find the parameters, including the ‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, and ‘saga’ solvers. You can find extensive information about the pros and cons of these optimizers on the internet.
#### Algorithm to use in the optimization problem.
- For small datasets, 'liblinear' is a good choice, whereas 'sag' and 'saga' are faster for large ones.
- For multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs' handle multinomial loss; 'liblinear' is limited to one-versus-rest schemes.
- 'newton-cg', 'lbfgs', 'sag' and 'saga' handle L2 or no penalty
- 'liblinear' and 'saga' also handle L1 penalty
- 'saga' also supports 'elasticnet' penalty
- 'liblinear' does not support setting ``penalty='none'``
The version of Logistic Regression in Scikit-learn supports regularization. Regularization is a technique used to solve the overfitting problem in machine learning models.
The __C__ parameter indicates the __inverse of regularization strength__, which must be a positive float. Smaller values specify stronger regularization.
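To see the effect of __C__, one can compare the coefficient magnitudes of a strongly and a weakly regularized fit. This is a sketch on a small synthetic dataset, separate from the churn data used below, so the exact numbers are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Small synthetic binary classification problem
X_demo, y_demo = make_classification(n_samples=200, n_features=5, random_state=0)

# Small C = strong regularization; large C = weak regularization
strong = LogisticRegression(C=0.001, solver='liblinear').fit(X_demo, y_demo)
weak = LogisticRegression(C=100.0, solver='liblinear').fit(X_demo, y_demo)

print(np.abs(strong.coef_).sum())  # small coefficient magnitudes
print(np.abs(weak.coef_).sum())    # larger coefficient magnitudes
```

The strongly regularized model shrinks its coefficients toward zero, which reduces variance at the cost of some bias.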
Now let's fit our model with the train set:
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix,classification_report
LR = LogisticRegression(C=0.0001, solver='liblinear').fit(X_train,y_train)
LR
```
Now we can predict using our test set:
```
yhat = LR.predict(X_test)
yhat
y_test
```
__predict_proba__ returns estimates for all classes, ordered by the label of the classes. So, the first column is the probability of class 0, P(Y=0|X), and the second column is the probability of class 1, P(Y=1|X):
```
yhat_prob = LR.predict_proba(X_test)
yhat_prob
set(churn_df['churn'])
```
## Evaluation
### What is the Jaccard Index?
The Jaccard similarity index (sometimes called the Jaccard similarity coefficient) compares the members of two sets to see which members are shared and which are distinct. It's a measure of similarity for the two sets of data, with a range from 0% to 100%. The higher the percentage, the more similar the two populations. Although it's easy to interpret, it is extremely sensitive to small sample sizes and may give erroneous results, especially with very small samples or data sets with missing observations.
**How to Calculate the Jaccard Index**
The formula to find the Index is:
$\text{Jaccard Index} = \frac{\text{number in both sets}}{\text{number in either set}} \times 100$
The same formula in set notation is:
$J(X,Y) = \frac{|X \cap Y|}{|X \cup Y|}$
In Steps, that’s:
1. Count the number of members which are shared between both sets.
2. Count the total number of members in both sets (shared and un-shared).
3. Divide the number of shared members (1) by the total number of members (2).
4. Multiply the number you found in (3) by 100.
**This percentage tells you how similar the two sets are.**
Two sets that share all members would be 100% similar. The closer to 100%, the more similar the sets (e.g. 90% is more similar than 89%).
If they share no members, they are 0% similar.
The midway point — 50% — means that the two sets share half of the members.
Examples
A simple example using set notation: How similar are these two sets?
A = {0,1,2,5,6}
B = {0,2,3,4,5,7,9}
**Solution: J(A,B)** = |A∩B| / |A∪B|
= |{0,2,5}| / |{0,1,2,3,4,5,6,7,9}|
= 3/9
= 0.33
**Notes:**
The cardinality of A, denoted |A| is a count of the number of elements in set A.
Although it’s customary to leave the answer in decimal form if you’re using set notation, you could multiply by 100 to get a similarity of 33.33%.
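The worked example above can be verified directly with Python's built-in sets:

```python
# Verifying the worked example: J(A, B) = |A ∩ B| / |A ∪ B|
A = {0, 1, 2, 5, 6}
B = {0, 2, 3, 4, 5, 7, 9}

def jaccard(x, y):
    """Jaccard index of two sets: size of intersection over size of union."""
    return len(x & y) / len(x | y)

print(jaccard(A, B))        # 3/9 ≈ 0.333
print(jaccard(A, B) * 100)  # as a percentage: ≈ 33.3%
```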
Let's try the Jaccard index for accuracy evaluation. We can define Jaccard as the size of the intersection divided by the size of the union of the two label sets. If the entire set of predicted labels for a sample strictly matches the true set of labels, then the subset accuracy is 1.0; otherwise it is 0.0.
```
from sklearn.metrics import jaccard_score
jaccard_score(y_test, yhat)
```
### confusion matrix
Another way of looking at accuracy of classifier is to look at __confusion matrix__.
```
from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="green" if cm[i, j] > thresh else "red")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
print(confusion_matrix(y_test, yhat, labels=[1,0]))
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['churn=1','churn=0'],normalize= False, title='Confusion matrix')
```
Look at the first row. The first row is for customers whose actual churn value in the test set is 1.
As you can calculate, out of 40 customers, the churn value of 15 of them is 1.
Out of these 15, the classifier correctly predicted 6 of them as 1, and 9 of them as 0.
This means that for 6 customers, the actual churn value was 1 in the test set, and the classifier correctly predicted those as 1. However, while the actual label of 9 customers was 1, the classifier predicted those as 0, which is not very good. We can consider it the model's error for the first row.
What about the customers with churn value 0? Let's look at the second row.
It looks like there were 25 customers whose churn value was 0.
The classifier correctly predicted 24 of them as 0, and one of them wrongly as 1. So, it has done a good job of predicting the customers with churn value 0. A good thing about the confusion matrix is that it shows the model's ability to correctly predict or separate the classes. In the specific case of a binary classifier, such as this example, we can interpret these numbers as the counts of true positives, false positives, true negatives, and false negatives.
```
print (classification_report(y_test, yhat))
```
Based on the count of each section, we can calculate precision and recall of each label:
- __Precision__ is a measure of the accuracy provided that a class label has been predicted. It is defined by: precision = TP / (TP + FP)
- __Recall__ is true positive rate. It is defined as: Recall = TP / (TP + FN)
So, we can calculate precision and recall of each class.
__F1 score:__
Now we are in the position to calculate the F1 scores for each label based on the precision and recall of that label.
The F1 score is the harmonic average of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0. It is a good way to show that a classifier has a good value for both recall and precision.
And finally, we can tell the average accuracy for this classifier is the average of the f1-score for both labels, which is 0.72 in our case.
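These per-class numbers can be reproduced by hand from the confusion-matrix counts quoted above for the churn=1 class (TP=6, FN=9, FP=1, TN=24):

```python
# Recomputing precision, recall and F1 for churn=1 from the raw counts
TP, FN, FP, TN = 6, 9, 1, 24

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)

print('precision: %.2f' % precision)  # 0.86
print('recall:    %.2f' % recall)     # 0.40
print('f1-score:  %.2f' % f1)         # 0.55
```

These match the churn=1 row of the `classification_report` output above.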
### log loss
Now, let's try __log loss__ for evaluation. In logistic regression, the output can be the probability that customer churn is yes (i.e., equals 1). This probability is a value between 0 and 1.
Log loss( Logarithmic loss) measures the performance of a classifier where the predicted output is a probability value between 0 and 1.
```
from sklearn.metrics import log_loss
log_loss(y_test, yhat_prob)
```
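The value returned by `log_loss` can be reproduced by hand from the definition $\text{log loss} = -\frac{1}{n}\sum_i \left[ y_i \log p_i + (1-y_i)\log(1-p_i) \right]$. A tiny illustration with made-up labels and probabilities (toy numbers, not the churn data):

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = np.array([1, 0, 1, 1, 0])
p = np.array([0.9, 0.2, 0.6, 0.8, 0.1])  # predicted P(y=1)

# Manual log loss from the definition
manual = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
print(manual)
print(log_loss(y_true, p))  # matches the manual computation
```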
## Classification: ROC Curve and AUC
**ROC curve**
An **ROC curve (receiver operating characteristic curve)** is a graph showing the performance of a classification model at all classification thresholds. This curve plots two parameters:
* True Positive Rate
* False Positive Rate
**True Positive Rate (TPR)** is a synonym for recall and is therefore defined as follows:
$TPR = \frac{TP}{TP+FN}$
**False Positive Rate (FPR)** is defined as follows:
$FPR = \frac{FP}{FP+TN}$
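At any single threshold, both rates follow directly from the four confusion-matrix counts. A quick illustration using the counts from the churn example earlier (TP=6, FN=9, FP=1, TN=24):

```python
# TPR and FPR at one threshold, from raw confusion-matrix counts
TP, FN, FP, TN = 6, 9, 1, 24

TPR = TP / (TP + FN)  # recall / sensitivity
FPR = FP / (FP + TN)  # fall-out

print('TPR: %.2f' % TPR)  # 0.40
print('FPR: %.2f' % FPR)  # 0.04
```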
An ROC curve plots TPR vs. FPR at different classification thresholds. Lowering the classification threshold classifies more items as positive, thus increasing both False Positives and True Positives. The following figure shows a typical ROC curve.

##### Figure 4. TP vs. FP rate at different classification thresholds.
To compute the points in an ROC curve, we could evaluate a logistic regression model many times with different classification thresholds, but this would be inefficient. Fortunately, there's an efficient, sorting-based algorithm that can provide this information for us, called AUC.
#### AUC: Area Under the ROC Curve
AUC stands for "Area under the ROC Curve." That is, AUC measures the entire two-dimensional area underneath the entire ROC curve (think integral calculus) from (0,0) to (1,1).

**Figure 5. AUC (Area under the ROC Curve).**
AUC provides an aggregate measure of performance across all possible classification thresholds. One way of interpreting AUC is as the probability that the model ranks a random positive example more highly than a random negative example. For example, given the following examples, which are arranged from left to right in ascending order of logistic regression predictions:

Figure 6. Predictions ranked in ascending order of logistic regression score.
AUC represents the probability that a random positive (green) example is positioned to the right of a random negative (red) example.
AUC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0.
### AUC is desirable for the following two reasons:
* AUC is `scale-invariant`. It measures how well predictions are ranked, rather than their absolute values.
* AUC is `classification-threshold-invariant`. It measures the quality of the model's predictions irrespective of what classification threshold is chosen.
However, both these reasons come with caveats, which may limit the usefulness of AUC in certain use cases:
* `Scale invariance is not always desirable`. For example, sometimes we really do need well calibrated probability outputs, and AUC won’t tell us about that.
* `Classification-threshold invariance is not always desirable`. In cases where there are wide disparities in the cost of false negatives vs. false positives, it may be critical to minimize one type of classification error. For example, when doing email spam detection, you likely want to prioritize minimizing false positives (even if that results in a significant increase of false negatives). AUC isn't a useful metric for this type of optimization.
```
y_pred_proba = LR.predict_proba(X_test)[::,1]
print(y_pred_proba)
from sklearn import metrics
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
fpr
tpr
_
auc = metrics.roc_auc_score(y_test, y_pred_proba)
auc
plt.figure(figsize=(30,16))
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
## Practice
Try to build the Logistic Regression model again for the same dataset, but this time use different __solver__ and __regularization__ values. What is the new __logLoss__ value?
```
# write your code here
LR2 = LogisticRegression(C=0.01, solver='sag').fit(X_train, y_train)
yhat_prob2 = LR2.predict_proba(X_test)
print("LogLoss: %.2f" % log_loss(y_test, yhat_prob2))

# Compare all supported solvers at the same regularization strength
solvers = ['liblinear', 'newton-cg', 'lbfgs', 'sag', 'saga']
losses = []
for s in solvers:
    LR2 = LogisticRegression(C=0.01, solver=s).fit(X_train, y_train)
    yhat_prob2 = LR2.predict_proba(X_test)
    losses.append(log_loss(y_test, yhat_prob2))
    print("Solver: %-10s LogLoss: %.2f" % (s, losses[-1]))

Data = pd.DataFrame({'Solver_method': solvers, 'LogLoss': losses})
print(Data)
```
<small><small><i>
All the IPython Notebooks in this **Python Games** series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/92_Python_Games)**
</i></small></small>
# Bulls and Cows with AI
Adding a simple AI to the Bulls and Cows Game
* 👱 🆚 🤖 (1 - player mode)
```
#### Bulls and Cows with AI ####
import random
def vt(g, n):  # g ➡ guess string and n ➡ number string
    t = v = 0
    for i in range(4):
        if g[i] == n[i]:
            t += 1
        elif g[i] in n:
            v += 1
    return str(t) + str(v)

def distinct(n):  # n ➡ string
    test = True
    if n[0] == "0":
        test = False
    if int(n) < 1000 or int(n) > 9999:
        test = False
    if test == True:
        for i in range(3):
            if n.count(n[i]) > 1:
                test = False
    return test

def liste(k, u, g):  # k ➡ list, u ➡ feedback string, g ➡ guess string
    r = []
    for i in k:
        if vt(g, str(i)) == u:
            r.append(i)
    return r

def k0():
    s = []
    for i in range(1000, 10000):
        if distinct(str(i)):
            s.append(i)
    return s
stop = False
while stop == False:
    print("""Think about a number between 1000 and 9999 💭
such that its digits are distinct
Let's start now : You 👱 Start""")
    possible = k0()  # possible is a list of integers
    num = possible[random.randint(0, len(possible) - 1)]
    while True:
        while True:
            try:
                print("\n\tYour 👱 turn")
                choice = str(int(input("Guess my number💭: ")))
                if distinct(str(choice)):
                    break
            except:
                continue
        print(vt(str(choice), str(num))[0], "bulls and", vt(str(choice), str(num))[1], "cows")
        if vt(str(choice), str(num))[0] == "4":
            print("Wow! You're amazing, you managed to beat a robot as smart as me. Respect! 🙌")
            break
        else:
            print("\n\t My 🤖 turn")
            try:
                guess = str(possible[random.randint(0, len(possible) - 1)])
                if len(possible) == 1:
                    print("HAHAHA I won 💪: ", possible[0])
                    break
                print("Number 🤖 guessed: ", guess)
                v = input("How many cows ☆: ")
                t = input("How many bulls ★: ")
                vtt = t + v
                if t == "4":
                    print("HAHAHAA! I won 💪: ", guess)
                    break
                possible = liste(possible, vtt, guess)
                if len(possible) == 0:
                    print("You apparently made a mistake somewhere!😤")
                    break
            except:
                print("You apparently made a mistake somewhere!😤")
                break
    while True:
        haha = input("\nDo you wish to play again❓ yes/no: ")
        if "no" in haha.lower() or "yes" in haha.lower():
            if "no" in haha.lower():
                stop = True
            break
    print("\n")
```
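As a quick sanity check of the scoring logic, here are compact standalone versions of the two helpers above (re-defined so this cell runs on its own), with a few example calls:

```python
# Compact equivalents of the game's vt() and distinct() helpers
def vt(g, n):
    """Return bulls/cows feedback for guess g against number n, e.g. '21'."""
    t = v = 0
    for i in range(4):
        if g[i] == n[i]:
            t += 1
        elif g[i] in n:
            v += 1
    return str(t) + str(v)

def distinct(n):
    """True if n is a 4-digit string with no leading zero and no repeats."""
    if n[0] == '0' or not 1000 <= int(n) <= 9999:
        return False
    return all(n.count(d) == 1 for d in n)

print(vt('1234', '1243'))  # '22' -> 2 bulls, 2 cows
print(vt('5678', '5678'))  # '40' -> all bulls
print(distinct('1234'))    # True
print(distinct('1123'))    # False (repeated digit)
```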
<div>
<img src="img/output_2.png" width="400"/>
</div>
```
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow.keras.callbacks import EarlyStopping
# reading data files and storing them in a dataframe
df_train_features = pd.read_csv('/Users/dhirenpagarani/Northeastern University/Fundamentals of Artificial Intelligence - EAI6000/lish-moa/train_features.csv')
df_test_features = pd.read_csv('/Users/dhirenpagarani/Northeastern University/Fundamentals of Artificial Intelligence - EAI6000/lish-moa/test_features.csv')
df_train_target_nonscored = pd.read_csv('/Users/dhirenpagarani/Northeastern University/Fundamentals of Artificial Intelligence - EAI6000/lish-moa/train_targets_nonscored.csv')
df_train_target_scored = pd.read_csv('/Users/dhirenpagarani/Northeastern University/Fundamentals of Artificial Intelligence - EAI6000/lish-moa/train_targets_scored.csv')
df_sample_submission = pd.read_csv('/Users/dhirenpagarani/Northeastern University/Fundamentals of Artificial Intelligence - EAI6000/lish-moa/sample_submission.csv')
def seed_everything(seed=42):
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)
seed_everything(seed=42)
print('Train shape:',df_train_features.shape)
print('Test shape:',df_test_features.shape)
new_train_features = df_train_features.copy()
new_train_features
# change cp_dose: D1 -> 0, D2 -> 1
new_train_features['cp_dose'] = new_train_features['cp_dose'].map({'D1':0, 'D2':1})
# change cp_time: 24 -> 0, 48 -> 1, 72 -> 2
new_train_features['cp_time'] = new_train_features['cp_time']//24-1
# drop the cp_type and sig_id column
new_train_features.drop(columns = ['sig_id','cp_type'], inplace = True)
new_train_features
new_train_targets_scored = df_train_target_scored.copy()
# drop the sig_id column
new_train_targets_scored.drop(columns = ['sig_id'], inplace = True)
new_train_targets_scored
X = new_train_features
y = new_train_targets_scored
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
early_stopping = EarlyStopping(monitor='val_loss',
                               patience=3,
                               mode='min',
                               restore_best_weights=True)
def create_model(num_columns):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(num_columns),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dropout(0.2),
        tfa.layers.WeightNormalization(tf.keras.layers.Dense(2048, activation="relu")),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dropout(0.5),
        tfa.layers.WeightNormalization(tf.keras.layers.Dense(1024, activation="relu")),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dropout(0.5),
        tfa.layers.WeightNormalization(tf.keras.layers.Dense(512, activation="relu")),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dropout(0.5),
        tfa.layers.WeightNormalization(tf.keras.layers.Dense(206, activation="sigmoid"))
    ])
    model.compile(optimizer=tfa.optimizers.Lookahead(tf.optimizers.Adam(), sync_period=10),
                  loss='binary_crossentropy')
    return model
model = create_model(X_train.shape[1])
history = model.fit(x=X_train,
                    y=y_train,
                    validation_data=(X_test, y_test),
                    epochs=35,
                    verbose=1,
                    callbacks=[early_stopping])
model.summary()
# plotting the losses of training and validation
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'c-', label='Training Loss')
plt.plot(epochs, val_loss, 'y-', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# prepare test_features
new_test_features = df_test_features.copy()
# change cp_dose: D1 -> 0, D2 -> 1
new_test_features['cp_dose'] = new_test_features['cp_dose'].map({'D1':0, 'D2':1})
# change cp_time: 24 -> 0, 48 -> 1, 72 -> 2
new_test_features['cp_time'] = new_test_features['cp_time']//24-1
# drop the cp_type and sig_id column
new_test_features.drop(columns = ['sig_id','cp_type'], inplace = True)
new_test_features
# predict values for test_features
test_predict = model.predict(new_test_features)
test_predict
df_sample_submission.head()
sub = df_sample_submission.copy()
sig_ids = sub.sig_id
sub.drop(columns = ['sig_id'],inplace = True)
# add predicted values to sub
sub[:] = test_predict
# add the sig_id column back
sub.insert(0, "sig_id", sig_ids, True)
sub
# write sub to submission.csv file
sub.to_csv('submission.csv', index = False)
```
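The categorical encodings applied to `cp_dose` and `cp_time` above (D1/D2 → 0/1, and 24/48/72 hours → 0/1/2 via integer division) can be illustrated on a tiny toy frame (toy data only, not the MoA files):

```python
import pandas as pd

# Toy frame mimicking the two categorical columns
toy = pd.DataFrame({'cp_dose': ['D1', 'D2', 'D1'], 'cp_time': [24, 48, 72]})

toy['cp_dose'] = toy['cp_dose'].map({'D1': 0, 'D2': 1})
toy['cp_time'] = toy['cp_time'] // 24 - 1  # 24 -> 0, 48 -> 1, 72 -> 2

print(toy)  # cp_dose: 0, 1, 0 ; cp_time: 0, 1, 2
```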
# Ex 2: Manipulating Spectra Part 1 - `Spectrum`
## Notebook Initialization
```
%load_ext autoreload
import sys
sys.path.append("..")
%matplotlib inline
%autoreload
import matplotlib
matplotlib.rc_file('matplotlibrc')
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import quad
import darkhistory.physics as phys
import darkhistory.spec.spectools as spectools
from darkhistory.spec.spectrum import Spectrum
```
## DarkHistory Binning
DarkHistory deals extensively with spectra, so it is important to understand how we generally deal with the problem of discretizing spectra in our code.
Here are some general rules followed in DarkHistory when dealing with spectra:
1. Spectra in DarkHistory should be viewed as a series of bins indexed by energies $E_i$, each containing some number of particles at that energy.
2. Each bin has a *log bin width* ($\Delta \log E$), and two *bin boundaries*. We always use log-binning for all of the bins, with the bin boundaries taken to be the midpoint in log-space between two bin energies. The first and last bins are assumed to have their energies at the center of the bin in log-space.
3. Consider a spectrum $dN/dE$ that is a function of the energy $E$. There are several ways in which we discretize this spectrum over some chosen energy abscissa $E_i$ (a vector of energy values over which we want to approximate the function):
a. The first is to simply assign the value of $dN/dE$ to each bin:
$$ \frac{dN}{dE} (E_i) \approx \mathbf{S}[E_i] $$
where $\mathbf{S}$ is some vector of entries. This method is fast, but may fail to capture sharp features if the binning is insufficiently fine.
b. The second method is implemented numerically in [*spectools.discretize()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectools/darkhistory.spec.spectools.discretize.html). Given an abscissa $E_i$, total number and energy conservation is enforced, and a good approximation to the spectrum is returned. See the linked documentation for more details.
## `Spectrum` Class - Introduction
Individual spectra are stored as [*Spectrum*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectrum/darkhistory.spec.spectrum.Spectrum.html) objects in DarkHistory.
The main attributes are as follows:
1. `eng`: the energy abscissa of the spectrum.
2. `N` and `dNdE`: the number of particles in each bin, or the $dN/dE$ in each bin.
The relationship between `N` and `dNdE` is given by `Spectrum.N == Spectrum.dNdE * Spectrum.eng * log_bin_width`. To calculate `log_bin_width`, the function [*spectools.get_log_bin_width()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectools/darkhistory.spec.spectools.get_log_bin_width.html) can be used with `spec.eng` as the argument.
Other optional attributes include `rs` and `in_eng`, which contain the redshift or the injected energy of the particle that produced the spectrum, if these attributes are applicable.
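As an illustration of the relationship quoted above (plain NumPy arithmetic, a sketch of what `spectools.get_log_bin_width()` computes, not the library function itself): for a log-spaced abscissa the log bin width is constant, and converting between `N` and `dNdE` is an element-wise product.

```python
import numpy as np

# Log-spaced abscissa, as used later in this notebook
eng = 10**((np.arange(120) - 90) * 0.1)

# For log-spaced bins, the log bin width is the same for every bin
log_bin_width = np.diff(np.log(eng))[0]  # = 0.1 * ln(10)

# Toy spectrum with dN/dE proportional to 1/E
dNdE = 1 / eng

# N == dNdE * eng * log_bin_width, the relationship stated above
N = dNdE * eng * log_bin_width
print(N[:3])  # constant across bins, since dN/dE ∝ 1/E here
```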
To initialize, simply define an abscissa, a spectrum over that abscissa, and then do the following:
```
# Create the energy abscissa.
eng = 10**((np.arange(120)-90)*(1/10))
# Random spectrum
random_spec_arr = 1e14/(np.exp((eng - 0.1)/0.1) + 1)
random_spec = Spectrum(eng, random_spec_arr, spec_type = 'dNdE')
```
`spec_type = 'dNdE'` tells the constructor that you are giving it an array of $dN/dE$ values. Let's make a plot of this spectrum:
```
plt.figure()
plt.loglog()
random_plot, = plt.plot(
random_spec.eng, random_spec.eng**2*random_spec.dNdE
)
plt.title(r'Random Spectrum')
plt.xlabel('Photon Energy [eV]')
plt.ylabel(r'$E^2 \, dn_\gamma/dE$ [eV$^{-1}$ cm$^{-3}$]')
plt.axis([1e-8, 1e4, 1e-10, 1e20])
```
## The CMB Spectrum
To create our first [*Spectrum*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectrum/darkhistory.spec.spectrum.Spectrum.html), we will use the function [*spectools.discretize()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectools/darkhistory.spec.spectools.discretize.html), and apply it to the CMB blackbody spectrum,
$$ \frac{dn_\gamma}{dE} = \frac{E^2}{\pi^2 (\hbar c)^3} \frac{1}{e^{E/T_\text{CMB}} - 1}$$
The function [*physics.CMB_spec()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/physics/darkhistory.physics.CMB_spec.html) returns the value of $dn_\gamma/dE$ for some photon energy $E$ and temperature $T$. This function can be passed to [*spectools.discretize()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectools/darkhistory.spec.spectools.discretize.html) to create a [*Spectrum*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectrum/darkhistory.spec.spectrum.Spectrum.html) object.
```
T = 1.5 # The temperature in eV of the CMB.
# discretize() takes the abscissa, the function to discretize, and
# any other arguments that need to be passed to the function.
discrete_CMB = spectools.discretize(eng, phys.CMB_spec, T)
```
We should also set the `rs` attribute to the correct redshift. Here, we use [*physics.TCMB()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/physics/darkhistory.physics.TCMB.html), which takes in a redshift as an argument, to calculate what the redshift $1+z$ is at the temperature `T` that we selected earlier.
```
rs = T/phys.TCMB(1)
discrete_CMB.rs = rs
```
Let's plot the spectrum contained in `discrete_CMB` as a check, and make sure that it agrees with [*physics.CMB_spec()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/physics/darkhistory.physics.CMB_spec.html).
```
plt.figure()
plt.loglog()
discrete_plot, = plt.plot(
discrete_CMB.eng, discrete_CMB.eng**2*discrete_CMB.dNdE, label='Discrete Spectrum'
)
analytic_plot, = plt.plot(
eng, eng**2*phys.CMB_spec(eng, T), 'o', marker='o', markersize='5',
markevery=3, markerfacecolor='w', label='Analytic Spectrum'
)
plt.legend(handles=[discrete_plot, analytic_plot])
plt.title(r'CMB Spectrum, $T_\mathrm{CMB} = $'+'{:3.2f}'.format(T)+' eV')
plt.xlabel('Photon Energy [eV]')
plt.ylabel(r'$E^2 \, dn_\gamma/dE$ [eV$^{-1}$ cm$^{-3}$]')
plt.axis([1e-8, 1e4, 1e-10, 1e20])
```
## Number and Energy
At this point, let's introduce two important methods for the `Spectrum` class: [*Spectrum.totN()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectrum/darkhistory.spec.spectrum.Spectrum.html?highlight=Spectrum#darkhistory.spec.spectrum.Spectrum.totN) and [*Spectrum.toteng()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectrum/darkhistory.spec.spectrum.Spectrum.html?highlight=Spectrum#darkhistory.spec.spectrum.Spectrum.toteng). These methods are used to obtain the total number of particles and total energy stored in the `Spectrum` object respectively. One can find the total amount of energy stored between two abscissa values or in a certain bin using this method, but for now we'll simply use them to find the total number of particles and total energy in the spectrum.
Analytically, the total number density of particles in a blackbody at temperature $T$ is
$$ n_\gamma = \frac{16 \pi \zeta(3)}{(2\pi)^3 (\hbar c)^3} T^3 = \frac{2 \zeta(3)}{\pi^2 (\hbar c)^3} T^3 $$
and the total energy density is
$$ u_\gamma = \frac{\pi^2}{15 c^3 \hbar^3} T^4 $$
where $T$ is expressed in eV. Let's check that we do recover these results in `discrete_CMB`. The physical constants can all be found in [*physics*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/darkhistory.physics.html).
```
from scipy.special import zeta
n_gamma_analytic = 16*np.pi*zeta(3)/(phys.hbar**3 * phys.c**3 * (2*np.pi)**3) * T**3
u_gamma_analytic = np.pi**2/(15 * phys.c**3 * phys.hbar**3) * T**4
print('Number density (Analytic): ', n_gamma_analytic)
print('Total number of photons in discrete_spec: ', discrete_CMB.totN())
print('Ratio: ', discrete_CMB.totN()/n_gamma_analytic)
print('Energy density (Analytic): ', u_gamma_analytic)
print('Total energy of photons in discrete_spec: ', discrete_CMB.toteng())
print('Ratio: ', discrete_CMB.toteng()/u_gamma_analytic)
```
Because we used `spectools.discretize`, the total number and energy density enforced by the analytic expression for $dn_\gamma/dE$ are fully preserved. Of course, the energy abscissa provided must span the part of the spectrum that contains the bulk of the energy.
For convenience, the expression `n_gamma_analytic` and `u_gamma_analytic` can be obtained using [*phys.CMB_N_density()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/physics/darkhistory.physics.CMB_N_density.html) and [*phys.CMB_eng_density()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/physics/darkhistory.physics.CMB_eng_density.html) respectively.
`self.totN()` can also be used to find the total number of particles in various bins or combinations of bins. This is done by specifying the bin boundaries for the sets of desired combinations using the `bound_arr` keyword; e.g. `bound_arr=[a,b,c,d]` will produce three outputs, corresponding to the total number of particles between boundaries a and b, between b and c, and between c and d. For example, the code below returns the total number of particles in each of the first two bins. The bin boundaries do not need to be integers.
```
print('Number of particles in first and second bin: ',
discrete_CMB.totN(bound_type='bin', bound_arr=np.array([0,1,2]))
)
```
One can also use `Spectrum.N` to get a list of number of particles in each bin. This is related to the `Spectrum` type, which we will come back to later.
```
print('Number of particles in first and second bin: ', discrete_CMB.N[0:2])
```
`self.totN()` can also return the total number of particles between different energy boundaries:
```
print('Number of particles between 0.2 and 0.45 eV, 0.45 eV and 0.6 eV: ',
discrete_CMB.totN(bound_type='eng', bound_arr=np.array([0.2, 0.45, 0.6]))
)
```
The function `self.toteng()` can be used in a similar manner.
## Addition and Multiplication
You can add or subtract two `Spectrum` objects together (they must have the same `eng` and `spec_type` to do so), or add or subtract a scalar to a `Spectrum`.
```
sum_of_BB_spec = discrete_CMB + random_spec
plt.figure()
plt.loglog()
CMB_plot, = plt.plot(discrete_CMB.eng, discrete_CMB.eng**2*discrete_CMB.dNdE, ':', label='CMB')
random_plot, = plt.plot(random_spec.eng, random_spec.eng**2*random_spec.dNdE, ':', label='Random')
sum_plot, = plt.plot(sum_of_BB_spec.eng, sum_of_BB_spec.eng**2*sum_of_BB_spec.dNdE, label='Sum')
plt.legend(handles=[CMB_plot, random_plot, sum_plot])
plt.title(r'Adding Spectrum Objects')
plt.xlabel('Photon Energy [eV]')
plt.ylabel(r'$E^2 \, dn_\gamma/dE$ [eV$^{-1}$ cm$^{-3}$]')
plt.axis([1e-8, 1e4, 1e-10, 1e20])
```
Similarly, you can multiply or divide `Spectrum` objects by another `Spectrum`, an array or a scalar. Suppose, for example, we wanted to find the average energy of particles in `discrete_CMB`. We could do
```
a = discrete_CMB * discrete_CMB.eng
print('Mean energy in the CMB at {:3.2f}'.format(T)+' eV in units of k_B T: ', np.sum(a.N)/discrete_CMB.totN()/T)
```
which should be close to the theoretical value of $\langle E \rangle \approx 2.70 k_B T$. Of course, we could simply have done
```
print(
'Mean energy in the CMB at {:3.2f}'.format(T)+' eV in units of k_B T: ',
discrete_CMB.toteng()/discrete_CMB.totN()/T
)
```
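The $\approx 2.70$ figure can also be checked by direct numerical integration of the blackbody spectrum, independently of DarkHistory. With $x = E/T$, the mean energy is $\langle E \rangle / T = \int x^3/(e^x - 1)\,dx \,\big/ \int x^2/(e^x - 1)\,dx = \pi^4/(30\,\zeta(3))$:

```python
import numpy as np
from scipy.integrate import quad

# Mean photon energy of a blackbody in units of T, via x = E/T
u_int, _ = quad(lambda x: x**3 / np.expm1(x), 0, np.inf)  # ∝ energy density
n_int, _ = quad(lambda x: x**2 / np.expm1(x), 0, np.inf)  # ∝ number density

print(u_int / n_int)  # ≈ 2.701
```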
## `Spectrum` - Rebinning
Changing the abscissa of a spectrum is frequently useful, for example in redshifting a spectrum, or in converting a spectrum of photoionizing photons into a spectrum of electrons freed from an atom. DarkHistory provides the method [*Spectrum.rebin()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectrum/darkhistory.spec.spectrum.Spectrum.html?highlight=Spectrum.rebin#darkhistory.spec.spectrum.Spectrum.rebin) for doing so in a manner that conserves total number and total energy, while attempting to preserve the shape of the spectrum.
To see how this works, let's perform redshifting on our CMB spectrum. Let's first create a copy of `discrete_CMB`, called `redshifted_CMB`.
```
redshifted_CMB = Spectrum(discrete_CMB.eng, discrete_CMB.dNdE, spec_type='dNdE', rs=discrete_CMB.rs)
```
The first thing we'll do is to change the energy abscissa of `redshifted_CMB` to the final energy after redshifting to `final_rs`. This is done by the function [*Spectrum.shift_eng()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectrum/darkhistory.spec.spectrum.Spectrum.html?highlight=Spectrum.rebin#darkhistory.spec.spectrum.Spectrum.shift_eng), which not only changes the abscissa, but ensures that $dN/dE$ is correctly updated with the new bin widths. The argument passed to Spectrum.shift_eng() is the array of shifted energy abscissae.
```
final_rs = rs/4
redshifted_CMB.shift_eng(discrete_CMB.eng * final_rs / rs)
```
At this point, the energy abscissa has been changed, but it is often the case that we want the final spectrum to have the same binning as the original. To do this, we use the function [*Spectrum.rebin()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectrum/darkhistory.spec.spectrum.Spectrum.html?highlight=Spectrum.rebin#darkhistory.spec.spectrum.Spectrum.rebin), which reassigns the particles in each bin of the original spectrum to the new one in a manner that conserves *total* number and energy, while attempting to preserve the spectral shape. The argument passed to Spectrum.rebin() is the array of desired new energy abscissae.
```
redshifted_CMB.rebin(discrete_CMB.eng)
```
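The principle behind a number- and energy-conserving rebin can be sketched in a few lines (a toy linear-in-energy split, not DarkHistory's actual implementation, which works with log-spaced bins): particles at energy $E$ falling between two target bin energies $E_1 < E_2$ are divided between those bins so that both the particle count and the total energy are preserved.

```python
def split_particle(E, E1, E2, N=1.0):
    """Split N particles of energy E between bins at E1 < E2 so that
    both particle number and total energy are conserved."""
    f2 = (E - E1) / (E2 - E1)   # fraction assigned to the upper bin
    return N * (1 - f2), N * f2

N1, N2 = split_particle(E=3.0, E1=2.0, E2=6.0)
print(N1, N2)               # 0.75 0.25
print(N1 + N2)              # 1.0  (number conserved)
print(N1 * 2.0 + N2 * 6.0)  # 3.0  (energy conserved)
```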
Let's make a plot for comparison! Don't forget that since we're actually storing number *densities* in these spectra, to compare before and after redshifting, we also have to include a factor of redshift$^3$.
```
plt.figure()
plt.loglog()
orig_plot, = plt.plot(discrete_CMB.eng,
discrete_CMB.dNdE*discrete_CMB.eng**2,
label=r'$T_\mathrm{CMB} =$ '+'{:3.2f}'.format(T)+' eV')
redshifted_plot, = plt.plot(redshifted_CMB.eng,
redshifted_CMB.eng**2*redshifted_CMB.dNdE * (final_rs/rs)**3,
label=r'$T_\mathrm{CMB} =$ '+'{:3.2f}'.format(T * final_rs/rs)+' eV, Rebinned')
plt.legend(handles=[orig_plot, redshifted_plot])
plt.title('Redshifting CMB')
plt.xlabel('Photon Energy [eV]')
plt.ylabel(r'$E^2 \, dn_\gamma/dE$ [eV$^{-1}$ cm$^{-3}$]')
plt.axis([1e-8, 1e4, 1e-10, 1e26])
```
Redshifting is also provided as a convenience function, [*Spectrum.redshift()*](https://darkhistory.readthedocs.io/en/latest/_autosummary/darkhistory/spec/spectrum/darkhistory.spec.spectrum.Spectrum.html?highlight=Spectrum.rebin#darkhistory.spec.spectrum.Spectrum.redshift). The input to Spectrum.redshift() is the new desired redshift (1+z), and the effect of calling the function is to update the Spectrum object to the desired redshift, properly redshifting the stored spectrum while maintaining the original energy abscissa.
## Underflow
Let's perform a really large redshift on yet another copy of `discrete_CMB`:
```
large_redshift_CMB = Spectrum(discrete_CMB.eng, discrete_CMB.N, spec_type='N', rs=discrete_CMB.rs)
final_rs_large = rs/1e8
large_redshift_CMB.redshift(final_rs_large)
```
This spectrum looks like this:
```
plt.figure()
plt.loglog()
orig_spec, = plt.plot(discrete_CMB.eng,
discrete_CMB.dNdE*discrete_CMB.eng**2,
label=r'$T_\mathrm{CMB} =$ '+'{:3.2f}'.format(T)+' eV')
large_redshift_spec, = plt.plot(large_redshift_CMB.eng,
large_redshift_CMB.dNdE*discrete_CMB.eng**2*(final_rs_large/rs)**3,
label=r'$T_\mathrm{CMB} =$ '+'{:2.2e}'.format(T * (final_rs_large/rs))+' eV')
plt.legend(handles=[orig_spec, large_redshift_spec])
plt.title('Redshifting CMB')
plt.xlabel('Photon Energy [eV]')
plt.ylabel(r'$E^2 \, dn_\gamma/dE$ [eV$^{-1}$ cm$^{-3}$]')
plt.axis([1e-8, 1e4, 1e-30, 1e28])
```
Because the spectrum has shifted so significantly to low energies, the photons that were in the lowest energy bins of `discrete_CMB` are in danger of being lost. However, number and energy conservation are always enforced when using `Spectrum.rebin()`, or any function that calls it such as `Spectrum.redshift()`: photons that fall below the new energy abscissa are assigned to an underflow bin. `Spectrum.totN()` and `Spectrum.toteng()` automatically include these underflow photons.
```
print('Original total number of particles: ', discrete_CMB.totN())
print('Redshifted total number of particles: ', large_redshift_CMB.totN())
print('Total number of photons in underflow: ', large_redshift_CMB.underflow['N'])
print('Ratio: ', discrete_CMB.totN()/large_redshift_CMB.totN())
print('**********************************************************')
print('Original total energy: ', discrete_CMB.toteng())
print('Redshifted total energy: ', large_redshift_CMB.toteng())
print('Ratio: ', discrete_CMB.toteng()/large_redshift_CMB.toteng())
print('Total energy of photons in underflow: ', large_redshift_CMB.underflow['eng'])
```
<a href="https://colab.research.google.com/github/yukinaga/ai_music/blob/main/section_2/01_simple_melody_rnn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Let's Try Melody RNN!
Let's generate a piece of music using "Melody RNN".
## Installing the Libraries
Along with Magenta, we install pyFluidSynth, a library for audio synthesis, and pretty_midi for handling MIDI data.
```
!apt-get update -qq && apt-get install -qq libfluidsynth1 fluid-soundfont-gm build-essential libasound2-dev libjack-dev
!pip install -qU pyfluidsynth pretty_midi
!pip install -qU magenta
```
## Playing Sounds with a NoteSequence
This time, we will generate a piece based on the "Twinkle, Twinkle, Little Star" NoteSequence created in Section 1.
The code below adds lines to the previous code that specify the tempo and total duration of the piece.
The `qpm` option specifies the number of quarter notes per minute.
```
import magenta
import note_seq
from note_seq.protobuf import music_pb2
kira2 = music_pb2.NoteSequence() # NoteSequence
# Add notes to the notes field
kira2.notes.add(pitch=60, start_time=0.0, end_time=0.4, velocity=80)
kira2.notes.add(pitch=60, start_time=0.4, end_time=0.8, velocity=80)
kira2.notes.add(pitch=67, start_time=0.8, end_time=1.2, velocity=80)
kira2.notes.add(pitch=67, start_time=1.2, end_time=1.6, velocity=80)
kira2.notes.add(pitch=69, start_time=1.6, end_time=2.0, velocity=80)
kira2.notes.add(pitch=69, start_time=2.0, end_time=2.4, velocity=80)
kira2.notes.add(pitch=67, start_time=2.4, end_time=3.2, velocity=80)
kira2.notes.add(pitch=65, start_time=3.2, end_time=3.6, velocity=80)
kira2.notes.add(pitch=65, start_time=3.6, end_time=4.0, velocity=80)
kira2.notes.add(pitch=64, start_time=4.0, end_time=4.4, velocity=80)
kira2.notes.add(pitch=64, start_time=4.4, end_time=4.8, velocity=80)
kira2.notes.add(pitch=62, start_time=4.8, end_time=5.2, velocity=80)
kira2.notes.add(pitch=62, start_time=5.2, end_time=5.6, velocity=80)
kira2.notes.add(pitch=60, start_time=5.6, end_time=6.4, velocity=80)
kira2.total_time = 6.4 # Total duration
kira2.tempos.add(qpm=75) # Specify the tempo of the piece
note_seq.plot_sequence(kira2) # Visualize the NoteSequence
note_seq.play_sequence(kira2, synth=note_seq.fluidsynth) # Play the NoteSequence
```
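As a quick sanity check on the tempo setting, `qpm` (quarter notes per minute) fixes the duration of one quarter note at `60 / qpm` seconds. A tiny helper, purely illustrative and not part of note_seq:

```python
def quarter_note_seconds(qpm):
    """Duration of one quarter note in seconds for a given tempo."""
    return 60.0 / qpm

# At qpm=75, a quarter note lasts 0.8 s, so the 0.4 s notes in the
# sequence above play as eighth notes at this tempo.
print(quarter_note_seconds(75))
```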
## MelodyRNN
MelodyRNN is a model for generating music based on the LSTM, a type of RNN.
It predicts the next note from the most recent sequence of notes.
By repeating this prediction, a whole piece is generated.
Magenta includes models pre-trained on thousands of MIDI files, and these pre-trained models can be used for composition as they are.
Pre-trained models are stored in Bundle files (.mag files).
The code below loads the pre-trained model "basic_rnn.mag" and sets up the melody generator.
```
from magenta.models.melody_rnn import melody_rnn_sequence_generator
from magenta.models.shared import sequence_generator_bundle
# Initialize the model
note_seq.notebook_utils.download_bundle("basic_rnn.mag", "/models/") # Download the Bundle (.mag file)
bundle = sequence_generator_bundle.read_bundle_file("/models/basic_rnn.mag") # Load the Bundle
generator_map = melody_rnn_sequence_generator.get_generator_map()
melody_rnn = generator_map["basic_rnn"](checkpoint=None, bundle=bundle) # Set up the generator
melody_rnn.initialize() # Initialize
```
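The "predict the next note, append it, repeat" loop that MelodyRNN performs can be sketched with a toy stand-in for the LSTM. Here a hand-made lookup table, invented purely for illustration, plays the role of the trained model:

```python
# Toy stand-in for the trained model (NOT the real LSTM):
# a fixed table that "predicts" the next MIDI pitch from the current one.
next_pitch = {60: 62, 62: 64, 64: 65, 65: 67, 67: 60}

def generate(seed, n_notes):
    """Repeatedly predict the next note and append it, as MelodyRNN does."""
    melody = [seed]
    for _ in range(n_notes - 1):
        melody.append(next_pitch[melody[-1]])
    return melody

print(generate(60, 6))  # → [60, 62, 64, 65, 67, 60]
```

The real model replaces the lookup table with an LSTM that conditions on the whole recent history and samples from a probability distribution (which is where `temperature` comes in below).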
## Generating a Piece
With all the settings in place, we generate a piece using the generator.
Changing `temperature` adjusts how "random" the generated piece is.
```
from note_seq.protobuf import generator_pb2
base_sequence = kira2 # Base NoteSequence
total_time = 36 # Length of the piece (seconds)
temperature = 1.2 # Constant controlling the "randomness" of the piece
base_end_time = max(note.end_time for note in base_sequence.notes) # End time of the base piece
# Generator settings
generator_options = generator_pb2.GeneratorOptions() # Generator options
generator_options.args["temperature"].float_value = temperature # Degree of randomness
generator_options.generate_sections.add(
    start_time=base_end_time, # Generation start time
    end_time=total_time) # Generation end time
# Generate the piece
gen_seq = melody_rnn.generate(base_sequence, generator_options)
note_seq.plot_sequence(gen_seq) # Visualize the NoteSequence
note_seq.play_sequence(gen_seq, synth=note_seq.fluidsynth) # Play the NoteSequence
```
## Saving and Downloading the MIDI File
We convert the `NoteSequence` to MIDI data, save it, and download it.
```
from google.colab import files
note_seq.sequence_proto_to_midi_file(gen_seq, "simple_melody_rnn.mid") # Convert to MIDI data and save
files.download("simple_melody_rnn.mid") # Download
```
<h1> 2d. Distributed training and monitoring </h1>
In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities.
We also use TensorBoard to monitor the training.
```
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
from google.datalab.ml import TensorBoard
print(tf.__version__)
```
<h2> Input </h2>
Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
```
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
    def decode_csv(value_column):
        columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
        features = dict(zip(CSV_COLUMNS, columns))
        label = features.pop(LABEL_COLUMN)
        # No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
        # The key passes through the graph unused.
        return features, label

    # Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
    filenames_dataset = tf.data.Dataset.list_files(filename)
    # Read lines from text files
    textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
    # Parse text lines as comma-separated values (CSV)
    dataset = textlines_dataset.map(decode_csv)

    # Note:
    # use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
    # use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)

    if mode == tf.estimator.ModeKeys.TRAIN:
        num_epochs = None # loop indefinitely
        dataset = dataset.shuffle(buffer_size = 10 * batch_size)
    else:
        num_epochs = 1 # end-of-input after this

    dataset = dataset.repeat(num_epochs).batch(batch_size)
    return dataset
```
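The one-to-many versus one-to-one distinction noted in the comments above can be mimicked in plain Python. Here `fake_files` is a made-up stand-in for what `TextLineDataset` reads from disk:

```python
from itertools import chain

filenames = ["a.csv", "b.csv"]
fake_files = {"a.csv": ["1,x", "2,y"], "b.csv": ["3,z"]}

# flat_map: one filename -> many text lines, flattened into a single stream
lines = list(chain.from_iterable(fake_files[f] for f in filenames))

# map: one text line -> one parsed record
records = [line.split(",") for line in lines]

print(lines)    # → ['1,x', '2,y', '3,z']
print(records)  # → [['1', 'x'], ['2', 'y'], ['3', 'z']]
```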
<h2> Create features out of input data </h2>
For now, pass these through. (same as previous lab)
```
INPUT_COLUMNS = [
    tf.feature_column.numeric_column('pickuplon'),
    tf.feature_column.numeric_column('pickuplat'),
    tf.feature_column.numeric_column('dropofflat'),
    tf.feature_column.numeric_column('dropofflon'),
    tf.feature_column.numeric_column('passengers'),
]

def add_more_features(feats):
    # Nothing to add (yet!)
    return feats

feature_cols = add_more_features(INPUT_COLUMNS)
```
<h2> Serving input function </h2>
```
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
    json_feature_placeholders = {
        'pickuplon' : tf.placeholder(tf.float32, [None]),
        'pickuplat' : tf.placeholder(tf.float32, [None]),
        'dropofflat' : tf.placeholder(tf.float32, [None]),
        'dropofflon' : tf.placeholder(tf.float32, [None]),
        'passengers' : tf.placeholder(tf.float32, [None]),
    }
    # You can transform data here from the input format to the format expected by your model.
    features = json_feature_placeholders # no transformation needed
    return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
```
<h2> tf.estimator.train_and_evaluate </h2>
```
def train_and_evaluate(output_dir, num_train_steps):
    estimator = tf.estimator.LinearRegressor(
        model_dir = output_dir,
        feature_columns = feature_cols)
    train_spec = tf.estimator.TrainSpec(
        input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
        max_steps = num_train_steps)
    exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
    eval_spec = tf.estimator.EvalSpec(
        input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
        steps = None,
        start_delay_secs = 1, # start evaluating after N seconds
        throttle_secs = 10, # evaluate every N seconds
        exporters = exporter)
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```
<h2> Monitoring with TensorBoard </h2>
<br/>
Use "refresh" in Tensorboard during training to see progress.
```
OUTDIR = './taxi_trained'
TensorBoard().start(OUTDIR)
```
<h2>Run training</h2>
```
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR, num_train_steps = 500)
```
<h4> You can now shut Tensorboard down </h4>
```
# to list Tensorboard instances
TensorBoard().list()
pids_df = TensorBoard.list()
if not pids_df.empty:
    for pid in pids_df['pid']:
        TensorBoard().stop(pid)
        print('Stopped TensorBoard with pid {}'.format(pid))
```
## Challenge Exercise
Modify your solution to the challenge exercise in c_dataset.ipynb appropriately.
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
<a href="https://colab.research.google.com/github/agungsantoso/deep-learning-v2-pytorch/blob/master/convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Multi-Layer Perceptron, MNIST
---
In this notebook, we will train an MLP to classify images of hand-written digits from the [MNIST database](http://yann.lecun.com/exdb/mnist/).
The process will be broken down into the following steps:
>1. Load and visualize the data
2. Define a neural network
3. Train the model
4. Evaluate the performance of our trained model on a test dataset!
Before we begin, we have to import the necessary libraries for working with data and PyTorch.
```
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
import torch
!pip install Pillow==4.0.0
!pip install PIL
!pip install image
import PIL
# import libraries
import torch
import numpy as np
```
---
## Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.
This cell will create DataLoaders for each of our datasets.
```
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# choose the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
```
### Visualize a Batch of Training Data
The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    # print out the correct label for each image
    # .item() gets the value contained in a Tensor
    ax.set_title(str(labels[idx].item()))
```
### View an Image in More Detail
```
img = np.squeeze(images[1])
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
    for y in range(height):
        val = round(img[x][y], 2) if img[x][y] != 0 else 0
        ax.annotate(str(val), xy=(y, x),
                    horizontalalignment='center',
                    verticalalignment='center',
                    color='white' if img[x][y] < thresh else 'black')
```
---
## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
The architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.
```
import torch.nn as nn
import torch.nn.functional as F
## Define the NN architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # number of hidden nodes in each layer
        hidden_1 = 512
        hidden_2 = 512
        # linear layer (784 -> hidden_1)
        self.fc1 = nn.Linear(28 * 28, hidden_1)
        # linear layer (hidden_1 -> hidden_2)
        self.fc2 = nn.Linear(hidden_1, hidden_2)
        # linear layer (hidden_2 -> 10)
        self.fc3 = nn.Linear(hidden_2, 10)
        # dropout layer (p=0.2)
        # dropout helps prevent overfitting
        self.dropout = nn.Dropout(0.2)

    def forward(self, x):
        # flatten image input
        x = x.view(-1, 28 * 28)
        # add hidden layer, with relu activation function
        x = F.relu(self.fc1(x))
        # add dropout layer
        x = self.dropout(x)
        # add hidden layer, with relu activation function
        x = F.relu(self.fc2(x))
        # add dropout layer
        x = self.dropout(x)
        # add output layer
        x = self.fc3(x)
        return x
# initialize the NN
model = Net()
print(model)
```
### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross-entropy function applies a softmax function to the output layer *and* then calculates the log loss.
```
# specify loss function
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
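To make "softmax *and then* log loss" concrete, here is the single-sample arithmetic in plain Python. This is an illustration of what `nn.CrossEntropyLoss` computes, not its actual implementation, which works on batches and uses a numerically stabler log-sum-exp form:

```python
import math

def cross_entropy(logits, target):
    """Apply softmax to the raw scores, then take the negative log
    of the probability assigned to the correct class."""
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    softmax = [e / total for e in exps]
    return -math.log(softmax[target])

# A confident, correct prediction gives a small loss...
low = cross_entropy([5.0, 0.0, 0.0], target=0)
# ...while a confident, wrong one gives a large loss.
high = cross_entropy([5.0, 0.0, 0.0], target=1)
print(low < high)  # → True
```

This is why the network's `forward` above ends with a plain linear layer and no softmax: the loss function supplies it.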
---
## Train the Network
The steps for training/learning from a batch of data are described in the comments below:
1. Clear the gradients of all optimized variables
2. Forward pass: compute predicted outputs by passing inputs to the model
3. Calculate the loss
4. Backward pass: compute gradient of the loss with respect to model parameters
5. Perform a single optimization step (parameter update)
6. Update average training loss
The following loop trains for 30 epochs; feel free to change this number. For now, we suggest somewhere between 20-50 epochs. As you train, take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.
```
# number of epochs to train the model
n_epochs = 30 # suggest training between 20-50 epochs
model.train() # prep model for training
for epoch in range(n_epochs):
    # monitor training loss
    train_loss = 0.0

    ###################
    # train the model #
    ###################
    for data, target in train_loader:
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update running training loss
        train_loss += loss.item()*data.size(0)

    # print training statistics
    # calculate average loss over an epoch
    train_loss = train_loss/len(train_loader.dataset)
    print('Epoch: {} \tTraining Loss: {:.6f}'.format(
        epoch+1,
        train_loss
    ))
```
---
## Test the Trained Network
Finally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class, as well as looking at its overall loss and accuracy.
#### `model.eval()`
`model.eval()` will set all the layers in your model to evaluation mode. This affects layers like dropout layers that turn "off" nodes during training with some probability, but should allow every node to be "on" for evaluation!
```
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval() # prep model for *evaluation*
for data, target in test_loader:
    # forward pass: compute predicted outputs by passing inputs to the model
    output = model(data)
    # calculate the loss
    loss = criterion(output, target)
    # update test loss
    test_loss += loss.item()*data.size(0)
    # convert output probabilities to predicted class
    _, pred = torch.max(output, 1)
    # compare predictions to true label
    correct = np.squeeze(pred.eq(target.data.view_as(pred)))
    # calculate test accuracy for each object class
    for i in range(batch_size):
        label = target.data[i]
        class_correct[label] += correct[i].item()
        class_total[label] += 1

# calculate and print avg test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))

for i in range(10):
    if class_total[i] > 0:
        print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
            str(i), 100 * class_correct[i] / class_total[i],
            np.sum(class_correct[i]), np.sum(class_total[i])))
    else:
        print('Test Accuracy of %5s: N/A (no training examples)' % (str(i)))

print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
    100. * np.sum(class_correct) / np.sum(class_total),
    np.sum(class_correct), np.sum(class_total)))
```
### Visualize Sample Test Results
This cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
                 color=("green" if preds[idx]==labels[idx] else "red"))
```
# Jupyter Notebook & Python Intro
First, we use the command line to navigate to the folder where we want to save the Jupyter notebook. Then we activate our virtual environment and launch the notebook with "jupyter notebook". Jupyter Notebook is a working environment that is very easy for coding beginners to use, because individual pieces of code can be run one at a time.
There are two cell formats: code and so-called Markdown. The latter is a text format that attaches as little formatting information to the text as possible, unlike Word, for example. When you develop large notebooks, working with it is very helpful. For example:
# Title
## Title
### Title
#### Title
##### Title
```
#dsfdskjfbskjdfbdkjbfkjdbf
#asdasd
```
1. sad
**hello**
Or numbered lists and bold text: all of that works in Markdown. You can even build tables or set hyperlinks, like this one to a [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) with more very practical formats. As a rule, though, we use Jupyter notebooks not for writing text but for coding. Let's get started.
1. Print and input
2. Data types
3. Operations
4. Variables and assignments
5. If, elif, else
6. Lists
7. Dictionaries
8. Tuples
9. Simple functions: len, sort, sorted
10. For loops
# Python
#### Print and Input
```
#With a hashtag in front of a line we can comment code; this, too, is very important.
#Always, really always, comment your own code. Especially in the beginning.
print("hello world")
#The print command simply prints everything out. Not really all that impressive.
#But it is very useful later on, above all when it comes to finding bugs in your own code.
#With the input command you can interact with the user.
input('How old are you?')
```
#### Data Types
```
#Strings
'Hello how "are you"'
"12345"
124
str(124)
#Integers
type(567)
type(int('1234'))
#Floats
4.542323
float(12)
int(4.64)
#Dates, really just strings
'15-11-2019'
```
#### Operations
```
print('Hello' + ' ' + 'how' + 'are' + 'you')
print('Hello', 'how', 'are', 'you')
#All the other usual operators:
#minus -
#times *
#divided by /
#Special: modulo %, which divides and returns the remainder that is left over
22 % 5  # -> 2
```
#### Variables, Comparisons and Assignments
```
#Greater than and less than:
#< >
#Equal to == (important: the double equals sign)
#Because a single = defines a variable instead
'Schweiz' == 'Schweiz'
Schweiz = 'reich'
Schweiz
Schweiz == 'reich'
reich = 'arm'
1 = 'reich'
"5schweiz"
1 = 6
a = 34
a = b
a = 'b'
a == 'b'
a
```
#### if - else - (elif)
```
elem = int(input('How old are you?'))
elem
if elem < 0:
    print('That is impossible')
else:
    print('You are quite old')

elem = int(input('How old are you?'))
if elem < 0:
    print('That is impossible')
elif elem < 25:
    print('You are quite young')
else:
    print('You are quite old')
```
#### Lists
```
#Square brackets
[1,"hallo",3,4,5.23,6,7]
lst = [1,2,3,4,5,6,7]
lst
#Individual elements
lst[0]
#Whole sections
lst[:4]
#More complex slices
lst[::3]
lst
#Append, Pop, etc.
saved_item = lst.pop()
lst
lst.append(saved_item)
list
#Careful with the command list, because it turns things into a list. Even strings:
list('hello how are')
range(0,10)
#The most elegant way to write a list. And, very importantly,
#the computer always starts counting at 0.
list(range(10))
list(range(9,-1,-1))
```
#### Dictionaries
Use curly braces here.
```
#Curly braces this time
{'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}
dct = {'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}
dct
dct['Grösse']
#List of dictionaries
dct_lst = [{'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}, {'Tier': 'Katze', 'Grösse': 130, 'Alter': 8}]
type(dct_lst)
dct_lst[1]
dct_lst[0]['Alter']
neue_list = []
for xxxxxxxxxxxx in dct_lst:
    neue_list.append(xxxxxxxxxxxx['Alter'])
neue_list
```
#### Tuples
Round brackets are king here.
```
lst
tuple(lst)
lst
lst = tuple(lst)
lst
#Immutable, so a good format for storing things.
#But this really only for completeness.
```
#### Simple Functions - len and sort
Note how they are called: with round brackets.
```
#len with strings
len('hello how are you')
#len with lists
len([1,2,3,4,4,5])
#len with dictionaries
len({'Tier': 'Hund', 'Alter': 345})
#len with tuples
len((1,1,1,2,2,1))
#sorted returns a new sorted copy without changing the original
sorted('hello how are you')
a = 'hello how are you'
sorted(a)
a
#sort, on the other hand, "only" works with lists (and sorts in place)
lst = [1, 5, 9, 10, 34, 12, 12, 14]
lst.sort()
lst
dic = {'Tier': 'Hund', 'Alter': 345}
dic.sort()
```
#### For Loop
```
lst
for hghjgfjhf in lst:
    print(hghjgfjhf)
dicbkjghkg = {'Tier': 'Hund', 'Alter': 345}
for key, value in dicbkjghkg.items():
    print(key, value)
#for loop to make new lists
lst
#Suppose we only want the even numbers from the list
new_lst = []
for elem in lst:
    if elem % 2 == 0:
        new_lst.append(elem)
#    else:
#        continue
new_lst
```
#### For loop with list of dictionaries
```
dic_lst = [{'Animal': 'Dog', 'Size': 45},
           {'Animal': 'Cat', 'Size': 23},
           {'Animal': 'Bird', 'Size': 121212}]
for dic in dic_lst:
    print(dic)
for dic in dic_lst:
    print(dic['Animal'])
for dic in dic_lst:
    print(dic['Animal'] + ': ' + str(dic['Size']))
```
# Towards Quantum Measurement Error Mitigation
## How to connect CDR with quantum state tomography
Quantum computers are astonishing devices. Yet, they are error-prone, too. Therefore, we need to implement quantum error mitigation methods to reduce the negative effect of errors on our computation results.
In a series of previous posts, we [learned the Clifford Data Regression method](https://pyqml.medium.com/mitigating-quantum-errors-using-clifford-data-regression-98ab663bf4c6) and mitigated errors in a [simulated environment](https://towardsdatascience.com/how-to-implement-quantum-error-mitigation-with-qiskit-and-mitiq-e2f6a933619c) and on a [real quantum computer](https://towardsdatascience.com/practical-error-mitigation-on-a-real-quantum-computer-41a99dddf740).
The results are encouraging. Yet, an unexpected obstacle appeared when I tried to use it to participate in IBM's Quantum Open Science Prize.
IBM asks us to simulate a three-particle Heisenberg Hamiltonian using Trotterization. No, that's not the problem. The problem is that they assess any submission through quantum state tomography. This is an approach to recreate a quantum state through measurements. More specifically, the problem is that they use the `StateTomographyFitter` inside Qiskit. This implementation builds upon experiment counts. But the CDR method works with expectation values.
Let me illustrate the problem a little bit. The following figure depicts the simple case of a 1-qubit quantum circuit.

Whenever we look at a qubit, it is either 0 or 1. That's it. Which of the two we see depends on chance and the internal quantum state. Say the qubit is in state $|+\rangle$. In this state, measuring 0 and measuring 1 are equally likely. But we can't infer the probabilities from running the quantum circuit only once.
But when we run it repeatedly, say a thousand times, we would see 0 and 1 about 500 times each, up to a slight statistical variance.
Unfortunately, current quantum computers are noisy devices. We sometimes measure a 0 for a 1 and vice versa, so the results become blurred. For instance, we might measure 0 only 412 times.
The raw result of running a quantum circuit is the measurement. Since we almost always execute circuits multiple times, we count the measurements.
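The blurring can be made concrete with a little expected-value arithmetic. The error rates below are made up for illustration: a true 0 is misread as 1 with probability `e01`, and a true 1 is misread as 0 with probability `e10`.

```python
def expected_zero_counts(shots, p0, e01, e10):
    """Expected number of '0' readouts when a true 0 flips to 1 with
    probability e01 and a true 1 flips to 0 with probability e10."""
    p1 = 1.0 - p0
    return shots * (p0 * (1.0 - e01) + p1 * e10)

# Ideal |+> state over 1000 shots: 500 zeros expected.
# With the (made-up) asymmetric readout errors, far fewer zeros show up:
print(expected_zero_counts(1000, 0.5, e01=0.25, e10=0.07))  # about 410
```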
So, let's look at the source code in Qiskit of such a circuit.
```
from qiskit import QuantumCircuit, execute, Aer
# Create a quantum circuit with one qubit
qc = QuantumCircuit(1)
qc.h(0)
qc.measure_all()
```
We define a quantum circuit with a single qubit and apply the Hadamard gate to it. This transformation gate puts the qubit (from the initial state $|0\rangle$) into state $|+\rangle$. At the end of the circuit, we measure the qubit. The following image shows the circuit diagram.
```
qc.draw(output='text')
```
Let's run this circuit 1,000 times. We do this by getting a `backend` (here, a noise-free statistical simulator) and calling the `execute` function.
```
# Tell Qiskit how to simulate our circuit
backend = Aer.get_backend('qasm_simulator')
# Do the simulation, returning the result
result = execute(qc, backend, shots=1000).result()
```
We get back a Qiskit result object ([Qiskit reference](https://qiskit.org/documentation/stubs/qiskit.result.Result.html)). The essential function of this object is `get_counts` because it tells us what we saw when looking at the qubit.
```
result.get_counts()
```
It is a simple Python dictionary with the measurement result as the key and the number of times we observed this result as the value. It is now up to us to interpret these counts and do something meaningful. From now on, these results are as good as any other statistical data. We can use them to calculate further values, such as the expectation value. This is the probabilistic expected value of the measurement of an experiment. It is similar to a classical expectation value. For example, consider tossing a fair coin that lands on heads and tails equally likely. If you assign the value 1 to heads and 0 to tails, the expectation value is $0.5*1+0.5*0=0.5$. It is the same for a qubit in state $|+\rangle$.
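This bridge from a counts dictionary to an expectation value is a one-liner. For the Z-observable, the standard convention is that outcome '0' carries eigenvalue +1 and outcome '1' carries -1, so a minimal sketch looks like this:

```python
def z_expectation(counts):
    """Z-observable expectation value from a measurement-counts dictionary:
    outcome '0' contributes +1, outcome '1' contributes -1."""
    shots = sum(counts.values())
    return (counts.get('0', 0) - counts.get('1', 0)) / shots

print(z_expectation({'0': 500, '1': 500}))  # → 0.0 (the |+> state)
print(z_expectation({'0': 1000}))           # → 1.0 (the |0> state)
```

Note that this uses the ±1 eigenvalue convention rather than the heads=1/tails=0 assignment of the coin example, which is why a fair split gives 0.0 here instead of 0.5.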
Usually, when you see the calculation of expectation values in quantum computing, it looks like this $\langle \psi|Z|\psi\rangle$. The letter "psi" ($\psi$) denotes the quantum state, and the Z in the middle symbolizes the observable. In this case, it is the Z-axis.
The important thing here is that this notation introduces the concept of an observable. When we measured our qubit in Qiskit before, we implicitly chose the Z-observable because this is the default measurement basis for a qubit in Qiskit. So, we are essentially talking about the same concept. In [this post](https://towardsdatascience.com/how-to-implement-quantum-error-mitigation-with-qiskit-and-mitiq-e2f6a933619c), we looked at observables in more detail. The one thing that we need to know is that there is not a single observable, but many. Just think about a sphere like the earth. The observable is the specific point from which you look at the globe. The world looks different depending on the perspective, yet, it is the same.

Essentially, the counts and the expectation value are closely tied together. They both have their uses. While the counts contain more information about the different measurements, the expectation value is easier to work with because it is a single number.
This is the point of struggle I described in my previous post. While the quantum error mitigation method CDR uses the simplicity of the expectation value, the quantum state tomography that IBM uses to evaluate the performance of the error mitigation works with the counts.
To participate in IBM's challenge, it is now our job to integrate the two.
I chose to adapt the CDR to work with an unchanged state tomography because the latter is IBM's evaluation method. I believe messing around with their assessment tool might disqualify me from the contest.
So, we need to change the CDR method to work with a counts dictionary instead of a single number.
Let's revisit the CDR briefly.

The CDR method has three steps. First, we generate training data. In this step, we run our quantum circuit twice: once on a classical computer to obtain the exact expectation value of the observable of interest, and once on a real quantum computer to produce the noisy value.
In the second step, we create a linear model of the relationship between the noisy and the exact values. Building a linear model from a set of data points is known as regression, hence the name CDR: Clifford Data Regression.
Finally, we use the model to transform a noisy expectation value into a mitigated one by predicting the noise-free value.
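Steps two and three can be sketched in a few lines of NumPy. The numbers below are made-up training pairs, not real measurements:

```python
import numpy as np

# Hypothetical training data: noisy expectation values measured on hardware
# and the corresponding exact values from a classical simulation
noisy = np.array([0.62, 0.41, 0.18, -0.05])
exact = np.array([0.80, 0.55, 0.25, 0.00])

# Step 2: fit the linear model exact ~ a * noisy + b (the regression in CDR)
a, b = np.polyfit(noisy, exact, 1)

# Step 3: use the model to predict the noise-free value for a new noisy one
mitigated = a * 0.50 + b
print(round(mitigated, 3))
```

The challenge described in this post is that this pipeline consumes and produces a single number, while the state tomography consumes counts.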
All these steps need to work with the Qiskit [experiment result](https://qiskit.org/documentation/stubs/qiskit.result.Result.html). However, the problem is that this is an object that the Qiskit `execute` function creates. It stores most of its data in a read-only manner that we can't change any more.
But, we can apply a little trick. We write our own `Result` class that allows us to change the counts afterward.
Conceptually, we create a new Python class that serves the one function the state tomography uses. This is the `get_counts` function. So, when the state tomography function queries the counts, it gets a response. But since we implement this new class, we can also provide a function that overwrites the counts.
The following listing depicts the source code of our `OwnResult` class.
```
from qiskit.result import Result, postprocess
from qiskit.result.counts import Counts
from qiskit.circuit.quantumcircuit import QuantumCircuit
from qiskit.exceptions import QiskitError
from qiskit.quantum_info.states import statevector


class OwnResult(Result):

    def __init__(self, result):
        self._result = result
        self._counts = {}

    def get_counts(self, experiment=None):
        if experiment is None:
            exp_keys = range(len(self._result.results))
        else:
            exp_keys = [experiment]

        dict_list = []
        for key in exp_keys:
            exp = self._result._get_experiment(key)
            try:
                header = exp.header.to_dict()
            except (AttributeError, QiskitError):  # header is not available
                header = None

            if "counts" in self._result.data(key).keys():
                if header:
                    counts_header = {
                        k: v
                        for k, v in header.items()
                        if k in {"time_taken", "creg_sizes", "memory_slots"}
                    }
                else:
                    counts_header = {}

                # CUSTOM CODE STARTS HERE #######################
                # Use the overwritten counts if they exist for this
                # experiment, otherwise fall back to the original counts
                if str(key) in self._counts:
                    counts = self._counts[str(key)]
                else:
                    counts = self._result.data(key)["counts"]
                dict_list.append(Counts(counts, **counts_header))
                # CUSTOM CODE ENDS HERE #########################
            elif "statevector" in self._result.data(key).keys():
                vec = postprocess.format_statevector(
                    self._result.data(key)["statevector"])
                dict_list.append(
                    statevector.Statevector(vec).probabilities_dict(decimals=15))
            else:
                raise QiskitError(f'No counts for experiment "{repr(key)}"')

        # Return first item of dict_list if size is 1
        if len(dict_list) == 1:
            return dict_list[0]
        else:
            return dict_list

    def set_counts(self, counts, experiment=None):
        self._counts[str(experiment) if experiment is not None else "0"] = counts
```
The class takes the existing Qiskit result as an initialization parameter. Further, we specify `_counts` as a member variable that we initialize with an empty dictionary. This will hold our changed counts. The code of the `get_counts` function is copied from the original source code except for two little things. First, whenever we refer to a property of the result, such as `data`, we need to look into `self._result.data` instead of `self.data`. Second, in the custom code section, we look into our own member variable for the actual counts. If changed counts exist for the experiment (`if str(key) in self._counts`), we return them (`self._counts[str(key)]`). If they don't exist, we return the original counts (`self._result.data(key)["counts"]`).
Let's have a look at how that works.
```
# create our own Result object
own = OwnResult(result)
print("original result: ", own.get_counts())
# set new counts
own.set_counts({"0": 100, "1": 900})
print("Changed counts: ", own.get_counts())
```
So, let's see whether we can use our `OwnResult` inside the state tomography.
In the following snippet, we create the same simple circuit we used before. The only difference is that we omit the measurement because we need to create the `state_tomography_circuits` from it. Then, we run these circuits on a noise-free `QasmSimulator` and store the result.
So, now we're ready for the interesting part. We loop through the circuits in the list of experiments (`st_qcs`). For each of the circuits, we set the counts to a fixed dictionary with arbitrary values. We don't care about the values for now because we only want to verify whether the `StateTomographyFitter` works with our `OwnResult`.
Finally, we compute the fidelity based on the original and the changed results.
```
# Import state tomography modules
from qiskit.ignis.verification.tomography import state_tomography_circuits, StateTomographyFitter
from qiskit.quantum_info import state_fidelity
from qiskit.opflow import Zero, One
from qiskit.providers.aer import QasmSimulator
from qiskit import execute, transpile, assemble
# create a circuit without measurement
qc = QuantumCircuit(1)
qc.h(0)
# The expected final state; necessary to determine state tomography fidelity
target_state = (One).to_matrix() # |1>
# create the three state tomography circuits
st_qcs = state_tomography_circuits(qc, [0])
# Noiseless simulated backend
sim = QasmSimulator()
job = execute(st_qcs, sim, shots=1000)
# get the result
result = job.result()
# put the result in our own class
own_res = OwnResult(result)
# loop through the experiments
for experiment in st_qcs:
exp_keys = [experiment]
for key in exp_keys:
exp = result._get_experiment(key)
own_res.set_counts({"0": 100, "1": 900}, key)
# Compute fidelity with original result
orig_tomo_fitter = StateTomographyFitter(result, st_qcs)
orig_rho_fit = orig_tomo_fitter.fit(method='lstsq')
orig_fid = state_fidelity(orig_rho_fit, target_state)
print("original fidelity:", orig_fid)
# Compute fidelity with changed result
tomo_fitter = StateTomographyFitter(own_res, st_qcs)
rho_fit = tomo_fitter.fit(method='lstsq')
fid = state_fidelity(rho_fit, target_state)
print("changed fidelity: ", fid)
```
The output shows that the fidelity we calculated based on the changed counts differs significantly from the one we calculated based on the original counts. Apparently, the `StateTomographyFitter` works with our custom counts. This creates the prerequisite for mitigating errors in the next step.
| github_jupyter |
# Identify the time a waterbody exceeds X% wet surface area <img align="right" src="../../../Supplementary_data/dea_logo.jpg">
* **Compatibility:** Notebook currently compatible with the `NCI` environment only. You can make this notebook `Sandbox` compatible by pointing it to the DEA Waterbodies timeseries located in AWS.
* **Products used:**
None.
* **Prerequisites:** This notebook explores the individual waterbody timeseries csvs contained within the DEA Waterbodies dataset. It has been designed with that very specific purpose in mind, and is not intended as a general analysis notebook.
This notebook in its current form assumes that the [`RemoveRiversfromWaterBodyPolygons.ipynb`](RemoveRiversfromWaterBodyPolygons.ipynb) notebook has already been run. You can choose to run this notebook on the non-river filtered polygons, but rivers will produce erroneous results when performing the wet area detection as they are long-standing features in the landscape and a 'first observed' analysis will inevitably be limited by observation availability and not reflect the actual feature.
## Description
This notebook loops through all of the individual DEA Waterbodies timeseries files and finds the quarter (JFM/AMJ/JAS/OND) that each waterbody is first/last observed at having at least `SurfaceAreaThreshold`% of the total surface area of the waterbody observed as wet. This attribute is used as a proxy for construction date for built waterbody structures. This date is then appended to the waterbody polygon shapefile as an attribute.
1. Load in the required modules
2. Set up the file paths for inputs/outputs
3. Read in the DEA Waterbodies shapefile
4. Loop through each timeseries and find the quarter where the waterbody is first/last observed at having at least X% of the total surface area of the waterbody observed as wet
5. Append this date to the shapefile
6. Write out an updated shapefile
7. Load in the shapefile again (done as a separate step so you can choose to start to run the notebook from here without re-doing the analysis)
8. Groupby the dates and sum the area of all the polygons first/last observed in that quarter
9. Plot the results
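The core of step 4 can be sketched on a toy timeseries. The dates and wet percentages below are made up, but the column name matches the one used in the real DEA Waterbodies timeseries csvs:

```python
import numpy as np
import pandas as pd

# Hypothetical observations for one waterbody (made-up values)
dates = pd.to_datetime(["2000-01-15", "2000-05-20", "2000-08-10", "2001-02-01"])
ts = pd.DataFrame({"Wet pixel percentage": [10, 30, 65, 80]}, index=dates)

# Aggregate to quarterly maxima; label='left' marks each bin by its
# left edge, as in the notebook below
quarterly = ts.resample("Q", label="left").agg("max")

# Find the first quarter where the wet surface area threshold is met
threshold = 50
idx = np.where(quarterly["Wet pixel percentage"] >= threshold)[0][0]
first_quarter = quarterly.iloc[idx]
print(first_quarter.name.date())
```

Using index `-1` instead of `0` in the last lookup gives the last quarter that meets the threshold instead of the first.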
***
## Getting started
To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
### Load packages
Import Python packages that are used for the analysis.
```
%matplotlib inline
import matplotlib.pyplot as plt
import fiona
import geopandas as gp
import numpy as np
import pandas as pd
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
```
### Analysis parameters
* `WaterBodyRiverFiltered`: file path to the DEA Waterbodies shapefile to read in
* `CSVFolder`: file path to the folder of DEA Waterbodies timeseries
* `SurfaceAreaThreshold`: e.g. 50. Select the percentage of the total surface area of the waterbody that must have been observed as wet as a single time period.
* `FirstorLastIndex`: e.g. {'First': 0} for first, or {'Last': -1} for last. Set which index to pull out depending on whether you want to find the first time the surface area wetness threshold was met, or the last time it was.
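A quick illustration of how the `FirstorLastIndex` dictionary is used throughout this notebook: its single key labels the output attribute, and its value (0 or -1) indexes into the list of matching quarters (the quarter strings below are hypothetical):

```python
quarters = ["1995-03-31", "2001-06-30", "2010-09-30"]  # hypothetical matches

FirstorLastIndex = {'First': 0}
label = list(FirstorLastIndex.keys())[0]
index = list(FirstorLastIndex.values())[0]
print(label, quarters[index])  # First 1995-03-31

FirstorLastIndex = {'Last': -1}
label = list(FirstorLastIndex.keys())[0]
index = list(FirstorLastIndex.values())[0]
print(label, quarters[index])  # Last 2010-09-30
```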
```
WaterBodyRiverFiltered = '/g/data/r78/cek156/dea-notebooks/Scientific_workflows/DEAWaterbodies/AusAllTime01-005HybridWaterbodies/AusWaterBodiesFINALRiverFiltered.shp'
CSVFolder = '/g/data/r78/cek156/dea-notebooks/Scientific_workflows/DEAWaterbodies/timeseries_aus_uid/'
SurfaceAreaThreshold = 50
FirstorLastIndex = {'First': 0}
```
### Read in data and loop through each timeseries
```
WaterBodyRiverFilteredShapes = gp.read_file(WaterBodyRiverFiltered)
# Set up an empty attribute to write into
WaterBodyRiverFilteredShapes[f'{list(FirstorLastIndex.keys())[0]}{SurfaceAreaThreshold}%'] = -999
PolygonsThatDidntWork = []
for shapes in fiona.open(WaterBodyRiverFiltered):
polyName = shapes['properties']['UID']
# Read in the correct csv file
FileToLoad = f'{CSVFolder}{polyName[:4]}/{polyName}.csv'
try:
Timeseries = pd.read_csv(FileToLoad)
except:
        print(f"Can't open {FileToLoad}")
continue
# Change the date column to an actual datetime object
Timeseries['Observation Date'] = pd.to_datetime(
Timeseries['Observation Date'])
Timeseries = Timeseries.set_index(['Observation Date'])
    # Aggregate the timeseries quarterly
    QuarterlyAveraged = Timeseries.resample('Q', label='left').agg('max')
    try:
        # Find the first/last time that the timeseries meets the threshold
        IndexExceeds = np.where(
            QuarterlyAveraged['Wet pixel percentage'] >=
            SurfaceAreaThreshold)[0][list(FirstorLastIndex.values())[0]]
        TimeExceeds = QuarterlyAveraged.iloc[IndexExceeds]
        DateString, BitToThrowOut = str(TimeExceeds.name).split(' ')
except:
print('Another one that didn\'t work...')
PolygonsThatDidntWork.append(polyName)
continue
# Put the values back into the dataframe
WaterBodyRiverFilteredShapes.at[WaterBodyRiverFilteredShapes.loc[
WaterBodyRiverFilteredShapes['UID'] ==
polyName].index, f'{list(FirstorLastIndex.keys())[0]}{SurfaceAreaThreshold}%'] = DateString
WaterBodyRiverFilteredShapes.to_file(
f'/g/data/r78/cek156/dea-notebooks/Scientific_workflows/DEAWaterbodies/AusAllTime01-005HybridWaterbodies/AusWaterBodiesFINALRiverFilteredTimeseries{list(FirstorLastIndex.keys())[0]}{SurfaceAreaThreshold}.shp'
)
```
## Calculate some statistics on the timing of waterbodies wet observations
Load back in the new appended shapefile and add up the total surface area of waterbodies that were first/last seen as at least X% wet in each quarter through time.
*Note that this notebook can be run starting from this point, ignoring the processing cell above if it has previously been run.*
```
AllTheData = gp.read_file(f'/g/data/r78/cek156/dea-notebooks/Scientific_workflows/DEAWaterbodies/AusAllTime01-005HybridWaterbodies/AusWaterBodiesFINALRiverFilteredTimeseries{list(FirstorLastIndex.keys())[0]}{SurfaceAreaThreshold}.shp')
```
### Group the data by quarter, remove missing values and convert to $km^2$
```
# Group all the waterbodies by quarters and sum them up
GroupedData = AllTheData.groupby(by=f'{list(FirstorLastIndex.keys())[0]}{SurfaceAreaThreshold}%').sum()
# Drop the -999 missing values
CleanedData = GroupedData.drop('-999')
# Convert the quarter dates to a pandas datetime object
CleanedData.index = pd.to_datetime(CleanedData.index)
# Convert from m2 to km2
CleanedData['area'] = CleanedData['area'] / 1000000
```
### Plot the results
```
plt.figure(figsize=[10, 8])
plt.plot(CleanedData.area.cumsum())
plt.xlim('1992-01-01', '2018-01-01')
plt.ylim(40000, 55000)
plt.ylabel('Total area of water bodies ($km^2$)')
plt.xlabel(f'Time each waterbody {list(FirstorLastIndex.keys())[0]} observed at having at least {SurfaceAreaThreshold}% \n'
'of the total surface area of the waterbody observed as wet')
plt.title(f'Total area of waterbodies {list(FirstorLastIndex.keys())[0]} observed as {SurfaceAreaThreshold}% wet each quarter')
plt.rcParams["font.size"] = "14"
```
***
## Additional information
**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).
**Last modified:** January 2020
**Compatible datacube version:** N/A
## Tags
Browse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from utils import *
import altair as alt
import streamlit as st
# Importing the dataset
df = pd.read_json("dashboard_cars.json", lines = True)
# Using the extra_variables function
df = extra_variables(df)
# Using the clean_df aggregation function
df = clean_df(df)
df.head()
df.isna().sum()
df.brand.value_counts()
df.motorpower.value_counts()
df.model.value_counts()
df.info()
df['mileage'].describe()
def scatter_price_mileage(df):
    selection = alt.selection_multi(fields=['brand'], bind='legend')
    domain = ['chevrolet', 'volkswagen', 'ford']
    range_ = ['steelblue', 'mediumvioletred', 'silver']
    chart = alt.Chart(df).mark_point().encode(
        x='price:Q',
        y='mileage',
        color=alt.Color('brand', scale=alt.Scale(domain=domain, range=range_)),
        opacity=alt.condition(selection, alt.value(1), alt.value(0.1)),
        tooltip=['brand:N', 'model:N', 'price:N', 'mileage:N']
    ).add_selection(
        selection
    )
    return chart
scatter_price_mileage(df)
df.model.value_counts()
def mean_price(df):
selection = alt.selection_multi(fields=['model'], bind='legend')
domain = ['focus', 'cruze', 'ecosport', 'fiesta', 'spin', 'cobalt', 'onix', 'prisma', 'fusion',
'tracker', 'polo', 'jetta', 'golf', 'escort']
range_ = ['#416ca6', '#7b9bc3', '#e9e9e9', '#7eb6d9','#4f4b43','#2c4786', '#425559', '#dce2f2','#011c40', '#DC143C',
'#32CD32', '#8A2BE2', '#D2691E', '#000080']
chart = alt.Chart(df).mark_bar().encode(
alt.X('mean(price)', axis=alt.Axis(tickSize=0)),
alt.Y('model', stack='center'),
#alt.Color('model:N', scale=alt.Scale(scheme='tableau20')),
color=alt.Color('model', scale=alt.Scale(domain=domain, range=range_)),
opacity=alt.condition(selection, alt.value(1), alt.value(0.1))
).add_selection(
selection
)
return chart
mean_price(df)
def model_regdate_count(df):
selection = alt.selection_multi(fields=['regdate'], bind='legend')
chart = alt.Chart(df).mark_bar().encode(
alt.X('count()', axis=alt.Axis(tickSize=0)),
alt.Y('model', sort = alt.Sort(encoding = 'x', order= 'descending')),
tooltip = ['regdate','count()'],
opacity=alt.condition(selection, alt.value(1), alt.value(0.1)),
color = 'regdate:O'
).add_selection(
selection
)
return chart
model_regdate_count(df)
def financial_regdate(df):
    selection = alt.selection_multi(fields=['financial'])
chart = alt.Chart(df).mark_bar().encode(
alt.X('count()', axis=alt.Axis(tickSize=0)),
alt.Y('financial', sort = alt.Sort(encoding = 'x', order= 'descending')),
tooltip = ['count()'],
opacity=alt.condition(selection, alt.value(1), alt.value(0.1)),
color = 'financial:O'
).add_selection(
selection
)
return chart
financial_regdate(df)
def model_power_count(df):
selection = alt.selection_multi(fields=['motorpower'], bind='legend')
chart = alt.Chart(df).mark_bar().encode(
x = 'count()',
y = alt.Y('model', sort = alt.Sort(encoding = 'x', order= 'descending')),
tooltip = ['motorpower','count()'],
opacity=alt.condition(selection, alt.value(1), alt.value(0.1)),
color = 'motorpower:O'
).add_selection(
selection
)
return chart
model_power_count(df)
def model_power_price(df):
selection = alt.selection_multi(fields=['motorpower'], bind='legend')
chart = alt.Chart(df).mark_bar().encode(
x = 'mean(price)',
y = alt.Y('model', sort = alt.Sort(encoding = 'x', order= 'descending')),
tooltip = ['motorpower','mean(price)'],
opacity=alt.condition(selection, alt.value(1), alt.value(0.1)),
color = 'motorpower:O'
).add_selection(
selection
)
return chart
model_power_price(df)
df.columns.to_list()
def data_info(df):
cols = ['vidro elétrico',
'air bag',
'trava elétrica',
'ar condicionado',
'direção hidráulica',
'alarme',
'som',
'sensor de ré',
'camera de ré']
st.write(df[cols].sum())
st.write(df['model'].value_counts())
st.write(df['mileage'].describe().T)
cols = ['vidro elétrico',
'air bag',
'trava elétrica',
'ar condicionado',
'direção hidráulica',
'alarme',
'som',
'sensor de ré',
'camera de ré']
print("Most common equipment: \n", df[cols].sum())
df['price'].describe()
df['model'].value_counts()
path = "dashboard_cars.json"
car_data = pd.read_json(path, lines = True)
add_extra_features = extra_variables(car_data)
clean_data = clean_df(add_extra_features)
```
| github_jupyter |
```
from bs4 import BeautifulSoup
from fake_useragent import UserAgent
import requests
ua = UserAgent()
headers = {
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-MY,en;q=0.9,en-US;q=0.8,ms;q=0.7',
'cache-control': 'max-age=0',
'cookie': 'perseusRolloutSplit=2; dhhPerseusSessionId=1572777846270.344129640382112400.tqve6035lc8; dhhPerseusGuestId=1572773675054.296431960063008960.th00zfvm5m; __cfduid=d49830dd8faef655d8fd7859443a1fd1a1572773671; hl=en; AppVersion=02c2d36; ld_key=118.100.76.150; _pxhd=6097c3582e3b323a10236581cc2f09942a843225042959a6592317fbc32efa2c:23bbaec1-fe1d-11e9-b8fd-5d9da8797a65; optimizelyEndUserId=oeu1572773673643r0.3029206072281285; _pxvid=23bbaec1-fe1d-11e9-b8fd-5d9da8797a65; perseusRolloutSplit=2; dhhPerseusGuestId=1572773675054.296431960063008960.th00zfvm5m; _gcl_au=1.1.1974466263.1572773675; _ga=GA1.2.834634854.1572773675; _gid=GA1.2.733629929.1572773675; _hjid=cbf68d6c-0e9a-4343-87da-bdb6dd825dc6; __ssid=b43de24b73336ad349e495d33424f8e; ViewContentStatus=true; _dc_gtm_UA-90537345-1=1; _dc_gtm_UA-50811932-30=1; _px3=3c729ac794d8d45c7fe754cbd67790d71474a785a2627fe57b8dd1354665be4e:fNXD1m3oMRkZmH5yzruS63pfq2wij+y7tLFTLJITIoz/x88BJDR5YkmH2xWELbrnp0cfUmp+8OXMnJIjFHB8Qw==:1000:AJqIWGYwqD+LpejoGvA2HPJbAfvzQ+jAwi0RQCe+zvgkpfJaiErJeIMGRrPiJG7PVkepT8hmntejklhnnxF8qR57jOzmSCAwb5kSaOuaqH2+u7jScpM+KnlZgmozgyUaqj6QccbWsJ1FO2bNoyzvj//TsuWcVj6EeyYubyxqnQ4=; _tq_id.TV-81365445-1.cf54=fc76d66efbbf4a71.1572773676.0.1572779104..',
'sec-fetch-mode': 'navigate',
'sec-fetch-site': 'none',
'sec-fetch-user': '?1',
'upgrade-insecure-requests': '1',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
}
# headers = {"User-Agent": ua.chrome,
# "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
# "Accept-Language": "en-US,en;q=0.5",
# "Accept-Encoding": "gzip, deflate", "DNT": "1", "Connection": "close",
# "Upgrade-Insecure-Requests": "13434"}
cookies = {'JSESSIONID': 'fbadab78eef7d2d'}
import json
with open('foodpanda-restaurant.json') as fopen:
restaurants = json.load(fopen)
restaurants.keys()
restaurants['Kuala Lumpur']['Q Bistro Nasi Kandar']
import time
parsed = {}
quit = False
for no, (city, city_values) in enumerate(list(restaurants.items())):
if quit:
break
for no_nested, (restaurant, restaurant_values) in enumerate(list(city_values.items())):
if '%s <> %s'%(city, restaurant) in parsed:
print('%s <> %s'%(city, restaurant), ' already inside parsed')
continue
try:
response = requests.get('https://www.foodpanda.my' + restaurant_values['link'],
headers = headers, cookies = cookies, timeout = 60)
soup = BeautifulSoup(response.content, 'html.parser')
if 'Access to this page has been denied' in soup.title.contents[0]:
quit = True
print('quit at ', no, no_nested)
break
foods = soup.find_all("div", class_="desktop-cart mobile-cart__hidden")
print('PARSING ', restaurant)
data = []
for food in foods:
try:
data.append(json.loads(food.get('data-vendor')))
except:
print('ERROR 404 food ', restaurant)
parsed['%s <> %s'%(city, restaurant)] = data
restaurants[city][restaurant]['data'] = data
except:
print('ERROR 404 ', restaurant)
len(parsed)
import json
with open('foodpanda-foods.json', 'w') as fopen:
json.dump(restaurants, fopen)
```
| github_jupyter |
```
# The purpose of this notebook is to find out what's the smallest network
# that can be used to get max prediction score
%run "standardScalerSetup.ipynb"
#loads all the preprocessing libraries and prepares the dataframe
from sklearn import metrics
# Neural Nets imports
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, BatchNormalization
from tensorflow.keras.optimizers import Adam, Adadelta, Adagrad, Adamax, SGD
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint
from tensorflow.keras.regularizers import l1, l2, l1_l2
verboseLevel=0
validationSplit=0.2
batchSize=30
epochs=1000
# callback preparation
reduce_lr = ReduceLROnPlateau(monitor='val_loss',
factor=0.5,
patience=2,
verbose=verboseLevel,
mode='min',
min_lr=0.001)
from tensorflow.keras.utils import multi_gpu_model
inputSize = 9
colList = ['HiddenLayers', 'R2Score', 'MAE', 'MSE', 'H5FileName', 'TrainHistory', 'TrainPredictions']
# Check Large Dataset first
def createModel(dataFrame, layerSize, loops, y_train, X_train, y_test, X_test, targetScaler, labelSet):
print(f'Creating model using layer size = {layerSize} on set = {labelSet}.\n')
for i in range(loops):
print(f'Training on {i} hidden layers\n')
model = Sequential()
model.add(Dense(layerSize, kernel_initializer='normal',
kernel_regularizer=l1(0.01), bias_regularizer=l1(0.01),
#kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01),
#kernel_regularizer=l1_l2(0.01), bias_regularizer=l1_l2(0.01),
input_dim=inputSize, activation=custom_activation))
for j in range(i):
model.add(Dense(layerSize,
kernel_regularizer=l1(0.01), bias_regularizer=l1(0.01),
#kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01),
#kernel_regularizer=l1_l2(0.01), bias_regularizer=l1_l2(0.01),
kernel_initializer='normal', activation=custom_activation))
model.add(BatchNormalization())
model.add(Dense(1, kernel_initializer='normal',
kernel_regularizer=l1(0.01), bias_regularizer=l1(0.01),
#kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01),
#kernel_regularizer=l1_l2(0.01), bias_regularizer=l1_l2(0.01),
activation='linear'))
# only set this if GPU capabilities available
# model = multi_gpu_model(model, gpus=2)
optmzr=Adam(lr=0.001)
model.compile(optimizer=optmzr, loss='mae', metrics=['mae'])
model_h5_name = 'mlp_' + str(layerSize)+ '_' + str(i) + '_model_std_' + labelSet + '_L1.h5'
checkpoint_nn_std = ModelCheckpoint(model_h5_name,
monitor='val_loss',
verbose=verboseLevel,
save_best_only=True,
mode='min')
callbacks_list_nn_std = [checkpoint_nn_std, reduce_lr]
history_MLP_std = model.fit(X_train.to_numpy(), y_train,
batch_size=batchSize,
validation_split=validationSplit,
epochs=epochs, verbose=verboseLevel,
callbacks=callbacks_list_nn_std)
#reload the best model!
model_new = load_model(model_h5_name)
#Predict
y_pred_scaled = model_new.predict(X_test.to_numpy())
#Evaluate metrics
y_pred = targetScaler.inverse_transform(y_pred_scaled)
r2_score = metrics.r2_score(y_test, y_pred)
mae = metrics.mean_absolute_error(y_test, y_pred)
mse = metrics.mean_squared_error(y_test, y_pred)
#store values
row = [i, r2_score, mae, mse, model_h5_name, history_MLP_std, y_pred]
df = pd.DataFrame(np.array(row).reshape(1, len(colList)), columns=colList)
dataFrame = dataFrame.append(df, ignore_index=True)
tf.keras.backend.clear_session()
del(model)
del(model_new)
return dataFrame
%%time
dataFrame = pd.DataFrame(columns=colList)
layerSize = 64
loops = 15
dataFrame = createModel(dataFrame, layerSize, loops,
y_train_scaled_std_all, X_train_all_std,
y_test_all_std, X_test_all_std,
targetStdScalerAll, 'all')
dataFrame
#Plot train vs validation
plt.figure(figsize=(20,10))
#plt.plot(dataFrame['R2Score'])
plt.plot(dataFrame['MAE'])
#plt.plot(dataFrame['MSE'])
plt.title('Training Scores MLP')
plt.ylabel('Score')
plt.xlabel('Iteration')
plt.legend(['MAE'], loc='upper right')
plt.show()
# Determine the IDX value where the MAE is smallest
minMaeIDX = dataFrame.loc[dataFrame['MAE']==dataFrame['MAE'].min()].index[0]
dataFrame.iloc[minMaeIDX]
minMaeAWS = dataFrame.iloc[minMaeIDX]['MAE']
minR2AWS = dataFrame.iloc[minMaeIDX]['R2Score']
modelNameAWS = "MLP_64_Std"
posAWS = minMaeIDX
history_MLP = dataFrame['TrainHistory'][minMaeIDX]
#Plot train vs validation
plt.figure(figsize=(20,10))
plt.plot(history_MLP.history['loss'])
plt.plot(history_MLP.history['val_loss'])
plt.title('Validation vs Train loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
y_pred_MLP_std = dataFrame['TrainPredictions'][minMaeIDX]
# Plot prediction vs original
plt.figure(figsize=(20,10))
plt.scatter(range(y_test_all_std.shape[0]),y_test_all_std,label="Original Data", alpha=0.6, c='red')
plt.scatter(range(y_pred_MLP_std.shape[0]),y_pred_MLP_std,label="Predicted Data",
alpha=0.6, c='black')
plt.ylabel('Total Messages')
plt.xlabel('Records')
plt.title('MLP Std Model for X_test dataset prediction vs original')
plt.legend()
plt.show()
%%time
dataFrame128 = pd.DataFrame(columns=colList)
layerSize = 128
#loops = 15
dataFrame128 = createModel(dataFrame128, layerSize, loops,
y_train_scaled_std_all, X_train_all_std,
y_test_all_std, X_test_all_std,
targetStdScalerAll, 'all')
dataFrame128
# Determine the IDX value where the MAE is smallest
minMaeIDX = dataFrame128.loc[dataFrame128['MAE']==dataFrame128['MAE'].min()].index[0]
dataFrame128.iloc[minMaeIDX]
if dataFrame128.iloc[minMaeIDX]['MAE'] < minMaeAWS :
minMaeAWS = dataFrame128.iloc[minMaeIDX]['MAE']
minR2AWS = dataFrame128.iloc[minMaeIDX]['R2Score']
modelNameAWS = "MLP_128_Std"
posAWS = minMaeIDX
#Plot train vs validation
plt.figure(figsize=(20,10))
plt.rcParams.update({'font.size': 20})
#plt.plot(100*dataFrame128['R2Score'])
plt.plot(dataFrame128['MAE'])
#plt.plot(dataFrame128['MSE'])
plt.title('Training Scores MLP')
plt.ylabel('Score')
plt.xlabel('Iteration')
plt.legend(['MAE'], loc='upper right')
plt.show()
history_MLP = dataFrame128['TrainHistory'][minMaeIDX]
#Plot train vs validation
plt.figure(figsize=(20,10))
plt.rcParams.update({'font.size': 20})
plt.plot(history_MLP.history['loss'])
plt.plot(history_MLP.history['val_loss'])
plt.title('Validation vs Train loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
y_pred_MLP_std = dataFrame128['TrainPredictions'][minMaeIDX]
# Plot prediction vs original
plt.figure(figsize=(20,10))
plt.rcParams.update({'font.size': 20})
plt.scatter(range(y_test_all_std.shape[0]),y_test_all_std,label="Original Data", alpha=0.6, c='red')
plt.scatter(range(y_pred_MLP_std.shape[0]),y_pred_MLP_std,label="Predicted Data",
alpha=0.6, c='black')
plt.ylabel('Total Messages')
plt.xlabel('Records')
plt.title('MLP Std Model for X_test dataset prediction vs original')
plt.rc('legend', fontsize=18)
plt.legend(loc='best', bbox_to_anchor=(0.75, 0, 0.25, 0.25))
plt.show()
%%time
dataFrame32 = pd.DataFrame(columns=colList)
layerSize = 32
#loops = 15
dataFrame32 = createModel(dataFrame32, layerSize, loops,
y_train_scaled_std_all, X_train_all_std,
y_test_all_std, X_test_all_std,
targetStdScalerAll, 'all')
# Determine the IDX value where the MAE is smallest
minMaeIDX = dataFrame32.loc[dataFrame32['MAE']==dataFrame32['MAE'].min()].index[0]
dataFrame32.iloc[minMaeIDX]
if dataFrame32.iloc[minMaeIDX]['MAE'] < minMaeAWS :
minMaeAWS = dataFrame32.iloc[minMaeIDX]['MAE']
minR2AWS = dataFrame32.iloc[minMaeIDX]['R2Score']
modelNameAWS = "MLP_32_Std"
posAWS = minMaeIDX
#Plot train vs validation
plt.figure(figsize=(20,10))
plt.rcParams.update({'font.size': 20})
#plt.plot(100*dataFrame32['R2Score'])
plt.plot(dataFrame32['MAE'])
#plt.plot(dataFrame32['MSE'])
plt.title('Training Scores MLP')
plt.ylabel('Score')
plt.xlabel('Iteration')
plt.legend(['MAE'], loc='upper right')
plt.show()
history_MLP = dataFrame32['TrainHistory'][minMaeIDX]
#Plot train vs validation
plt.figure(figsize=(20,10))
plt.rcParams.update({'font.size': 20})
plt.plot(history_MLP.history['loss'])
plt.plot(history_MLP.history['val_loss'])
plt.title('Validation vs Train loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
y_pred_MLP_std = dataFrame32['TrainPredictions'][minMaeIDX]
# Plot prediction vs original
plt.figure(figsize=(20,10))
plt.rcParams.update({'font.size': 20})
plt.scatter(range(y_test_all_std.shape[0]),y_test_all_std,label="Original Data", alpha=0.6, c='red')
plt.scatter(range(y_pred_MLP_std.shape[0]),y_pred_MLP_std,label="Predicted Data",
alpha=0.6, c='black')
plt.ylabel('Total Messages')
plt.xlabel('Records')
plt.title('MLP Std Model for X_test dataset prediction vs original')
plt.legend()
plt.show()
#### Base Std
%%time
dataFrame32_1 = pd.DataFrame(columns=colList)
layerSize = 32
#loops = 15
dataFrame32_1 = createModel(dataFrame32_1, layerSize, loops,
y_train_scaled_std_base, X_train_base_std,
y_test1_summaryL1Z2_std, X_test1_summaryL1Z2_std,
targetStdScalerBase, 'base')
dataFrame32_1
# Determine the IDX value where the MAE is smallest
minMaeIDX = dataFrame32_1.loc[dataFrame32_1['MAE']==dataFrame32_1['MAE'].min()].index[0]
dataFrame32_1.iloc[minMaeIDX]
#if dataFrame32_1.iloc[minMaeIDX]['MAE'] < minMaeAWS :
minMaeAWS_base = dataFrame32_1.iloc[minMaeIDX]['MAE']
minR2AWS_base = dataFrame32_1.iloc[minMaeIDX]['R2Score']
modelNameAWS_base = "MLP_32_Base_Std"
posAWS_base = minMaeIDX
#Plot train vs validation
plt.figure(figsize=(20,10))
#plt.plot(100*dataFrame32['R2Score'])
plt.plot(dataFrame32_1['MAE'])
#plt.plot(dataFrame32['MSE'])
plt.title('Training Scores MLP')
plt.ylabel('Score')
plt.xlabel('Iteration')
plt.legend(['MAE'], loc='upper right')
plt.show()
history_MLP = dataFrame32_1['TrainHistory'][minMaeIDX]
#Plot train vs validation
plt.figure(figsize=(20,10))
plt.plot(history_MLP.history['loss'])
plt.plot(history_MLP.history['val_loss'])
plt.title('Validation vs Train loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
y_pred_MLP_std = dataFrame32_1['TrainPredictions'][minMaeIDX]
# Plot prediction vs original
plt.figure(figsize=(20,10))
plt.scatter(range(y_test1_summaryL1Z2_std.shape[0]),y_test1_summaryL1Z2_std,label="Original Data", alpha=0.6, c='red')
plt.scatter(range(y_pred_MLP_std.shape[0]),y_pred_MLP_std,label="Predicted Data",
alpha=0.6, c='black')
plt.ylabel('Total Messages')
plt.xlabel('Records')
plt.title('MLP Std Model for X_test dataset prediction vs original')
plt.legend()
plt.show()
%%time
dataFrame64_1 = pd.DataFrame(columns=colList)
layerSize = 64
#loops = 15
dataFrame64_1 = createModel(dataFrame64_1, layerSize, loops,
y_train_scaled_std_base, X_train_base_std,
y_test1_summaryL1Z2_std, X_test1_summaryL1Z2_std,
targetStdScalerBase, 'base')
# Determine the IDX value where the MAE is smallest
minMaeIDX = dataFrame64_1.loc[dataFrame64_1['MAE']==dataFrame64_1['MAE'].min()].index[0]
dataFrame64_1.iloc[minMaeIDX]
if dataFrame64_1.iloc[minMaeIDX]['MAE'] < minMaeAWS_base :
minMaeAWS_base = dataFrame64_1.iloc[minMaeIDX]['MAE']
minR2AWS_base = dataFrame64_1.iloc[minMaeIDX]['R2Score']
modelNameAWS_base = "MLP_64_Base_Std"
posAWS_base = minMaeIDX
#Plot train vs validation
plt.figure(figsize=(20,10))
#plt.plot(100*dataFrame32['R2Score'])
plt.plot(dataFrame64_1['MAE'])
#plt.plot(dataFrame32['MSE'])
plt.title('Training Scores MLP')
plt.ylabel('Score')
plt.xlabel('Iteration')
plt.legend(['MAE'], loc='upper right')
plt.show()
history_MLP = dataFrame64_1['TrainHistory'][minMaeIDX]
#Plot train vs validation
plt.figure(figsize=(20,10))
plt.plot(history_MLP.history['loss'])
plt.plot(history_MLP.history['val_loss'])
plt.title('Validation vs Train loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
y_pred_MLP_std = dataFrame64_1['TrainPredictions'][minMaeIDX]
# Plot prediction vs original
plt.figure(figsize=(20,10))
plt.scatter(range(y_test1_summaryL1Z2_std.shape[0]),y_test1_summaryL1Z2_std,label="Original Data", alpha=0.6, c='black')
plt.scatter(range(y_pred_MLP_std.shape[0]),y_pred_MLP_std,label="Predicted Data",
alpha=0.6, c='red')
plt.ylabel('Total Messages')
plt.xlabel('Records')
plt.title('MLP Std Model for X_test dataset prediction vs original')
plt.legend()
plt.show()
%%time
dataFrame128_1 = pd.DataFrame(columns=colList)
layerSize = 128
#loops = 15
dataFrame128_1 = createModel(dataFrame128_1, layerSize, loops,
y_train_scaled_std_base, X_train_base_std,
y_test1_summaryL1Z2_std, X_test1_summaryL1Z2_std,
targetStdScalerBase, 'base')
# Determine the IDX value where the MAE is smallest
minMaeIDX = dataFrame128_1.loc[dataFrame128_1['MAE']==dataFrame128_1['MAE'].min()].index[0]
dataFrame128_1.iloc[minMaeIDX]
if dataFrame128_1.iloc[minMaeIDX]['MAE'] < minMaeAWS_base :
minMaeAWS_base = dataFrame128_1.iloc[minMaeIDX]['MAE']
minR2AWS_base = dataFrame128_1.iloc[minMaeIDX]['R2Score']
modelNameAWS_base = "MLP_128_Base_Std"
posAWS_base = minMaeIDX
#Plot train vs validation
plt.figure(figsize=(20,10))
#plt.plot(100*dataFrame32['R2Score'])
plt.plot(dataFrame128_1['MAE'])
#plt.plot(dataFrame32['MSE'])
plt.title('Training Scores MLP')
plt.ylabel('Score')
plt.xlabel('Iteration')
plt.legend(['MAE'], loc='upper right')
plt.show()
history_MLP = dataFrame128_1['TrainHistory'][minMaeIDX]
#Plot train vs validation
plt.figure(figsize=(20,10))
plt.plot(history_MLP.history['loss'])
plt.plot(history_MLP.history['val_loss'])
plt.title('Validation vs Train loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
y_pred_MLP_std = dataFrame128_1['TrainPredictions'][minMaeIDX]
# Plot prediction vs original
plt.figure(figsize=(20,10))
plt.scatter(range(y_test1_summaryL1Z2_std.shape[0]),y_test1_summaryL1Z2_std,label="Original Data", alpha=0.6, c='black')
plt.scatter(range(y_pred_MLP_std.shape[0]),y_pred_MLP_std,label="Predicted Data",
alpha=0.6, c='red')
plt.ylabel('Total Messages')
plt.xlabel('Records')
plt.title('MLP Std Model for X_test dataset prediction vs original')
plt.legend()
plt.show()
minMaeAWS, minR2AWS, modelNameAWS, posAWS
minMaeAWS_base, minR2AWS_base, modelNameAWS_base, posAWS_base
```
```
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib
from matplotlib.colors import TwoSlopeNorm  # formerly DivergingNorm, renamed in matplotlib 3.2
from matplotlib.colors import to_rgba_array
import numpy as np
from joblib import dump, load
from tqdm import tqdm
import pandas as pd
tqdm.pandas(ascii=True)
import seaborn as sns
import os
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.Draw import rdMolDraw2D
from rdkit.Chem import rdFMCS
from rdkit.Chem import Draw
from IPython import display
from base64 import b64decode
import molmap
extractor = molmap.feature.fingerprint.Extraction({'PubChemFP':{}, 'MACCSFP':{}, "PharmacoErGFP":{}})
sns.set(style='white', font='sans-serif', font_scale=1.5)
s = sns.color_palette("bwr_r", n_colors=10)
sns.palplot(s)
sns.__version__
from chembench import dataset
data = dataset.load_BACE()
df = data.df
df_feature_imp = pd.read_csv('./feature_importance.csv', index_col = 'v')
X = extractor.batch_transform(df.smiles)
dfx = pd.DataFrame(X, columns = extractor.bitsinfo.IDs)*1
dfx.index = df.smiles
weight = 'Feature importance based on training set'
from molmap.feature.fingerprint import smarts_maccskey, smarts_pharmacophore, smarts_pubchem
smt1 = smarts_pubchem.smartsPatts
smt2 = smarts_maccskey.smartsPatts
_, bits = molmap.feature.fingerprint.pharmErGfp.GetPharmacoErGFPs(Chem.MolFromSmiles('C'), return_bitInfo=True)
erg = pd.DataFrame([bits]).T
erg.index = ['PharmacoErGFP%s' % i for i in erg.index]
smt3 = erg[0].to_dict()
smt3_map = smarts_pharmacophore.pharmacophore_smarts
smarts_all = []
for fp, s in df_feature_imp.iterrows():
if s.Subtypes == 'PubChemFP':
smarts = smt1.get(fp)
if s.Subtypes == 'MACCSFP':
smarts = smt2.get(fp)
if s.Subtypes == 'PharmacoErGFP':
smarts = smt3.get(fp)
smarts_all.append(smarts)
df_feature_imp['smarts'] = smarts_all
df_map = df_feature_imp
df_map = df_map.dropna()
df_map.head(5)
def hit_bonds_from_atoms(mol, patt, hit_ats):
bond_lists = []
for i, hit_at in enumerate(hit_ats):
hit_at = list(hit_at)
bond_list = []
for bond in patt.GetBonds():
a1 = hit_at[bond.GetBeginAtomIdx()]
a2 = hit_at[bond.GetEndAtomIdx()]
bond_list.append(mol.GetBondBetweenAtoms(a1, a2).GetIdx())
bond_lists.append(bond_list)
return bond_lists
def _apply_mean(g):
"""
average to atom & bond's contribution
"""
color_weight = g.color_weight.mean()
radius_weight = g.radius_weight.mean()
return radius_weight, color_weight
def _apply_scale(x, vmin, vmax):
"""
scale to 0,1 (the same level)
"""
#vmin= 0.008 #0.006
#vmax= 0.06 #0.021
return (x - vmin)/(vmax - vmin)
def value2color(values, vmin, vmax, cmap):
# norm = DivergingNorm(vmin=values.min(), vcenter = values.mean(), vmax=values.max())
# v = norm(values)
v = _apply_scale(values, vmin, vmax)
weighted_colors = [matplotlib.colors.to_rgb(i) for i in cmap(v)]
return weighted_colors
def plot_fp(fp , i = 0):
a = dfx[[fp]]
a['y'] = df['Class'].tolist()
a = a[a.y==1]
m = a[a[fp]==1].index[i]
mol = Chem.MolFromSmiles(m)
if "MACCSFP" in fp:
smt, count = smt2.get(fp)
if "PubChemFP" in fp:
smt, count = smt1.get(fp)
patt = Chem.MolFromSmarts(smt)
hit_ats = mol.GetSubstructMatches(patt)
bond_lists = hit_bonds_from_atoms(mol, patt, hit_ats)
colours = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
atom_cols = {}
bond_cols = {}
atom_list = []
bond_list = []
for i, (hit_atom, hit_bond) in enumerate(zip(hit_ats, bond_lists)):
hit_atom = list(hit_atom)
for at in hit_atom:
atom_cols[at] = colours[1]
atom_list.append(at)
for bd in hit_bond:
bond_cols[bd] = colours[1]
bond_list.append(bd)
d = rdMolDraw2D.MolDraw2DSVG(300, 300)
rdMolDraw2D.PrepareAndDrawMolecule(d, mol, highlightAtoms=atom_list,
highlightAtomColors=atom_cols,
highlightBonds=bond_list,
highlightBondColors=bond_cols)
d.FinishDrawing()
svg = d.GetDrawingText()
bimg = None  # this helper renders SVG only; no PNG image is produced
return svg, hit_ats, bimg, patt, mol
def highlight_important_atom(smiles, cmap, core_mol = None, vmin = 0.007, vmax = 0.025, top = 50, ):
"""
#smiles = 'O1CC[C@@H](NC(=O)[C@@H](Cc2cc3cc(ccc3nc2N)-c2ccccc2C)C)CC1(C)C'
cmap: mpl color map
core_smiles: core to align the molecule
top: number of the top fingerprint to highlight, if -1, use all of it
"""
X = extractor.transform(smiles)
dfx = pd.DataFrame(X, index = extractor.bitsinfo.IDs, columns = [smiles])*1
dfxs = dfx.T[df_map.index]
## make alignment
mol = Chem.MolFromSmiles(smiles)
if core_mol != None:
AllChem.Compute2DCoords(core_mol)
AllChem.GenerateDepictionMatching2DStructure(mol, core_mol)
bits = dfxs.loc[smiles].to_frame(name = 'onbit').join(df_map)
onbits = bits[bits['onbit']==1]
all_hit = {}
for fp, series in onbits.iterrows():
color_weight = series[weight]
if series.Subtypes == 'PharmacoErGFP':
#continue
radius_weight = 0.2
p1, p2, erg_p = series.smarts
hit_ats = []
hit_bonds = []
# point-1
p1_hit_ats = []
p1_hit_bonds = []
for smt in smt3_map.get(p1):
patt = Chem.MolFromSmarts(smt)
hit_at = mol.GetSubstructMatches(patt)
hit_bond = hit_bonds_from_atoms(mol, patt, hit_at)
p1_hit_ats.extend(hit_at)
p1_hit_bonds.extend(hit_bond)
# point-2
p2_hit_ats = []
p2_hit_bonds = []
for smt in smt3_map.get(p2):
patt = Chem.MolFromSmarts(smt)
hit_at = mol.GetSubstructMatches(patt)
hit_bond = hit_bonds_from_atoms(mol, patt, hit_at)
p2_hit_ats.extend(hit_at)
p2_hit_bonds.extend(hit_bond)
if (len(p1_hit_ats) > 0) & (len(p2_hit_ats) > 0):
flag = False
for i in p1_hit_ats:
for j in p2_hit_ats:
for k in i:
for l in j:
v = abs(k-l)
if v==erg_p:
flag = True
break
if flag:
hit_ats.extend(p1_hit_ats)
hit_bonds.extend(p1_hit_bonds)
hit_ats.extend(p2_hit_ats)
hit_bonds.extend(p2_hit_bonds)
else:
hit_ats = []
hit_bonds = []
#remove duplicates
atbd = pd.DataFrame([hit_ats, hit_bonds]).T.drop_duplicates(0)
hit_ats = atbd[0].tolist()
hit_bonds = atbd[1].tolist()
else:
smarts = series.smarts[0]
try:
patt = Chem.MolFromSmarts(smarts)
hit_ats = list(mol.GetSubstructMatches(patt))
hit_bonds = hit_bonds_from_atoms(mol, patt, hit_ats)
except:
hit_ats = []
hit_bonds = []
radius_weight = 0.2
all_hit[fp] = {"smarts":series.smarts,
'color_weight': color_weight,
'radius_weight': radius_weight,
"hit_ats":hit_ats,
"hit_bonds":hit_bonds }
res = pd.Series(all_hit).apply(pd.Series)
res = res.iloc[:top]
unstack_res_atom = []
unstack_res_bond = []
for fp, hits in res.iterrows():
color_weight = hits.color_weight
radius_weight = hits.radius_weight
#print(fp, radius_weight)
for hit_atom, hit_bond in zip(hits.hit_ats, hits.hit_bonds):
for at in hit_atom:
unstack_res_atom.append({'color_weight':color_weight,
'radius_weight':radius_weight,
'atom':at})
for bd in hit_bond:
unstack_res_bond.append({'color_weight':color_weight,
'radius_weight':radius_weight,
'bond':bd})
dfres_atom = pd.DataFrame(unstack_res_atom)
dfres_bond = pd.DataFrame(unstack_res_bond)
atoms = dfres_atom.groupby('atom').apply(_apply_mean).apply(pd.Series)
atoms.columns = ['radii', 'color']
print(atoms.color.min(), atoms.color.max())
atoms.color = value2color(atoms.color.values, vmin, vmax, cmap)
atom_list = atoms.index.tolist()
atom_cols = atoms.color.to_dict()
atom_radis = atoms.radii.to_dict()
bonds = dfres_bond.groupby('bond').apply(_apply_mean).apply(pd.Series)
bonds.columns = ['radii', 'color']
print(bonds.color.min(), bonds.color.max())
bonds.color = value2color(bonds.color.values, vmin, vmax, cmap)
bond_cols = bonds.color.to_dict()
bond_list = bonds.index.tolist()
d = rdMolDraw2D.MolDraw2DCairo(500, 500)
rdMolDraw2D.PrepareAndDrawMolecule(d, mol, highlightAtoms=atom_list,
highlightAtomColors=atom_cols,
highlightBonds=bond_list,
highlightAtomRadii = atom_radis,
highlightBondColors=bond_cols
)
bimg = d.GetDrawingText()
d = rdMolDraw2D.MolDraw2DSVG(300, 300)
rdMolDraw2D.PrepareAndDrawMolecule(d, mol, highlightAtoms=atom_list,
highlightAtomColors=atom_cols,
highlightBonds=bond_list,
highlightBondColors=bond_cols)
d.FinishDrawing()
svg = d.GetDrawingText()
return bimg, svg, bits
```
## 01. define min,max and draw colorbar
```
top = 50
vmin = 0.008 #df_map[weight].iloc[:top].min()
vmax = 0.02 #df_map[weight].iloc[:top].max()
fig, ax = plt.subplots(figsize=(0.2, 10))
fig.subplots_adjust(bottom=0.5)
norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax)
cmap = mpl.colors.LinearSegmentedColormap.from_list("", ["red", "white","green"], N = 1000, gamma = 1.2)
#cmap = mpl.cm.cool
cb1 = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
norm=norm, #orientation='horizontal'
)
cb1.set_label('average importance')
fig.show()
fig.savefig('./cbar.svg', bbox_inches = 'tight')
```
## 01) clinical drugs https://www.nature.com/articles/nrd.2017.43/tables/1
```
dfd = pd.read_csv('../data/drugs_predict_all.csv')
dfd.head(5)
# s = [0, 1, 2]
# dfb = dfd[dfd.index.isin(s)].reset_index(drop=True)
# dfb = dfb.sort_values('MMNF', ascending=False).reset_index(drop=True)
# dfb
dfb = dfd.head(5)
sms = ['CC(=O)Nc1ccc(F)cc1', 'CC#Cc1cncc(C)c1', 'CC(=O)Nc1ccc(F)cc1']
for i in dfb.index:
s = dfb.iloc[i]
print(i, s.smiles)
if i == 2:
core =Chem.MolFromSmiles('CC#Cc1cccnc1')
else:
core = Chem.MolFromSmiles('CNC(C)=O')
ids = "drugs-%s-%s-%s" % (i, s['name'],s.MMNF.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./drugs/%s.svg' % ids , 'w') as f:
f.write(svg)
dfb = dfd.sort_values('MMNF', ascending=False).tail(3)
dfb.index = [5,6,7]
dfb
dfb.smiles.tolist()
for i in dfb.index:
s = dfb.loc[i]
print(i, s.smiles)
if i==6:
core = Chem.MolFromSmiles('Fc1ncccc1-c1ccccc1')
elif i == 7:
core = Chem.MolFromSmiles('C[C@H]1C[C@@]2(CN(C)S(=O)(=O)N2C)CCN1C')
else:
core = None
ids = "drugs-%s-%s-%s" % (i, s['name'],s.MMNF.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./drugs/%s.svg' % ids , 'w') as f:
f.write(svg)
```
## 02.draw scaffold-1
```
dfs = pd.read_csv('../data/scaffold_1.csv')
core = Chem.MolFromSmiles('Cc1ccccc1-c1ccc2nc(N)ccc2c1')
for i in dfs.index[:5]:
s = dfs.iloc[i]
ids = "scaffold1-%s-%s-%s" % (i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./scaffold1/%s.svg' % ids , 'w') as f:
f.write(svg)
core = Chem.MolFromSmiles('CCn1c(N)nc2cc(Cl)ccc12')
for i in dfs.index[5:]:
s = dfs.iloc[i]
ids = "scaffold1-%s-%s-%s" % (i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./scaffold1/%s.svg' % ids , 'w') as f:
f.write(svg)
```
## 03.draw non-inhibitors
```
dfs = pd.read_csv('../data/non-inhibitors.csv')
for i in dfs.index:
s = dfs.iloc[i]
core = Chem.MolFromSmiles(s.scaffold)
#print(i)
ids = "scaffold2-%s-%s-%s-%s" % (s.group, i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./non_inhibitor/%s.svg' % ids , 'w') as f:
f.write(svg)
```
## 04.draw inhibitors
```
dfs = pd.read_csv('../data/inhibitors.csv')
dfs
for i in dfs.index:
s = dfs.iloc[i]
core = Chem.MolFromSmiles(s.scaffold)
print(i)
ids = "scaffold-%s-%s-%s-%s" % (s.group, i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./inhibitor/%s.svg' % ids , 'w') as f:
f.write(svg)
```
# scaffold-21
```
dfs = pd.read_csv('../data/scaffold_21.csv')
dfs
dfs.iloc[:5].smiles.tolist()
for i in dfs.index[:5]:
core = Chem.MolFromSmiles('CC(O)C(Cc1ccccc1)NC(C)=O')
print(i)
s = dfs.iloc[i]
ids = "scaffold2-%s-%s-%s" % (i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./scaffold21/%s.svg' % ids , 'w') as f:
f.write(svg)
for i in dfs.index[5:]:
core = Chem.MolFromSmiles('CC(Cc1ccccc1)NC1=NC(C)(C)Cc2ccccc12')
print(i)
s = dfs.iloc[i]
ids = "scaffold2-%s-%s-%s" % (i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./scaffold21/%s.svg' % ids , 'w') as f:
f.write(svg)
dfs = pd.read_csv('../data/scaffold_22.csv')
dfs
dfs.iloc[3:].smiles.tolist()
for i in dfs.index[3:]:
core = Chem.MolFromSmiles('O=C(C1C[NH2+]CC1c1ccccc1)N1CCCCC1')
print(i)
s = dfs.iloc[i]
ids = "scaffold2-%s-%s-%s" % (i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./scaffold22/%s.svg' % ids , 'w') as f:
f.write(svg)
core = Chem.MolFromSmiles('CN1C(N)=NC(C1=O)(c1ccccc1)c1cccc(c1)-c1cccnc1')
#core = None
for i in dfs.index[5:]:
s = dfs.iloc[i]
ids = "scaffold2-%s-%s-%s" % (i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./scaffold2/%s.svg' % ids , 'w') as f:
f.write(svg)
```
# scaffold 3
```
dfs = pd.read_csv('../data/scaffold_3.csv')
dfs
core = Chem.MolFromSmiles('Clc1ccc2C(NCCc3ccccc3)=NCCc2c1')
for i in dfs.index[:5]:
print(i)
s = dfs.iloc[i]
ids = "scaffold3-%s-%s-%s" % (i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./scaffold3/%s.svg' % ids , 'w') as f:
f.write(svg)
core = Chem.MolFromSmiles('CC(=O)NC(Cc1ccccc1)C(O)C[NH2+]C1CC2(CCC2)Oc2ncc(CC(C)(C)C)cc12')
for i in dfs.index[5:]:
print(i)
s = dfs.iloc[i]
ids = "scaffold3-%s-%s-%s" % (i, s.ID, s.pIC50.round(3))
bimg, svg, bits = highlight_important_atom(s.smiles, cmap, core_mol = core, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
with open('./scaffold3/%s.svg' % ids , 'w') as f:
f.write(svg)
```
# One Smiles
```
# Example SMILES kept for reference (not executable Python):
# Clc1cc2nc(n(c2cc1)CCCC(=O)NC(C)C)N
# Clc1cc2nc(n(c2cc1)CCCO)N
s = 'Fc1ncccc1-c1cc(ccc1)C1(N=C(N)N(C)C1=O)c1cn(nc1)CCC(C)C'
bimg, svg, bits = highlight_important_atom(s,cmap, vmin = vmin, vmax = vmax, top = top)
display.Image(bimg)
smiles = "O=C(NCC1CCCCC1)CCc1cc2c(nc1N)cccc2" #"Clc1cc2nc(n(c2cc1)CCCC(=O)N(CC1CCCCC1)C)N"#"O=C(N(C)C1CCCCC1)CCc1cc2cc(ccc2nc1N)-c1ccccc1C"
#
# smiles = "Clc1cc2nc(n(c2cc1)CCCC(=O)N(CC1CCCCC1)C)N"
# smiles = "O=C(N(C)C1CCCCC1)CCc1cc2cc(ccc2nc1N)-c1ccccc1C"
# smiles = "Clc1ccccc1-c1n(Cc2nc(N)ccc2)c(cc1)-c1ccc(Oc2cncnc2)cc1"
# smiles = "Clc1ccccc1-c1n(Cc2nc(N)ccc2)c(cc1)-c1ccc(Nc2cncnc2)cc1"
smiles = "O1CC[C@@H](NC(=O)[C@@H](Cc2cc3cc(ccc3nc2N)-c2ccccc2C)C)CC1(C)C"
smiles = "O=C(NCCC(C)(C)C)C(Cc1cc2cc(ccc2nc1N)-c1ccccc1C)C"
```
## 01) Halogen Group
```
smt2.get("MACCSFP87")
smt2.get("MACCSFP107")
smt2.get("MACCSFP134")
smt2.get("MACCSFP42")
smt1.get("PubChemFP287")
smt1.get("PubChemFP364")
smt1.get("PubChemFP363")
```
## 02) Chalcogen group
```
smt2.get("MACCSFP127")
smt2.get("MACCSFP143")
Chem.MolFromSmarts(smt2.get("MACCSFP143")[0])
```
## 03) Donor, Acceptor
```
smt2.get("MACCSFP53")
smt2.get("MACCSFP84")
smt2.get("MACCSFP131")
smt1.get("PubChemFP16")
smt3.get('PharmacoErGFP4')
smt3.get('PharmacoErGFP5')
smt3.get('PharmacoErGFP24')
smt3.get('PharmacoErGFP26')
smt3.get('PharmacoErGFP103')
smt3.get('PharmacoErGFP104')
smt3_map.get('Donor')
smt3_map.get('Acceptor')
```
## 04) Group4: Aromatic Ring
```
smt1.get("PubChemFP797")
smt1.get("PubChemFP696")
smt1.get("PubChemFP697")
smt1.get("PubChemFP712")
smt1.get("PubChemFP734")
```
## 05) Group5: -N-O-
```
smt2.get("MACCSFP110")
smt2.get("MACCSFP92")
smt1.get("PubChemFP536")
smt1.get("PubChemFP451")
smt2.get("MACCSFP154")
smt1.get("PubChemFP420")
smt1.get("PubChemFP439") #31, 29
smt1.get("PubChemFP443") #32, 29
smt1.get("PubChemFP579") #33, 28
smt1.get("PubChemFP685") #34, 28
smt1.get("PubChemFP684") #34, 29
smt1.get("PubChemFP692") #35, 29
smt1.get("PubChemFP704") #35, 30
```
## 06) Group6: Pharmacophore
```
#smt3_map.get('Hydrophobic')
smt3.get('PharmacoErGFP286') #3, 32
smt3.get('PharmacoErGFP144') #3, 33
smt3.get('PharmacoErGFP287') #4, 30
smt3.get('PharmacoErGFP283') #4, 34
smt3.get('PharmacoErGFP149') #5, 26
smt3.get('PharmacoErGFP289') #5, 27
smt3.get('PharmacoErGFP148') #5, 29
smt3.get('PharmacoErGFP146') #5, 31
smt3.get('PharmacoErGFP44') #6, 32
smt3.get('PharmacoErGFP42') #7, 33
smt3.get('PharmacoErGFP41') #7, 34
Donor = ["[N;!H0;v3,v4&+1]", "[O,S;H1;+0]", "[n&H1&+0]"]
Acceptor = ["[O,S;H1;v2;!$(*-*=[O,N,P,S])]", "[O;H0;v2]", "[O,S;v1;-]",
"[N;v3;!$(N-*=[O,N,P,S])]", "[n&H0&+0]", "[o;+0;!$([o]:n);!$([o]:c:n)]"]
Positive = ["[#7;+]", "[N;H2&+0][$([C,a]);!$([C,a](=O))]",
"[N;H1&+0]([$([C,a]);!$([C,a](=O))])[$([C,a]);!$([C,a](=O))]",
"[N;H0&+0]([C;!$(C(=O))])([C;!$(C(=O))])[C;!$(C(=O))]"]
Negative = ["[C,S](=[O,S,P])-[O;H1,-1]"]
Hydrophobic = ["[C;D3,D4](-[CH3])-[CH3]", "[S;D2](-C)-C"]
Aromatic = ["a"]
Chem.MolFromSmarts('[#6]=[#8]')
Chem.MolFromSmarts(smt3_map.get('Positive')[0])
Chem.MolFromSmarts(smt3_map.get('Positive')[2])
Chem.MolFromSmarts(smt3_map.get('Positive')[3])
fp = "MACCSFP127" #,"PubChemFP734"
svg, hit_ats, bimg, patt, mol = plot_fp(fp, i = -10)
with open('./%s.svg' % fp , 'w') as f:
f.write(svg)
```
# Missing Value Imputation
The machine learning estimators in scikit-learn do not allow input or output data to contain missing values. You must handle these missing values in some fashion - by either removing them or filling them.
### Handling missing values
The quickest way to handle missing values is to simply drop the rows or columns that contain missing values. If you want to preserve all of the rows and columns in the dataset, you need to **impute** (fill) the missing values with some other value.
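The dropping approach can be sketched with pandas; the small DataFrame below is a hypothetical stand-in for a dataset with missing values:

```python
import numpy as np
import pandas as pd

# Hypothetical DataFrame with missing values
df = pd.DataFrame({'a': [1.0, np.nan, 3.0],
                   'b': [4.0, 5.0, np.nan],
                   'c': [7.0, 8.0, 9.0]})

rows_kept = df.dropna()        # drop every row containing a missing value
cols_kept = df.dropna(axis=1)  # drop every column containing a missing value

print(rows_kept.shape)  # (1, 3) - only the first row is complete
print(cols_kept.shape)  # (3, 1) - only column 'c' is complete
```

Note how much data disappears with either choice, which is why imputation is often preferable.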
### Basic imputation strategies
The type of imputation you choose for your features will likely be dependent on the type of data within. The two broad types of columns are those that are continuous (always numeric) and categorical (discrete). The following describes basic imputation strategies that can quickly get your dataset prepared to do machine learning.
For continuous columns, basic strategies involve first calculating a single summary statistic for each column that contains missing values, such as the mean, median, or mode. The missing values of each column are replaced with the respective summary statistic.
For categorical columns, because there are discrete categories, often a new category is created and given a label such as 'MISSING' or some other unique value not already found in that column. Using the most frequent value is another basic strategy.
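Both categorical strategies can be sketched with the `SimpleImputer` introduced below; the `Fence` column here is a hypothetical example:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical categorical column with missing entries
X_cat = pd.DataFrame({'Fence': ['Wood', np.nan, 'Wire', 'Wood', np.nan]})

# Strategy 1: create a brand-new 'MISSING' category
si_const = SimpleImputer(strategy='constant', fill_value='MISSING')
print(si_const.fit_transform(X_cat).ravel())
# ['Wood' 'MISSING' 'Wire' 'Wood' 'MISSING']

# Strategy 2: fill with the most frequent value ('Wood' here)
si_freq = SimpleImputer(strategy='most_frequent')
print(si_freq.fit_transform(X_cat).ravel())
# ['Wood' 'Wood' 'Wire' 'Wood' 'Wood']
```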
### Advanced imputation
A variety of more advanced strategies have been developed for imputation. **Hot-deck** imputation involves randomly selecting one of the non-missing values in that column for each of the missing values.
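Hot-deck imputation is not built into scikit-learn, but the idea is simple enough to sketch directly with numpy and pandas on a hypothetical column:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical column with missing values
s = pd.Series([10.0, np.nan, 30.0, np.nan, 50.0])

# Hot-deck: replace each missing value with a randomly
# drawn non-missing value from the same column
observed = s.dropna().to_numpy()
filled = s.copy()
filled[s.isna()] = rng.choice(observed, size=int(s.isna().sum()))

print(filled.isna().sum())  # 0
```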
Supervised machine learning models can be used to fill in missing values by making the column with missing values the target variable and using all the other columns as the features. Currently only the k-nearest neighbors machine learning model is available for imputation.
### Iterative Imputation
Another advanced strategy called **iterative imputation** may be used to fill in missing values in datasets where multiple columns contain missing values. As the name implies, multiple iterations take place to fill in all the missing values. A supervised machine learning model must be selected before completing the imputation.
During the first iteration, one of the columns that contains missing values is designated as the target variable. All the other features are used as the input. Only the observations that do not contain missing values may be used. A model is trained on this subset of the data. The model then predicts the missing values of the target column, though only for observations whose feature columns have no missing values.
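Assuming a scikit-learn version that exposes the experimental `IterativeImputer` (it must be enabled with a special import), the procedure might be sketched as follows on hypothetical data:

```python
import numpy as np
# IterativeImputer is experimental and must be explicitly enabled
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical data: the second column is roughly twice the first
X = np.array([[1.0, 2.0],
              [3.0, 6.0],
              [4.0, 8.0],
              [np.nan, 10.0],
              [5.0, np.nan]])

# Each column with missing values is iteratively modeled
# as a function of the other columns
it = IterativeImputer(random_state=0)
X_it = it.fit_transform(X)
print(np.isnan(X_it).sum())  # 0
```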
## Imputation in scikit-learn
Only simple imputation strategies and k-nearest neighbors are currently available in scikit-learn, via the `SimpleImputer` and `KNNImputer` transformers of the `impute` module. Iterative imputation is in development and should be available for scikit-learn version 0.23, expected to be released in 2020. Let's get started by reading in all of the columns of the sample housing dataset.
```
import pandas as pd
housing = pd.read_csv('../data/housing_sample.csv')
housing.head()
```
### Find columns with missing values
Let's use pandas to output the number of missing values of each column.
```
housing.isna().sum()
```
### Attempt to use a machine learning estimator with missing values
scikit-learn machine learning estimators do not allow for there to be a single missing value in the input or output data. (This is not entirely true anymore as of scikit-learn version 0.22. Gradient boosted trees from the ensemble module can be used with data that has missing values.) Let's select the `LotFrontage` column as one of the features in our model.
```
X = housing[['GrLivArea', 'GarageArea', 'LotFrontage']]
y = housing['SalePrice']
```
Attempting to fit a model using this data containing missing values results in an error.
```
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X, y)
```
### Begin with the `SimpleImputer`
One of the ways to fill in missing values is with the `SimpleImputer` transformer. It only delivers basic imputation with the mean, median, mode, or a specified constant.
### Transformer
The `SimpleImputer` is a **transformer**, a type of estimator that transforms the input data or the target variable, but does NOT do machine learning. It does **learn from data**, but it does NOT learn from both the input and target simultaneously the way the machine learning estimators do. Typically, transformers are used to transform the input data, but there are some transformers for the target variable as well.
Transformers are used **before** the machine learning models get their hands on the data. Most of the transformers are found in the `preprocessing` module as they are used to process the data before the machine learning. The `SimpleImputer` is actually found in the `impute` module which is dedicated to having tools for handling missing values.
### Import, Instantiate, Fit
All transformers follow the same three-step process as the machine learning estimators to learn from the data. In this case, the `SimpleImputer` transformer learns a single statistic from each column of the input data. During instantiation, you must set the parameter `strategy` as one of the following strings:
* 'mean'
* 'median'
* 'most_frequent'
* 'constant'
If you choose the string 'constant', you must also set the `fill_value` parameter to that constant. Below, we import, instantiate our transformer with our strategy (using the mean) and call the `fit` method to learn from the data.
```
from sklearn.impute import SimpleImputer
si = SimpleImputer(strategy='mean')
si.fit(X)
```
### What has it learned?
The learning is very simple. The mean was calculated from each column. Access the `statistics_` attribute to see the results of the learning.
```
si.statistics_
```
You can verify this result by calling the DataFrame `mean` method.
```
X.mean()
```
### No transformation has taken place yet
The input data `X` has not changed. It still contains missing values. The only thing that has taken place is the calculation of the mean of each column.
### The `transform` method
There is no `predict` method for transformers as they are not machine learning models. Instead, there exists a `transform` method that will use the information learned during the `fit` method to transform the data. We can pass the `transform` method the data we would like transformed. It returns a new copy of data that has filled the missing values. We assign this new transformed data to the variable `X_filled`.
```
X_filled = si.transform(X)
```
### A numpy array is returned
Although we passed in a pandas DataFrame, scikit-learn returns us a numpy array. Let's verify this by outputting the first 5 rows.
```
X_filled[:5]
```
Let's verify that no missing values remain. To do so, we must import the numpy library as numpy arrays do not have an `isna` method like DataFrames.
```
import numpy as np
np.isnan(X_filled).sum()
```
The original input data is unchanged, still a pandas DataFrame, and contains the same number of missing values.
```
X.isna().sum()
```
### Attempt machine learning again
We can now successfully call the `fit` method using the transformed data set.
```
lr.fit(X_filled, y)
```
This new transformed data can be used in any other place in scikit-learn such as with cross validation.
```
from sklearn.model_selection import cross_val_score, KFold
kf = KFold(n_splits=5, shuffle=True, random_state=123)
cross_val_score(lr, X_filled, y, cv=kf).mean()
```
### Obtain a final model by fitting on all of the data
Cross validation returns our expected score on future data, but does not give us a final model. Let's fit the linear regression model to all of this transformed data.
```
lr.fit(X_filled, y)
```
### Fitting and transforming in a single step
Often, you will want to immediately transform your data after fitting it. scikit-learn provides this functionality with a single method named `fit_transform`. In this example, it finds the mean of each column and then returns a new copy of the transformed data with the missing data filled in with the mean.
```
si = SimpleImputer(strategy='mean')
X_filled = si.fit_transform(X)
```
### The input data is never mutated
scikit-learn never mutates the data passed to its `transform` or `fit_transform` methods. Instead, it returns a new copy of data with the missing values filled in with the calculated strategy statistic. In our last call to `fit_transform`, the DataFrame `X` was not mutated. `X_filled` is the new copy of data with no missing values.
### Imputing new data
The goal of machine learning is to be able to make good predictions on future unseen data. It's possible this new data in the future also has missing values. In order to make predictions on it, we will have to fill those values. Let's say the following array of four observations represents new houses that we'd like to predict the sale price.
```
X1 = np.array([[1500, 500, np.nan],
[1200, np.nan, 88],
[np.nan, 240, 30],
[np.nan, np.nan, np.nan]])
X1
```
Our transformer `si` was already trained on the input data. It is ready to use on new data and we do not call `fit` or `fit_transform` on it again. The `transform` method will fill in the missing values with the means of the columns of the training data. Let's fill each value now.
```
X1_filled = si.transform(X1)
X1_filled
```
We can verify that the correct values were filled by accessing the `statistics_` attribute again.
```
si.statistics_
```
From here, we can use our linear model trained from above to make a prediction.
```
lr.predict(X1_filled).round(-3)
```
### Fills in missing values for columns that had none in training
The ground living area and garage area had no missing values in the training set. Our new dataset, `X1`, had missing values for each column and even had a row of nothing but missing values. Our transformer was able to fill in these missing values since it calculated the mean on all of the columns and not just `LotFrontage`, which was the only one that had missing values in the training set.
## Common mistake - filling with mean of new data
A common mistake involves filling in the missing values of the new data with the mean of that new data. Let's calculate the mean of our new data. Because it is a numpy array, we need to use the `nanmean` function to have it ignore missing values during the calculation.
```
np.nanmean(X1, 0)
```
Although it is possible for you to use these values to fill in the missing values, it is incorrect. The linear regression model was built using the means calculated on the training set and needs to be filled with those values. Also, there is a possibility that the new data has a column consisting entirely of missing values, so it would be impossible to use the mean of the new data.
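A tiny sketch with toy numbers (not the housing data) makes the pitfall concrete: the new data's own mean can be very different from the training mean the model expects, or may not exist at all.

```python
import math

def col_mean_ignoring_nan(col):
    """Mean of the non-missing values, or None if all are missing."""
    vals = [v for v in col if not math.isnan(v)]
    return sum(vals) / len(vals) if vals else None

train_col = [60.0, 80.0, 100.0]                 # training data: mean is 80
new_col = [float('nan'), 20.0, float('nan')]    # new data: its own mean is 20

train_mean = col_mean_ignoring_nan(train_col)
new_mean = col_mean_ignoring_nan(new_col)
all_missing = col_mean_ignoring_nan([float('nan')])  # undefined: None
```

Filling `new_col` with 20 would hand the trained model values it never saw during fitting, and an entirely missing column would leave nothing to compute a mean from at all.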
## Summary of simple imputation
In summary, when we have missing data in our training set and want to fill them with a simple summary statistic, we take the following steps:
1. Instantiate the `SimpleImputer` selecting a fill strategy (in the cell below, we use the median)
1. Learn the value to be filled for each feature when calling the `fit` method
1. Using the `transform` method, create a new copy of data with the missing values filled
1. Use cross validation to estimate future performance
1. Fit a machine learning model with this data that has no missing values
1. Get new data that you'd like to make a prediction
1. Fill in the missing values of this new data with the `transform` method (do not re-fit)
1. Call the `predict` method with the new filled data
```
si = SimpleImputer(strategy='median')
X_filled = si.fit_transform(X)
lr = LinearRegression()
cross_val_score(lr, X_filled, y, cv=kf).mean()
lr.fit(X_filled, y)
X1_filled = si.transform(X1)
lr.predict(X1_filled).round(-3)
```
## K-nearest neighbor imputation
Released with scikit-learn version 0.22 in December of 2019, the `KNNImputer` provides a more advanced method of imputation. For each observation that has missing values, it finds the 'k' nearest observations using euclidean distance as its metric. It does not use the target variable in this calculation. It then predicts the missing value of that feature as the average of those 'k' nearest observations for that feature.
### Step-by-step example of KNN imputation
It's beneficial to see a step-by-step example of how KNN imputation works. Let's continue to use our same input dataset with three features, where `LotFrontage` is the only one containing missing values. First, we split the data into two separate DataFrames, `X_missing` which contains all rows with missing values, and `X_not_missing` which contains no rows with missing values.
```
X_missing = X[X['LotFrontage'].isna()]
X_not_missing = X[X['LotFrontage'].notna()]
X_missing.head()
X_not_missing.head()
```
We can verify that the shapes of the DataFrames are what we expect.
```
X_missing.shape
X_not_missing.shape
```
We need to find the nearest 'k' neighbors for each of the 259 rows in `X_missing`. Let's begin by selecting the first row of data and assigning it to a new variable.
```
first_row = X_missing.iloc[0, :2]
first_row
```
We now find the distance between its above ground living area (2,090) and the above ground living area for all of the rows in the `X_not_missing` DataFrame.
```
dist_1 = X_not_missing['GrLivArea'] - first_row['GrLivArea']
dist_1.head()
```
We repeat for the garage area.
```
dist_2 = X_not_missing['GarageArea'] - first_row['GarageArea']
dist_2.head()
```
We calculate the Euclidean distance by squaring each per-feature distance, summing them, and taking the square root of the sum.
```
euclidean_dist = np.sqrt(dist_1 ** 2 + dist_2 ** 2)
euclidean_dist.head()
```
We then call the `nsmallest` method to retrieve the 'k' nearest neighbors based on this distance. Here, we choose 'k' to be 5.
```
neighbors = euclidean_dist.nsmallest(5)
neighbors
```
We now need to locate the known value of `LotFrontage` for these 5 neighbors. We can do this by using the index of this pandas Series, which labels the row position.
```
X_not_missing.loc[neighbors.index]
```
As expected, all of these neighbors have above ground living area and garage area similar to those of the observation with the missing `LotFrontage` value (2,090 and 484). Now, we just calculate the mean of `LotFrontage` for these five neighbors. It is this value that is used to fill in the missing value for that one observation. The process repeats for all rows that have missing values.
```
X_not_missing.loc[neighbors.index, 'LotFrontage'].mean()
```
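The whole step-by-step procedure can be collected into one pure-Python sketch with toy numbers; scikit-learn's `KNNImputer` is the real implementation:

```python
import math

def knn_impute_value(row, complete_rows, feature_idx, known_idx, k):
    """Fill row[feature_idx] with the mean of that feature over the k
    nearest complete rows (Euclidean distance on the known features)."""
    dists = []
    for other in complete_rows:
        d = math.sqrt(sum((row[j] - other[j]) ** 2 for j in known_idx))
        dists.append((d, other[feature_idx]))
    dists.sort(key=lambda t: t[0])
    neighbors = [val for _, val in dists[:k]]
    return sum(neighbors) / len(neighbors)

# Toy example: impute the third feature from the first two.
complete = [[1.0, 1.0, 10.0], [1.0, 2.0, 12.0], [9.0, 9.0, 50.0]]
row = [1.0, 1.5, float('nan')]
filled = knn_impute_value(row, complete, feature_idx=2, known_idx=[0, 1], k=2)
```

The two closest complete rows are the first two, so the missing entry becomes the mean of 10 and 12.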
### Using `KNNImputer`
The `KNNImputer` transformer automates this process for us. Let's complete the three-step process to use KNN imputation with our data. We will immediately fit and transform the data with the `fit_transform` method.
```
from sklearn.impute import KNNImputer
knni = KNNImputer(n_neighbors=5)
X_filled = knni.fit_transform(X)
```
Let's verify that the scikit-learn `KNNImputer` calculated the same result as us from above. The missing row was found at index 7.
```
X_filled[7]
```
The third column of the array contains the filled value of 124.8 which was indeed the same as we calculated above.
## Iterative imputation (experimental)
Iterative imputation is actually available in scikit-learn but is labeled as 'experimental' meaning that it's not quite fully tested and that its functionality might change in the future. To enable it, run the following line.
```
from sklearn.experimental import enable_iterative_imputer
```
Once you run the above, the `IterativeImputer` transformer will be available to import from the `impute` module. Let's import it now.
```
from sklearn.impute import IterativeImputer
```
The `IterativeImputer` may be instantiated without setting any of its parameters. Here are explanations of some of the default choices it makes:
* It uses a machine learning model named Bayesian Ridge to do the imputation. You can change this by setting the first parameter, `estimator` to any supervised regression model
* It imputes features in 'ascending' order: those with the fewest missing values are imputed first. Change this with the `imputation_order` parameter
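One round of the underlying idea can be sketched in pure Python: fit a model on the rows where the feature is known, then predict its missing entries. This toy version uses a one-predictor least-squares line rather than Bayesian Ridge:

```python
import math

def fit_line(xs, ys):
    # Ordinary least-squares slope/intercept for one predictor.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def impute_one_round(rows, pred_idx, target_idx):
    """Fill missing target values with predictions from a line fitted
    on the rows where the target is known."""
    known = [r for r in rows if not math.isnan(r[target_idx])]
    slope, intercept = fit_line([r[pred_idx] for r in known],
                                [r[target_idx] for r in known])
    for r in rows:
        if math.isnan(r[target_idx]):
            r[target_idx] = slope * r[pred_idx] + intercept
    return rows

rows = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, float('nan')]]
impute_one_round(rows, pred_idx=0, target_idx=1)
```

The real transformer repeats rounds like this across all features until the filled values stabilize.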
Let's use iterative imputation to fill in our missing values using a decision tree with a max depth of 3 as our machine learning estimator to make the imputation. We instantiate both estimators here.
```
from sklearn.tree import DecisionTreeRegressor
dtr = DecisionTreeRegressor(max_depth=3)
ii = IterativeImputer(dtr)
```
We can now learn how to impute and return a transformed dataset with the missing values filled with the `fit_transform` method.
```
X_filled_ii = ii.fit_transform(X)
X_filled_ii[:3]
```
Let's look at just the rows that have missing values to see that they are no longer filled with just the same number.
```
filt = X['LotFrontage'].isna()
X_filled_ii[filt][:5]
```
We can also use the trained imputer to fill in missing values from our new data.
```
ii.transform(X1)
```
## Exercises
### Exercise 1
<span style="color:green; font-size:16px">Run the cells below, which read in the full housing dataset (79 features) and output the columns with missing values along with how many are missing. Do any of these columns make sense to fill in the missing values with the mean?</span>
```
import pandas as pd
pd.set_option('display.max_columns', 100)
housing_all = pd.read_csv('../data/housing.csv')
housing_all.head()
housing_all.isna().sum()[lambda x: x > 0]
```
### Exercise 2
<span style="color:green; font-size:16px">Choose a numeric column with missing values besides `LotFrontage` to be part of the model and impute missing values and then cross-validate scores with it.</span>
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# NRMS: Neural News Recommendation with Multi-Head Self-Attention
NRMS \[1\] is a neural news recommendation approach with multi-head self-attention. The core of NRMS is a news encoder and a user encoder. In the news encoder, multi-head self-attention is used to learn news representations from news titles by modeling the interactions between words. In the user encoder, we learn representations of users from their browsed news and use multi-head self-attention to capture the relatedness between the news. Besides, we apply additive attention to learn more informative news and user representations by selecting important words and news.
## Properties of NRMS:
- NRMS is a content-based neural news recommendation approach.
- It uses multi-head self-attention to learn news representations by modeling the interactions between words, and to learn user representations by capturing the relationships between the news a user has browsed.
- NRMS uses additive attention to learn informative news and user representations by selecting important words and news.
## Data format:
For quicker training and evaluation, we sample a MINDdemo dataset of 5k users from the [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the download source. Select the MIND_type parameter from ['large', 'small', 'demo'] to choose the dataset.
**MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md)
### news data
This file contains news information including news id, category, subcategory, news title, news abstract, news url, entities in the news title, and entities in the news abstract.
One simple example: <br>
`N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`
<br>
In general, each line in data file represents information of one piece of news: <br>
`[News ID] [Category] [Subcategory] [News Title] [News Abstract] [News Url] [Entities in News Title] [Entities in News Abstract] ...`
<br>
We generate a word_dict file to transform words in news titles into word indexes, and an embedding matrix is initialized from pretrained GloVe embeddings.
### behaviors data
One simple example: <br>
`1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`
<br>
In general, each line in data file represents one instance of an impression. The format is like: <br>
`[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`
<br>
User Click History is the user's historical clicked news before Impression Time. Impression News is the displayed news in an impression, whose format is:<br>
`[News ID 1]-[label1] ... [News ID n]-[labeln]`
<br>
Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file.
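As a sketch (the MIND iterators do this for real), one behaviors line can be parsed into its five fields with plain string handling; `parse_behaviors_line` is a hypothetical helper:

```python
def parse_behaviors_line(line):
    """Split a MIND behaviors record into its five fields."""
    parts = line.strip().split(' ')
    impression_id, user_id = parts[0], parts[1]
    time = ' '.join(parts[2:5])            # date + clock time + AM/PM
    # Remaining tokens: clicked history (no '-'), then 'newsid-label' pairs.
    history = [p for p in parts[5:] if '-' not in p]
    impressions = [tuple(p.rsplit('-', 1)) for p in parts[5:] if '-' in p]
    return impression_id, user_id, time, history, impressions

line = '1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N15368-1 N481-0'
iid, uid, time, hist, imps = parse_behaviors_line(line)
```

For the example above this recovers the user id, the two history clicks, and the labeled impression news.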
## Global settings and imports
```
import sys
sys.path.append("../../")
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.nrms import NRMSModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
from reco_utils.recommender.newsrec.newsrec_utils import get_mind_data_set
import papermill as pm
from tempfile import TemporaryDirectory
import tensorflow as tf
import os
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
tmpdir = TemporaryDirectory()
```
## Prepare parameters
```
epochs=8
seed=42
MIND_type = 'demo'
```
## Download and load data
```
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'nrms.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
```
## Create hyper-parameters
```
hparams = prepare_hparams(yaml_file, wordEmb_file=wordEmb_file, \
wordDict_file=wordDict_file, userDict_file=userDict_file, \
epochs=epochs,
show_step=10)
print(hparams)
```
## Train the NRMS model
```
iterator = MINDIterator
model = NRMSModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
pm.record("res_syn", res_syn)
```
## Save the model
```
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "nrms_ckpt"))
```
## Output Prediction File
This code segment is used to generate the prediction.zip file, which is in the same format as in the [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122#learn_the_details-submission-guidelines).
Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
```
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
import numpy as np
from tqdm import tqdm
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
import zipfile
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
```
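The double-`argsort` line above turns prediction scores into 1-based ranks, with the highest score receiving rank 1. A pure-Python equivalent shows the intent:

```python
def scores_to_ranks(preds):
    """Rank 1 goes to the highest score, matching the notebook's
    np.argsort(np.argsort(preds)[::-1]) + 1 trick."""
    order = sorted(range(len(preds)), key=lambda i: preds[i], reverse=True)
    ranks = [0] * len(preds)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

ranks = scores_to_ranks([0.2, 0.9, 0.5])
```

Here the middle item scored highest, so its rank is 1, while the lowest-scoring first item is ranked last.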
## Reference
\[1\] Wu et al. "Neural News Recommendation with Multi-Head Self-Attention." in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)<br>
\[2\] Wu, Fangzhao, et al. "MIND: A Large-scale Dataset for News Recommendation" Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. https://msnews.github.io/competition.html <br>
\[3\] GloVe: Global Vectors for Word Representation. https://nlp.stanford.edu/projects/glove/
# Using `jit`
We know how to find hotspots now, how do we improve their performance?
We `jit` them!
We'll start with a trivial example but get to some more realistic applications shortly.
### Array sum
The function below is a naive `sum` function that sums all the elements of a given array.
```
def sum_array(inp):
J, I = inp.shape
#this is a bad idea
mysum = 0
for j in range(J):
for i in range(I):
mysum += inp[j, i]
return mysum
import numpy
arr = numpy.random.random((300, 300))
```
First hand the array `arr` off to `sum_array` to make sure it works (or at least doesn't error out)
```
sum_array(arr)
```
Now run and save `timeit` results of `sum_array` as a baseline to compare against.
```
plain = %timeit -o sum_array(arr)
```
# Let's get started
```
from numba import jit
```
**Note**: There are two ways to `jit` a function. These are just two ways of doing the same thing. You can choose whichever you prefer.
## As a function call
```
sum_array_numba = jit()(sum_array)
```
What's up with the weird double `()`s? We'll cover that in a little bit.
Now we have a new function, called `sum_array_numba` which is the `jit`ted version of `sum_array`.
We can again make sure that it works (and hopefully produces the same result as `sum_array`).
```
sum_array_numba(arr)
```
Good, that's the same result as the first version, so nothing has gone horribly wrong.
Now let's time and save these results.
```
jitted = %timeit -o sum_array_numba(arr)
```
Wow. 73.7 µs is a lot faster than 15.5 ms... How much faster? Let's see.
```
plain.best / jitted.best
```
So, a factor of 210x. Not too shabby. But we're comparing the best runs, what about the worst runs?
```
plain.worst / jitted.worst
```
Yeah, that's still an incredible speedup.
## (more commonly) As a decorator
The second way to `jit` a function is to use the `jit` decorator. This is a very easy syntax to handle and makes applying `jit` to a function trivial.
Note that the only difference in terms of the outcome (compared to the other `jit` method) is that there will be only one function, called `sum_array` that is a Numba `jit`ted function. The "original" `sum_array` will no longer exist, so this method, while convenient, doesn't allow you to compare results between "vanilla" and `jit`ted Python.
When should you use one or the other? That's up to you. If I'm investigating whether Numba can help, I use `jit` as a function call, so I can compare results. Once I've decided to use Numba, I stick with the decorator syntax since it's much prettier (and I don't care if the "original" function is available).
```
@jit
def sum_array(inp):
I, J = inp.shape
mysum = 0
for i in range(I):
for j in range(J):
mysum += inp[i, j]
return mysum
sum_array(arr)
```
So again, we can see that we have the same result. That's good. And timing?
```
%timeit sum_array(arr)
```
As expected, more or less identical to the first `jit` example.
## How does this compare to NumPy?
NumPy, of course, has built in methods for summing arrays, how does Numba stack up against those?
```
%timeit arr.sum()
```
Right. Remember, NumPy has been hand-tuned over many years to be very, very good at what it does. For simple operations, Numba is not going to outperform it, but when things get more complex Numba can save the day.
Also, take a moment to appreciate that our `jit`ted code, which was compiled on-the-fly is offering performance in the same order of magnitude as NumPy. That's pretty incredible.
## When does `numba` compile things?
`numba` is a just-in-time (hence, `jit`) compiler. The very first time you run a `numba` compiled function, there will be a little bit of overhead for the compilation step to take place. In practice, this is usually not noticeable. You may get a message from `timeit` that one "run" was much slower than most; this is due to the compilation overhead.
## [Your turn!](./exercises/02.Intro.to.JIT.exercises.ipynb#JIT-Exercise)
### IMPORT, DATA
```
import pandas as pd
import numpy as np
import os
import glob
import random
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import warnings
warnings.filterwarnings("ignore")
plt.rcParams['font.size'] = 15
train = pd.read_csv('./data/train/train.csv')
train.tail()
submission = pd.read_csv('./data/sample_submission.csv')
submission.tail()
```
### CORRELATION
```
import seaborn as sns
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 15
plt.rcParams['figure.figsize'] = (20, 1)
corr = df_train.corr(method = 'pearson')
corr = corr.iloc[1:2,:]
sns.heatmap(corr, annot=True, cmap="YlGnBu")
#sns.pairplot(df_train, diag_kind="kde")
#plt.show()
#sns.pairplot(df_weather, hue="species", markers=["o", "s", "D"], palette="husl")
#plt.show()
col_name_list = ['Hour', 'TARGET', 'DHI', 'DNI', 'WS', 'RH', 'T'] #train.columns.values.iloc
iteration_count = 20
def shift_data(data, iteration_feature, is_train=True):
temp = data.copy()
temp = temp[col_name_list]
if is_train==True:
for i in range(1, iteration_count+1):
temp[iteration_feature+str(i)] = temp[iteration_feature].shift(-48*i).fillna(method='ffill')
temp = temp.dropna()
return temp.iloc[:-48*iteration_count]
elif is_train==False:
temp = temp[col_name_list]
return temp.iloc[-48:, :]
# corr1 _shifted target value
df_train = shift_data(train, 'TARGET')
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_df = pd.DataFrame(scaler.fit_transform(df_train), columns=df_train.columns)
corr = scaled_df.corr(method = 'pearson').iloc[:-iteration_count,:]
plt.rcParams['figure.figsize'] = (30, 8)
sns.heatmap(corr, annot=True, cmap="YlGnBu")
plt.show()
# corr2 _shifted feature value
feature_idx = 1
plt.rcParams['figure.figsize'] = (iteration_count*1.5, 1)
for col_name in col_name_list:
if col_name=='Hour':
continue;
df_train = shift_data(train, col_name)
scaled_df = pd.DataFrame(scaler.fit_transform(df_train), columns=df_train.columns)
corr = scaled_df.corr(method = 'pearson').iloc[[1,feature_idx],:]
sns.heatmap(corr, annot=True, cmap="YlGnBu")
plt.show()
feature_idx += 1
```
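The `shift(-48*i)` calls in `shift_data` pull each column 48*i rows earlier, so every row carries future values as extra features (48 rows appear to correspond to one day of half-hourly readings), and the trailing gap is forward-filled. A pure-Python sketch of that operation, using a hypothetical helper:

```python
import math

def shift_back_ffill(values, steps):
    """Shift a series `steps` rows earlier (like pandas shift(-steps)),
    forward-filling the trailing gap with the last known value."""
    shifted = values[steps:] + [float('nan')] * steps
    last = None
    for i, v in enumerate(shifted):
        if isinstance(v, float) and math.isnan(v):
            shifted[i] = last
        else:
            last = v
    return shifted

out = shift_back_ffill([1, 2, 3, 4], steps=2)
```

The first rows now hold values from two steps in the future, and the tail repeats the last observed value.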
### DATA SHIFT (TARGET)
```
def preprocess_data(data, is_train=True):
temp = data.copy()
temp = temp[['Hour', 'TARGET', 'DHI', 'DNI', 'WS', 'RH', 'T']]
if is_train==True:
temp['Target1'] = temp['TARGET'].shift(-48).fillna(method='ffill')
temp['Target2'] = temp['TARGET'].shift(-48*2).fillna(method='ffill')
temp = temp.dropna()
return temp.iloc[:-96]
elif is_train==False:
temp = temp[['Hour', 'TARGET', 'DHI', 'DNI', 'WS', 'RH', 'T']]
return temp.iloc[-48:, :]
df_train = preprocess_data(train)
df_train.iloc[:48]
df_train.iloc[-48*loof_n:-48*(loof_n-1),:]
train.iloc[48:96]
train.iloc[48+48:96+48]
df_train.tail()
df_test = []
for i in range(81):
file_path = './data/test/' + str(i) + '.csv'
temp = pd.read_csv(file_path)
temp = preprocess_data(temp, is_train=False)
df_test.append(temp)
X_test = pd.concat(df_test)
X_test.shape
X_test.head(48)
df_train.head()
df_train.iloc[-48:]
from sklearn.model_selection import train_test_split
X_train_1, X_valid_1, Y_train_1, Y_valid_1 = train_test_split(df_train.iloc[:, :-2], df_train.iloc[:, -2], test_size=0.3, random_state=0)
X_train_2, X_valid_2, Y_train_2, Y_valid_2 = train_test_split(df_train.iloc[:, :-2], df_train.iloc[:, -1], test_size=0.3, random_state=0)
X_train_1.head()
X_test.head()
quantiles = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
from lightgbm import LGBMRegressor
# Get the model and the predictions in (a) - (b)
def LGBM(q, X_train, Y_train, X_valid, Y_valid, X_test):
# (a) Modeling
model = LGBMRegressor(objective='quantile', alpha=q,
n_estimators=10000, bagging_fraction=0.7, learning_rate=0.027, subsample=0.7)
model.fit(X_train, Y_train, eval_metric = ['quantile'],
eval_set=[(X_valid, Y_valid)], early_stopping_rounds=300, verbose=500)
# (b) Predictions
pred = pd.Series(model.predict(X_test).round(2))
return pred, model
```
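`objective='quantile'` with `alpha=q` trains LightGBM against the pinball (quantile) loss, which penalizes under- and over-prediction asymmetrically. A sketch of the loss itself:

```python
def pinball_loss(y_true, y_pred, q):
    """Average quantile loss: q*(y - yhat) when under-predicting,
    (1-q)*(yhat - y) when over-predicting."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        diff = y - p
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total / len(y_true)

# At q = 0.1 over-prediction is penalized 9x more than under-prediction,
# pushing the fitted model toward the low (10th-percentile) quantile.
lo = pinball_loss([10.0], [8.0], q=0.1)   # under by 2
hi = pinball_loss([10.0], [12.0], q=0.1)  # over by 2
```

Training one model per `q` in `quantiles` is what produces the nine prediction columns in the submission.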
# TARGET Prediction
```
# Target prediction
def train_data(X_train, Y_train, X_valid, Y_valid, X_test):
LGBM_models=[]
LGBM_actual_pred = pd.DataFrame()
for q in quantiles:
print(q)
pred , model = LGBM(q, X_train, Y_train, X_valid, Y_valid, X_test)
LGBM_models.append(model)
LGBM_actual_pred = pd.concat([LGBM_actual_pred,pred],axis=1)
LGBM_actual_pred.columns=quantiles
return LGBM_models, LGBM_actual_pred
# Target1
models_1, results_1 = train_data(X_train_1, Y_train_1, X_valid_1, Y_valid_1, X_test)
results_1.sort_index()[:48]
# Target2
models_2, results_2 = train_data(X_train_2, Y_train_2, X_valid_2, Y_valid_2, X_test)
results_2.sort_index()[:48]
results_1.sort_index().iloc[:48]
results_2.sort_index()
print(results_1.shape, results_2.shape)
submission.loc[submission.id.str.contains("Day7"), "q_0.1":] = results_1.sort_index().values
submission.loc[submission.id.str.contains("Day8"), "q_0.1":] = results_2.sort_index().values
submission
submission.iloc[:48]
submission.iloc[48:96]
submission.to_csv('./data/submission_v3.csv', index=False)
```
# Creating A New MLRun Project
--------------------------------------------------------------------
Creating a full project with multiple functions and a workflow, and working with Git.
#### **notebook how-to's**
* Add local or library/remote functions
* Add a workflow
* Save to a remote git
* Run pipeline
<a id='top'></a>
#### **steps**
**[Add functions](#load-functions)**<br>
**[Create and save a workflow](#create-workflow)**<br>
**[Update remote git](#git-remote)**<br>
**[Run a pipeline workflow](#run-pipeline)**<br>
```
from mlrun import new_project, code_to_function
# update the dir and repo to reflect real locations
# the remote git repo must be initialized in GitHub
project_dir = '/User/new-proj'
remote_git = 'https://github.com/<my-org>/<my-repo>.git'
newproj = new_project('new-project', project_dir, init_git=True)
```
Set the remote git repo and pull to sync in case it has some content
```
newproj.create_remote(remote_git)
newproj.pull()
```
<a id='load-functions'></a>
### Load functions from remote URLs or marketplace
We create two functions:
1. Load a function from the function market (converted into a function object)
2. Create a function from a file in the context dir (we copy a demo file into the dir)
```
newproj.set_function('hub://load_dataset', 'ingest').doc()
```
### Create a local function (use code from mlrun examples)
```
!curl -o {project_dir}/handler.py https://raw.githubusercontent.com/mlrun/mlrun/master/examples/handler.py
# add function with build config (base image, run command)
fn = code_to_function('tstfunc', filename='handler.py', kind='job')
fn.build_config(base_image = 'mlrun/mlrun', commands=['pip install pandas'])
newproj.set_function(fn)
print(newproj.func('tstfunc').to_yaml())
```
<a id='create-workflow'></a>
### Create a workflow file and store it in the context dir
```
%%writefile {project_dir}/workflow.py
from kfp import dsl
from mlrun import mount_v3io
funcs = {}
def init_functions(functions: dict, project=None, secrets=None):
functions['ingest'].apply(mount_v3io())
@dsl.pipeline(
name='demo project', description='Shows how to use mlrun project.'
)
def kfpipeline(p1=3):
# first step build the function container
builder = funcs['tstfunc'].deploy_step(with_mlrun=False)
ingest = funcs['ingest'].as_step(name='load-data', params={'dataset': 'boston'})
# first step
s1 = funcs['tstfunc'].as_step(name='step-one', handler='my_func',
image=builder.outputs['image'],
params={'p1': p1})
newproj.set_workflow('main', 'workflow.py')
print(newproj.to_yaml())
```
<a id='git-remote'></a>
### Register and push the project to a remote Repo
```
newproj.push('master', 'first push', add=['handler.py', 'workflow.py'])
newproj.source
```
<a id='run-pipeline'></a>
### Run the workflow
```
newproj.run('main', arguments={}, artifact_path='v3io:///users/admin/mlrun/kfp/{{workflow.uid}}/')
```
**[back to top](#top)**
# Introduction to Natural Language Processing (NLP)
Generally speaking, <i>Computational Text Analysis</i> is a set of interpretive methods which seek to understand patterns in human discourse, in part through statistics. More familiar methods, such as close reading, are exceptionally well-suited to the analysis of individual texts, however our research questions typically compel us to look for relationships across texts, sometimes counting in the thousands or even millions. We have to zoom out, in order to perform so-called <i>distant reading</i>. Fortunately for us, computers are well-suited to identify the kinds of textual relationships that exist at scale.
We will spend the week exploring research questions that computational methods can help to answer and thinking about how these complement -- rather than displace -- other interpretive methods. Before moving to that conceptual level, however, we will familiarize ourselves with the basic tools of the trade.
<i>Natural Language Processing</i> is an umbrella term for the methods by which a computer handles human language text. This includes transforming the text into a numerical form that the computer manipulates natively, as well as the measurements that researchers often perform. In the parlance, <i>natural language</i> refers to a language spoken by humans, as opposed to a <i>formal language</i>, such as Python, which comprises a set of logical operations.
The goal of this lesson is to jump right in to text analysis and natural language processing. Rather than starting with the nitty gritty of programming in Python, this lesson will demonstrate some neat things you can do with a minimal amount of coding. Today, we aim to build intuition about how computers read human text and learn some of the basic operations we'll perform with them.
# Lesson Outline
- Jargon
- Text in Python
- Tokenization & Term Frequency
- Pre-Processing:
* Changing words to lowercase
* Removing stop words
* Removing punctuation
- Part-of-Speech Tagging
* Tagging tokens
* Counting tagged tokens
- Demonstration: Guess the Novel!
- Concordance
# 0. Key Jargon
### General
* *programming* (or *coding*)
* A program is a sequence of instructions given to the computer, in order to perform a specific task. Those instructions are written in a specific *programming language*, in our case, Python. Writing these instructions can be an art as much as a science.
* *Python*
* A general-use programming language that is popular for NLP and statistics.
* *script*
* A block of executable code.
* *Jupyter Notebook*
* Jupyter is a popular interface in which Python scripts can be written and executed. Stand-alone scripts are saved in Notebooks. The script can be sub-divided into units called <i>cells</i> and executed individually. Cells can also contain discursive text and html formatting (such as in this cell!)
* *package* (or *module*)
* Python offers a basic set of functions that can be used off-the-shelf. However, we often wish to go beyond the basics. To that end, <i>packages</i> are collections of python files that contain pre-made functions. These functions are made available to our program when we <i>import</i> the package that contains them.
* *Anaconda*
* Anaconda is a <i>platform</i> for programming in Python. A platform constitutes a closed environment on your computer that has been standardized for functionality. For example, Anaconda contains common packages and programming interfaces for Python, and its developers ensure compatibility among the moving parts.
### When Programming
* *variable*
* A variable is a generic container that stores a value, such as a number or series of letters. This is not like a variable from high-school algebra, which had a single "correct" value that must be solved. Rather, the user <i>assigns</i> values to the variable in order to perform operations on it later.
* *string*
* A type of object consisting of a single sequence of alpha-numeric characters. In Python, a string is indicated by quotation marks around the sequence.
* *list*
* A type of object that consists of a sequence of elements.
### Natural Language Processing
* *pre-processing*
* Transforming a human language text into computer-manipulable format. A typical pre-processing workflow includes <i>stop-word</i> removal, setting text in lower case, and <i>term frequency</i> counting.
* *token*
* An individual word unit within a sentence.
* *stop words*
* The function words in a natural language, such as <i>the</i>, <i>of</i>, <i>it</i>, etc. These are typically the most common words.
* *term frequency*
* The number of times a term appears in a given text. This is either reported as a raw tally or it is <i>normalized</i> by dividing by the total number of words in a text.
* *POS tagging*
* One common task in NLP is the determination of a word's part-of-speech (POS). The label that describes a word's POS is called its <i>tag</i>. Specialized functions that make these determinations are called <i>POS Taggers</i>.
* *concordance*
* Index of instances of a given word (or other linguistic feature) in a text. Typically, each instance is presented within a contextual window for human readability.
* *NLTK (Natural Language Tool Kit)*
* A common Python package that contains many NLP-related functions
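Both flavours of term frequency can be computed with the standard library alone; the toy token list below is made up purely for illustration:

```python
from collections import Counter

tokens = ["the", "whale", "the", "sea", "whale", "the"]
raw_counts = Counter(tokens)                              # raw tally per token
tf = {w: c / len(tokens) for w, c in raw_counts.items()}  # normalized by text length
print(raw_counts["the"], tf["the"])
```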
### Further Resources:
Check out the full range of techniques included in Python's nltk package here: http://www.nltk.org/book/
# 1. Text in Python
First, a quote about what digital humanities means, from digital humanist Kathleen Fitzpatrick. Source: "On Scholarly Communication and the Digital Humanities: An Interview with Kathleen Fitzpatrick", *In the Library with the Lead Pipe*
```
print("For me it has to do with the work that gets done at the crossroads of digital media and traditional humanistic study. And that happens in two different ways. On the one hand, it's bringing the tools and techniques of digital media to bear on traditional humanistic questions; on the other, it's also bringing humanistic modes of inquiry to bear on digital media.")
# Assign the quote to a variable, so we can refer back to it later
# We get to make up the name of our variable, so let's give it a descriptive label: "sentence"
sentence = "For me it has to do with the work that gets done at the crossroads of digital media and traditional humanistic study. And that happens in two different ways. On the one hand, it's bringing the tools and techniques of digital media to bear on traditional humanistic questions; on the other, it's also bringing humanistic modes of inquiry to bear on digital media."
# Oh, also: anything on a line starting with a hashtag is called a comment,
# and is meant to clarify code for human readers. The computer ignores these lines.
# Print the contents of the variable 'sentence'
print(sentence)
```
# 2. Tokenizing Text and Counting Words
The above output is how a human would read that sentence. Next we look at the main way in which a computer "reads", or *parses*, that sentence.
The first step is typically to <i>tokenize</i> it, or to change it into a series of <i>tokens</i>. Each token roughly corresponds to either a word or punctuation mark. These smaller units are more straight-forward for the computer to handle for tasks like counting.
```
# Import the NLTK (Natural Language Tool Kit) package
import nltk
# Tokenize our sentence!
nltk.word_tokenize(sentence)
# Create new variable that contains our tokenized sentence
sentence_tokens = nltk.word_tokenize(sentence)
# Inspect our new variable
# Note the square braces at the beginning and end that indicate we are looking at a list-type object
print(sentence_tokens)
```
### Note on Tokenization
While seemingly simple, tokenization is a non-trivial task.
For example, notice how the tokenizer has handled contractions: a contracted word is divided into two separate tokens! What do you think is the motivation for this? How else might you tokenize them?
Also notice each token is either a word or punctuation mark. In practice, it is sometimes useful to remove punctuation marks and at other times to include them, depending on the situation.
In the coming days, we will see other tokenizers and have opportunities to explore their reasoning. For now, we will look at a few examples of NLP tasks that tokenization enables.
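As a rough sketch of what a tokenizer does (not NLTK's actual algorithm), a few regular-expression rules already reproduce the behaviour described above, including splitting off contraction suffixes:

```python
import re

def simple_tokenize(text):
    # "'\w+" peels contraction suffixes off first ("it's" -> "it", "'s"),
    # "\w+" matches word tokens, and "[^\w\s]" matches punctuation marks
    return re.findall(r"'\w+|\w+|[^\w\s]", text)

print(simple_tokenize("it's bringing the tools."))
```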
```
# How many tokens are in our list?
len(sentence_tokens)
# How often does each token appear in our list?
import collections
collections.Counter(sentence_tokens)
# Assign those token counts to a variable
token_frequency = collections.Counter(sentence_tokens)
# Get an ordered list of the most frequent tokens
token_frequency.most_common(10)
```
### Note on Term Frequency
Some of the most frequent words appear to summarize the sentence: in particular the words "humanistic", "digital", and "media". However, most of these terms just add noise to the summary: "the", "it", "to", ".", etc.
There are many strategies for identifying the most important words in a text, and we will cover the most popular ones in the next week. Today, we will look at two of them. In the first, we will simply remove the noisy tokens. In the second, we will identify important words using their parts of speech.
# 3. Pre-Processing: Lower Case, Remove Stop Words and Punctuation
Typically, a text goes through a number of pre-processing steps before the actual analysis begins. We have already seen the tokenization step. Pre-processing also typically includes transforming tokens to lower case and removing stop words and punctuation marks.
Again, pre-processing is a non-trivial process that can have large impacts on the analysis that follows. For instance, what will be the most common token in our example sentence, once we set all tokens to lower case?
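To see the effect concretely, here is a made-up token list where lowercasing merges counts that were previously split across capitalizations:

```python
from collections import Counter

tokens = ["The", "the", "And", "and", "the"]
before = Counter(tokens)                    # counts split across cases
after = Counter(t.lower() for t in tokens)  # lowercasing merges them
print(before.most_common(2))
print(after.most_common(2))
```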
### Lower Case
```
# Let's revisit our original sentence
sentence
# And now transform it to lower case, all at once
sentence.lower()
# Okay, let's set our list of tokens to lower case, one at a time
# The syntax of the line below is tricky. Don't worry about it for now.
# We'll spend plenty of time on it tomorrow!
lower_case_tokens = [ word.lower() for word in sentence_tokens ]
# Inspect
print(lower_case_tokens)
```
### Stop Words
```
# Import the stopwords list
from nltk.corpus import stopwords
# Take a look at what stop words are included
print(stopwords.words('english'))
# Try another language
print(stopwords.words('spanish'))
# Create a new variable that contains the sentence tokens but NOT the stopwords
tokens_nostops = [ word for word in lower_case_tokens if word not in stopwords.words('english') ]
# Inspect
print(tokens_nostops)
```
### Punctuation
```
# Import a list of punctuation marks
import string
# Inspect
string.punctuation
# Remove punctuation marks from token list
tokens_clean = [word for word in tokens_nostops if word not in string.punctuation]
# See what's left
print(tokens_clean)
```
### Re-count the Most Frequent Words
```
# Count the new token list
word_frequency_clean = collections.Counter(tokens_clean)
# Most common words
word_frequency_clean.most_common(10)
```
Better! The ten most frequent words now give us a pretty good sense of the substance of this sentence. But we still have problems. For example, the token "'s" sneaked in there. One solution is to keep adding stop words to our list, but this could go on forever and is not a good solution when processing lots of text.
There's another way of identifying content words, and it involves identifying the part of speech of each word.
# 4. Part-of-Speech Tagging
You may have noticed that stop words are typically short function words, like conjunctions and prepositions. Intuitively, if we could identify the part of speech of a word, we would have another way of identifying which words contribute to the text's subject matter. NLTK can do that too!
NLTK has a <i>POS Tagger</i>, which identifies and labels the part-of-speech (POS) for every token in a text. The particular labels that NLTK uses come from the Penn Treebank corpus, a major resource from corpus linguistics.
You can find a list of all Penn POS tags here: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
Note that, from this point on, the code is going to get a little more complex. Don't worry about the particularities of each line. For now, we will focus on the NLP tasks themselves and the textual patterns they identify.
```
# Let's revisit our original list of tokens
print(sentence_tokens)
# Use the NLTK POS tagger
nltk.pos_tag(sentence_tokens)
# Assign POS-tagged list to a variable
tagged_tokens = nltk.pos_tag(sentence_tokens)
```
### Most Frequent POS Tags
```
# We'll tread lightly here, and just say that we're counting POS tags
tag_frequency = collections.Counter( [ tag for (word, tag) in tagged_tokens ])
# POS Tags sorted by frequency
tag_frequency.most_common()
```
### Now it's getting interesting
The "IN" tag refers to prepositions, so it's no surprise that it should be the most common. However, we can see at a glance now that the sentence contains a lot of adjectives, "JJ". This feels like it tells us something about the rhetorical style or structure of the sentence: certain qualifiers seem to be important to the meaning of the sentence.
Let's dig in to see what those adjectives are.
```
# Let's filter our list, so it only keeps adjectives
adjectives = [word for word,pos in tagged_tokens if pos == 'JJ' or pos=='JJR' or pos=='JJS']
# Inspect
print( adjectives )
# Tally the frequency of each adjective
adj_frequency = collections.Counter(adjectives)
# Most frequent adjectives
adj_frequency.most_common(5)
# Let's do the same for nouns.
nouns = [word for word,pos in tagged_tokens if pos=='NN' or pos=='NNS']
# Inspect
print(nouns)
# Tally the frequency of the nouns
noun_frequency = collections.Counter(nouns)
# Most Frequent Nouns
print(noun_frequency.most_common(5))
```
And now verbs.
```
# And we'll do the verbs in one fell swoop
verbs = [word for word,pos in tagged_tokens if pos == 'VB' or pos=='VBD' or pos=='VBG' or pos=='VBN' or pos=='VBP' or pos=='VBZ']
verb_frequency = collections.Counter(verbs)
print(verb_frequency.most_common(5))
# If we bring all of this together we get a pretty good summary of the sentence
print(adj_frequency.most_common(3))
print(noun_frequency.most_common(3))
print(verb_frequency.most_common(3))
```
# 5. Demonstration: Guess the Novel
To illustrate this process on a slightly larger scale, we will do exactly what we did above, but on two unknown novels. Your challenge: guess the novels from the most frequent words.
We will do this in one chunk of code, so another challenge for you during breaks or the next few weeks is to see how much of the following code you can follow (or, in computer science terms, how much of the code you can parse). If the answer is none, not to worry! Tomorrow we will take a step back and work on the nitty gritty of programming.
```
# Read the two text files from your hard drive
# Assign first mystery text to variable 'text1' and second to 'text2'
text1 = open('text1.txt').read()
text2 = open('text2.txt').read()
# Tokenize both texts
text1_tokens = nltk.word_tokenize(text1)
text2_tokens = nltk.word_tokenize(text2)
# Set to lower case
text1_tokens_lc = [word.lower() for word in text1_tokens]
text2_tokens_lc = [word.lower() for word in text2_tokens]
# Remove stopwords
text1_tokens_nostops = [word for word in text1_tokens_lc if word not in stopwords.words('english')]
text2_tokens_nostops = [word for word in text2_tokens_lc if word not in stopwords.words('english')]
# Remove punctuation using the list of punctuation from the string package
text1_tokens_clean = [word for word in text1_tokens_nostops if word not in string.punctuation]
text2_tokens_clean = [word for word in text2_tokens_nostops if word not in string.punctuation]
# Frequency distribution
text1_word_frequency = collections.Counter(text1_tokens_clean)
text2_word_frequency = collections.Counter(text2_tokens_clean)
# Guess the novel!
text1_word_frequency.most_common(20)
# Guess the novel!
text2_word_frequency.most_common(20)
```
Computational Text Analysis is not simply the processing of texts through computers, but involves reflection on the part of human interpreters. How were you able to tell what each novel was? Do you notice any differences between each novel's list of frequent words?
The patterns that we notice in our computational model often enrich and extend our research questions -- sometimes in surprising ways! What next steps would you take to investigate these novels?
# 6. Concordances and Similar Words using NLTK
Tallying word frequencies gives us a bird's-eye-view of our text but we lose one important aspect: context. As the dictum goes: "You shall know a word by the company it keeps."
Concordances show us every occurrence of a given word in a text, inside a window of context words that appear before and after it. This is helpful for close reading to get at a word's meaning by seeing how it is used. We can also use the logic of shared context in order to identify which words have similar meanings. To illustrate this, we can compare the way the word "monstrous" is used in our two novels.
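Before using NLTK's built-in concordancer below, it may help to see how little machinery the idea requires. This hypothetical helper (not NLTK's implementation) collects each hit inside a fixed window of context tokens:

```python
def concordance(tokens, word, window=3):
    # collect every occurrence of `word` with `window` tokens of context on each side
    hits = []
    for i, tok in enumerate(tokens):
        if tok.lower() == word.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append(f"{left} [{tok}] {right}")
    return hits

print(concordance("a very monstrous whale indeed".split(), "monstrous", window=1))
```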
### Concordance
```
# Transform our raw token lists in NLTK Text-objects
text1_nltk = nltk.Text(text1_tokens)
text2_nltk = nltk.Text(text2_tokens)
# Really they're no different from the raw text, but they have additional useful functions
print(text1_nltk)
print(text2_nltk)
# Like a concordancer!
text1_nltk.concordance("monstrous")
text2_nltk.concordance("monstrous")
```
### Contextual Similarity
```
# Get words that appear in a similar context to "monstrous"
text1_nltk.similar("monstrous")
text2_nltk.similar("monstrous")
```
# Closing Reflection
The methods we have looked at today are the bread-and-butter of NLP. Before moving on, take a moment to reflect on the model of textuality that these rely on. Human language texts are split into tokens. Most often, these are transformed into simple tallies: 'whale' appears 1083 times; "dashwood" appears 249 times. This does not resemble human reading at all! Yet in spite of that, such a list of frequent terms makes a useful summary of the text.
A few questions in closing:
* Can we imagine other ways of representing the text to the computer?
* Why do you think term frequencies are uncannily descriptive?
* What is lost from the text when we rely on frequency information alone?
* Can context similarity recover some of what was lost?
* What kinds of research questions can be answered using these techniques?
* What kinds can't?
```
%matplotlib inline
```
Training a Classifier
=====================
So far we have seen how to define neural networks, compute the loss and make
updates to the weights of the network.
Now you might be thinking,
What about data?
------------------------
Generally, when you have to deal with image, text, audio or video data, you can
use standard Python packages that load the data into a NumPy array,
and then convert that array into a ``torch.*Tensor``.
- For images, packages such as Pillow and OpenCV are useful.
- For audio, packages such as SciPy and LibROSA are useful.
- For text, either raw Python- or Cython-based loading, or NLTK and SpaCy, work
  well.
Specifically for vision, we have created a package called ``torchvision``,
which includes data loaders for common datasets such as Imagenet, CIFAR10 and
MNIST, as well as data transformers for images, viz., ``torchvision.datasets``
and ``torch.utils.data.DataLoader``.
This provides a huge convenience and avoids writing the same boilerplate code
over and over again.
For this tutorial, we will use the CIFAR10 dataset. It has the following
classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog',
'horse', 'ship', 'truck'.
The images in CIFAR10 are of size 3x32x32, i.e. 3-channel color images of
32x32 pixels.
.. figure:: /_static/img/cifar10.png
   :alt: cifar10
   cifar10
Training an image classifier
----------------------------
We will do the following steps in order:
1. Load and normalize the CIFAR10 training and test datasets using
   ``torchvision``.
2. Define a Convolutional Neural Network.
3. Define a loss function.
4. Train the network on the training data.
5. Test the network on the test data.
1. Loading and normalizing CIFAR10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using ``torchvision``, it is extremely easy to load CIFAR10.
```
import torch
import torchvision
import torchvision.transforms as transforms
```
The output of the torchvision datasets are PILImage images of range [0, 1].
We transform them to Tensors of normalized range [-1, 1].
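The normalization used below is just the affine map (x - mean) / std applied per channel; with mean = std = 0.5 it sends [0, 1] onto [-1, 1]. A plain-Python check, independent of torchvision:

```python
def normalize(x, mean=0.5, std=0.5):
    # transforms.Normalize applies (x - mean) / std element-wise
    return (x - mean) / std

print(normalize(0.0), normalize(0.5), normalize(1.0))
```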
```
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
Let us show some of the training images, for fun.
```
import matplotlib.pyplot as plt
import numpy as np
# function to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
2. Define a Convolutional Neural Network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Copy the neural network from the earlier Neural Networks section and modify it
to take 3-channel images (instead of the 1-channel images it was originally
defined for).
```
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
```
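The `16 * 5 * 5` input size of `fc1` can be checked by tracking the spatial size through the layers with the standard output-size formula (a sketch assuming stride 1 and no padding for the convolutions, and 2x2 max-pooling, as defined above):

```python
def conv_out(size, kernel, stride=1, padding=0):
    # standard convolution/pooling output-size formula
    return (size + 2 * padding - kernel) // stride + 1

s = 32                    # CIFAR10 images are 32x32
s = conv_out(s, 5) // 2   # conv1 (5x5) -> 28, then 2x2 max-pool -> 14
s = conv_out(s, 5) // 2   # conv2 (5x5) -> 10, then 2x2 max-pool -> 5
print(s)                  # spatial size entering fc1, hence 16 * 5 * 5 inputs
```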
3. Define a loss function and optimizer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Let's use a classification Cross-Entropy loss and SGD with momentum.
```
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
4. Train the network
^^^^^^^^^^^^^^^^^^^^
This is when things start to get interesting.
We simply have to loop over our data iterator, feed the inputs to the network,
and optimize.
```
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
print('Finished Training')
```
5. Test the network on the test data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We have trained the network for 2 passes over the training dataset, but we need
to check whether the network has learnt anything at all.
We will check this by predicting the class label that the neural network
outputs and comparing it against the ground-truth. If the prediction is
correct, we add the sample to the list of correct predictions.
First, let us display a few images from the test set.
```
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
Okay, now let us see what the neural network thinks these examples above are:
```
outputs = net(images)
```
The outputs are energies for the 10 classes. The higher the energy for a class,
the more the network thinks that the image is of that particular class.
So, let's get the index of the class with the highest energy:
```
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
```
The results look pretty good.
Let us look at how the network performs on the whole dataset.
```
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
```
That looks much better than chance, which is 10% accuracy (randomly picking one
of the 10 classes). It seems like the network learnt something.
Let us look at which classes it classified well, and which it did not:
```
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
```
Okay, so what next?
How do we run these neural networks on the GPU?
Training on GPU
----------------
Just like you transfer a Tensor onto the GPU, you can transfer the neural net
onto the GPU.
First, let's define our device as the first CUDA device, if CUDA is available:
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
```
For the rest of this section we will assume that `device` is a CUDA device.
Then these methods will recursively go over all modules and convert their
parameters and buffers to CUDA tensors:
.. code:: python
net.to(device)
Remember that you will have to send the inputs and targets to the GPU at every
step as well:
.. code:: python
inputs, labels = inputs.to(device), labels.to(device)
Why is there no massive speedup compared to CPU? Because the network is really
small.
**Exercise:** Try increasing the width of your network and see what kind of
speedup you get.
(The 2nd argument of the first ``nn.Conv2d`` and the 1st argument of the second
``nn.Conv2d`` need to be the same number.)
**Goals achieved**:
- Understanding PyTorch's Tensor library and neural networks at a high level.
- Train a small neural network to classify images.
Training on multiple GPUs
-------------------------
If you want to see even more speedup using all of your GPUs, please check out
:doc:`data_parallel_tutorial`.
Where do I go next?
-------------------
- :doc:`Train neural nets to play video games </intermediate/reinforcement_q_learning>`
- `Train a state-of-the-art ResNet network on imagenet`_
- `Train a face generator using Generative Adversarial Networks`_
- `Train a word-level language model using Recurrent LSTM networks`_
- `More examples`_
- `More tutorials`_
- `Discuss PyTorch on the Forums`_
- `Chat with other users on Slack`_

# Asia University Introductory Programming Course Materials (AUP110 - Fundamentals of Programming)
[](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)
# Week 3: Python Variables and Operations

## Topic 1 - Reading an Integer or a Float
In Python, the input() function reads input data.
Each call to input() returns one string.
To turn the input into an integer or a float, use the int() or float() function.
### Step 1: Read an integer
```
instr = input('Please input your score:')
score = int(instr)    # convert to integer
print(score)          # print the variable's value
print(type(score))    # print the variable's type
```
### Step 2: Read a float
```
instr = input('Please input your score:')
score = float(instr)  # convert to float
print(score)          # print the variable's value
print(type(score))    # print the variable's type
```
* Python type-conversion functions
 - int()    # convert to integer
 - float()  # convert to float
 - str()    # convert to string
 - variable = int(string_variable)
 - variable = str(numeric_variable)
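A quick self-contained check of the conversion functions listed above:

```python
score = int("87")      # string -> integer
ratio = float("3.5")   # string -> float
text = str(100)        # number -> string
print(score, ratio, text)
print(type(score), type(ratio), type(text))
```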
## Topic 2 - Arithmetic Operators and Expressions
Arithmetic Operators:
```
+ - * /  # addition, subtraction, multiplication, division
**       # exponentiation
//       # floor division (integer quotient)
%        # modulo (remainder)
```
### Step 1: Addition, subtraction, multiplication and division
```
print(1 + 2)
print(3 - 4)
print(5 * 6)
print(12 / 5)
print(1.3 + 2)
print(3 - 4.2)
print(5 * 6.5)
print(10.0 / 5)
```
### Step 2: Quotient and remainder
```
print(23 / 5)
print(23 // 5)
print(23 % 5)
```
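Floor division and the remainder always fit together through the identity a == (a // b) * b + a % b; the built-in `divmod` returns both at once:

```python
a, b = 23, 5
q, r = divmod(a, b)   # quotient and remainder in one call
print(q, r)
assert a == q * b + r  # the defining identity
```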
### Step 3: Exponentiation
```
print(23 * 2)
print(23 ** 2)
```
## Topic 3: Operator Precedence
### Step 1: Multiplication and division before addition and subtraction; parentheses first
```
print(3 * 4 + 5)   # * and / bind tighter than + and -
print(3 * (4 + 5)) # parentheses () come first
```
### Step 2: Exponentiation binds tighter than the other arithmetic operators
```
print(1 * 2 ** 3)  # ** binds tighter than * and /
print(1 ** 2 + 3)  # ** binds tighter than + and -
```
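The precedence rules above can be made explicit with parentheses; note also that unary minus binds more loosely than `**`:

```python
print(3 * 4 + 5, (3 * 4) + 5)    # same value: * before +
print(1 * 2 ** 3, 1 * (2 ** 3))  # same value: ** before *
print(-3 ** 2, -(3 ** 2))        # both -9: unary minus applies after **
```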
## Topic 4 - Using the math Standard Library
### Step 1: Computing pi and the sin($\pi/3$) function
```
pi = 3.14159
sin(pi/3)  # NameError: sin is not defined yet -- we need the math module (next step)
```
### Step 2: Using pi and the sin function from the math standard library
```
import math
print(math.pi)
print(math.sin(math.pi/3))
```
### Step 3: Importing with an alias using as
```
import math as m
print(m.pi)
print(m.sin(m.pi/3))
```
### Step 4: Converting between degrees and radians with the math standard library
```
import math as m
print(m.degrees(m.pi/3))
print(m.radians(60))
```
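A round-trip check ties Steps 2-4 together: converting degrees to radians and back is the identity, and sin(π/3) equals √3/2:

```python
import math

deg = 60
rad = math.radians(deg)
print(rad, math.sin(rad))
# degrees -> radians -> degrees returns the original angle
assert math.isclose(math.degrees(rad), deg)
# sin(pi/3) is sqrt(3)/2
assert math.isclose(math.sin(rad), math.sqrt(3) / 2)
```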
## Topic 5: Parameters of the print() Function
### Step 1: Hello World with additional parameters
* sep="..." separates the printed values; end="" sets what is printed at the end
 - sep: string inserted between values, default a space.
 - end: string appended after the last value, default a newline.
```
print('Hello World!') #'Hello World!' is the same as "Hello World!"
help(print)  # comments are not executed
print('Hello '+'World!')
print("Hello","World", sep="+")
print("Hello"); print("World!")
print("Hello", end=' ');print("World")
print("Hello\nWorld!")
print("Hello","World!", sep="\n")
```
### Step 2: Escape Sequences
- \newline Ignored
- \\ Backslash (\)
- \' Single quote (')
- \" Double quote (")
- \a ASCII Bell (BEL)
- \b ASCII Backspace (BS)
- \n ASCII Linefeed (LF)
- \r ASCII Carriage Return (CR)
- \t ASCII Horizontal Tab (TAB)
- \ooo ASCII character with octal value ooo
- \xhh... ASCII character with hex value hh...
```
txt = "We are the so-called \"Vikings\" from the north."
print(txt)
```
## Topic 6: Multi-line Strings
### Step 1: Using a trailing backslash \ to build a long string
```
iPhone11='iPhone 11是由蘋果公司設計和銷售的智能手機,為第13代iPhone系列智能手機之一,亦是iPhone XR的後繼機種。\
其在2019年9月10日於蘋果園區史蒂夫·喬布斯劇院由CEO蒂姆·庫克隨iPhone 11 Pro及iPhone 11 Pro Max一起發佈,\
並於2019年9月20日在世界大部分地區正式發售。其採用類似iPhone XR的玻璃配鋁金屬設計;\
具有6.1英吋Liquid Retina HD顯示器,配有Face ID;並採用由蘋果自家設計的A13仿生晶片,\
帶有第三代神經網絡引擎。機器能夠防濺、耐水及防塵,在最深2米的水下停留時間最長可達30分鐘。'
print(iPhone11)
```
### Step 2: Using triple quotes to build a long string: ''' ... ''' or """ ... """
```
iPhone11='''
iPhone 11是由蘋果公司設計和銷售的智能手機,為第13代iPhone系列智能手機之一,亦是iPhone XR的後繼機種。
其在2019年9月10日於蘋果園區史蒂夫·喬布斯劇院由CEO蒂姆·庫克隨iPhone 11 Pro及iPhone 11 Pro Max一起發佈,
並於2019年9月20日在世界大部分地區正式發售。其採用類似iPhone XR的玻璃配鋁金屬設計;
具有6.1英吋Liquid Retina HD顯示器,配有Face ID;並採用由蘋果自家設計的A13仿生晶片,
帶有第三代神經網絡引擎。機器能夠防濺、耐水及防塵,在最深2米的水下停留時間最長可達30分鐘。'''
print(iPhone11)
iPhone11="""
iPhone 11是由蘋果公司設計和銷售的智能手機,為第13代iPhone系列智能手機之一,亦是iPhone XR的後繼機種。
其在2019年9月10日於蘋果園區史蒂夫·喬布斯劇院由CEO蒂姆·庫克隨iPhone 11 Pro及iPhone 11 Pro Max一起發佈,
並於2019年9月20日在世界大部分地區正式發售。其採用類似iPhone XR的玻璃配鋁金屬設計;
具有6.1英吋Liquid Retina HD顯示器,配有Face ID;並採用由蘋果自家設計的A13仿生晶片,
帶有第三代神經網絡引擎。機器能夠防濺、耐水及防塵,在最深2米的水下停留時間最長可達30分鐘。"""
print(iPhone11)
```
## Topic 7: Source-Code Character Encoding
By default, Python source files are treated as encoded in UTF-8. In that encoding, characters of most languages in the world can be used simultaneously in string literals, identifiers and comments (although the standard library uses only ASCII characters for identifiers, a convention that any portable code should follow). To display all these characters properly, your editor must recognize that the file is UTF-8, and it must use a font that supports all the characters in the file.
To declare an encoding other than the default one, a special comment line should be added as the first line of the file. The syntax is as follows:
```
# -*- coding: encoding -*-
```
where encoding is one of the valid codecs supported by Python.
For example, to declare that Windows-1252 encoding is to be used, the first line of your source code file should be:
```
# -*- coding: cp1252 -*-
```
One exception to the first-line rule is when the source code starts with a UNIX "shebang" line. In this case, the encoding declaration should be added as the second line of the file. For example:
```
#!/usr/bin/env python3
# -*- coding: cp1252 -*-
```
## Topic 8: Markdown Syntax
### Step 1: Headings
A heading starts with one to several # signs followed by a space, for example:
```
# Heading 1
## Heading 2
### Heading 3
```
### Step 2: Horizontal rules
A horizontal rule is three or more * or - characters, which may include spaces.
```
***
*****
- - -
---
```
### Step 3: Bold and italic
Bold and italic are usually used to emphasize key text. The syntax is:
```
**bold**
*italic*
***bold italic***
~strikethrough~
```
### Step 4: Lists
Unordered lists use (*), (+) or (-) as markers:
```
* Red
* Green
+ Red
+ Blue
- Green
- Blue
```
Ordered lists use numbers followed by a period:
```
1. Bird
1. McHale
1. Parish
```
# Initial Regression Analysis of House Prices
In this notebook we are interested in building a predictive regression model for house prices. Data describes the sales of individual residential properties in Ames, Iowa from 2006 to 2010. We will perform a very basic initial analysis and fit a linear predictive model to the observed data.
```
# note that the latest dev version of sklearn is used (because of CategoricalEncoder)
# install it with pip install git+https://github.com/scikit-learn/scikit-learn.git
import warnings
import matplotlib
import numpy as np
import pandas as pd
from plotnine import *
from scipy.stats import norm, uniform, randint
from scipy.stats.mstats import mquantiles
from sklearn.dummy import DummyRegressor
from sklearn.exceptions import DataConversionWarning
from sklearn.feature_selection import VarianceThreshold, f_regression, SelectKBest
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from sklearn.metrics import mean_squared_log_error, make_scorer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import CategoricalEncoder, StandardScaler, Imputer, FunctionTransformer, PolynomialFeatures
from sklearn.utils import resample
from sklearn_pandas import DataFrameMapper, CategoricalImputer
%matplotlib inline
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
np.random.seed(0)
# load the data
X_tr = pd.read_csv('train.csv', index_col='Id')
X_te = pd.read_csv('test.csv', index_col='Id')
response_col_name = X_tr.columns[-1]
y_tr = X_tr[response_col_name]
X_tr.drop(response_col_name, axis=1, inplace=True)
print("train data dimensions: {}".format(X_tr.shape))
print("response column name and dimensions: {}, {}".format(response_col_name, y_tr.shape))
print("test data dimensions: {}".format(X_te.shape))
X_tr.head()
```
## 1. Preprocessing
As a first step we will put the observed data into the form that is suitable for modelling. All of the transformations are going to be part of the learning process in order to avoid getting overconfident results (i.e. we'll use sklearn's Pipeline objects). First let's specify (based on *data_description.txt*) which features will be treated as categorical, ordinal, poisson and continuous. Unfortunately we have to do this manually.
```
categorical_cols = [
'MSSubClass',
'MSZoning',
'Street',
'Alley',
'LotShape',
'LandContour',
'Utilities',
'LotConfig',
'LandSlope',
'Neighborhood',
'Condition1',
'Condition2',
'BldgType',
'HouseStyle',
'RoofStyle',
'RoofMatl',
'Exterior1st',
'Exterior2nd',
'MasVnrType',
'Foundation',
'Heating',
'CentralAir',
'Electrical',
'GarageType',
'PavedDrive',
'MiscFeature',
'SaleType',
'SaleCondition']
ordinal_cols = [
'OverallQual',
'OverallCond',
'ExterQual', # not encoded
'ExterCond', # not encoded
'BsmtQual', # not encoded
'BsmtCond', # not encoded
'BsmtExposure', # not encoded
'BsmtFinType1', # not encoded
'BsmtFinType2', # not encoded
'HeatingQC', # not encoded
'KitchenQual', # not encoded
'Functional', # not encoded
'FireplaceQu', # not encoded
'GarageFinish', # not encoded
'GarageQual', # not encoded
'GarageCond', # not encoded
'PoolQC', # not encoded
'Fence', # not encoded
'MoSold']
poisson_cols = [
'BsmtFullBath',
'BsmtHalfBath',
'FullBath',
'HalfBath',
'BedroomAbvGr',
'KitchenAbvGr',
'TotRmsAbvGrd',
'Fireplaces',
'GarageCars']
continuous_cols = [
'LotFrontage',
'LotArea',
'YearBuilt',
'YearRemodAdd',
'MasVnrArea',
'BsmtFinSF1',
'BsmtFinSF2',
'BsmtUnfSF',
'TotalBsmtSF',
'1stFlrSF',
'2ndFlrSF',
'LowQualFinSF',
'GrLivArea',
'GarageYrBlt',
'GarageArea',
'WoodDeckSF',
'OpenPorchSF',
'EnclosedPorch',
'3SsnPorch',
'ScreenPorch',
'PoolArea',
'MiscVal',
'YrSold']
len(categorical_cols + ordinal_cols + poisson_cols + continuous_cols)
```
### 1.1 Missing Values
Let's take a look at the fraction of missing values for each feature in the training and test sets.
```
def get_fraction_nan(df):
fraction_nan = df.isnull().sum() / len(df)
return fraction_nan[fraction_nan != 0].sort_values(ascending=False)
get_fraction_nan(X_tr)
get_fraction_nan(X_te)
```
At first glance it seems that we are dealing with quite a lot of missing values, but as it turns out some of the NaNs actually provide meaningful info (see *data_description.txt*). E.g. NaN values in the *PoolQC* (pool quality) feature mean *no pool*, rather than *we don't know*.
We should be careful with true missing data, however; e.g. we shouldn't impute the mean value for *GarageYrBlt* if there is no garage. Nevertheless, we won't spend time on this as part of the first solution, especially as there aren't a lot of problematic examples (e.g. see the fractions for the *GarageX* features above).
Let's fix the meaningful NaNs first.
```
not_nan_cols = [
'PoolQC',
'MiscFeature',
'Alley',
'Fence',
'FireplaceQu',
'GarageCond',
'GarageQual',
'GarageFinish',
'GarageType',
'BsmtCond',
'BsmtExposure',
'BsmtQual',
'BsmtFinType1',
'BsmtFinType2']
X_tr[not_nan_cols] = X_tr[not_nan_cols].fillna('No')
X_te[not_nan_cols] = X_te[not_nan_cols].fillna('No')
get_fraction_nan(X_tr)
get_fraction_nan(X_te)
def make_imputer():
# most frequent imputer
impute_most_frequent_cols = categorical_cols + ordinal_cols + poisson_cols
most_frequent_imputer = [([col], CategoricalImputer()) for col in impute_most_frequent_cols]
# mean imputer
mean_imputer = [([col], Imputer(strategy='mean')) for col in continuous_cols]
return DataFrameMapper(most_frequent_imputer + mean_imputer, df_out=True)
```
### 1.2 Encoding and Normalization
We will use dummy variables for categorical features. Encoding ordinal variables, on the other hand, is a more delicate matter. If we use ordinal encoding (e.g. 1, 2, ..., n), a single weight cannot capture an n-way choice. If we use one-hot encoding we discard the information about the relative ordering of the choices. Again, for the sake of simplicity in the first solution we will use dummy variables for ordinal features. We will use zero-mean, unit-variance normalization for continuous features. Count data should also be standardized, but let's take a look at the scales of all Poisson features first.
```
X_tr[poisson_cols].describe().T[['min', 'max']]
```
For now we will normalize count data in the same way as continuous data, although we should come back to this later.
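If we do revisit it, one common option is a `log1p` transform before standardization, which tames the long right tail typical of count data. A minimal sketch (not part of the original pipeline; the example values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.pipeline import make_pipeline

# log1p compresses the long right tail of count data, after which
# zero-mean/unit-variance scaling behaves more sensibly
count_scaler = make_pipeline(
    FunctionTransformer(np.log1p, validate=True),
    StandardScaler(),
)

X_counts = np.array([[0.0], [1.0], [3.0], [50.0]])
X_scaled = count_scaler.fit_transform(X_counts)
```

Such a step could be dropped into `make_normalizer` for the `poisson_cols` only, and cross-validated against plain standardization.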
```
def make_encoder():
# one-hot encoding
encoder_params = {'encoding': 'onehot-dense', 'handle_unknown': 'ignore'}
categorical_encoder = [([col], CategoricalEncoder(**encoder_params)) for col in categorical_cols + ordinal_cols]
return DataFrameMapper(categorical_encoder, default=None, df_out=True)
def make_normalizer(df_out):
# zero mean, unit variance
unit_normal_standardizer = [([col], StandardScaler()) for col in continuous_cols + poisson_cols]
return DataFrameMapper(unit_normal_standardizer, default=None, df_out=df_out)
# a function that initializes a new pipeline (unfitted objects)
def init_pipeline(*transformers, df_out=False):
imputer = make_imputer()
cat_encoder = make_encoder()
normalizer = make_normalizer(df_out)
return make_pipeline(imputer, cat_encoder, normalizer, *transformers)
# test the pipeline on train and test data
preprocess = init_pipeline(df_out=True)
X_tr_prep = preprocess.fit_transform(X_tr)
print("preprocessed train data dimensions: {}".format(X_tr_prep.shape))
X_tr_prep.head()
X_te_prep = preprocess.transform(X_te)
print("preprocessed test data dimensions: {}".format(X_te_prep.shape))
X_te_prep.head()
# function to map transformed features back to human-readable names
# for easier interpretation
features_names_dict = {x[0][0]: x[1].categories_[0] for x in preprocess.steps[1][1].features}
def feature_name(feature_name):
if '_' not in feature_name:
return feature_name
name, index = feature_name.split('_')
return "{:s}_{}".format(name, features_names_dict[name][int(index)])
```
## 2. Exploratory Analysis
Let's perform some very basic EDA. We'll start with univariate feature analysis (estimating the linear relationship by calculating each feature's F score). Additionally, we will bootstrap standard errors and empirically estimate the score that could easily be obtained by chance (via a permutation test).
```
# permutation test
f_scores = []
y_tr_permutated = np.copy(y_tr)
for _ in range(1000):
np.random.shuffle(y_tr_permutated)
f_score = f_regression(X_tr_prep, y_tr_permutated)
f_scores += f_score[0].tolist()
f_score_chance = max(f_scores)
f_score_chance
# bootstrapping
f_scores = np.empty((0, X_tr_prep.shape[1]))
for _ in range(1000):
X_sample, y_sample = resample(X_tr_prep, y_tr, replace=True)
# remove warnings caused by constant features
with warnings.catch_warnings():
warnings.simplefilter('ignore')
f_score = f_regression(X_sample, y_sample)
f_scores = np.vstack((f_scores, f_score[0]))
# 95 percent confidence intervals
f_scores_se = mquantiles(f_scores, [.025, 0.5, .975], axis=0)
df = pd.DataFrame({
'low': f_scores_se[0, :],
'mean': f_scores_se[1, :],
'high': f_scores_se[2, :],
'feature': [feature_name(x) for x in X_tr_prep.columns],
'feature_orig_name': X_tr_prep.columns})
# drop features that had zero variance
df = df.dropna()
ordered_cat = df.sort_values(by='mean', ascending=False)['feature']
df['feature'] = pd.Categorical(df['feature'], categories=ordered_cat, ordered=True)
df = df.sort_values(by='mean', ascending=False)
```
Now we can take a look at the number of features that are less correlated with the observed variable than could be expected by chance. Additionally, we will plot mean F scores along with standard errors for the 20 best features. When modelling, we could take advantage of all informative features (although we should keep in mind that uninformative features could still be useful in combination with other features).
```
df_worst = df[df['mean'] < f_score_chance]
print("number of features correlated less than by chance: {:d}".format(len(df_worst)))
df_top = df[df['mean'] > f_score_chance]
df_top_20 = df_top.head(20)
df_top_20['feature'].cat.remove_unused_categories(inplace=True)
# visualize some top features
(ggplot(df_top_20)
+ geom_errorbar(
aes(x='feature',
ymin='low',
ymax='high',
color='feature',
size=1),
show_legend=False)
+ geom_point(aes(x='feature', y='mean'), size=5, fill='white')
+ labs(x='feature', y='F score')
+ theme(axis_text_x=element_text(angle=90, hjust=1), figure_size=(12, 7)))
```
## 3. Modelling
### 3.1 Linear Relationships
We will use linear regression as our first model. Let's first check if any of the assumptions we make about the underlying data (by selecting this particular parametric model) are violated. For now we won't care how much we (over)fit to the data, we are only using certain diagnostics to check whether our model is able to capture the given information. There are many assumptions that should be checked, but for now we will only inspect the normality of residuals.
```
lin_reg = LinearRegression()
pipeline = init_pipeline(lin_reg)
pipeline.fit(X_tr, y_tr)
y_tr_pred = pipeline.predict(X_tr)
residuals = y_tr - y_tr_pred
residuals = (residuals - residuals.mean()) / residuals.std()
qs = np.arange(0, 1.01, .01)
theoretical_quantiles = norm.ppf(qs)
sample_quantiles = mquantiles(residuals, qs)
df = pd.DataFrame({'x': theoretical_quantiles, 'y': sample_quantiles})
(ggplot(df)
+ geom_point(aes(x='x', y='y'), color='blue')
+ geom_line(aes(x='x', y='x'), color='red')
+ labs(x='theoretical normal quantile', y='sample quantile')
+ theme(figure_size=(9, 5)))
```
It looks like the residuals follow a distribution with slightly thinner tails (compared to a Gaussian distribution). We conclude that the residuals are sufficiently normal for our purposes.
Let's search the hyper-parameter space of three different linear models (i.e. choices of priors: L1, L2, and elastic net regularization). Before that we will transform the response variable into log space, because prices are inherently positive and we don't want to predict negative values. Alternatively, we could use a GLM with a logarithmic link.
```
# log-space transformation
y_tr_log = y_tr.apply(np.log)
# root mean squared log error in the original space
def rmsle(y, y_pred):
y, y_pred = np.exp(y), np.exp(y_pred)
return np.sqrt(mean_squared_log_error(y, y_pred))
rmsle = make_scorer(rmsle)
def random_search(*transformers, params_dist):
pipeline = init_pipeline(*transformers)
random_search = RandomizedSearchCV(
estimator=pipeline,
param_distributions=params_dist,
cv=10,
n_iter=40,
scoring=rmsle)
random_search.fit(X_tr, y_tr_log)
# report the score of the best parameter setting
best_idx = random_search.best_index_
mean_score = random_search.cv_results_['mean_test_score'][best_idx]
std_score = random_search.cv_results_['std_test_score'][best_idx]
print("root mean squared log error: {:.4f} (+/- {:.2f})".format(mean_score, 2 * std_score))
print("best params: {}".format(random_search.best_params_))
# best_estimator_ is refitted on the whole training dataset
return random_search.best_estimator_
# baseline
baseline = DummyRegressor(strategy='mean')
scores = cross_val_score(baseline, X_tr, y_tr_log, cv=10, scoring=rmsle)
print("root mean squared log error: {:.4f} (+/- {:.2f})".format(scores.mean(), 2 * scores.std()))
# L1
params_dist = {'lasso__alpha': uniform(1e-5, 1e-1)}
lin_reg_l1 = random_search(Lasso(), params_dist=params_dist)
# L2
params_dist = {'ridge__alpha': uniform(1e-5, 1e-1)}
lin_reg_l2 = random_search(Ridge(), params_dist=params_dist)
# Elastic Net
params_dist = {'elasticnet__alpha': uniform(1e-5, 1e-1), 'elasticnet__l1_ratio': uniform()}
lin_reg_l1_l2 = random_search(ElasticNet(), params_dist=params_dist)
```
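As an aside, the GLM alternative mentioned above (a logarithmic link instead of log-transforming the response) could be sketched as follows — assuming scikit-learn ≥ 0.23, where `TweedieRegressor` with `power=0` is a Normal GLM; the data here is synthetic:

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor

# power=0 -> Normal distribution; link='log' keeps predictions positive
glm = TweedieRegressor(power=0, link='log', alpha=1e-3, max_iter=1000)

# synthetic positive response, roughly log-linear in the features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 100 * np.exp(X @ np.array([0.5, -0.2, 0.1]))

glm.fit(X, y)
preds = glm.predict(X)  # exp of the linear predictor, hence always > 0
```

In the notebook this would replace the log-transform of `y_tr` rather than complement it; we would still score in the original space.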
It seems that the L2-regularized model performs best in the current scenario (in my experience this is usually the case, at least if we are mainly interested in predictive power).
Nevertheless, Lasso has a tendency to produce zero weights and could hence serve as a feature selection technique. We will inspect the L1 regression coefficients next, but solely to get an initial notion. It is important to note that we won't make any claims based on the results. Linearity doesn't imply interpretability (in fact, interpretation of regression coefficients requires careful handling). Before making any claims there are at least two things we would have to do first. First, we should inspect multicollinearity (e.g. by calculating Pearson correlation coefficients) and only include subsets of lowly correlated features. Additionally, we should obtain standard errors (by bootstrapping or using Bayesian inference). We won't perform this sort of EDA now, as our primary objective was to construct a predictive model (multicollinearity also doesn't hurt predictive performance).
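A multicollinearity check could start with a pairwise Pearson correlation matrix; a small self-contained sketch (the column names and the near-collinear pair are made up for illustration — in the notebook one would pass `X_tr_prep`):

```python
import numpy as np
import pandas as pd

# illustrative stand-in for the preprocessed design matrix
rng = np.random.default_rng(42)
base = rng.normal(size=500)
df = pd.DataFrame({
    'GrLivArea': base,
    'TotRmsAbvGrd': base + 0.1 * rng.normal(size=500),  # nearly collinear
    'YearBuilt': rng.normal(size=500),
})

corr = df.corr(method='pearson').abs()
# keep the upper triangle only, so each pair is reported once
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(mask).stack()
high_pairs = pairs[pairs > 0.9]
print(high_pairs)
```

Pairs above some threshold (0.9 here, a judgment call) would be candidates for pruning before interpreting any coefficients.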
```
l1_weights = lin_reg_l1.steps[-1][-1].coef_
df = pd.DataFrame({
'feature': X_tr_prep.columns.values,
'w': l1_weights,
'w_abs': np.abs(l1_weights)})
ordered_cat = df.sort_values(by='w_abs', ascending=False)['feature']
df['feature'] = pd.Categorical(df['feature'], categories=ordered_cat, ordered=True)
df = df.sort_values(by='w_abs', ascending=False).head(10)
df['feature'].cat.remove_unused_categories(inplace=True)
(ggplot(df)
+ geom_bar(aes(x='feature', y='w'), stat='stat_identity', fill='blue')
+ labs(x='feature', y='L1 w')
+ theme(axis_text_x=element_text(angle=90, hjust=1), figure_size=(8, 6)))
l1_l2_weights = lin_reg_l1_l2.steps[-1][-1].coef_
df = pd.DataFrame({
'feature': X_tr_prep.columns.values,
'w': l1_l2_weights,
'w_abs': np.abs(l1_l2_weights)})
ordered_cat = df.sort_values(by='w_abs', ascending=False)['feature']
df['feature'] = pd.Categorical(df['feature'], categories=ordered_cat, ordered=True)
df = df.sort_values(by='w_abs', ascending=False).head(10)
df['feature'].cat.remove_unused_categories(inplace=True)
(ggplot(df)
+ geom_bar(aes(x='feature', y='w'), stat='stat_identity', fill='blue')
+ labs(x='feature', y='Elastic Net w')
+ theme(axis_text_x=element_text(angle=90, hjust=1), figure_size=(8, 6)))
```
GrLivArea, GarageCars, YearBuilt, TotalBsmtSF, YearRemodAdd and Fireplaces were selected by our Lasso and Elastic Net models. Given that all these features came up in our (top 20) univariate analysis before, this indicates that, for some definition of informativeness, they are the most useful for predicting house prices. We should still be careful not to immediately mistake correlation for causation. To do that we should make sure that there is enough data to remove unwanted latent effects and use our domain knowledge to make the final claims.
### 3.2 Non-linear relationships, heteroskedasticity
Let's take a look at the residual vs fitted values plot next. The plot is used to detect non-linearity, unequal error variances, and outliers. Again, we are constructing it on the training dataset, hence we are only inspecting assumptions of the model, not its generalization.
```
y_tr_log_pred = lin_reg_l2.predict(X_tr)
residuals = y_tr_log - y_tr_log_pred
residuals = (residuals - residuals.mean()) / residuals.std()
df = pd.DataFrame({'x': y_tr_log_pred, 'y': residuals})
(ggplot(df)
+ geom_point(aes(x='x', y='y'), color='blue')
+ geom_smooth(aes(x='x', y='y'), method='loess', color='red')
+ labs(x='fitted log values', y='residuals')
+ theme(figure_size=(10, 7)))
```
It seems that for middle fitted log values the variance is constant and the relationship linear, with a few extreme exceptions that should be inspected further (they could very well be outliers). There is more variability at low and high values due to the small number of examples (with really low and really high house prices), and there may be some non-linear relationship that we are not capturing in these regions. Let's try to model simple non-linear relationships next, but we shouldn't expect much improvement (if any) based on the residual vs fitted values plot.
```
# do fss prior to expansion to avoid blowing up feature space
remove_constant = VarianceThreshold(threshold=0)
select_k_best = SelectKBest(score_func=f_regression)
poly_expand = PolynomialFeatures(include_bias=False)
# L1
params_dist = {
'selectkbest-1__k': randint(80, 200),
'selectkbest-2__k': randint(300, 400),
'lasso__alpha': uniform(1e-5, 1e-1)}
lin_reg_l1_poly = random_search(
remove_constant,
select_k_best,
poly_expand,
remove_constant,
select_k_best,
Lasso(max_iter=5000),
params_dist=params_dist)
# L2
params_dist = {
'selectkbest-1__k': randint(80, 200),
'selectkbest-2__k': randint(300, 400),
'ridge__alpha': uniform(1e-5, 1e-1)}
lin_reg_l2_poly = random_search(
remove_constant,
select_k_best,
poly_expand,
remove_constant,
select_k_best,
Ridge(),
params_dist=params_dist)
# Elastic Net
params_dist = {
'selectkbest-1__k': randint(80, 200),
'selectkbest-2__k': randint(300, 400),
'elasticnet__alpha': uniform(1e-5, 1e-1),
'elasticnet__l1_ratio': uniform()}
lin_reg_l1_l2_poly = random_search(
remove_constant,
select_k_best,
poly_expand,
remove_constant,
select_k_best,
ElasticNet(max_iter=5000),
params_dist=params_dist)
```
It looks like the result does not improve by polynomially expanding the feature space (this was expected based on the previous plot). We could still do a more comprehensive search for feature subset selection (leaving more features and using more aggressive regularization), but won't do so because of limited resources (running on a laptop).
## 4. Next Steps
**I would take the next steps (in the order of importance) given more time.**
- **Using domain knowledge to manually engineer features is where we can gain the most predictive power, so this would be the next step.**
- **Further analysis of the residual vs fitted values plot would allow us to understand which samples we make the biggest mistakes at and help us detect outliers.**
- Try PCA to transform the feature space.
- Turn to more complex ML algorithms and ensembling (even a simple average of current models should give some additional predictive power).
- We should go back to data cleaning: cross-validate ordinal and poisson feature encoding and different imputation schemes.
- When we construct a sufficiently performing model, we should approximate out-of-sample performance by calculating the score on the *X_te* test set (preferably only once) and report that (not the score from the cross-validation procedure).
- If we wanted to understand the relationships between predictors and the response variable even better, we should extend our univariate feature analysis. E.g. we could use Bayesian linear regression to obtain posterior regression coefficients. As stated above, multicollinearity should be taken care of first.
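For instance, the ensembling bullet could start as simply as averaging the log-space predictions of several fitted models; a minimal sketch on synthetic data (in the notebook one would average the fitted pipelines instead):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)
models = [Lasso(alpha=0.1).fit(X, y), Ridge(alpha=1.0).fit(X, y)]

# unweighted average of predictions; with log-space targets this is
# a geometric mean of the predicted prices in the original space
avg_pred = np.mean([m.predict(X) for m in models], axis=0)
```

By Jensen's inequality, the averaged prediction's squared error is at most the average of the individual models' errors, so this simple ensemble can't do worse than the worst member on average.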
# Prepare Data for Training with Remapped Categories
```
DEVELOPMENT = True
import os
import numpy as np
import pandas as pd
from scipy.stats import mode
from sklearn.preprocessing import MinMaxScaler, LabelEncoder
if DEVELOPMENT:
np.random.seed(0) # Don't be so random
```
## Prepare transaction data
```
# Retrieve indices of such customers
transactions = pd.read_csv('original_data/customer_transaction_data.csv')
transactions.sample(5)
transactions.drop('date', axis=1, inplace=True)
transactions['original_price'] = transactions.selling_price - transactions.other_discount - transactions.coupon_discount
transactions.original_price = transactions.original_price / transactions.quantity
transactions.coupon_discount = transactions.coupon_discount / transactions.quantity
transactions.drop(['other_discount', 'selling_price'], axis=1, inplace=True)
transactions['coupon_used'] = transactions.coupon_discount.apply(lambda x: 1 if x != 0 else 0)
transactions.coupon_used.value_counts(normalize=True)
```
#### Modify quantities to fit clothing store categories
```
# Scale quantity to something more realistic in a clothing store
print(f'Min quantity: {transactions.quantity.min()}')
print(f'Max quantity: {transactions.quantity.max()}')
print(f'Mean quantity: {transactions.quantity.mean()}')
print(f'Median quantity: {transactions.quantity.median()}')
print(f'Mode quantity: {transactions.quantity.mode()[0]}')
# We want min to be 1 and max to be, e.g. 23, and also for the items to be integers
scaler = MinMaxScaler((1, 23))
transactions['quantity'] = scaler.fit_transform(transactions[['quantity']])
transactions['quantity'] = transactions['quantity'].round(decimals=0).astype(np.int64)
print('\nNew values:')
print(f'Min quantity: {transactions.quantity.min()}')
print(f'Max quantity: {transactions.quantity.max()}')
print(f'Mean quantity: {transactions.quantity.mean()}')
print(f'Median quantity: {transactions.quantity.median()}')
print(f'Mode quantity: {transactions.quantity.mode()[0]}')
```
## Prepare customer demographics data
```
customers = pd.read_csv('original_data/customer_demographics.csv')
customers.sample(5)
customers.isnull().sum()
customers['family_size'] = customers.family_size.apply(lambda x: int(x.replace('+', '')))
customers['no_of_children'] = customers.no_of_children.apply(lambda x: int(x.replace('+', '')) if pd.notna(x) else x)
customers.loc[pd.isnull(customers.marital_status) & (customers.family_size == 1),
'marital_status'] = 'Single'
customers.loc[pd.isnull(customers.marital_status) & ((customers.family_size - customers.no_of_children) == 1),
'marital_status'] = 'Single'
customers.loc[pd.isnull(customers.marital_status) & ((customers.family_size - customers.no_of_children) == 2),
'marital_status'] = 'Married'
customers.loc[pd.isnull(customers.marital_status) & pd.isnull(customers.no_of_children) & (customers.family_size == 2),
'marital_status'] = 'Married'
customers.loc[pd.isnull(customers.no_of_children) & (customers.marital_status == 'Married') & (customers.family_size == 2),
'no_of_children'] = 0
customers.loc[pd.isnull(customers.no_of_children) & (customers.family_size == 1), 'no_of_children'] = 0
customers.loc[pd.isnull(customers.no_of_children) & (customers.family_size == 2), 'no_of_children'] = 1
customers['no_of_children'] = customers['no_of_children'].astype(np.int64)
customers.drop('rented', axis=1, inplace=True)
len(customers)
```
#### Fill in customer demographics data for customer ids absent from customers dataframe, but present in transactions dataframe
```
ids_to_add = sorted([i for i in transactions.customer_id.unique() if i not in customers.customer_id.values])
len(ids_to_add)
# Calculate statistics for existing customers
demographics = customers.drop('customer_id', axis=1).value_counts(normalize=True).rename('count_normalized').reset_index()
# Fill in missing customer info based on the statistics
columns = demographics.drop('count_normalized', axis=1).columns
values = demographics.drop('count_normalized', axis=1).to_records()
probs = demographics['count_normalized'].values
row_num = len(ids_to_add)
new_customers = pd.DataFrame(np.random.choice(values, row_num, p=probs), columns=columns)
new_customers['customer_id'] = ids_to_add
new_customers
customers = customers.append(new_customers).sort_values('customer_id').reset_index(drop=True)
customers['gender'] = np.random.choice(['M', 'F'], len(customers), p=[0.48, 0.52])
customers.gender.value_counts()
customers
```
## Prepare item data
```
items = pd.read_csv('original_data/item_data.csv')
items.sample(5)
items.drop(['brand_type', 'brand'], axis=1, inplace=True)
items.sort_values(by='item_id', inplace=True)
```
### Find out item price and mean coupon discount for that item from transactions table
```
item_price = pd.pivot_table(
data=transactions,
index='item_id',
values=['original_price', 'coupon_discount'],
aggfunc={
'original_price': lambda x: np.round(np.mean(x), decimals=2),
'coupon_discount': lambda x: np.round(np.mean(x), decimals=2)
}
)
item_price.rename(columns={'original_price': 'mean_item_price', 'coupon_discount': 'mean_coupon_discount'}, inplace=True)
item_price.sample(5)
# Merge prices with items
items = pd.merge(items, item_price, on='item_id', how='left').reset_index(drop=True)
items.sample(5)
items.isnull().sum()
items.dropna(axis='index', inplace=True)
items.isnull().sum()
items.shape
```
## Category Mapping
### Merge customers, items, and transactions
```
item_categories = items.drop(['mean_item_price', 'mean_coupon_discount'], axis=1)
df = transactions.merge(customers, on='customer_id', how='left')
df = df.merge(item_categories, on='item_id', how='left')
df.sample(5)
df.isnull().sum()
```
### Category mapping
```
# Check which items are bought mainly by people with children
df['has_children'] = df.no_of_children.apply(lambda x: 1 if x > 0 else 0)
df['no_children'] = df.no_of_children.apply(lambda x: 1 if x == 0 else 0)
agg_items = df.groupby('item_id').agg({'has_children': np.sum, 'no_children': np.sum}).reset_index()
more_with_children = agg_items.loc[(agg_items['has_children'] - agg_items['no_children']) > 0]
items_for_children = more_with_children.drop(['has_children', 'no_children'], axis=1)
items_for_children['category'] = np.random.choice(['Boys', 'Girls'], len(items_for_children))
items_for_children.reset_index(drop=True, inplace=True)
# Check which items are more popular with each gender
df['male'] = df.gender.apply(lambda x: 1 if x == 'M' else 0)
df['female'] = df.gender.apply(lambda x: 1 if x == 'F' else 0)
agg_items = df.groupby('item_id').agg({'male': np.sum, 'female': np.sum}).reset_index()
more_by_men = agg_items.loc[(agg_items['male'] > agg_items['female'])]
more_by_women = agg_items.loc[(agg_items['female'] >= agg_items['male'])]
# Sanity check
more_by_men_transactions = more_by_men.male.sum() + more_by_men.female.sum()
more_by_women_transactions = more_by_women.female.sum() + more_by_women.male.sum()
assert len(transactions) == more_by_women_transactions + more_by_men_transactions
items_for_men = more_by_men.loc[~(more_by_men.item_id.isin(items_for_children.item_id.values))].reset_index(drop=True)
items_for_men.drop(['male', 'female'], axis=1, inplace=True)
items_for_men['category'] = np.random.choice(['Men', 'Sport'], len(items_for_men), p=[0.7, 0.3])
items_for_women = more_by_women.loc[~(more_by_women.item_id.isin(items_for_children.item_id.values))].reset_index(drop=True)
items_for_women.drop(['male', 'female'], axis=1, inplace=True)
items_for_women['category'] = np.random.choice(['Women', 'Sport'], len(items_for_women), p=[0.8, 0.2])
print(f'{len(items)} items total')
print(f'{len(items_for_children)} items for children')
print(f'{len(items_for_women)} items for women')
print(f'{len(items_for_men)} items for men')
not_categorized = len(items) - (len(items_for_men) + len(items_for_women) + len(items_for_children))
print(f'{not_categorized} items not categorized')
new_items = items_for_children.append(items_for_women).append(items_for_men)
new_items
new_items.category.value_counts()
items.drop('category', axis=1, inplace=True)
items = pd.merge(new_items, items, on='item_id', how='left').sort_values('item_id').reset_index(drop=True)
items
```
### Merge items with coupon info
```
coupons = pd.read_csv('original_data/coupon_item_mapping.csv')
coupons.sample(5)
coupons_items = coupons.merge(items, on='item_id', how='left')
coupons_items
coupons_items.isnull().sum()
coupons_items.dropna(axis=0, inplace=True)
coupons_items.shape
coupons_items.loc[coupons_items['mean_coupon_discount'] < 0]
coupon_stats = pd.pivot_table(
data=coupons_items,
index=['coupon_id'],
values=['mean_coupon_discount', 'mean_item_price'],
aggfunc={
'mean_coupon_discount': lambda x: np.round(np.mean(x), decimals=2),
'mean_item_price': lambda x: np.round(np.mean(x), decimals=2)
}
)
coupon_stats.shape
coupon_stats.loc[coupon_stats.mean_coupon_discount == 0]
coupon_stats.loc[coupon_stats.mean_coupon_discount == 0, 'mean_coupon_discount'] = coupon_stats.mean_coupon_discount.mean()
len(coupon_stats.loc[np.absolute(coupon_stats.mean_coupon_discount) > coupon_stats.mean_item_price])
```
## Add customer stats
```
transactions.sample(5)
pt_sums = pd.pivot_table(transactions, index='customer_id',
values=['quantity', 'original_price', 'coupon_discount', 'coupon_used'],
aggfunc={
'quantity': np.sum,
'original_price': lambda x: np.round(np.sum(x), decimals=2),
'coupon_discount': lambda x: np.round(np.sum(x), decimals=2),
'coupon_used': np.sum
})
pt_sums.reset_index(inplace=True)
pt_sums.rename(columns={
'quantity': 'total_quantity_bought_by_cust',
'original_price': 'total_price_paid_by_cust',
'coupon_discount': 'total_discount_used_by_cust',
'coupon_used': 'total_coupons_used_by_cust'}, inplace=True)
pt_sums
pt_means = pd.pivot_table(transactions, index='customer_id',
values=['item_id', 'quantity', 'original_price', 'coupon_discount'],
aggfunc={
'item_id': lambda x: len(set(x)),
'quantity': lambda x: np.round(np.mean(x), decimals=2),
'original_price': lambda x: np.round(np.mean(x), decimals=2),
'coupon_discount': lambda x: np.round(np.mean(x), decimals=2)
})
pt_means.reset_index(inplace=True)
pt_means.rename(columns={
'item_id': 'unique_items_bought_by_cust',
'quantity': 'mean_quantity_bought_by_cust',
'original_price': 'mean_selling_price_paid_by_cust',
'coupon_discount': 'mean_discount_used_by_cust'}, inplace=True)
pt_means
cust_tran_stats = pd.merge(pt_means, pt_sums, on='customer_id')
cust_tran_stats
if not os.path.exists('csv_4_db'):
os.mkdir('csv_4_db')
coupons_items[['coupon_id', 'item_id', 'category']].sort_values('coupon_id').drop_duplicates()\
.to_csv('csv_4_db/coupon_categories.csv', index=False)
coupon_stats['mean_coupon_discount'] = coupon_stats.mean_coupon_discount.apply(lambda x: np.round(x, decimals=2))
coupon_stats.reset_index().to_csv('csv_4_db/coupon_info.csv', index=False)
cust_stats = pd.merge(customers, cust_tran_stats, on='customer_id', how='left')
cust_stats.to_csv('csv_4_db/customer_info.csv', index=False)
```
## Merge everything with train df
```
train = pd.read_csv('original_data/train.csv')
train.sample(5)
train = pd.merge(train, cust_stats, on='customer_id', how='left')
train = pd.merge(train, coupon_stats, on='coupon_id', how='left')
train = pd.merge(train, coupons_items[['coupon_id', 'item_id', 'category']], on='coupon_id', how='left')
train
train.columns
train.drop(['id', 'campaign_id', 'item_id'], axis=1, inplace=True)
train_full = train
train_full.drop_duplicates(inplace=True)
train_full.columns
train.shape
train = train.drop(['coupon_id', 'customer_id'], axis=1)
train.drop_duplicates(inplace=True)
train.shape
```
# Encoding
```
def encode(df):
encoder = LabelEncoder()
df['age_range'] = encoder.fit_transform(df['age_range'])
print('Age range encoding:')
for i, c in enumerate(encoder.classes_):
print(f'{i}: {c}')
df['marital_status'] = encoder.fit_transform(df['marital_status'])
print('Marital status encoding:')
for i, c in enumerate(encoder.classes_):
print(f'{i}: {c}')
df['gender'] = encoder.fit_transform(df['gender'])
print('Gender encoding:')
for i, c in enumerate(encoder.classes_):
print(f'{i}: {c}')
df = pd.get_dummies(df, columns=['category'])
return df
train_encoded = encode(train)
train_with_ids_encoded = encode(train_full)
```
## Save to files
```
dirname = 'prepped-data'
if not os.path.exists(dirname):
os.mkdir(dirname)
train_full.to_csv(os.path.join(dirname, 'train_full.csv'), index=False)
train_encoded.to_csv(os.path.join(dirname, 'train.csv'), index=False)
```
# Example: CanvasXpress histogram Chart No. 1
This example page demonstrates how to use the Python package to create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/histogram-1.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
Everything required for the chart to render is included in the code below. Simply run the code block.
```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="histogram1",
data={
"x": {
"Description": [
"Survival time in days"
]
},
"z": {
"Organ": [
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Stomach",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Bronchus",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Colon",
"Ovary",
"Ovary",
"Ovary",
"Ovary",
"Ovary",
"Ovary",
"Breast",
"Breast",
"Breast",
"Breast",
"Breast",
"Breast",
"Breast",
"Breast",
"Breast",
"Breast",
"Breast"
]
},
"y": {
"smps": [
"Survival"
],
"vars": [
"s1",
"s2",
"s3",
"s4",
"s5",
"s6",
"s7",
"s8",
"s9",
"s10",
"s11",
"s12",
"s13",
"s14",
"s15",
"s16",
"s17",
"s18",
"s19",
"s20",
"s21",
"s22",
"s23",
"s24",
"s25",
"s26",
"s27",
"s28",
"s29",
"s30",
"s31",
"s32",
"s33",
"s34",
"s35",
"s36",
"s37",
"s38",
"s39",
"s40",
"s41",
"s42",
"s43",
"s44",
"s45",
"s46",
"s47",
"s48",
"s49",
"s50",
"s51",
"s52",
"s53",
"s54",
"s55",
"s56",
"s57",
"s58",
"s59",
"s60",
"s61",
"s62",
"s63",
"s64"
],
"data": [
[
124
],
[
42
],
[
25
],
[
45
],
[
412
],
[
51
],
[
1112
],
[
46
],
[
103
],
[
876
],
[
146
],
[
340
],
[
396
],
[
81
],
[
461
],
[
20
],
[
450
],
[
246
],
[
166
],
[
63
],
[
64
],
[
155
],
[
859
],
[
151
],
[
166
],
[
37
],
[
223
],
[
138
],
[
72
],
[
245
],
[
248
],
[
377
],
[
189
],
[
1843
],
[
180
],
[
537
],
[
519
],
[
455
],
[
406
],
[
365
],
[
942
],
[
776
],
[
372
],
[
163
],
[
101
],
[
20
],
[
283
],
[
1234
],
[
89
],
[
201
],
[
356
],
[
2970
],
[
456
],
[
1235
],
[
24
],
[
1581
],
[
1166
],
[
40
],
[
727
],
[
3808
],
[
791
],
[
1804
],
[
3460
],
[
719
]
]
},
"m": {
"Name": "Cancer Survival",
"Description": "Patients with advanced cancers of the stomach, bronchus, colon, ovary or breast were treated with ascorbate. The purpose of the study was to determine if the survival times differ with respect to the organ affected by the cancer.",
"Reference": "Cameron, E. and Pauling, L. (1978) Supplemental ascorbate in the supportive treatment of cancer: re-evaluation of prolongation of survival times in terminal human cancer. Proceedings of the National Academy of Science USA, 75. Also found in: Manly, B.F.J. (1986) Multivariate Statistical Methods: A Primer, New York: Chapman & Hall, 11. Also found in: Hand, D.J., et al. (1994) A Handbook of Small Data Sets, London: Chapman & Hall, 255."
}
},
config={
"axisTitleFontStyle": "italic",
"citation": "Cameron, E. and Pauling, L. (1978). Proceedings of the National Academy of Science USA, 75.",
"graphType": "Scatter2D",
"histogramBins": 50,
"showTransition": False,
"theme": "CanvasXpress",
"title": "Patients with advanced cancers of the stomach,bronchus, colon, ovary or breast treated with ascorbate.",
"xAxisTitle": "Survival (days)",
"yAxisTitle": "Number of Subjects"
},
width=613,
height=613,
events=CXEvents(),
after_render=[
[
"createHistogram",
[
False,
None,
None
]
]
],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="histogram_1.html")
```
# Face Recognition Competition
### Dataset
1. Upload the dataset archive compressed in zip format
2. Run `! unzip dataset.zip`
3. Upload test.zip following the same method
4. Place the dataset in the root directory, at the same level as the work directory
```
%matplotlib inline
import torch
import time
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader
import torchvision
from torchvision.datasets import ImageFolder
from torchvision import transforms
from torchvision import models
import os
import sys
sys.path.append("..")
# import d2lzh_pytorch as d2l
device = torch.device('cuda:2' if torch.cuda.is_available() else 'cpu')
def load_data_face(batch_size):
    transform = torchvision.transforms.Compose([
        # torchvision.transforms.Grayscale(num_output_channels=1),  # convert RGB to grayscale; num_output_channels defaults to 1
        torchvision.transforms.RandomHorizontalFlip(),
        torchvision.transforms.Resize([330, 330]),
        torchvision.transforms.CenterCrop([224, 224]),
        torchvision.transforms.ToTensor()
    ])
    train_imgs = torchvision.datasets.ImageFolder('../dataset/train', transform=transform)
    test_imgs = torchvision.datasets.ImageFolder('../dataset/test', transform=transform)
    train_iter = torch.utils.data.DataLoader(train_imgs, batch_size=batch_size, shuffle=True, num_workers=4)
    test_iter = torch.utils.data.DataLoader(test_imgs, batch_size=batch_size, shuffle=False, num_workers=4)
    return train_iter, test_iter
batch_size = 32
# If you hit an "out of memory" error, reduce batch_size or the resize dimensions
train_iter, test_iter = load_data_face(batch_size)
# Get and save the mapping between names and indices, used at test time to map indices back to names
import pickle
transform = torchvision.transforms.Compose([
    # torchvision.transforms.Grayscale(num_output_channels=1),  # convert RGB to grayscale; num_output_channels defaults to 1
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.Resize([330, 330]),
    torchvision.transforms.CenterCrop([224, 224]),
    torchvision.transforms.ToTensor()
])
test_imgs = torchvision.datasets.ImageFolder('../dataset/test', transform=transform)
label = test_imgs.class_to_idx
label = {value:key for key, value in label.items()}
# print(len(label))
# Write the mapping to a file
label_hal = open('label.pkl', 'wb')
s = pickle.dumps(label)
label_hal.write(s)
label_hal.close()
net=models.resnet152(pretrained=True)
# net.fc = torch.nn.Linear(512, len(label))
net.fc = nn.Sequential(
nn.ReLU(),
nn.Dropout(0.6),
nn.Linear(2048, len(label))
)
```
### Training
```
def evaluate_accuracy(data_iter, net, device=None):
    if device is None and isinstance(net, torch.nn.Module):
        # If no device is specified, use the device of net's parameters
        device = list(net.parameters())[0].device
    acc_sum, n = 0.0, 0
    with torch.no_grad():
        for X, y in data_iter:
            if isinstance(net, torch.nn.Module):
                net.eval()  # evaluation mode: disables dropout
                acc_sum += (net(X.to(device)).argmax(dim=1) == y.to(device)).float().sum().cpu().item()
                net.train()  # switch back to training mode
            else:  # custom models (not used after Section 3.13 of the original textbook code); GPU not considered
                if 'is_training' in net.__code__.co_varnames:  # the model takes an is_training argument
                    # pass is_training=False
                    acc_sum += (net(X, is_training=False).argmax(dim=1) == y).float().sum().item()
                else:
                    acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
            n += y.shape[0]
    return acc_sum / n
train_acc_list = []
test_acc_list = []
def train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs):
    net = net.to(device)
    print("training on ", device)
    loss = torch.nn.CrossEntropyLoss()
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n, batch_count, start = 0.0, 0.0, 0, 0, time.time()
        for X, y in train_iter:
            X = X.to(device)
            y = y.to(device)
            y_hat = net(X)
            l = loss(y_hat, y)
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
            train_l_sum += l.cpu().item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().cpu().item()
            n += y.shape[0]
            batch_count += 1
        test_acc = evaluate_accuracy(test_iter, net)
        train_acc_list.append(train_acc_sum / n)
        test_acc_list.append(test_acc)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, time %.1f sec'
              % (epoch + 1, train_l_sum / batch_count, train_acc_sum / n, test_acc, time.time() - start))
lr, num_epochs = 0.01, 100
# optimizer = torch.optim.Adam(net.parameters(), lr=lr)
optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9,weight_decay=3e-4)
train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)
# Note: this re-initialises the classification head (discarding the trained weights);
# for resnet152 the final feature size is 2048, not 512
net.fc = nn.Sequential(
    nn.ReLU(),
    nn.Dropout(0.65),
    nn.Linear(2048, len(label))
)
# Plot the accuracy curves
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.arange(1, len(train_acc_list)+1, 1)
y1 = np.array(train_acc_list)
y2 = np.array(test_acc_list)
plt.plot(x, y1, label="train")
plt.plot(x, y2, linestyle = "--", label="test")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.title('train & test accuracy')
plt.legend()
plt.show()
```
### Save the model
```
torch.save(net, './resnet152.pkl')
```
# Generate the recognition result file
```
# Load the trained model (the filename must match the one used when saving above)
model = torch.load('./resnet152.pkl')
# Load the name-to-index mapping saved during training
label = {}
with open('label.pkl', 'rb') as file:
    label = pickle.loads(file.read())
# print(label)
# Mapping for the test-set labels
import pickle
label_answer = {}
with open('label_answer.pkl', 'rb') as file:
    label_answer = pickle.loads(file.read())
label_answer = {value: key for key, value in label_answer.items()}
# Load the test data (in the test directory)
from PIL import Image
import numpy as np
transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize([330, 330]),  # match the training preprocessing (no random flip at test time); a pretrained ResNet expects 3-channel input, so no grayscale conversion
    torchvision.transforms.CenterCrop([224, 224]),
    torchvision.transforms.ToTensor()
])
# Generate the test result file
path = os.listdir('test')
r_d = {}
for f in path:
    img = Image.open('test/' + f)
    test_imgs = transform(img).unsqueeze(0)
    test_imgs = test_imgs.to(device)
    y = model(test_imgs)
    pred = torch.argmax(y, dim=1)
    r = label_answer[label[int(pred)]]
    r_d[int(os.path.splitext(f)[0])] = r  # splitext, because str.strip('.jpg') strips characters, not the suffix
# Write the result file
r_d = sorted(r_d.items(), key=lambda a: a[0])
r_d = dict(r_d)
ret = open("result.csv", "w")
for key, value in r_d.items():
    print("%d,%s" % (key, value), file=ret)
ret.close()
# Based on the code above, write a main.py that can generate the result file result.csv
# Known pitfall: main.py must also include the model class definition
# Test that main.py generates result.csv
!python main.py
# Verify the generated file yourself
```
# Part 2: Process the item embedding data in BigQuery and export it to Cloud Storage
This notebook is the second of five notebooks that guide you through running the [Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN](https://github.com/GoogleCloudPlatform/analytics-componentized-patterns/tree/master/retail/recommendation-system/bqml-scann) solution.
Use this notebook to complete the following tasks:
1. Process the song embeddings data in BigQuery to generate a single embedding vector for each song.
1. Use a Dataflow pipeline to write the embedding vector data to CSV files and export the files to a Cloud Storage bucket.
Before starting this notebook, you must run the [01_train_bqml_mf_pmi](01_train_bqml_mf_pmi.ipynb) notebook to calculate item PMI data and then train a matrix factorization model with it.
After completing this notebook, run the [03_create_embedding_lookup_model](03_create_embedding_lookup_model.ipynb) notebook to create a model to serve the item embedding data.
## Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
```
!pip install -U -q apache-beam[gcp]
```
### Import libraries
```
import os
import numpy as np
import tensorflow.io as tf_io
import apache_beam as beam
from datetime import datetime
```
### Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
+ `PROJECT_ID`: The ID of the Google Cloud project you are using to implement this solution.
+ `BUCKET`: The name of the Cloud Storage bucket you created to use with this solution. The `BUCKET` value should be just the bucket name, so `myBucket` rather than `gs://myBucket`.
+ `REGION`: The region to use for the Dataflow job.
```
PROJECT_ID = 'yourProject' # Change to your project.
BUCKET = 'yourBucketName' # Change to the bucket you created.
REGION = 'yourDataflowRegion' # Change to your Dataflow region.
BQ_DATASET_NAME = 'recommendations'
!gcloud config set project $PROJECT_ID
```
### Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
```
try:
    from google.colab import auth
    auth.authenticate_user()
    print("Colab user is authenticated.")
except: pass
```
## Process the item embeddings data
You run the [sp_ExractEmbeddings](sql_scripts/sp_ExractEmbeddings.sql) stored procedure to process the item embeddings data and write the results to the `item_embeddings` table.
This stored procedure works as follows:
1. Uses the [ML.WEIGHTS](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-weights) function to extract the item embedding matrices from the `item_matching_model` model.
1. Aggregates these matrices to generate a single embedding vector for each item.
Because BigQuery ML matrix factorization models are designed for user-item recommendation use cases, they generate two embedding matrices: one for users and the other for items. However, in this use case both embedding matrices represent items, just along different axes of the feedback matrix. For more information about how the feedback matrix is calculated, see [Understanding item embeddings](https://cloud.google.com/solutions/real-time-item-matching#understanding_item_embeddings).
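As a mental model of the aggregation step, each item has one embedding row in each factor matrix, and the two rows are combined into a single vector per item. The element-wise sum below is purely illustrative; the exact combination logic lives in the stored procedure's SQL.

```python
# Illustrative only: the real aggregation is done in SQL by the stored procedure.
# Two toy factor matrices (one per side of the feedback matrix) are combined
# into a single embedding vector per item by summing the matching rows.
factors_side_a = {"song_1": [0.25, 0.5], "song_2": [0.75, 0.25]}
factors_side_b = {"song_1": [0.25, 0.0], "song_2": [0.25, 0.25]}

def combine_embeddings(side_a, side_b):
    combined = {}
    for item in side_a.keys() & side_b.keys():
        combined[item] = [a + b for a, b in zip(side_a[item], side_b[item])]
    return combined

item_embeddings = combine_embeddings(factors_side_a, factors_side_b)
print(item_embeddings["song_1"])  # [0.5, 0.5]
```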
### Run the `sp_ExractEmbeddings` stored procedure
```
%%bigquery --project $PROJECT_ID
CALL recommendations.sp_ExractEmbeddings()
```
Get a count of the records in the `item_embeddings` table:
```
%%bigquery --project $PROJECT_ID
SELECT COUNT(*) embedding_count
FROM recommendations.item_embeddings;
```
See a sample of the data in the `item_embeddings` table:
```
%%bigquery --project $PROJECT_ID
SELECT *
FROM recommendations.item_embeddings
LIMIT 5;
```
## Export the item embedding vector data
Export the item embedding data to Cloud Storage by using a Dataflow pipeline. This pipeline does the following:
1. Reads the item embedding records from the `item_embeddings` table in BigQuery.
1. Writes each item embedding record to a CSV file.
1. Writes the item embedding CSV files to a Cloud Storage bucket.
The pipeline is implemented in the [embeddings_exporter/pipeline.py](embeddings_exporter/pipeline.py) module.
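The per-record conversion such a pipeline performs can be sketched in plain Python. The field names `item_Id` and `embedding` are assumptions made for this sketch; see `embeddings_exporter/pipeline.py` for the actual implementation.

```python
# Sketch: turn one BigQuery row (a dict with an item id and a repeated
# embedding field) into one CSV line. Field names are assumed, not taken
# from the real pipeline.
def embedding_record_to_csv(record):
    values = [str(record["item_Id"])] + [str(v) for v in record["embedding"]]
    return ",".join(values)

row = {"item_Id": "song_42", "embedding": [0.5, -1.25, 3.0]}
print(embedding_record_to_csv(row))  # song_42,0.5,-1.25,3.0
```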
### Configure the pipeline variables
Configure the variables needed by the pipeline:
```
runner = 'DataflowRunner'
timestamp = datetime.utcnow().strftime('%y%m%d%H%M%S')
job_name = f'ks-bqml-export-embeddings-{timestamp}'
bq_dataset_name = BQ_DATASET_NAME
embeddings_table_name = 'item_embeddings'
output_dir = f'gs://{BUCKET}/bqml/item_embeddings'
project = PROJECT_ID
temp_location = os.path.join(output_dir, 'tmp')
region = REGION
print(f'runner: {runner}')
print(f'job_name: {job_name}')
print(f'bq_dataset_name: {bq_dataset_name}')
print(f'embeddings_table_name: {embeddings_table_name}')
print(f'output_dir: {output_dir}')
print(f'project: {project}')
print(f'temp_location: {temp_location}')
print(f'region: {region}')
try: os.chdir(os.path.join(os.getcwd(), 'embeddings_exporter'))
except: pass
```
### Run the pipeline
It takes about 5 minutes to run the pipeline. You can see the graph for the running pipeline in the [Dataflow Console](https://console.cloud.google.com/dataflow/jobs).
```
if tf_io.gfile.exists(output_dir):
    print("Removing {} contents...".format(output_dir))
    tf_io.gfile.rmtree(output_dir)

print("Creating output: {}".format(output_dir))
tf_io.gfile.makedirs(output_dir)
!python runner.py \
--runner={runner} \
--job_name={job_name} \
--bq_dataset_name={bq_dataset_name} \
--embeddings_table_name={embeddings_table_name} \
--output_dir={output_dir} \
--project={project} \
--temp_location={temp_location} \
--region={region}
```
### List the CSV files that were written to Cloud Storage
```
!gsutil ls {output_dir}/embeddings-*.csv
```
## License
Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
**This is not an official Google product but sample code provided for an educational purpose**
```
wlist = "book the flight through Houston".split()
Lexicon = {
"that:0.1 | this:0 | the:0.6 | a:0.3": "Det",
"book:0.1 | flight:0.3 | meal:0.05 | money:0.05 | food:0.4 | dinner:0.1": "Noun",
"book:0.3 | include:0.3 | prefer:0.4": "Verb",
"I:0.4 | she:0.05 | me:0.15 | you:0.4": "Pronoun",
"Houston:0.6 | NWA:0.4": "Proper-Noun",
"does:0.6 | can:0.4": "Aux",
"from:0.3 | to:0.3 | on:0.2 | near:0.15 | through:0.05": "Preposition"
}
Grammar = {
"S -> NP VP": 0.8,
"S -> Aux NP VP": 0.15,
"S -> VP": 0.05,
"NP -> Pronoun": 0.35,
"NP -> Proper-Noun": 0.3,
"NP -> Det Nominal": 0.2,
"NP -> Nominal": 0.15,
"Nominal -> Noun": 0.75,
"Nominal -> Nominal Noun": 0.2,
"Nominal -> Nominal PP": 0.05,
"VP -> Verb": 0.35,
"VP -> Verb NP": 0.2,
"VP -> Verb NP PP": 0.1,
"VP -> Verb PP": 0.15,
"VP -> Verb NP NP": 0.05,
"VP -> VP PP": 0.15,
"PP -> Preposition NP": 1,
}
def Lexicon2CNF(Lexicon: dict) -> dict:
    res = {}
    for key, value in Lexicon.items():
        for item in key.split(" | "):
            w, p = item.split(":")
            res[value + " -> " + w] = float(p)  # store the probability as a number, not a string
    return res
def Grammar2CNF(Grammar: dict) -> dict:
    res = {}
    i = 0
    for k, v in Grammar.items():
        l, r = k.split(" -> ")
        rlist = r.split(" ")
        if len(rlist) == 1:
            # unit production: replace the non-terminal with its lexical rules
            for wf, n in Lexicon.items():
                if n == r:
                    res[l + " -> " + wf] = v
        elif len(rlist) == 2:
            res[k] = v
        else:
            # binarise rules with more than two symbols on the right-hand side
            i += 1
            newr1 = " ".join(rlist[:2])
            newr2 = "X" + str(i) + " " + rlist[-1]
            res["X" + str(i) + " -> " + newr1] = 1
            res[l + " -> " + newr2] = v
    miduse = res.copy()
    for k, v in Grammar.items():
        l, r = k.split(" -> ")
        rlist = r.split(" ")
        if len(rlist) == 1:
            # propagate unit productions through the already-converted rules
            for kk, vv in miduse.items():
                ll, rr = kk.split(" -> ")
                if r == ll:
                    res[l + " -> " + rr] = v
    return res
LexiconCNF = Lexicon2CNF(Lexicon)
GrammarCNF = Grammar2CNF(Grammar)
LexiconCNF
GrammarCNF
```
# Results summary
| Linear Regression | LightGBM Regressor | Linear Regression + ATgfe |
|-----------------------------------------------------------------|----------------------------------------------------------------|---------------------------------------------------------|
| <ul> <li>10-CV RMSE: 11.13</li><li>Test-data RMSE: 10.38</li><li>r^2: 0.644</li> </ul> | <ul> <li>10-CV RMSE: 6.44</li><li>Test-data RMSE: **4.23**</li> <li>r^2: **0.941**</li> </ul> | <ul> <li>10-CV RMSE: **6.00**</li><li>Test-data RMSE: 5.45</li> <li>r^2: 0.899</li> </ul> |
# Import packages
```
from atgfe.GeneticFeatureEngineer import GeneticFeatureEngineer
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.compose import make_column_transformer
from sklearn.metrics import mean_squared_error, make_scorer, r2_score
from yellowbrick.regressor import ResidualsPlot, PredictionError
from lightgbm import LGBMRegressor
from yellowbrick.datasets import load_concrete
dataset = load_concrete(return_dataset=True)
df = dataset.to_dataframe()
df.head()
target = 'strength'
X = df.drop(target, axis=1).copy()
numerical_features = X.columns.tolist()
Y = df.loc[:, target].copy()
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=77)
def display_residual_plot(model):
    visualizer = ResidualsPlot(model)
    visualizer.fit(X_train, y_train)
    visualizer.score(X_test, y_test)
    visualizer.poof()

def prediction_error_plot(model):
    visualizer = PredictionError(model)
    visualizer.fit(X_train, y_train)
    visualizer.score(X_test, y_test)
    visualizer.poof()

def rmse(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))

def score_model(model, X, y):
    evaluation_metric_scorer = make_scorer(rmse)
    scores = cross_val_score(estimator=model, X=X, y=y, cv=10, scoring=evaluation_metric_scorer, n_jobs=-1)
    scores_mean = scores.mean()
    score_std = scores.std()
    print('Scores: {}'.format(scores))
    print('Mean of metric: {}, std: {}'.format(scores_mean, score_std))

def score_test_data_for_model(model, X_test, y_test):
    y_pred = model.predict(X_test)
    print('R2: {}'.format(r2_score(y_test, y_pred)))
    print('RMSE: {}'.format(rmse(y_test, y_pred)))

def create_new_model():
    model = make_pipeline(StandardScaler(), LinearRegression(n_jobs=-1))
    return model
```
# Using LightGBM
```
lgbm_model = LGBMRegressor(n_estimators=100, random_state=7)
score_model(lgbm_model, X, Y)
display_residual_plot(lgbm_model)
prediction_error_plot(lgbm_model)
score_test_data_for_model(lgbm_model, X_test, y_test)
```
# Using Linear Regression
```
model = create_new_model()
score_model(model, X, Y)
display_residual_plot(model)
prediction_error_plot(model)
score_test_data_for_model(model, X_test, y_test)
```
# Using ATgfe
```
model = create_new_model()
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=77)
gfe = GeneticFeatureEngineer(model, x_train=X_train, y_train=y_train, numerical_features=numerical_features,
number_of_candidate_features=18,
number_of_interacting_features=4,
evaluation_metric=rmse, n_jobs=62,
cross_validation_in_objective_func=True, objective_func_cv=3)
np_sqrt = np.sqrt
gfe.add_transformation_operation('np_sqrt', np_sqrt)
np_min = np.min
gfe.add_transformation_operation('np_min', np_min)
np_max = np.max
gfe.add_transformation_operation('np_max', np_max)
np_mean = np.mean
gfe.add_transformation_operation('np_mean', np_mean)
gfe.fit(mu=2000, lambda_=2100, early_stopping_patience=10, mutation_probability=0.2, crossover_probability=0.8)
X = gfe.transform(X)
model = create_new_model()
score_model(model, X, Y)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=77)
display_residual_plot(model)
prediction_error_plot(model)
score_test_data_for_model(model, X_test, y_test)
```
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import pandas as pd
import networkx as nx
import community
import numpy as np
import matplotlib.pyplot as plt
```
## Introduction
In this chapter, we will use Game of Thrones as a case study to practice our newly learnt skills of network analysis.
Surprising, right? What is the relationship between a fantasy TV show/novel and network science or Python (no, not dragons)?
If you haven't heard of Game of Thrones, then you must be really good at hiding. Game of Thrones is a hugely popular television series by HBO based on the (also) hugely popular book series A Song of Ice and Fire by George R.R. Martin. In this notebook, we will analyze the co-occurrence network of the characters in the Game of Thrones books. Here, two characters are considered to co-occur if their names appear within 15 words of one another in the books.
The figure below is a precursor of what we will analyse in this chapter.

The dataset is publicly available for the 5 books at https://github.com/mathbeveridge/asoiaf. This interaction network was created by connecting two characters whenever their names (or nicknames) appeared within 15 words of one another in one of the books. The edge weight corresponds to the number of interactions.
Blog: https://networkofthrones.wordpress.com
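A minimal sketch of the co-occurrence idea behind the dataset: two known character names appearing within 15 words of each other add one to the weight of their edge. The tokenisation and name matching below are deliberately naive; the published dataset was built with more care (nicknames, punctuation, and so on).

```python
from collections import Counter

def cooccurrence_edges(words, characters, window=15):
    # positions of recognised character names in the token stream
    positions = [(i, w) for i, w in enumerate(words) if w in characters]
    weights = Counter()
    for idx, (pos_a, name_a) in enumerate(positions):
        for pos_b, name_b in positions[idx + 1:]:
            if pos_b - pos_a > window:
                break  # positions are sorted, so later pairs are even farther apart
            if name_a != name_b:
                weights[tuple(sorted((name_a, name_b)))] += 1
    return weights

text = "Jon rode north while Tyrion drank and Tyrion talked of Jon".split()
print(cooccurrence_edges(text, {"Jon", "Tyrion"}))
```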
```
from nams import load_data as cf
books = cf.load_game_of_thrones_data()
```
The resulting DataFrame books has 5 columns: Source, Target, Type, weight, and book. Source and target are the two nodes that are linked by an edge. As we know a network can have directed or undirected edges and in this network all the edges are undirected. The weight attribute of every edge tells us the number of interactions that the characters have had over the book, and the book column tells us the book number.
Let's have a look at the data.
```
# We also add this weight_inv to our dataset.
# Why? we will discuss it in a later section.
books['weight_inv'] = 1/books.weight
books.head()
```
From the above data we can see that the characters Addam Marbrand and Tywin Lannister have interacted 6 times in the first book.
We can investigate this data by using the pandas DataFrame. Let's find all the interactions of Robb Stark in the third book.
```
robbstark = (
books.query("book == 3")
.query("Source == 'Robb-Stark' or Target == 'Robb-Stark'")
)
robbstark.head()
```
As you can see this data easily translates to a network problem. Now it's time to create a network.
We create a graph for each book. It's possible to create one `MultiGraph` (a graph with multiple edges between nodes) instead of 5 graphs, but it is easier to analyse and manipulate individual `Graph` objects rather than a `MultiGraph`.
```
# example of creating a MultiGraph
# all_books_multigraph = nx.from_pandas_edgelist(
# books, source='Source', target='Target',
# edge_attr=['weight', 'book'],
# create_using=nx.MultiGraph)
# we create a list of graph objects using
# nx.from_pandas_edgelist and specifying
# the edge attributes.
graphs = [nx.from_pandas_edgelist(
books[books.book==i],
source='Source', target='Target',
edge_attr=['weight', 'weight_inv'])
for i in range(1, 6)]
# The Graph object associated with the first book.
graphs[0]
# To access the relationship edges in the graph with
# the edge attribute weight data (data=True)
relationships = list(graphs[0].edges(data=True))
relationships[0:3]
```
## Finding the most important node i.e character in these networks.
Let's use our network analysis knowledge to decrypt these Graphs that we have just created.
Is it Jon Snow, Tyrion, Daenerys, or someone else? Let's see! Network Science offers us many different metrics to measure the importance of a node in a network as we saw in the first part of the tutorial. Note that there is no "correct" way of calculating the most important node in a network, every metric has a different meaning.
First, let's measure the importance of a node in a network by looking at the number of neighbors it has, that is, the number of nodes it is connected to. For example, an influential account on Twitter, where the follower-followee relationship forms the network, is an account which has a high number of followers. This measure of importance is called degree centrality.
Using this measure, let's extract the top ten important characters from the first book (`graphs[0]`) and the fifth book (`graphs[4]`).
NOTE: We are using zero-indexing, which is why the graph of the first book is accessed by `graphs[0]`.
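To make the measure concrete: degree centrality is simply a node's degree divided by `n - 1`, the maximum number of neighbours it could have. Here is a hand-rolled version on a toy edge list, performing the same computation as `nx.degree_centrality`.

```python
# Toy edge list; in the notebook the real graphs come from the books.
edges = [("Ned", "Robb"), ("Ned", "Jon"), ("Ned", "Catelyn"), ("Jon", "Robb")]
nodes = {n for edge in edges for n in edge}
degree = {n: sum(n in edge for edge in edges) for n in nodes}
deg_cen = {n: d / (len(nodes) - 1) for n, d in degree.items()}
print(deg_cen["Ned"])  # 1.0 -- connected to every other node
```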
```
# We use the in-built degree_centrality method
deg_cen_book1 = nx.degree_centrality(graphs[0])
deg_cen_book5 = nx.degree_centrality(graphs[4])
```
`degree_centrality` returns a dictionary and to access the results we can directly use the name of the character.
```
deg_cen_book1['Daenerys-Targaryen']
```
Top 5 important characters in the first book according to degree centrality.
```
# The following expression sorts the dictionary by
# degree centrality and returns the top 5 from a graph
sorted(deg_cen_book1.items(),
key=lambda x:x[1],
reverse=True)[0:5]
```
Top 5 important characters in the fifth book according to degree centrality.
```
sorted(deg_cen_book5.items(),
key=lambda x:x[1],
reverse=True)[0:5]
```
To visualize the distribution of degree centrality let's plot a histogram of degree centrality.
```
plt.hist(deg_cen_book1.values(), bins=30)
plt.show()
```
The above plot shows something that is expected: a large portion of characters aren't connected to many other characters, while a few characters are highly connected throughout the network. A close real-world example of this is a social network like Twitter, where a few people have millions of connections (followers) but the majority of users aren't connected to that many other users. This exponential-decay-like property resembles the power law seen in real-life networks.
```
# A log-log plot to show the "signature" of power law in graphs.
from collections import Counter
hist = Counter(deg_cen_book1.values())
plt.scatter(np.log2(list(hist.keys())),
np.log2(list(hist.values())),
alpha=0.9)
plt.show()
```
### Exercise
Create a new centrality measure, `weighted_degree(Graph, weight)`, which takes in a graph and a weight attribute and returns a weighted degree dictionary. Weighted degree is calculated by summing the weights of all edges of a node. Then find the top five characters according to this measure.
```
from nams.solutions.got import weighted_degree
plt.hist(list(weighted_degree(graphs[0], 'weight').values()), bins=30)
plt.show()
sorted(weighted_degree(graphs[0], 'weight').items(), key=lambda x:x[1], reverse=True)[0:5]
```
## Betweenness centrality
Let's do this for betweenness centrality and check whether it makes any difference. Different centrality methods use different measures underneath, so they surface different important nodes. A method like betweenness centrality finds nodes which are structurally important to the network: the ones that bind it together.
```
# First check unweighted (just the structure)
sorted(nx.betweenness_centrality(graphs[0]).items(),
key=lambda x:x[1], reverse=True)[0:10]
# Let's care about interactions now
sorted(nx.betweenness_centrality(graphs[0],
weight='weight_inv').items(),
key=lambda x:x[1], reverse=True)[0:10]
```
We can see there are some differences between the unweighted and weighted centrality measures. Another thing to note is that we are using the weight_inv attribute instead of weight (the number of interactions between characters). This decision is based on how we want to assign the notion of "importance" of a character. The basic idea behind betweenness centrality is to find nodes which are essential to the structure of the network. As betweenness centrality computes shortest paths underneath, weighted betweenness centrality would end up penalising characters with a high number of interactions. By using weight_inv we instead favour the characters with many interactions.
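A tiny numeric illustration of the point: shortest-path algorithms minimise total edge cost, so frequent interactions (large `weight`) must become cheap edges (small `weight_inv`) for those characters to lie on many shortest paths.

```python
# Interaction counts for two hypothetical character pairs.
interactions = {"close_pair": 50, "rare_pair": 2}
cost = {pair: 1 / w for pair, w in interactions.items()}
# The heavily interacting pair is now the *cheaper* edge to traverse.
assert cost["close_pair"] < cost["rare_pair"]
print(cost)  # {'close_pair': 0.02, 'rare_pair': 0.5}
```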
## PageRank
The billion dollar algorithm, PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.
NOTE: We don't need to worry about weight and weight_inv in PageRank, as the algorithm uses weights in the opposite sense (larger weights are better). This may seem confusing, since different centrality measures have different definitions of weights, so it is always better to check the documentation before using weights in a centrality measure.
```
# by default weight attribute in PageRank is weight
# so we use weight=None to find the unweighted results
sorted(nx.pagerank_numpy(graphs[0],
weight=None).items(),
key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.pagerank_numpy(
graphs[0], weight='weight').items(),
key=lambda x:x[1], reverse=True)[0:10]
```
### Exercise
#### Is there a correlation between these techniques?
Find the correlation between these four techniques.
- pagerank (weight = 'weight')
- betweenness_centrality (weight = 'weight_inv')
- weighted_degree
- degree centrality
HINT: Use pandas correlation
```
from nams.solutions.got import correlation_centrality
correlation_centrality(graphs[0])
```
## Evolution of importance of characters over the books
According to degree centrality, the most important character in the first book is Eddard Stark, but he is not even in the top 10 of the fifth book. Importance changes over the course of the five books, because, you know, stuff happens ;)
Let's look at the evolution of degree centrality of a few characters, like Eddard Stark, Jon Snow, and Tyrion, who showed up in the top 10 of degree centrality in the first book.
We create a dataframe with characters as columns and books as the index, where every entry is the degree centrality of that character in that particular book, and plot the evolution of degree centrality for Eddard Stark, Jon Snow, and Tyrion.
We can see that the importance of Eddard Stark in the network dies off, and while Jon Snow's drops in the fourth book, it rises suddenly in the fifth.
```
evol = [nx.degree_centrality(graph)
for graph in graphs]
evol_df = pd.DataFrame.from_records(evol).fillna(0)
evol_df[['Eddard-Stark',
'Tyrion-Lannister',
'Jon-Snow']].plot()
plt.show()
set_of_char = set()
for i in range(5):
set_of_char |= set(list(
evol_df.T[i].sort_values(
ascending=False)[0:5].index))
set_of_char
```
### Exercise
Plot the evolution of betweenness centrality of the above mentioned characters over the 5 books.
```
from nams.solutions.got import evol_betweenness
evol_betweenness(graphs)
```
## So what's up with Stannis Baratheon?
```
sorted(nx.degree_centrality(graphs[4]).items(),
key=lambda x:x[1], reverse=True)[:5]
sorted(nx.betweenness_centrality(graphs[4]).items(),
key=lambda x:x[1], reverse=True)[:5]
nx.draw(nx.barbell_graph(5, 1), with_labels=True)
```
As we know, a higher betweenness centrality means that the node is crucial to the structure of the network. In the case of Stannis Baratheon in the fifth book, he has characteristics similar to node 5 in the barbell graph above: he seems to be holding the network together.
As evident from the betweenness centrality scores of the barbell graph example, node 5 is the most important node in this network.
```
nx.betweenness_centrality(nx.barbell_graph(5, 1))
```
## Community detection in Networks
A network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set is densely connected internally. There are multiple algorithms and definitions for calculating these communities in a network.
We will use the Louvain community detection algorithm to find the modules in our graph.
```
plt.figure(figsize=(15, 15))
partition = community.best_partition(graphs[0])
size = float(len(set(partition.values())))
pos = nx.kamada_kawai_layout(graphs[0])
count = 0
colors = ['red', 'blue', 'yellow', 'black',
          'brown', 'purple', 'green', 'pink']
for com in set(partition.values()):
    list_nodes = [nodes for nodes in partition.keys()
                  if partition[nodes] == com]
    nx.draw_networkx_nodes(graphs[0], pos, list_nodes,
                           node_size=20, node_color=colors[count])
    count = count + 1
nx.draw_networkx_edges(graphs[0], pos, alpha=0.2)
plt.show()
# Louvain community detection finds 8 different communities
partition_dict = {}
for character, par in partition.items():
    if par in partition_dict:
        partition_dict[par].append(character)
    else:
        partition_dict[par] = [character]
len(partition_dict)
partition_dict[2]
```
If we plot one of these communities, we see a denser network compared to the original network, which contains all the characters.
```
nx.draw(nx.subgraph(graphs[0], partition_dict[3]))
nx.draw(nx.subgraph(graphs[0],partition_dict[1]))
```
We can test this by calculating the density of the network and the community.
As in the following example, the network between characters in a community is about 5 times as dense as the original network.
```
nx.density(nx.subgraph(
graphs[0], partition_dict[4])
)/nx.density(graphs[0])
```
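For reference, the density used above is, for an undirected simple graph, `2m / (n(n-1))`: the fraction of possible edges that are present. The node and edge counts below are made up for illustration.

```python
def density(num_nodes, num_edges):
    # fraction of the n*(n-1)/2 possible undirected edges that exist
    return 2 * num_edges / (num_nodes * (num_nodes - 1))

# e.g. a 10-node community with 30 internal edges vs a 100-node graph with 300 edges
ratio = density(10, 30) / density(100, 300)
print(round(ratio, 2))  # 11.0
```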
### Exercise
Find the most important node in the partitions according to degree centrality of the nodes using the partition_dict we have already created.
```
from nams.solutions.got import most_important_node_in_partition
most_important_node_in_partition(graphs[0], partition_dict)
```
## Solutions
Here are the solutions to the exercises above.
```
from nams.solutions import got
import inspect
print(inspect.getsource(got))
```
# Electrical Power Generation 2
## Objective and Prerequisites
This example (an extension of the earlier 'Electrical Power Generation 1' example) will teach you how to choose an optimal set of power stations to turn on in order to satisfy anticipated power demand over a 24-hour time horizon, with the added option of using hydroelectric power plants to help satisfy that demand.
This model is example 16 from the fifth edition of Model Building in Mathematical Programming by H. Paul Williams on pages 271-272 and 326-327.
This example is at the intermediate level, where we assume that you know Python and the Gurobi Python API and that you have some knowledge of building mathematical optimization models.
**Download the Repository** <br />
You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip).
---
## Problem Description
In this problem, thermal power generation units are grouped into three distinct types, with different characteristics for each type (power output, cost per megawatt hour, startup cost, etc.). A unit can be on or off, with a startup cost associated with transitioning from off to on, and power output that can fall anywhere between a specified minimum and maximum value when the unit is on. There are also two hydroelectric plants available, also with different cost and power generation characteristics. A 24-hour time horizon is divided into 5 discrete time periods, each with an expected total power demand. The model decides which units to turn on, and when, in order to satisfy demand for each time period. The model also captures a reserve requirement, where the selected power plants must be capable of increasing their output, while still respecting their maximum output, in order to cope with the situation where actual demand exceeds predicted demand.
A set of generators is available to satisfy power demand for the following day. Anticipated demand is as follows:
| Time Period | Demand (megawatts) |
| --- | --- |
| 12 am to 6 am | 15000 |
| 6 am to 9 am | 30000 |
| 9 am to 3 pm | 25000 |
| 3 pm to 6 pm | 40000 |
| 6 pm to 12 am | 27000 |
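Combining the demand table with the period lengths used later in the model (6, 3, 6, 3, 6 hours), a quick back-of-the-envelope calculation gives the total energy that must be generated over the day:

```python
# Period lengths and demands copied from the tables in this notebook
period_hours = [6, 3, 6, 3, 6]                  # hours per period
demand = [15000, 30000, 25000, 40000, 27000]    # MW demanded in each period

# Energy = power * time, summed over the five periods
total_mwh = sum(h * d for h, d in zip(period_hours, demand))
print(total_mwh)  # 612000 MWh over the 24-hour horizon
```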
Thermal generators are grouped into three types, with the following minimum and maximum output for each type (when they are on):
| Type | Number available | Minimum output (MW) | Maximum output (MW) |
| --- | --- | --- | --- |
| 0 | 12 | 850 | 2000 |
| 1 | 10 | 1250 | 1750 |
| 2 | 5 | 1500 | 4000 |
There are costs associated with using a thermal generator: a cost per hour when the generator is on (and generating its minimum output), a cost per megawatt hour above its minimum, and a startup cost for turning a generator on:
| Type | Cost per hour (when on) | Cost per MWh above minimum | Startup cost |
| --- | --- | --- | --- |
| 0 | $\$1000$ | $\$2.00$ | $\$2000$ |
| 1 | $\$2600$ | $\$1.30$ | $\$1000$ |
| 2 | $\$3000$ | $\$3.00$ | $\$500$ |
Two hydroelectric generators are also available, each with a fixed power output (when on):
| Hydro plant | Output (MW) |
| --- | --- |
| A | 900 |
| B | 1400 |
The costs associated with using a hydro plant are slightly different. There's an hourly cost, but it is much smaller than the hourly cost of a thermal generator. The real cost for a hydroelectric plant comes from depletion of the water in the reservoir, which happens at different rates for the two units. The reservoir must be replenished before the end of the time horizon by pumping water into it, which consumes electricity. A hydroelectric plant also has a startup cost.
| Hydro plant | Cost per hour (when on) | Startup cost | Reservoir depth reduction (m/hr) |
| --- | --- | --- | --- |
| A | $\$90$ | $\$1500$ | 0.31 |
| B | $\$150$ | $\$1200$ | 0.47 |
Pumping water into the reservoir consumes electricity at a rate of 3000 MWh of electricity per meter of height. The height of the reservoir at the end of the time horizon must be equal to the height at the beginning.
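To get a feel for how pumping couples to generation, here is a small illustrative calculation (not part of the original model) using the figures above for hydro plant A:

```python
# Running hydro plant A for one 6-hour period lowers the reservoir by
# 0.31 m/hr * 6 hr; refilling costs 3000 MWh of electricity per metre.
height_reduction = 0.31   # m per hour of operation (plant A)
hours = 6
pump_rate = 3000          # MWh per metre of height restored

depth_lost = height_reduction * hours      # ~1.86 m
pump_energy = depth_lost * pump_rate       # ~5580 MWh to refill
print(depth_lost, pump_energy)
```

Note that this is slightly more than the 900 MW * 6 h = 5400 MWh plant A generates in that time, which suggests the value of hydro in this model comes from shifting cheap thermal energy across periods and providing reserve, not from net energy production.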
Generators must meet predicted demand, but they must also have sufficient reserve capacity to be able to cope with the situation where actual demand exceeds predicted demand. For this model, the set of selected thermal generators plus the set of hydro generators must be able to produce as much as 115% of predicted demand.
Which generators should be committed to meet anticipated demand in order to minimize total cost?
---
## Model Formulation
### Sets and Indices
$t \in \text{Types}=\{0,1,2\}$: Types of thermal generators.
$h \in \text{HydroUnits}=\{0,1\}$: Two hydro generators.
$p \in \text{Periods}=\{0,1,2,3,4\}$: Time periods.
### Parameters
$\text{period_hours}_{p} \in \mathbb{N}^+$: Number of hours in each time period.
$\text{demand}_p \in \mathbb{R}^+$: Total power demand for time period $p$.
$\text{generators}_t \in \mathbb{N}^+$: Number of thermal generators of type $t$.
$\text{maxstart0} \in \mathbb{N}^+$: Number of thermal generators of each type that are on at the beginning of the time horizon (and hence available in time period 0 without paying a startup cost).
$\text{min_output}_t \in \mathbb{R}^+$: Minimum output for thermal generator type $t$ (when on).
$\text{max_output}_t \in \mathbb{R}^+$: Maximum output for thermal generator type $t$.
$\text{base_cost}_t \in \mathbb{R}^+$: Minimum operating cost (per hour) for a thermal generator of type $t$.
$\text{per_mw_cost}_t \in \mathbb{R}^+$: Cost to generate one additional MW (per hour) for a thermal generator of type $t$.
$\text{startup_cost}_t \in \mathbb{R}^+$: Startup cost for thermal generator of type $t$.
$\text{hydro_load}_h \in \mathbb{R}^+$: Output for hydro generator $h$.
$\text{hydro_cost}_h \in \mathbb{R}^+$: Cost for operating hydro generator $h$.
$\text{hydro_startup_cost}_h \in \mathbb{R}^+$: Startup cost for hydro generator $h$.
$\text{hydro_height_reduction}_h \in \mathbb{R}^+$: Hourly reduction in reservoir height from operating hydro generator $h$.
### Decision Variables
$\text{ngen}_{t,p} \in \mathbb{N}^+$: Number of thermal generators of type $t$ that are on in time period $p$.
$\text{output}_{t,p} \in \mathbb{R}^+$: Total power output from thermal generators of type $t$ in time period $p$.
$\text{nstart}_{t,p} \in \mathbb{N}^+$: Number of thermal generators of type $t$ to start in time period $p$.
$\text{hydro}_{h,p} \in [0,1]$: Indicates whether hydro generators $h$ is on in time period $p$.
$\text{hydro_start}_{h,p} \in [0,1]$: Indicates whether hydro generator $h$ starts in time period $p$.
$\text{height}_{p} \in \mathbb{R}^+$: Height of reservoir in time period $p$.
$\text{pumping}_{p} \in \mathbb{R}^+$: Power used to replenish reservoir in time period $p$.
### Objective Function
- **Cost**: Minimize the cost (in USD) to satisfy the predicted electricity demand.
\begin{equation}
\text{Minimize} \quad Z_{on} + Z_{extra} + Z_{startup} + Z_{hydro} + Z_{hydro\_startup}
\end{equation}
\begin{equation}
Z_{on} = \sum_{(t,p) \in \text{Types} \times \text{Periods}}{\text{base_cost}_t*\text{period_hours}_p*\text{ngen}_{t,p}}
\end{equation}
\begin{equation}
Z_{extra} = \sum_{(t,p) \in \text{Types} \times \text{Periods}}{\text{per_mw_cost}_t*\text{period_hours}_p*(\text{output}_{t,p} - \text{min_output}_t*\text{ngen}_{t,p})}
\end{equation}
\begin{equation}
Z_{startup} = \sum_{(t,p) \in \text{Types} \times \text{Periods}}{\text{startup_cost}_t*\text{nstart}_{t,p}}
\end{equation}
\begin{equation}
Z_{hydro} = \sum_{(h,p) \in \text{HydroUnits} \times \text{Periods}}{\text{hydro_cost}_h*\text{period_hours}_p*\text{hydro}_{h,p}}
\end{equation}
\begin{equation}
Z_{hydro\_startup} = \sum_{(h,p) \in \text{HydroUnits} \times \text{Periods}}{\text{hydro_startup_cost}_h*\text{hydro_start}_{h,p}}
\end{equation}
### Constraints
- **Available generators**: Number of generators used must be less than or equal to the number available.
\begin{equation}
\text{ngen}_{t,p} \leq \text{generators}_t \quad \forall (t,p) \in \text{Types} \times \text{Periods}
\end{equation}
- **Demand**: Total power generated across all generator types must meet anticipated demand plus pumping for each time period $p$.
\begin{equation}
\sum_{t \in \text{Types}}{\text{output}_{t,p}} +
\sum_{h \in \text{HydroUnits}}{\text{hydro_load}_h*\text{hydro}_{h,p}} \geq
\text{demand}_p + \text{pumping}_p \quad \forall p \in \text{Periods}
\end{equation}
- **Min/max generation**: Power generation must respect thermal generator min/max values.
\begin{equation}
\text{output}_{t,p} \geq \text{min_output}_t*\text{ngen}_{t,p} \quad \forall (t,p) \in \text{Types} \times \text{Periods}
\end{equation}
\begin{equation}
\text{output}_{t,p} \leq \text{max_output}_t*\text{ngen}_{t,p} \quad \forall (t,p) \in \text{Types} \times \text{Periods}
\end{equation}
- **Reserve**: Selected thermal generators plus hydro units must be able to satisfy demand that is as much as 15% above the prediction.
\begin{equation}
\sum_{t \in \text{Types}}{\text{max_output}_t*\text{ngen}_{t,p}} +
\sum_{h \in \text{HydroUnits}}{\text{hydro_load}_h} \geq 1.15 * \text{demand}_p \quad \forall p \in \text{Periods}
\end{equation}
- **Thermal startup**: Establish the relationship between the number of active thermal generators and the number of startups (use $\text{maxstart0}$ for the period before the time horizon starts).
\begin{equation}
\text{ngen}_{t,p} \leq \text{ngen}_{t,p-1} + \text{nstart}_{t,p} \quad \forall (t,p) \in \text{Types} \times \text{Periods}
\end{equation}
- **Hydro startup**: Establish relationship between hydro generator state and number of hydro startups (assume hydro plants are off at the beginning of the horizon)
\begin{equation}
\text{hydro}_{h,p} \leq \text{hydro}_{h,p-1} + \text{hydro_start}_{h,p} \quad \forall (h,p) \in \text{HydroUnits} \times \text{Periods}
\end{equation}
- **Reservoir height**: Track reservoir height. Note that the height at the end of the final time period must equal the height at the beginning of the first.
  - Reservoir level constraints: Height increases due to pumping activity and decreases due to hydroelectric generation.
\begin{equation}
\text{height}_{p} = \text{height}_{p-1} + \text{period_hours}_{p}*\text{pumping}_{p}/3000 -
\sum_{h \in \text{HydroUnits}}{\text{period_hours}_{p}*\text{hydro_height_reduction}_{h}*\text{hydro}_{h,p}} \quad \forall p \in \text{Periods}
\end{equation}
  - Cyclic constraint: Height at the first period must be equal to height at the last period.
\begin{equation}
\text{height}_{pfirst} = \text{height}_{plast} + \text{period_hours}_{pfirst}*\text{pumping}_{pfirst}/3000 -
\sum_{h \in \text{HydroUnits}}{\text{period_hours}_{pfirst}*\text{hydro_height_reduction}_{h}*\text{hydro}_{h,pfirst}}
\end{equation}
---
## Python Implementation
We import the Gurobi Python Module and other Python libraries.
```
%pip install gurobipy
import numpy as np
import pandas as pd
import gurobipy as gp
from gurobipy import GRB
# tested with Python 3.7.0 & Gurobi 9.0
```
## Input Data
We define all the input data of the model.
```
# Parameters
ntypes = 3
nperiods = 5
maxstart0 = 5
hydrounits = 2
generators = [12, 10, 5]
period_hours = [6, 3, 6, 3, 6]
demand = [15000, 30000, 25000, 40000, 27000]
min_load = [850, 1250, 1500]
max_load = [2000, 1750, 4000]
base_cost = [1000, 2600, 3000]
per_mw_cost = [2, 1.3, 3]
startup_cost = [2000, 1000, 500]
hydro_load = [900, 1400]
hydro_cost = [90, 150]
hydro_height_reduction = [0.31, 0.47]
hydro_startup_cost = [1500, 1200]
```
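Before building the model, a quick sanity check (a sketch, not part of the original notebook) confirms that the thermal fleet alone has enough capacity to meet the 115% reserve level even at peak demand:

```python
# Data repeated from the cell above
generators = [12, 10, 5]              # number of units per thermal type
max_load = [2000, 1750, 4000]         # max output (MW) per unit of each type
demand = [15000, 30000, 25000, 40000, 27000]

max_thermal = sum(n * m for n, m in zip(generators, max_load))  # 61500 MW
peak_reserve = 1.15 * max(demand)                               # ~46000 MW
print(max_thermal, max_thermal >= peak_reserve)
```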
## Model Deployment
We create a model and the variables. For each time period, we have:
- `ngen`: an integer decision variable capturing the number of active generators of each type;
- `nstart`: an integer decision variable capturing the number of generators of each type we must start;
- `output`: a continuous decision variable capturing the total power output for each generator type;
- `hydro`: a binary decision variable indicating whether a hydro unit is active;
- `hydrostart`: a binary decision variable indicating whether a hydro unit must be started;
- `pumping`: a continuous decision variable capturing the amount of energy used to replenish the reservoir;
- `height`: a continuous decision variable capturing the height of the reservoir.
```
model = gp.Model('PowerGeneration2')
ngen = model.addVars(ntypes, nperiods, vtype=GRB.INTEGER, name="ngen")
nstart = model.addVars(ntypes, nperiods, vtype=GRB.INTEGER, name="nstart")
output = model.addVars(ntypes, nperiods, vtype=GRB.CONTINUOUS, name="genoutput")
hydro = model.addVars(hydrounits, nperiods, vtype=GRB.BINARY, name="hydro")
hydrostart = model.addVars(hydrounits, nperiods, vtype=GRB.BINARY, name="hydrostart")
pumping = model.addVars(nperiods, vtype=GRB.CONTINUOUS, name="pumping")
height = model.addVars(nperiods, vtype=GRB.CONTINUOUS, name="height")
```
Next we insert the constraints:
The number of active generators can't exceed the number of generators:
```
# Generator count
numgen = model.addConstrs(ngen[type, period] <= generators[type]
for type in range(ntypes) for period in range(nperiods))
```
Total power output for a thermal generator type depends on the number of generators of that type that are active.
```
# Respect minimum and maximum output per generator type
min_output = model.addConstrs((output[type, period] >= min_load[type] * ngen[type, period])
for type in range(ntypes) for period in range(nperiods))
max_output = model.addConstrs((output[type, period] <= max_load[type] * ngen[type, period])
for type in range(ntypes) for period in range(nperiods))
```
Total generator output (thermal plus hydro) for each time period must meet predicted demand plus pumping.
```
# Meet demand
meet_demand = model.addConstrs(gp.quicksum(output[type, period] for type in range(ntypes)) +
gp.quicksum(hydro_load[unit]*hydro[unit,period] for unit in range(hydrounits))
>= demand[period] + pumping[period]
for period in range(nperiods))
```
Maintain appropriate reservoir levels
```
# Reservoir levels
reservoir = model.addConstrs(height[period] == height[period-1] + period_hours[period]*pumping[period]/3000 -
gp.quicksum(hydro_height_reduction[unit]*period_hours[period]*hydro[unit,period] for unit in range(hydrounits))
for period in range(1,nperiods))
# cyclic - height at end must equal height at beginning
reservoir0 = model.addConstr(height[0] == height[nperiods-1] + period_hours[0]*pumping[0]/3000 -
gp.quicksum(hydro_height_reduction[unit]*period_hours[0]*hydro[unit,0]
for unit in range(hydrounits)))
```
Selected generators must be able to cope with an excess of demand.
```
# Provide sufficient reserve capacity
reserve = model.addConstrs(gp.quicksum(max_load[type]*ngen[type, period] for type in range(ntypes)) >=
1.15*demand[period] - sum(hydro_load)
for period in range(nperiods))
```
Connect the decision variables that capture active thermal generators with the decision variables that count startups.
```
# Startup constraint
startup0 = model.addConstrs((ngen[type,0] <= maxstart0 + nstart[type,0])
for type in range(ntypes))
startup = model.addConstrs((ngen[type,period] <= ngen[type,period-1] + nstart[type,period])
for type in range(ntypes) for period in range(1,nperiods))
```
Connect hydro decision variables with hydro startup decision variables.
```
# Hydro startup constraint
hydro_startup0 = model.addConstrs((hydro[unit,0] <= hydrostart[unit,0])
for unit in range(hydrounits))
hydro_startup = model.addConstrs((hydro[unit,period] <= hydro[unit,period-1] + hydrostart[unit,period])
for unit in range(hydrounits) for period in range(1,nperiods))
```
Objective: minimize total cost. Cost consists of five components: the cost for running active thermal generation units, the cost to generate power beyond the minimum for each unit, the cost to start up thermal generation units, the cost to operate hydro units, and the cost to start up hydro units.
```
# Objective: minimize total cost
active = gp.quicksum(base_cost[type]*period_hours[period]*ngen[type,period]
for type in range(ntypes) for period in range(nperiods))
per_mw = gp.quicksum(per_mw_cost[type]*period_hours[period]*(output[type,period] - min_load[type]*ngen[type,period])
for type in range(ntypes) for period in range(nperiods))
startup_obj = gp.quicksum(startup_cost[type]*nstart[type,period]
for type in range(ntypes) for period in range(nperiods))
hydro_active = gp.quicksum(hydro_cost[unit]*period_hours[period]*hydro[unit,period]
for unit in range(hydrounits) for period in range(nperiods))
hydro_startup = gp.quicksum(hydro_startup_cost[unit]*hydrostart[unit,period]
for unit in range(hydrounits) for period in range(nperiods))
model.setObjective(active + per_mw + startup_obj + hydro_active + hydro_startup)
```
Next, we start the optimization and Gurobi finds the optimal solution.
```
model.optimize()
```
---
## Analysis
The anticipated demand for electricity over the 24-hour time window can be met for a total cost of $\$1,000,630$, as compared with the $\$1,002,540$ that was required to meet the same demand without the hydro plants. The detailed plan for each time period is as follows.
### Unit Commitments
The following table shows the number of thermal generators of each type that are active in each time period in the optimal solution:
```
rows = ["Thermal" + str(t) for t in range(ntypes)]
units = pd.DataFrame(columns=range(nperiods), index=rows, data=0.0)
for t in range(ntypes):
    for p in range(nperiods):
        units.loc["Thermal"+str(t), p] = ngen[t,p].x
units
```
The following table shows which hydro plants are active in each time period in this plan:
```
rows = ["HydroA", "HydroB"]
hydrotable = pd.DataFrame(columns=range(nperiods), index=rows, data=0.0)
for p in range(nperiods):
    hydrotable.loc["HydroA", p] = int(hydro[0,p].x)
    hydrotable.loc["HydroB", p] = int(hydro[1,p].x)
hydrotable
```
The hydro plants are lightly used in the schedule - we only use plant B for the last two time periods.
The following shows the pumping that must be done in order to support this level of hydro activity:
```
pumptable = pd.DataFrame(columns=range(nperiods), index=["Pumping"], data=0.0)
for p in range(nperiods):
    pumptable.loc["Pumping", p] = pumping[p].x
pumptable
```
It is interesting to note that the plan simultaneously operates HydroB and pumps in time period 4.
While it might appear that costs could be reduced by operating the hydro plant at a lower load, in this model the hydro plants have fixed output. If it makes economic sense to turn a hydro plant on, then we have to replenish the water drained, even if it means using some of the generated electricity to pump.
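The scale of that obligation can be seen with a rough calculation (illustrative only; it assumes plant B is on for the final two periods, 3 + 6 hours, as in the commitment table above):

```python
# Hydro plant B: 0.47 m of reservoir depth lost per hour of operation,
# and 3000 MWh of pumping energy needed per metre of height restored.
hours_on = 3 + 6                    # last two time periods
depth_lost = 0.47 * hours_on        # ~4.23 m
pump_energy = depth_lost * 3000     # ~12690 MWh must be pumped back
print(depth_lost, pump_energy)
```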
---
## References
H. Paul Williams, Model Building in Mathematical Programming, fifth edition.
Copyright © 2020 Gurobi Optimization, LLC
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tempfile
import seaborn as sns
import random
sns.set(style="darkgrid")
import logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
from banditpylib import trials_to_dataframe
from banditpylib.arms import GaussianArm
from banditpylib.bandits import MultiArmedBandit
from banditpylib.protocols import CollaborativeLearningProtocol, SinglePlayerProtocol
from banditpylib.learners.mab_collaborative_ftbai_learner import LilUCBHeuristicCollaborative
from banditpylib.learners.mab_fcbai_learner import LilUCBHeuristic
# Arm means spread over (0, 1); shuffled so the best arm sits at a random index
means = [(i/10)**0.5 for i in range(1,10)]
random.shuffle(means)
confidence = 0.99
num_learners, rounds, horizon = 10, 5, 20000
param_dict = {
"rounds": [rounds] * num_learners,
"horizon": [horizon] * num_learners,
"num_agents": [2*i + 1 for i in range(num_learners)]
}
arms = [GaussianArm(mu=mean, std=1) for mean in means]
bandit = MultiArmedBandit(arms=arms)
learners = [LilUCBHeuristicCollaborative(num_agents=param_dict["num_agents"][i],
arm_num=len(arms),
rounds=param_dict["rounds"][i],
horizon=param_dict["horizon"][i],
name="learner %d" % (i+1)) for i in range(num_learners)]
trials = 10
game1 = CollaborativeLearningProtocol(bandit=bandit, learners=learners)
temp_file1 = tempfile.NamedTemporaryFile()
game1.play(trials=trials, output_filename=temp_file1.name)
data_df1 = trials_to_dataframe(temp_file1.name)
learners = [LilUCBHeuristic(len(arms), confidence)]
game2 = SinglePlayerProtocol(bandit=bandit, learners=learners, horizon=horizon)
temp_file2 = tempfile.NamedTemporaryFile()
game2.play(trials=trials, output_filename=temp_file2.name)
data_df2 = trials_to_dataframe(temp_file2.name)
data_df = pd.concat([data_df1, data_df2])
data_df["confidence"] = confidence
fig = plt.figure()
ax = plt.subplot(111)
sns.barplot(x='confidence', y='regret', hue='learner', data=data_df)
plt.ylabel('regret')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
fig = plt.figure()
ax = plt.subplot(111)
sns.barplot(x='confidence', y='total_actions', hue='learner', data=data_df)
plt.ylabel('pulls')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
confidence = 0.99
num_learners = 10
rounds1, horizon1 = 3, 20000
rounds2, horizon2 = 10, 20000
param_dict1, param_dict2 = {}, {}
num_agents = [2*i + 1 for i in range(num_learners)]
for i in range(num_learners):
    param_dict1[i] = {"rounds": rounds1, "horizon": horizon1, "num_agents": num_agents[i]}
    param_dict2[i] = {"rounds": rounds2, "horizon": horizon2, "num_agents": num_agents[i]}
arms = [GaussianArm(mu=mean, std=1) for mean in means]
bandit = MultiArmedBandit(arms=arms)
learners1 = [LilUCBHeuristicCollaborative(num_agents=param_dict1[i]["num_agents"],
arm_num=len(arms),
rounds=param_dict1[i]["rounds"],
horizon=param_dict1[i]["horizon"],
name="learner1 %d" % (i+1)) for i in range(num_learners)]
learners2 = [LilUCBHeuristicCollaborative(num_agents=param_dict2[i]["num_agents"],
arm_num=len(arms),
rounds=param_dict2[i]["rounds"],
horizon=param_dict2[i]["horizon"],
name="learner2 %d" % (i+1)) for i in range(num_learners)]
trials = 10
game1 = CollaborativeLearningProtocol(bandit=bandit, learners=learners1)
temp_file3 = tempfile.NamedTemporaryFile()
game1.play(trials=trials, output_filename=temp_file3.name)
data_df1 = trials_to_dataframe(temp_file3.name)
game2 = CollaborativeLearningProtocol(bandit=bandit, learners=learners2)
temp_file4 = tempfile.NamedTemporaryFile()
game2.play(trials=trials, output_filename=temp_file4.name)
data_df2 = trials_to_dataframe(temp_file4.name)
def get_rounds_from_learner_name(learner):
    # Learner names look like "learner1 3" / "learner2 7"; the digit after
    # "learner" identifies which parameter set (rounds1 or rounds2) was used
    i = int(learner.split()[0][-1])
    if i == 1:
        return rounds1
    return rounds2
def get_num_agents_from_learner_name(learner):
    # The trailing index in the name maps back into the num_agents list
    i = int(learner.split()[-1])
    return num_agents[i - 1]
num_agents
data_df = pd.concat([data_df1, data_df2])
data_df["rounds"] = data_df["learner"].apply(get_rounds_from_learner_name)
data_df["num_agents"] = data_df["learner"].apply(get_num_agents_from_learner_name)
fig = plt.figure()
ax = plt.subplot(111)
sns.lineplot(x='num_agents', y='total_actions', hue='rounds', data=data_df)
plt.ylabel('pulls')
plt.xlabel('number of agents')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
fig = plt.figure()
ax = plt.subplot(111)
sns.lineplot(x='num_agents', y='regret', hue='rounds', data=data_df)
plt.ylabel('regret')
plt.xlabel('number of agents')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
```
# Runtime Metrics / Tags Example
## Prerequisites
* Kind cluster with Seldon Installed
* curl
* s2i
* seldon-core-analytics
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html) to setup Seldon Core with an ingress.
```
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
```
## Install Seldon Core Analytics
```
!helm install seldon-core-analytics ../../../helm-charts/seldon-core-analytics -n seldon-system --wait
```
## Define Model
```
%%writefile Model.py
import logging
from seldon_core.user_model import SeldonResponse
def reshape(x):
    if len(x.shape) < 2:
        return x.reshape(1, -1)
    else:
        return x

class Model:
    def predict(self, features, names=[], meta={}):
        X = reshape(features)

        logging.info(f"model features: {features}")
        logging.info(f"model names: {names}")
        logging.info(f"model meta: {meta}")
        logging.info(f"model X: {X}")

        runtime_metrics = [{"type": "COUNTER", "key": "instance_counter", "value": len(X)}]
        runtime_tags = {"runtime": "tag", "shared": "right one"}
        return SeldonResponse(data=X, metrics=runtime_metrics, tags=runtime_tags)

    def metrics(self):
        return [{"type": "COUNTER", "key": "requests_counter", "value": 1}]

    def tags(self):
        return {"static": "tag", "shared": "not right one"}
```
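Before building the image, the `reshape` helper can be sanity-checked outside the container (a standalone sketch using only NumPy; the Seldon wrapper is not needed for this):

```python
import numpy as np

def reshape(x):
    # Promote a 1-D feature vector to a single-row 2-D array; pass 2-D input through
    if len(x.shape) < 2:
        return x.reshape(1, -1)
    else:
        return x

print(reshape(np.array([1, 2, 3])).shape)                # (1, 3)
print(reshape(np.array([[1, 2, 3], [4, 5, 6]])).shape)   # (2, 3)
```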
## Build Image and load into kind cluster
```
%%bash
s2i build -E ENVIRONMENT_REST . seldonio/seldon-core-s2i-python37-ubi8:1.7.0-dev runtime-metrics-tags:0.1
kind load docker-image runtime-metrics-tags:0.1
```
## Deploy Model
```
%%writefile deployment.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: seldon-model-runtime-data
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: runtime-metrics-tags:0.1
name: my-model
graph:
name: my-model
type: MODEL
name: example
replicas: 1
!kubectl apply -f deployment.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-model-runtime-data -o jsonpath='{.items[0].metadata.name}')
```
## Send few inference requests
```
%%bash
curl -s -H 'Content-Type: application/json' -d '{"data": {"ndarray": [[1, 2, 3]]}}' \
http://localhost:8003/seldon/seldon/seldon-model-runtime-data/api/v1.0/predictions
%%bash
curl -s -H 'Content-Type: application/json' -d '{"data": {"ndarray": [[1, 2, 3], [4, 5, 6]]}}' \
http://localhost:8003/seldon/seldon/seldon-model-runtime-data/api/v1.0/predictions
```
## Check metrics
```
import json
metrics =! kubectl run --quiet=true -it --rm curlmetrics --image=radial/busyboxplus:curl --restart=Never -- \
curl -s seldon-core-analytics-prometheus-seldon.seldon-system/api/v1/query?query=instance_counter_total
json.loads(metrics[0])["data"]["result"][0]["value"][1]
metrics =! kubectl run --quiet=true -it --rm curlmetrics --image=radial/busyboxplus:curl --restart=Never -- \
curl -s seldon-core-analytics-prometheus-seldon.seldon-system/api/v1/query?query=requests_counter_total
json.loads(metrics[0])["data"]["result"][0]["value"][1]
```
## Cleanup
```
!kubectl delete -f deployment.yaml
!helm delete seldon-core-analytics --namespace seldon-system
```
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex client library: Custom training image classification model for online prediction using exported dataset
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_image_classification_online_exported_ds.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_image_classification_online_exported_ds.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
## Overview
This tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom image classification model for online prediction, using an exported `Dataset` resource.
### Dataset
The dataset used for this tutorial is the [Flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.
### Objective
In this tutorial, you learn how to create a custom model using an exported `Dataset` resource from a Python script in a Docker container using the Vertex client library, and then do a prediction on the deployed model. You can alternatively create models using the `gcloud` command-line tool or online using the Google Cloud Console.
The steps performed include:
- Create a Vertex `Dataset` resource.
- Export the `Dataset` resource's manifest.
- Create a Vertex custom job for training a model.
- Import the exported dataset manifest.
- Train the model.
- Retrieve and load the model artifacts.
- View the model evaluation.
- Upload the model as a Vertex `Model` resource.
- Deploy the `Model` resource to a serving `Endpoint` resource.
- Make a prediction.
- Undeploy the `Model` resource.
### Costs
This tutorial uses billable components of Google Cloud (GCP):
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
## Installation
Install the latest version of Vertex client library.
```
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
```
### Restart the kernel
Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations)
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.
**Otherwise**, follow these steps:
1. In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and click **Create**.
4. In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click **Create**. A JSON file that contains your key downloads to your local environment.
6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys

# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an `Endpoint` resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
#### Import Vertex client library
Import the Vertex client library into our Python environment.
```
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
```
#### Vertex constants
Set up the following constants for Vertex:
- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
```
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
#### Labeling constants
Set constants unique to `Dataset` resource labeling
```
# Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
```
#### Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 NVIDIA Tesla K80 GPUs allocated to each VM, you would specify:
`(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)`
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify `(None, None)` to use a container image to run on a CPU.
*Note*: TF releases before 2.3 with GPU support will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
```
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (
        aip.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_DEPLOY_GPU")),
    )
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
```
#### Container (Docker) image
Next, set the Docker container images for training and prediction. For training:
- TensorFlow 1.15
- `gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest`
- TensorFlow 2.1
- `gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest`
- TensorFlow 2.2
- `gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest`
- TensorFlow 2.3
- `gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest`
- TensorFlow 2.4
- `gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest`
- `gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest`
- XGBoost
- `gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1`
- Scikit-learn
- `gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest`
- Pytorch
- `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest`
- `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest`
- `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest`
- `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest`
For the latest list, see [Pre-built containers for training](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers).
For prediction:
- TensorFlow 1.15
- `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest`
- `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest`
- TensorFlow 2.1
- `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest`
- `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest`
- TensorFlow 2.2
- `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest`
- `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest`
- TensorFlow 2.3
- `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest`
- `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest`
- XGBoost
- `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest`
- `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest`
- `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest`
- `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest`
- Scikit-learn
- `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest`
- `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest`
- `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest`
For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers).
```
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
```
#### Machine Type
Next, set the machine type to use for training and prediction.
- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction.
- `machine type`
- `n1-standard`: 3.75 GB of memory per vCPU.
- `n1-highmem`: 6.5 GB of memory per vCPU.
- `n1-highcpu`: 0.9 GB of memory per vCPU.
- `vCPUs`: number of vCPUs \[2, 4, 8, 16, 32, 64, 96 \]
*Note: The following is not supported for training:*
- `standard`: 2 vCPUs
- `highcpu`: 2, 4 and 8 vCPUs
*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
```
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
```
# Tutorial
Now you are ready to start creating your own custom model and training it on the Flowers dataset.
## Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
- Dataset Service for `Dataset` resources.
- Model Service for `Model` resources.
- Endpoint Service for deployment.
- Job Service for batch jobs and custom training.
- Prediction Service for serving.
```
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
```
## Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
### Create `Dataset` resource instance
Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:
1. Uses the dataset client service.
2. Creates a Vertex `Dataset` resource (`aip.Dataset`), with the following parameters:
- `display_name`: The human-readable name you choose to give it.
- `metadata_schema_uri`: The schema for the dataset type.
3. Calls the client dataset service method `create_dataset`, with the following parameters:
- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
- `dataset`: The Vertex dataset object instance you created.
4. The method returns an `operation` object.
An `operation` object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
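The polling pattern in the table above does not depend on Vertex itself. Below is a minimal sketch of the typical wait loop, using a hypothetical stand-in class in place of the real operation object returned by the client library:

```python
import time

class FakeOperation:
    """Hypothetical stand-in for a Vertex long-running operation (illustration only)."""

    def __init__(self, ticks_until_done=3):
        self._ticks = ticks_until_done

    def done(self):
        # Each status check brings the fake operation closer to completion.
        self._ticks -= 1
        return self._ticks <= 0

    def result(self, timeout=None):
        # The real result() blocks until completion (and raises on timeout);
        # here we simply poll done() instead.
        while not self.done():
            time.sleep(0.01)
        return {"name": "projects/p/locations/r/datasets/123"}

op = FakeOperation()
response = op.result(timeout=90)
print(response["name"])
```

With the real client library, `operation.result(timeout=...)` raises if the operation does not finish in time, which is why the helper functions in this tutorial wrap it in a `try`/`except`.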
```
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=timeout)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("flowers-" + TIMESTAMP, DATA_SCHEMA)
```
Now save the unique dataset identifier for the `Dataset` resource instance you created.
```
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
```
### Data preparation
The Vertex `Dataset` resource for images has some requirements for your data:
- Images must be stored in a Cloud Storage bucket.
- Each image file must be in an image format (PNG, JPEG, BMP, ...).
- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.
- The index file must be either CSV or JSONL.
#### CSV
For image classification, the CSV index file has the requirements:
- No heading.
- First column is the Cloud Storage path to the image.
- Second column is the label.
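For illustration, an index in the format above can be parsed with Python's standard `csv` module. The rows here are hypothetical, not taken from the actual Flowers index:

```python
import csv
import io

# Hypothetical index contents: no heading, Cloud Storage path first, label second.
index_csv = """\
gs://my-bucket/flowers/daisy_001.jpg,daisy
gs://my-bucket/flowers/tulip_042.jpg,tulips
"""

for path, label in csv.reader(io.StringIO(index_csv)):
    print(path, "->", label)
```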
#### Location of Cloud Storage training data.
Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
```
IMPORT_FILE = (
"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv"
)
```
#### Quick peek at your data
You will use a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
```
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
```
### Import data
Now, import the data into your Vertex Dataset resource. Use this helper function `import_data` to import the data. The function does the following:
- Uses the `Dataset` client.
- Calls the client method `import_data`, with the following parameters:
  - `name`: The fully qualified identifier of the `Dataset` resource to import the data into.
  - `import_configs`: The import configuration, a Python list containing a dictionary with the key/value entries:
    - `gcs_sources`: A list of URIs to the paths of the one or more index files.
    - `import_schema_uri`: The schema identifying the labeling type.
The `import_data()` method returns a long running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
```
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
```
### Export dataset index
Next, you will export the dataset index to a JSONL file which will then be used by your custom training job to get the data and corresponding labels for training your Flowers model. Use this helper function `export_data` to export the dataset index. The function does the following:
- Uses the dataset client.
- Calls the client method `export_data`, with the following parameters:
  - `name`: The fully qualified identifier of the `Dataset` resource to export.
  - `export_config`: The export configuration, a dictionary with the key/value entry:
    - `gcs_destination`: The Cloud Storage bucket to write the JSONL dataset index file to.
The `export_data()` method returns a long running `operation` object. This will take a few minutes to complete. The helper function will return the long running operation and the result of the operation when the export has completed.
```
EXPORT_FILE = BUCKET_NAME + "/export"
def export_data(dataset_id, gcs_dest):
config = {"gcs_destination": {"output_uri_prefix": gcs_dest}}
start_time = time.time()
try:
operation = clients["dataset"].export_data(
name=dataset_id, export_config=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation, result
except Exception as e:
print("exception:", e)
return None, None
_, result = export_data(dataset_id, EXPORT_FILE)
```
#### Quick peek at your exported dataset index file
Let's now take a quick peek at the contents of the exported dataset index file. When `export_data()` completed, the response object was obtained from the `result()` method of the long running operation. The response object contains the property:
- `exported_files`: A list of the paths to the exported dataset index files, which in this case will be one file.
You will get the path to the exported dataset index file (`result.exported_files[0]`) and then display the first ten JSON objects in the file -- i.e., data items.
The JSONL format for each data item is:
`{ "imageGcsUri": path_to_the_image, "classificationAnnotation": { "displayName": label } }`
```
jsonl_index = result.exported_files[0]
! gsutil cat $jsonl_index | head
```
#### Reading the index file
You will need to add code to your custom training Python script to read the exported dataset index, so that you can generate training batches for custom training your model.
Below is an example of how you might read the exported dataset index file:
1. Use Tensorflow's Cloud Storage file methods to open the file (`tf.io.gfile.GFile()`) and read all the lines (`f.readlines()`), where each line is a data item represented as a JSONL object.
2. For each line in the file, convert the line to a JSON object (`json.loads()`).
3. Extract the path to the image (`['imageGcsUri']`) and label (`['classificationAnnotation']['displayName']`).
```
import json
import tensorflow as tf
with tf.io.gfile.GFile(jsonl_index, "r") as f:
export_data_items = f.readlines()
for i in range(10):
    j = json.loads(export_data_items[i])
    print(j["imageGcsUri"], j["classificationAnnotation"]["displayName"])
```
#### Create TFRecords
Next, you need to create a feeder mechanism to feed data from the dataset index file to the model you will train. There are many choices for how to construct a feeder. We will cover two options here, both using TFRecords:
1. Storing the image data as raw uncompressed image data (1 byte per pixel).
2. Storing the image data as preprocessed data -- machine learning ready -- (4 bytes per pixel).
These two methods demonstrate a trade-off between disk storage and compute time. In both cases, we do a prepass over the image data to cache the data into a form that will accelerate the training time for the model. But, by caching we are both using disk space and increasing I/O traffic from the disk to the compute device -- e.g., CPU, GPU, TPU.
In the raw uncompressed format, you minimize the disk size and I/O traffic for the cached data, but incur the overhead of repeating the image preprocessing on each epoch. In the preprocessed format, you minimize compute time by preprocessing once and caching the machine-learning-ready data. When training as float32, the data on disk is four times larger, with a corresponding increase in disk space and I/O traffic from the disk to the compute engine.
The helper functions `TFExampleImageUncompressed` and `TFExampleImagePreprocessed` both take the parameters:
- `path`: The Cloud Storage path to the image file.
- `label`: The corresponding label for the image file.
- `shape`: The (H,W) input shape to resize the image. If `None`, no resizing occurs.
The helper function `TFExampleImagePreprocessed` has an additional parameter:
- `dtype`: The floating point representation after the pixel data has been normalized. By default, it is set to 32-bit float (np.float32). If you are using NVIDIA GPUs or TPUs, you can alternatively train in 16-bit float by setting `dtype = np.float16`. There are two benefits to training with 16-bit float, when it does not affect the accuracy or number of epochs:
1. Each matrix multiply operation is 4 times faster than the 32-bit equivalent -- albeit the model weights need to be stored as 16-bit as well.
2. The disk space and I/O bandwidth are reduced by half.
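The storage trade-offs above are easy to verify with NumPy. For a single 128x128 RGB image, the float32 cache entry is four times the size of the raw uint8 pixels, and float16 halves the float32 footprint:

```python
import numpy as np

H, W, C = 128, 128, 3
raw = np.zeros((H, W, C), dtype=np.uint8)    # 1 byte per pixel value
as_f32 = (raw / 255.0).astype(np.float32)    # 4 bytes per pixel value
as_f16 = (raw / 255.0).astype(np.float16)    # 2 bytes per pixel value

print(raw.nbytes, as_f32.nbytes, as_f16.nbytes)  # prints: 49152 196608 98304
```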
Let's look a bit deeper into the functions for creating `TFRecord` training data. First, `TFRecord` is a serialized binary encoding of the training data. As an encoding, one needs to specify a schema for how the fields are encoded, which is then used later to decode when feeding training data to your model.
The schema is defined as an instance of `tf.train.Example` per data item in the training data. Each instance of `tf.train.Example` consists of a sequence of fields, each defined as a key/value pair. In our helper function, the key entries are:
- `image`: The encoded raw image data.
- `label`: The label assigned to the image.
- `shape`: The shape of the image when decoded.
The value for each key/value pair is an instance of `tf.train.Feature`, where:
- `bytes_list`: the data to encode is a byte string.
- `int64_list`: the data to encode is an array of one or more integer values.
- `float_list`: the data to encode is an array of one or more floating point values.
```
import numpy as np
def TFExampleImageUncompressed(path, label, shape=None):
    """The uncompressed version of the image"""
    # read in (and uncompress) the image from Cloud Storage
    with tf.io.gfile.GFile(path, "rb") as f:
        data = f.read()
    image = tf.io.decode_image(data).numpy()
    if shape:
        # cast back to uint8 after resizing to keep 1 byte per pixel value
        image = tf.image.resize(image, shape).numpy().astype(np.uint8)
    shape = image.shape
    # make the record
    return tf.train.Example(
        features=tf.train.Features(
            feature={
                "image": tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[image.tobytes()])
                ),
                "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
                "shape": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[shape[0], shape[1], shape[2]])
                ),
            }
        )
    )
def TFExampleImagePreprocessed(path, label, shape=None, dtype=np.float32):
    """The normalized (machine learning ready) version of the image"""
    # read in (and uncompress) the image from Cloud Storage, then normalize the pixel data
    with tf.io.gfile.GFile(path, "rb") as f:
        data = f.read()
    image = (tf.io.decode_image(data).numpy() / 255.0).astype(dtype)
    if shape:
        # tf.image.resize returns float32, so cast back to the requested dtype
        image = tf.image.resize(image, shape).numpy().astype(dtype)
    shape = image.shape
    # make the record
    return tf.train.Example(
        features=tf.train.Features(
            feature={
                "image": tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[image.tobytes()])
                ),
                "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
                "shape": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[shape[0], shape[1], shape[2]])
                ),
            }
        )
    )
```
#### Write training data to TFRecord file
Next, you will create a single TFRecord for all the training data specified in the exported dataset index:
- Specify the cache method by setting the variable `CACHE` to either `TFExampleImageUncompressed` or `TFExampleImagePreprocessed`.
- Convert class names from the dataset to integer labels, using `cls2label`.
- Read in the data item list from the exported dataset index file -- `tf.io.gfile.GFile(jsonl_index, 'r')`.
- Set the Cloud Storage location to store the cached TFRecord file -- `GCS_TFRECORD_URI`.
- Generate the cached data using `tf.io.TFRecordWriter(gcs_tfrecord_uri)` for each data item in the exported dataset index:
  - Extract the Cloud Storage path and class name - `json.loads(data_item)`
  - Convert the class name to an integer label - `label = cls2label[cls]`
  - Encode the data item - `example = CACHE(image, label)`
  - Write the encoded data item to the TFRecord file - `writer.write(example.SerializeToString())`
This may take about 20 minutes.
```
# Select TFRecord method of encoding
CACHE = TFExampleImageUncompressed # [ TFExampleImageUncompressed, TFExampleImagePreprocessed]
# Map class names to integer labels
cls2label = {"daisy": 0, "dandelion": 1, "roses": 2, "sunflowers": 3, "tulips": 4}
# Read in each example from exported dataset index
with tf.io.gfile.GFile(jsonl_index, "r") as f:
data = f.readlines()
# The path to the TFRecord cached file.
GCS_TFRECORD_URI = BUCKET_NAME + "/flowers.tfrecord"
# Create the TFRecord cached file
with tf.io.TFRecordWriter(GCS_TFRECORD_URI) as writer:
n = 0
for data_item in data:
j = json.loads(data_item)
image = j["imageGcsUri"]
cls = j["classificationAnnotation"]["displayName"]
label = cls2label[cls]
example = CACHE(image, label, shape=(128, 128))
writer.write(example.SerializeToString())
n += 1
if n % 10 == 0:
print(n, image)
listing = ! gsutil ls -la $GCS_TFRECORD_URI
print("TFRecord File", listing)
```
## Train a model
There are two ways you can train a custom model using a container image:
- **Use a Google Cloud prebuilt container**. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
- **Use your own custom container image**. If you use your own container, the container needs to contain your code for training a custom model.
## Prepare your custom job specification
Now that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:
- `worker_pool_spec` : The specification of the type of machine(s) you will use for training and how many (single or distributed).
- `python_package_spec` : The specification of the Python package to be installed with the pre-built container.
### Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training.
- `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8.
- `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial, if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU.
- `accelerator_count`: The number of accelerators.
```
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
```
### Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.
- `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
- `boot_disk_size_gb`: Size of disk in GB.
```
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
PARAM_FILE = BUCKET_NAME + "/params.txt"
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_flowers.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
```
### Assemble a job specification
Now assemble the complete description for the custom job specification:
- `display_name`: The human readable name you assign to this custom job.
- `job_spec`: The specification for the custom job.
- `worker_pool_specs`: The specification for the machine VM instances.
- `base_output_directory`: This tells the service the Cloud Storage location where to save the model artifacts (when variable `DIRECT = False`). The service will then pass the location to the training script as the environment variable `AIP_MODEL_DIR`, and the path will be of the form:
`<output_uri_prefix>/model`
```
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
```
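As noted above, when `DIRECT = False` the service passes the save location to the training script through the `AIP_MODEL_DIR` environment variable. A minimal sketch of how a training script might resolve the location (the helper name `resolve_model_dir` is made up for illustration):

```python
import os

def resolve_model_dir(cli_model_dir=None):
    """Prefer an explicit --model-dir argument; fall back to AIP_MODEL_DIR."""
    return cli_model_dir or os.environ.get("AIP_MODEL_DIR", "")

# Simulate the service-managed case by setting the environment variable.
os.environ["AIP_MODEL_DIR"] = "gs://my-bucket/custom_job/model"
print(resolve_model_dir())                        # environment variable is used
print(resolve_model_dir("gs://my-bucket/other"))  # explicit argument wins
```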
### Examine the training package
#### Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
- PKG-INFO
- README.md
- setup.cfg
- setup.py
- trainer
- \_\_init\_\_.py
- task.py
The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.
The file `trainer/task.py` is the Python script for executing the custom training job. *Note*: when we referred to it in the worker pool specification, we replaced the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`).
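That slash-to-dot conversion can be sketched in a couple of lines (illustrative only):

```python
import os

def module_name(script_path):
    """Convert a package-relative script path to a Python module path."""
    return os.path.splitext(script_path)[0].replace("/", ".")

print(module_name("trainer/task.py"))  # prints: trainer.task
```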
#### Package Assembly
In the following cells, you will assemble the training package.
```
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Flowers image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
```
#### Task.py contents
In the next cell, you write the contents of the training script `task.py`. We won't go into detail here; it's just there for you to browse. In summary, the script:
- Gets the directory where to save the model artifacts from the command line (`--model-dir`), and if not specified, from the environment variable `AIP_MODEL_DIR`.
- Loads the Flowers dataset from TFRecords (`--training-data`).
- Creates a tf.data.Dataset generator from the TFRecord.
- Builds a simple ConvNet model using TF.Keras model API.
- Compiles the model (`compile()`).
- Sets a training distribution strategy for multi-workers using `args.distribute`.
- Trains the model (`fit()`) with epochs and steps according to the arguments `args.epochs` and `args.steps`.
- Saves the trained model (`save(args.model_dir)`) to the specified model directory.
```
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Flowers
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default='/tmp/saved_model', type=str, help='Model dir.')
parser.add_argument('--training-data', dest='tfrecord_uri')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
dataset = tf.data.TFRecordDataset(args.tfrecord_uri)
feature_description = {
'image': tf.io.FixedLenFeature([], tf.string),
'label': tf.io.FixedLenFeature([], tf.int64),
'shape': tf.io.FixedLenFeature([3], tf.int64),
}
def _parse_function(proto):
''' parse the next serialized tf.train.Example using the feature description '''
example = tf.io.parse_single_example(proto, feature_description)
image = tf.io.decode_raw(example['image'], tf.int32)
shape = tf.cast(example['shape'], tf.int32)
label = tf.cast(example['label'], tf.int32)
image.set_shape([128 * 128 * 3])
image = tf.reshape(image, (128, 128, 3))
image = ( tf.cast(image, tf.float32) / 255.0)
return image, label
return dataset.map(_parse_function).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(128, 128, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(64, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(128, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(5, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
```
#### Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
```
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_flowers.tar.gz
```
### Train the model
Now start the training of your custom training job on Vertex. Use this helper function `create_custom_job`, which takes the following parameter:
- `custom_job`: The specification for the custom job.
The helper function calls the job client service's `create_custom_job` method, with the following parameters:
- `parent`: The Vertex location path to `Dataset`, `Model` and `Endpoint` resources.
- `custom_job`: The specification for the custom job.
You will display a handful of the fields returned in the `response` object; the two of most interest are:
- `response.name`: The Vertex fully qualified identifier assigned to this custom training job. You save this identifier for use in subsequent steps.
- `response.state`: The current state of the custom training job.
```
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
```
Now get the unique identifier for the custom job you created.
```
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
```
### Get information on a custom job
Next, use this helper function `get_custom_job`, which takes the following parameter:
- `name`: The Vertex fully qualified identifier for the custom job.
The helper function calls the job client service's `get_custom_job` method, with the following parameter:
- `name`: The Vertex fully qualified identifier for the custom job.
If you recall, you got the Vertex fully qualified identifier for the custom job in the `response.name` field when you called the `create_custom_job` method, and saved the identifier in the variable `job_id`.
```
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
```
# Deployment
Training the above model may take upwards of 20 minutes.
Once your model is done training, you can calculate the actual training time as the difference between the job's `update_time` and `create_time`. For deployment, you will need the location of the saved model, which the Python script saved in your Cloud Storage bucket at `MODEL_DIR + '/saved_model.pb'`.
```
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
```
## Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras `model.load_model()` method passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`.
```
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
```
## Evaluate the model
Now let's find out how good the model is.
### Load evaluation data
You will load some sample data from the end of the Flowers dataset -- the last 10 items -- and then preprocess the data items to form:
- `x_test`: The preprocessed image data in memory.
- `y_test`: The corresponding labels.
```
x_test = []
y_test = []
data_items = export_data_items[-10:]
for data_item in data_items:
data_item = json.loads(data_item)
print("FILE", data_item["imageGcsUri"])
with tf.io.gfile.GFile(data_item["imageGcsUri"], "rb") as f:
data = f.read()
image = tf.io.decode_image(data)
image = tf.image.resize(image, (128, 128))
image = (image.numpy() / 255.0).astype(np.float32)
cls = data_item["classificationAnnotation"]["displayName"]
label = cls2label[cls]
x_test.append(image)
y_test.append(label)
x_test = np.asarray(x_test)
y_test = np.asarray(y_test)
```
### Perform the model evaluation
Now evaluate how well the model in the custom job did.
```
model.evaluate(x_test, y_test)
```
## Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex `Model` service, which will create a Vertex `Model` resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
### How does the serving function work
When you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a `tf.string`.
The serving function consists of two parts:
- `preprocessing function`:
- Converts the input (`tf.string`) to the input shape and data type of the underlying model (dynamic graph).
- Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.
- `post-processing function`:
- Converts the model output to the format expected by the receiving application -- e.g., compresses the output.
- Packages the output for the receiving application -- e.g., adds headings, makes a JSON object, etc.
Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.
One thing to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error while compiling the serving function, indicating that you are using an EagerTensor, which is not supported.
### Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (`serving_fn`) and attach it to the model as a preprocessing step. Add a `@tf.function` decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (`tf.string`), which is passed to the serving function (`serving_fn`). The serving function preprocesses the `tf.string` into raw (uncompressed) numpy bytes (`preprocess_fn`) to match the input requirements of the model:
- `io.decode_jpeg` - Decompresses the JPEG image, which is returned as a TensorFlow tensor with three channels (RGB).
- `image.convert_image_dtype` - Changes integer pixel values to float32 and rescales the pixel data to between 0 and 1.
- `image.resize` - Resizes the image to match the input shape for the model.
At this point, the data can be passed to the model (`m_call`).
```
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(128, 128))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 128, 128, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
model, model_path_to_deploy, signatures={"serving_default": serving_fn}
)
```
## Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
For your purpose, you need the signature of the serving function. Why? Well, when you send data for prediction as an HTTP request packet, the image data is base64 encoded, but your TF.Keras model takes numpy input. Your serving function does the conversion from base64 to a numpy array.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
```
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
```
### Upload the model
Use this helper function `upload_model` to upload your model, stored in SavedModel format, up to the `Model` service, which will instantiate a Vertex `Model` resource instance for your model. Once you've done that, you can use the `Model` resource instance in the same way as any other Vertex `Model` resource instance, such as deploying to an `Endpoint` resource for serving predictions.
The helper function takes the following parameters:
- `display_name`: A human readable name for the `Model` resource.
- `image_uri`: The container image for the model deployment.
- `model_uri`: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the `trainer/task.py` saved the model artifacts, which we specified in the variable `MODEL_DIR`.
The helper function calls the `Model` client service's method `upload_model`, which takes the following parameters:
- `parent`: The Vertex location root path for `Dataset`, `Model` and `Endpoint` resources.
- `model`: The specification for the Vertex `Model` resource instance.
Let's now dive deeper into the Vertex model specification `model`. This is a dictionary object that consists of the following fields:
- `display_name`: A human readable name for the `Model` resource.
- `metadata_schema_uri`: Since your model was built without a Vertex `Dataset` resource, you will leave this blank (`''`).
- `artifact_uri`: The Cloud Storage path where the model is stored in SavedModel format.
- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the `Model` resource will serve predictions. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.
Uploading a model into a Vertex `Model` resource returns a long running operation, since it may take a few moments. You call `response.result()`, which is a synchronous call and will return when the Vertex `Model` resource is ready.
The helper function returns the Vertex fully qualified identifier for the corresponding Vertex `Model` instance, `upload_model_response.model`. You will save the identifier for subsequent steps in the variable `model_to_deploy_id`.
```
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"flowers-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
```
### Get `Model` resource information
Now let's get the model information for just your model. Use this helper function `get_model`, with the following parameter:
- `name`: The Vertex unique identifier for the `Model` resource.
This helper function calls the Vertex `Model` client service's method `get_model`, with the following parameter:
- `name`: The Vertex unique identifier for the `Model` resource.
```
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
```
## Deploy the `Model` resource
Now deploy the trained Vertex custom `Model` resource. This requires two steps:
1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource to the `Endpoint` resource.
### Create an `Endpoint` resource
Use this helper function `create_endpoint` to create an endpoint to deploy the model to for serving predictions, with the following parameter:
- `display_name`: A human readable name for the `Endpoint` resource.
The helper function uses the endpoint client service's `create_endpoint` method, which takes the following parameter:
- `display_name`: A human readable name for the `Endpoint` resource.
Creating an `Endpoint` resource returns a long running operation, since it may take a few moments to provision the `Endpoint` resource for serving. You call `response.result()`, which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the `Endpoint` resource: `response.name`.
```
ENDPOINT_NAME = "flowers_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
```
Now get the unique identifier for the `Endpoint` resource you created.
```
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
```
### Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
- Single Instance: The online prediction requests are processed on a single compute instance.
- Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.
- Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specify.
- Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
- Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
- Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed (and below which not to de-provision), and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
```
MIN_NODES = 1
MAX_NODES = 1
```
### Deploy `Model` resource to the `Endpoint` resource
Use this helper function `deploy_model` to deploy the `Model` resource to the `Endpoint` resource you created for serving predictions, with the following parameters:
- `model`: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
- `deploy_model_display_name`: A human readable name for the deployed model.
- `endpoint`: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the `Endpoint` client service's method `deploy_model`, which takes the following parameters:
- `endpoint`: The Vertex fully qualified `Endpoint` resource identifier to deploy the `Model` resource to.
- `deployed_model`: The requirements specification for deploying the model.
- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
- If only one model, then specify as **{ "0": 100 }**, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
- If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify as **{ "0": percent, model_id: percent, ... }**, where `model_id` is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the `deployed_model` parameter. This parameter is specified as a Python dictionary with the minimum required fields:
- `model`: The Vertex fully qualified model identifier of the (upload) model to deploy.
- `display_name`: A human readable name for the deployed model.
- `disable_container_logging`: This disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deploying to production.
- `dedicated_resources`: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.
- `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.
- `min_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`.
- `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.
#### Traffic Split
Let's now dive deeper into the `traffic_split` parameter. This parameter is specified as a Python dictionary, and it can at first be a bit confusing: you can deploy more than one instance of your model to an endpoint, and then set what percentage of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So with a traffic split, you might deploy v2 to the same endpoint as v1, but have it receive only, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
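To make that concrete, here is a hedged sketch of such a split; the existing model's deployment id is hypothetical:

```python
# Hypothetical 90/10 canary split: "0" refers to the model being deployed
# in this request; the other key is the deployment id of the model that is
# already serving on the endpoint.
existing_deployed_model_id = "1234567890"  # hypothetical id
traffic_split = {"0": 10, existing_deployed_model_id: 90}

# Vertex requires the percentages to sum to exactly 100.
assert sum(traffic_split.values()) == 100
```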
#### Response
The method returns a long running operation `response`. We will wait synchronously for the operation to complete by calling `response.result()`, which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
```
DEPLOYED_NAME = "flowers_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
```
## Make an online prediction request
Now make an online prediction request to your deployed model.
```
# Last data item in exported dataset index
data_items = export_data_items[-1:]
data_item = json.loads(data_items[0])
image_path = data_item["imageGcsUri"]
print("IMAGE PATH", image_path)
```
### Prepare the request content
You are going to send the image as compressed JPG image, instead of the raw uncompressed bytes:
- `cv2.imread`: Read the compressed JPG image back into memory as a decoded numpy array.
- `tf.image.resize`: Resize the image to the input shape of the model -- (128, 128, 3).
- `cv2.imwrite`: Write the resized image back to disk.
- `base64.b64encode`: Read back the resized compressed image and encode into a base 64 encoded string.
```
! gsutil cp $image_path tmp.jpg
import base64
import cv2
test_image = cv2.imread("tmp.jpg", cv2.IMREAD_COLOR)
print("before:", test_image.shape)
test_image = cv2.resize(test_image, (128, 128))
print("after:", test_image.shape)
cv2.imwrite("tmp.jpg", test_image.astype(np.uint8))
# bytes = tf.io.read_file('tmp.jpg')
with open("tmp.jpg", "rb") as f:
bytes = f.read()
b64str = base64.b64encode(bytes).decode("utf-8")
```
### Send the prediction request
Ok, now you have a test image. Use this helper function `predict_image`, which takes the following parameters:
- `image`: The test image data as a numpy array.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed to.
- `parameters_dict`: Additional parameters for serving.
This function calls the prediction client service `predict` method with the following parameters:
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed to.
- `instances`: A list of instances (encoded images) to predict.
- `parameters`: Additional parameters for serving.
To pass the image data to the prediction service, in the previous step you encoded the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network. You need to tell the serving binary, where your model is deployed, that the content has been base64 encoded, so it can decode the content on the other end.
Each instance in the prediction request is a dictionary entry of the form:
{serving_input: {'b64': content}}
- `input_name`: the name of the input layer of the underlying model.
- `'b64'`: A key that indicates the content is base64 encoded.
- `content`: The compressed JPG image bytes as a base64 encoded string.
Since the `predict()` service can take multiple images (instances), you will send your single image as a list of one image. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the `predict()` service.
The `response` object returns a list, where each element corresponds to the matching image in the request. You will see in the output for each prediction:
- `predictions`: Confidence level for the prediction, between 0 and 1, for each of the classes.
```
def predict_image(image, endpoint, parameters_dict):
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{serving_input: {"b64": image}}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters_dict
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", prediction)
predict_image(b64str, endpoint_id, None)
```
## Undeploy the `Model` resource
Now undeploy your `Model` resource from the serving `Endpoint` resource. Use the helper function `undeploy_model`, which takes the following parameters:
- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` is deployed.
This function calls the endpoint client service's method `undeploy_model`, with the following parameters:
- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed.
- `traffic_split`: How to split traffic among the remaining deployed models on the `Endpoint` resource.
Since this is the only deployed model on the `Endpoint` resource, you can simply leave `traffic_split` empty by setting it to `{}`.
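As a small illustrative sketch (the model ids below are hypothetical), `traffic_split` maps each remaining deployed model id to a percentage, and the percentages must sum to 100:

```python
# Hypothetical split between two deployed models left on an endpoint.
traffic_split = {"deployed-model-a": 70, "deployed-model-b": 30}

# When undeploying the only deployed model, {} is the valid empty split.
single_model_split = {}
```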
```
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
```
# Cleaning up
To clean up all GCP resources used in this project, you can [delete the GCP
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
```
# Fantasy football
## Steps
1. Collect data from files
2. Feature exploration, selection and engineering
3. Fit model to predict points for all the players
4. Use Linear programming to get the best 15 based on constraints
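Step 4 can be sketched in miniature before reaching the full linear program: with hypothetical player data, pick the highest-scoring squad that fits a budget. Here a 2-player toy squad stands in for the real 15-player selection:

```python
from itertools import combinations

# Hypothetical players with predicted points (step 3 output) and costs.
players = [
    {"name": "A", "cost": 5.0, "predicted_points": 6.0},
    {"name": "B", "cost": 9.0, "predicted_points": 8.5},
    {"name": "C", "cost": 4.5, "predicted_points": 5.5},
    {"name": "D", "cost": 7.0, "predicted_points": 7.0},
]
budget = 12.0

# Brute-force all feasible squads; an LP solver replaces this at full scale.
best = max(
    (squad for squad in combinations(players, 2)
     if sum(p["cost"] for p in squad) <= budget),
    key=lambda squad: sum(p["predicted_points"] for p in squad),
)
```

At 15 players with position and per-team constraints the search space explodes, which is why the notebook reaches for linear programming instead.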
```
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
import bs4
import requests
import json
import matplotlib.pyplot as plt
import glob
from time import time
import os
from IPython.display import display, Markdown, Latex
import seaborn as sns
# scikit libraries aka holy grail
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import f1_score, make_scorer, accuracy_score, recall_score
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier
from sklearn.metrics import classification_report
from sklearn.preprocessing import Imputer
from sklearn.externals import joblib
%matplotlib inline
```
## Concatenate historic gameweek data
```
col_names = ['FirstName', 'Surname', 'PositionsList', 'Team', 'Cost',
'PointsLastRound', 'TotalPoints', 'YellowCards', 'GoalsConceded',
'Saves', 'GoalsScored', 'ValueSeason', 'TransfersOutRound', 'PriceFallRound',
'LastSeasonPoints', 'ValueForm', 'PenaltiesMissed', 'Form',
'Bonus', 'CleanSheets', 'Assists', 'SelectedByPercent', 'OwnGoals',
'PenaltiesSaved', 'DreamteamCount', 'MinutesPlayed', 'TransfersInRound',
'PriceRiseRound', 'RedCards', 'BPS','NextFixture2']
# 'NextFixture1', 'NextFixture2', 'NextFixture3', 'NextFixture4', 'NextFixture5' - use difficulty of teams
# as feature
features = ['PositionsList','Cost','TransfersOutRound','PriceFallRound',
'LastSeasonPoints', 'ValueForm','Form','Bonus','SelectedByPercent','TransfersInRound',
'PriceRiseRound']
# add team as feature and try to model
cumulative_measures = ['YellowCards', 'GoalsConceded',
'Saves', 'GoalsScored', 'PenaltiesMissed',
'CleanSheets', 'Assists', 'OwnGoals' ,
'PenaltiesSaved', 'DreamteamCount', 'MinutesPlayed',
'RedCards', 'BPS']
all_files = glob.glob(os.path.join('./data/historic_data/', '*.csv'))
# df = pd.concat(map(pd.read_csv(usecols=col_names), )))
def read_data(filename):
dataframe = pd.read_csv(filename, usecols=col_names)
file_year_gw = filename.split('FPL')[1]
fpl_year = '20' + file_year_gw[0:2]
fpl_gw = file_year_gw.split('GW')[1].split('.csv')[0]
fpl_gw = ('0' + fpl_gw) if len(fpl_gw) == 1 else fpl_gw
dataframe['Cost'] = dataframe['Cost']/1e6 #Convert cost to millions unit
dataframe['PlayerName'] = dataframe['FirstName'].fillna('') + ' ' + dataframe['Surname'].fillna('') # combine first and last name
dataframe['Year'] = int(fpl_year)
dataframe['Gameweek'] = int(fpl_gw)
dataframe.drop(['FirstName','Surname'], axis = 1, inplace = True)
cols = list(dataframe.columns)
cols = cols[-3:] + cols[:-3] # reorder columns
return dataframe[cols]
df = pd.concat(map(read_data, all_files)).reset_index(drop = True)
df.sort_values(['PlayerName','Year','Gameweek'], inplace=True)
df['PointsLastRound'] = pd.to_numeric(df.PointsLastRound, errors='coerce').fillna(0)
df = df[df.Gameweek != 0].reset_index(drop=True) # remove cumulative game week files (gameweek 0)
df.head()
# players who have played less than 400 minutes throughout the season
df_lesstime = (df.groupby(['PlayerName','Year'])['PointsLastRound','MinutesPlayed']
.agg({'PointsLastRound':'sum', 'MinutesPlayed':'max'})[['MinutesPlayed']]
.transform(lambda x: x < 400)
.query('MinutesPlayed == True')
.reset_index()
)
df_lesstime.head()
df_lesstime.PlayerName.unique()
# remove players for seasons where they have played less than 400 minutes
df_filt = df[~((df.PlayerName.isin(df_lesstime.PlayerName))&(df.Year.isin(df_lesstime.Year)))]
df_filt.head()
```
> **Output columns:** Points Last Round/ Total points
## Explore data
```
df_filt.hist(figsize = (15,15))
df_filt.dtypes
```
> There are a lot of fields with 0. This could be because a) a lot of players hardly play, or b) many fields do not have a good spread of values
## Feature engineering
```
top_6 = ['Arsenal','Chelsea','Man Utd','Man City','Liverpool','Tottenham']
df_filt.columns
t0 = time()
final_df = list()
for i,j in df_filt.groupby(['PlayerName','Year']):
df_player = pd.DataFrame(j[cumulative_measures].apply(pd.to_numeric, errors='coerce'))
df_player = df_player.diff().fillna(df_player.iloc[0])
df_player['PlayerName'] = i[0].strip()
df_player['Gameweek'] = j['Gameweek']
df_player['Year'] = i[1]
df_player['PointsGameweek'] = j['PointsLastRound']
df_player['TopSixOpposition'] = np.where((j['NextFixture2'].isin(top_6)), True, False)
df_player[features] = j[features]
PointsNextFixture1 = df_player[['PointsGameweek']].shift(-1).fillna(0)
PointsNextFixture2 = df_player[['PointsGameweek']].shift(-2).fillna(0)
PointsNextFixture3 = df_player[['PointsGameweek']].shift(-3).fillna(0)
df_player['PointsNextWeek1'] = PointsNextFixture1
df_player['PointsNextWeek1_classify'] = PointsNextFixture1 >= 4 # for classification problem
# Compute cumulative points for the next 3 gameweeks
df_player['CumuPoints3Weeks'] = (PointsNextFixture1 +
PointsNextFixture2 +
PointsNextFixture3)
    # Compute weighted average of points for next 3 gameweeks (weights by week: 0.5, 0.3 and 0.2)
df_player['WeightAvgPoints3Weeks'] = (PointsNextFixture1 * 0.5 +
PointsNextFixture2 * 0.3 +
PointsNextFixture3 * 0.2 )
cols = list(df_player.columns)
rearrange_no = len(cols) - len(cumulative_measures)
cols = cols[-rearrange_no:] + cols[:-rearrange_no] # reorder columns
final_df.append(df_player[cols])
final_df = pd.concat(final_df, axis=0)
print('Time taken {:.3f}'.format(time()-t0))
final_df.head()
# source for code: https://seaborn.pydata.org/examples/many_pairwise_correlations.html
d = final_df.loc[:,[cols for cols in final_df.columns if final_df[cols].dtype in [float,int]]]
corr = d.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5}).set_title('Pairwise correlation')
output_cols = ['PointsNextWeek1', 'CumuPoints3Weeks', 'WeightAvgPoints3Weeks','PointsNextWeek1_classify']
display(Markdown('**Average points of players for Gameweeks they made it to the Dream Team, the points they scored in the next week and the Weighted average of points for the next 3 weeks**'))
(final_df[(final_df.DreamteamCount == 1) & (final_df.Gameweek <= 35)]
.groupby(['Gameweek','Year'])
.agg('mean')[['PointsGameweek','PointsNextWeek1','WeightAvgPoints3Weeks','PointsNextWeek1_classify']]
.mean())
final_df = pd.concat([final_df.drop('PositionsList', axis=1),
pd.get_dummies(final_df['PositionsList'])],
axis=1).reset_index(drop = True)
final_df.columns
final_df[['Gameweek','Year']] = final_df[['Gameweek','Year']].applymap(str)
final_df.head()
final_df['PointsNextWeek1_classify'].value_counts()
print('Ratio of players with more than 4 points in the next week to ones less than 4 points is {:.3f}'.format(4178/19663))
```
Might have to balance the dataset before prediction. There is a 1:5 ratio between the 2 classes (points >= 4 and points < 4)
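One hedged way to handle that imbalance (besides the `class_weight='balanced'` option used later in the grid search) is to oversample the minority class. A minimal standard-library sketch with toy labels:

```python
import random

random.seed(123)
y = [0] * 10 + [1] * 2  # toy labels mimicking a ~5:1 imbalance

minority_idx = [i for i, label in enumerate(y) if label == 1]
n_extra = y.count(0) - len(minority_idx)

# Duplicate randomly chosen minority rows until the class counts match.
extra_rows = [random.choice(minority_idx) for _ in range(n_extra)]
y_balanced = y + [y[i] for i in extra_rows]
```

In practice the same duplicated indices would be applied to the feature rows as well, so X and y stay aligned.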
```
final_df.to_csv('./data/Processed_data.csv')
```
## Train-test split
```
final_df.dtypes
model_feats = [col for col in final_df.columns if col not in output_cols and final_df[col].dtype != object]
model_feats
X_train_valid, X_test, y_train_valid, y_test = train_test_split(final_df.loc[:,model_feats],
final_df.loc[:,output_cols],
test_size=0.30, random_state=123)
X_train_valid.columns
X_train, X_valid, y_train, y_valid = train_test_split(X_train_valid,
y_train_valid,
test_size=0.20, random_state=123)
X_train.shape
y_train.columns
```
## Feature importance
Two ways to go about prediction --
1. A classic regression task, where you predict points and use the root mean square error to compare and evaluate model performance
2. A classification task, where you predict whether a player is likely to score 4 or more points in a gameweek. Why 4? Because that is the number of points you lose for each extra transfer you make in a week.
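The two framings can be compared on toy numbers (the scores below are made up):

```python
import math

actual    = [2, 7, 4, 0, 5]  # toy gameweek points
predicted = [3, 6, 4, 1, 2]  # toy model predictions

# Route 1: regression, scored by root mean square error.
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Route 2: classification on the 4-point threshold, scored by accuracy.
accuracy = sum((a >= 4) == (p >= 4) for a, p in zip(actual, predicted)) / len(actual)
```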
```
param_grid = {'n_estimators': [100,500]}
clf = GridSearchCV(RandomForestClassifier(verbose=0, n_jobs=4, class_weight='balanced'), param_grid, cv=7, scoring=make_scorer(recall_score))
clf.fit(X_train_valid.values, y_train_valid['PointsNextWeek1_classify'].values)
clf.best_score_
X_train_valid.columns
feat_imp = list(zip(list(X_train_valid.columns), clf.best_estimator_.feature_importances_))
pd.DataFrame(feat_imp, columns=['Features','Importance']).sort_values('Importance',ascending = False).reset_index(drop=True)
```
```
from hazma.decay import muon
from hazma.decay import neutral_pion
from hazma.decay import charged_pion
from hazma.parameters import muon_mass as mmu
from hazma.parameters import neutral_pion_mass as mnp
from hazma.parameters import charged_pion_mass as mcp
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
eng_gams = np.logspace(0., 3., num=500)
emu1 = 0.0
emu2 = mmu
emu3 = 2. * mmu
muspec1 = muon(eng_gams, emu1)
muspec2 = muon(eng_gams, emu2)
muspec3 = muon(eng_gams, emu3)
plt.figure(dpi=100)
plt.loglog(eng_gams, muspec2)
plt.loglog(eng_gams, muspec3)
np.save('../../test/decay/muon_data/spec1.npy', muspec1)
np.save('../../test/decay/muon_data/spec2.npy', muspec2)
np.save('../../test/decay/muon_data/spec3.npy', muspec3)
np.save('../../test/decay/muon_data/egams.npy', eng_gams)
np.save('../../test/decay/muon_data/emu1.npy', emu1)
np.save('../../test/decay/muon_data/emu2.npy', emu2)
np.save('../../test/decay/muon_data/emu3.npy', emu3)
enp1 = 0.0
enp2 = 1.1 * mnp
enp3 = 2. * mnp
npspec1 = neutral_pion(eng_gams, enp1)
npspec2 = neutral_pion(eng_gams, enp2)
npspec3 = neutral_pion(eng_gams, enp3)
plt.figure(dpi=100)
plt.loglog(eng_gams, npspec2)
plt.loglog(eng_gams, npspec3)
np.save('../../test/decay/npion_data/spec1.npy', npspec1)
np.save('../../test/decay/npion_data/spec2.npy', npspec2)
np.save('../../test/decay/npion_data/spec3.npy', npspec3)
np.save('../../test/decay/npion_data/egams.npy', eng_gams)
np.save('../../test/decay/npion_data/enp1.npy', enp1)
np.save('../../test/decay/npion_data/enp2.npy', enp2)
np.save('../../test/decay/npion_data/enp3.npy', enp3)
ecp1 = 0.0
ecp2 = mcp
ecp3 = 2. * mcp
cpspec1 = charged_pion(eng_gams, ecp1)
cpspec2 = charged_pion(eng_gams, ecp2)
cpspec3 = charged_pion(eng_gams, ecp3)
plt.figure(dpi=100)
plt.loglog(eng_gams, cpspec2)
plt.loglog(eng_gams, cpspec3)
np.save('../../test/decay/cpion_data/spec1.npy', cpspec1)
np.save('../../test/decay/cpion_data/spec2.npy', cpspec2)
np.save('../../test/decay/cpion_data/spec3.npy', cpspec3)
np.save('../../test/decay/cpion_data/egams.npy', eng_gams)
np.save('../../test/decay/cpion_data/ecp1.npy', ecp1)
np.save('../../test/decay/cpion_data/ecp2.npy', ecp2)
np.save('../../test/decay/cpion_data/ecp3.npy', ecp3)
```
## Boost Spectrum
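The "lab frame" muon energy used below, `2 * mmu`, corresponds to a Lorentz boost with gamma = E / m = 2. A quick numeric check of the implied velocity (this aside is illustrative and not part of `hazma`):

```python
import math

gamma = 2.0  # E / m for a muon with total energy 2 * mmu
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)  # v / c of the boosted muon, ~0.866
```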
```
eng_gams = np.logspace(1., 3., num=500)
emu_rf = mmu
emu_lf = 2. * mmu
muspec_rf = muon(eng_gams, emu_rf)
muspec_lr = muon(eng_gams, emu_lf)
plt.figure(dpi=100)
plt.loglog(eng_gams, muspec_rf, label="RF", lw=4)
plt.loglog(eng_gams, muspec_lr, label="LF", lw=4)
plt.ylabel(r"$\frac{dN}{dE}\ (\mathrm{MeV})^{-1}$", fontsize=20)
plt.xlabel(r"$E_{\gamma}$ (MeV)", fontsize=20)
plt.xlim([1e1,10**2.5])
plt.legend(fontsize=18)
plt.tight_layout()
# plt.savefig("lab_rest_frame.pdf")
plt.figure(dpi=100)
plt.loglog(eng_gams, muspec_lr, label="lab frame", lw=4)
plt.ylabel(r"$\frac{dN}{dE}\ (\mathrm{MeV})^{-1}$", fontsize=20)
plt.xlabel(r"$E_{\gamma}$ (MeV)", fontsize=20)
plt.xlim([1e0,10**2.5])
# plt.savefig("lab_frame.pdf")
```
## _*Using Qiskit Aqua for clique problems*_
This Qiskit Aqua Optimization notebook demonstrates how to use the VQE quantum algorithm to compute the clique of a given graph.
The problem is defined as follows. A clique in a graph $G$ is a complete subgraph of $G$. That is, it is a subset $K$ of the vertices such that every two vertices in $K$ are the two endpoints of an edge in $G$. A maximal clique is a clique to which no more vertices can be added. A maximum clique is a clique that includes the largest possible number of vertices.
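The definition can be checked directly on a toy adjacency matrix (the graph below is made up for illustration):

```python
from itertools import combinations

# Adjacency matrix of a 4-vertex toy graph: vertices 0, 1, 2 are mutually
# connected; vertex 3 is connected only to vertex 2.
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

def is_clique(vertices, adj):
    """True iff every pair of the given vertices shares an edge."""
    return all(adj[u][v] for u, v in combinations(vertices, 2))
```

Here `is_clique([0, 1, 2], adj)` holds, while `is_clique([1, 2, 3], adj)` fails because vertices 1 and 3 are not adjacent.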
We will go through two examples to show:
1. How to run the optimization.
2. How to run the optimization with the VQE.
Note that the solution may not be unique.
#### The problem and a brute-force method.
```
import numpy as np
from qiskit import BasicAer
from qiskit.optimization.ising import clique
from qiskit.optimization.ising.common import random_graph, sample_most_likely
from qiskit.aqua.algorithms import ExactEigensolver
```
First, let us have a look at the graph, which is given in adjacency-matrix form.
```
K = 3 # K means the size of the clique
np.random.seed(100)
num_nodes = 5
w = random_graph(num_nodes, edge_prob=0.8, weight_range=10)
print(w)
```
Let us try a brute-force method. Basically, we exhaustively try all the binary assignments. In each binary assignment, the entry of a vertex is either 0 (meaning the vertex is not in the clique) or 1 (meaning the vertex is in the clique). We print the binary assignment that satisfies the definition of the clique (Note the size is specified as K).
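The exhaustive enumeration can be illustrated standalone for a 3-vertex graph, where each integer from 0 to 2^3 - 1 encodes one binary assignment over the vertices:

```python
L = 3  # number of vertices

# Every binary assignment, as a list of 0/1 entries per vertex.
assignments = [[int(d) for d in format(i, "0{}b".format(L))] for i in range(2 ** L)]
```

For example, integer 5 encodes the assignment `[1, 0, 1]`, i.e. vertices 0 and 2 in the candidate clique.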
```
def brute_force():
# brute-force way: try every possible assignment!
def bitfield(n, L):
result = np.binary_repr(n, L)
return [int(digit) for digit in result]
L = num_nodes # length of the bitstring that represents the assignment
    num_assignments = 2**L
    has_sol = False
    for i in range(num_assignments):
cur = bitfield(i, L)
cur_v = clique.satisfy_or_not(np.array(cur), w, K)
if cur_v:
has_sol = True
break
return has_sol, cur
has_sol, sol = brute_force()
if has_sol:
print("Solution is ", sol)
else:
print("No solution found for K=", K)
qubit_op, offset = clique.get_operator(w, K)
```
#### Part I: Run the optimization using the programmatic approach
Here we directly construct the algorithm and then run() it to get the result.
```
# We will use the qubit_op and offset from above
algo = ExactEigensolver(qubit_op)
result = algo.run()
x = sample_most_likely(result['eigvecs'][0])
ising_sol = clique.get_graph_solution(x)
if clique.satisfy_or_not(ising_sol, w, K):
print("Solution is", ising_sol)
else:
print("No solution found for K=", K)
```
#### Part II: Run the optimization with the VQE
We can also create the objects directly ourselves and run VQE to get the result.
```
from qiskit.aqua import aqua_globals
from qiskit.aqua.algorithms import VQE
from qiskit.aqua.components.optimizers import COBYLA
from qiskit.aqua.components.variational_forms import RY
aqua_globals.random_seed = 10598
optimizer = COBYLA()
var_form = RY(qubit_op.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubit_op, var_form, optimizer)
backend = BasicAer.get_backend('statevector_simulator')
result = vqe.run(backend)
x = sample_most_likely(result['eigvecs'][0])
ising_sol = clique.get_graph_solution(x)
if clique.satisfy_or_not(ising_sol, w, K):
print("Solution is", ising_sol)
else:
print("No solution found for K=", K)
```
# Make a copy of this template
You will need to have access to Quantum Computing Service before running this colab.
This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments.
## How to download iPython notebooks from github
You can retrieve ipython notebooks in the cirq repository by
going to the [doc directory](https://github.com/quantumlib/Cirq/tree/master/docs). For instance, this colab template can be found [here](https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb). Select the file that you would like to download and then click the "Raw" button in the upper right part of the window:

This will show the entire file contents. Right-click and select "Save as" to save this file to your computer. Make sure to save to a file with a ".ipynb" extension. (Note: you may need to select "All files" from the format dropdown instead of "text"). You can also get to this colab's [raw content directly](https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/tutorials/google/colab.ipynb).
You can also retrieve the entire cirq repository by running the following command in a terminal that has git installed: `git clone https://github.com/quantumlib/Cirq.git`
## How to open colab
You can open a new colab from your Google Drive window or by visiting the [colab site](https://colab.research.google.com/notebooks/intro.ipynb). From the colaboratory site, you can use the menu to upload an ipython notebook:

This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others.
### More Documentation Links
* [Quantum Engine concepts](../../google/concepts.md)
* [Quantum Engine documentation](../../google/engine.md)
* [Cirq documentation](https://cirq.readthedocs.io)
* [Colab documentation](https://colab.sandbox.google.com/notebooks/welcome.ipynb)
## Authenticate and install cirq
For details of authentication and installation, please see [go/quantum-engine-quickstart](https://go/quantum-engine-quickstart).
Note: The below code will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install `cirq-unstable` instead of `cirq` to get the most up-to-date features of cirq.
1. Enter the cloud project_id you'd like to use in the 'project_id' field.
2. Then run the cell below (and go through the auth flow for access to the project id you entered).

```
# The Google Cloud Project id to use.
project_id = 'quantum-cloud-client' #@param {type:"string"}
def setup_auth():
"""Runs the user through the Colab OAuth process.
Sets the local Application Default Credentials. For more information on
using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
from google.colab import auth
auth.authenticate_user(clear_output=False)
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
setup_auth()
print("Authentication complete.")
!pip install cirq
print("Cirq installed.")
```
## Create an Engine variable
The following creates an Engine variable that can be used to run programs under the project id you entered above.
```
import cirq
# Create an Engine object to use, providing the project id and the args
# used for authentication (produced by running the authentication above).
engine = cirq.google.Engine(project_id=project_id)
```
## Example
```
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq.google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
```
```
import os
import os.path as pth
import json
import shutil
import pandas as pd
from tqdm import tqdm
import tensorflow as tf
import tensorflow.keras as keras
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(gpus[0], True)
BASE_MODEL_NAME = 'InceptionV3-for-upload'
my_model_base = keras.applications.inception_v3
my_model = my_model_base.InceptionV3
config = {
'is_zscore':True,
# 'input_shape': (540, 960, 3),
'aug': {
#'resize': (270, 480),
'resize': (297, 528),
},
# 'input_shape': (224, 360, 3),
#'input_shape': (270, 480, 3),
'input_shape': (270, 480, 3),
'output_activation': 'softmax',
'num_class': 1049,
'output_size': 1049,
'conv':{
'conv_num': (0), # (3,5,3),
'base_channel': 0, # 4,
'kernel_size': 0, # 3,
'padding':'same',
'stride':'X'
},
'pool':{
'type':'X',
'size':'X',
'stride':'X',
'padding':'same'
},
'fc':{
'fc_num': 0,
},
'activation':'relu',
'between_type': 'avg',
'is_batchnorm': True,
'is_dropout': False,
'dropout_rate': 0.5,
'add_dense':True,
'dense_size': 1024,
'batch_size': 64, #64,
'buffer_size': 256, #256,
'loss': 'CategoricalCrossentropy',
'num_epoch': 10000,
'learning_rate': 1e-5, #1e-3,
'random_state': 7777
}
image_feature_description = {
'image_raw': tf.io.FixedLenFeature([], tf.string),
'randmark_id': tf.io.FixedLenFeature([], tf.int64),
# 'id': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
return tf.io.parse_single_example(example_proto, image_feature_description)
def map_func(target_record):
img = target_record['image_raw']
label = target_record['randmark_id']
img = tf.image.decode_jpeg(img, channels=3)
img = tf.dtypes.cast(img, tf.float32)
return img, label
def resize_and_crop_func(image, label):
result_image = tf.image.resize(image, config['aug']['resize'])
result_image = tf.image.random_crop(result_image, size=config['input_shape'], seed=7777)  # crop the resized image, not the original
return result_image, label
def image_aug_func(image, label):
# No augmentation applied yet; placeholder for future transforms.
return image, label
def post_process_func(image, label):
# result_image = result_image / 255
result_image = my_model_base.preprocess_input(image)
onehot_label = tf.one_hot(label, depth=config['num_class'])
return result_image, onehot_label
data_base_path = pth.join('data', 'public')
os.makedirs(data_base_path, exist_ok=True)
category_csv_name = 'category.csv'
category_json_name = 'category.json'
submission_csv_name = 'sample_submisstion.csv'
train_csv_name = 'train.csv'
# train_zip_name = 'train.zip'
train_tfrecord_name = 'all_train.tfrecords'
train_tfrecord_path = pth.join(data_base_path, train_tfrecord_name)
val_tfrecord_name = 'all_val.tfrecords'
val_tfrecord_path = pth.join(data_base_path, val_tfrecord_name)
# test_zip_name = 'test.zip'
test_tfrecord_name = 'test.tfrecords'
test_tfrecord_path = pth.join(data_base_path, test_tfrecord_name)
train_csv_path = pth.join(data_base_path, train_csv_name)
train_df = pd.read_csv(train_csv_path)
train_dict = {k:v for k, v in train_df.values}
submission_csv_path = pth.join(data_base_path, submission_csv_name)
submission_df = pd.read_csv(submission_csv_path)
# submission_df.head()
category_csv_path = pth.join(data_base_path, category_csv_name)
category_df = pd.read_csv(category_csv_path)
category_dict = {k:v for k, v in category_df.values}
# category_df.head()
train_tfrecord_path
```
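Before the model code, here is a standalone sketch of the warm-up-then-decay learning-rate schedule that `get_lr_callback` builds further down: a linear ramp for the first epochs, an optional hold, then exponential decay toward a floor. The constants here are illustrative, not the notebook's exact values:

```python
def lrfn(epoch, lr_start=1e-5, lr_max=1e-4, lr_min=1e-5,
         lr_ramp_ep=5, lr_sus_ep=0, lr_decay=0.8):
    """Linear warm-up, optional sustain, then exponential decay to a floor."""
    if epoch < lr_ramp_ep:
        return (lr_max - lr_start) / lr_ramp_ep * epoch + lr_start
    if epoch < lr_ramp_ep + lr_sus_ep:
        return lr_max
    return (lr_max - lr_min) * lr_decay ** (epoch - lr_ramp_ep - lr_sus_ep) + lr_min
```

The warm-up phase avoids large early updates on the randomly initialized head, while the decay phase settles fine-tuning.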
### Model
```
import tensorflow as tf
from tensorflow.keras.preprocessing import image
import cv2
import matplotlib.pyplot as plt
from PIL import Image
from sklearn.model_selection import train_test_split, KFold, RepeatedKFold, GroupKFold, RepeatedStratifiedKFold
from sklearn.utils import shuffle
import numpy as np
import pandas as pd
import os
import os.path as pth
import shutil
import time
from tqdm import tqdm
import itertools
from itertools import product, combinations
import numpy as np
from PIL import Image
from IPython.display import clear_output
from multiprocessing import Process, Queue
import datetime
import tensorflow.keras as keras
from tensorflow.keras.utils import to_categorical, Sequence
from tensorflow.keras.layers import Input, Dense, Activation, BatchNormalization, \
Flatten, Conv3D, AveragePooling3D, MaxPooling3D, Dropout, \
Concatenate, GlobalMaxPool3D, GlobalAvgPool3D
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.callbacks import ModelCheckpoint,LearningRateScheduler, \
EarlyStopping
from tensorflow.keras.losses import mean_squared_error, mean_absolute_error
from tensorflow.keras import backend as K
from tensorflow.keras.constraints import max_norm
conv_comb_list = []
conv_comb_list += [(0,)]
base_channel_list = [0]
fc_list = [0] # 128, 0
# between_type_list = [None, 'avg', 'max']
between_type_list = ['avg']
batch_size_list = [80]
activation_list = ['relu']
# len(conv_comb_list), conv_comb_list
def build_cnn(config):
input_layer = Input(shape=config['input_shape'], name='input_layer')
pret_model = my_model(
input_tensor=input_layer, include_top=False, weights='imagenet',
input_shape=config['input_shape'], pooling=config['between_type'],
classes=config['output_size']
)
pret_model.trainable = False
x = pret_model.output
if config['between_type'] == None:
x = Flatten(name='flatten_layer')(x)
if config['is_dropout']:
x = Dropout(config['dropout_rate'], name='output_dropout')(x)
if config['add_dense']:
x = Dense(config['dense_size'], activation=config['activation'],
name='dense_layer')(x)
x = Dense(config['output_size'], activation=config['output_activation'],
name='output_fc')(x)
# x = Activation(activation=config['output_activation'], name='output_activation')(x)
model = Model(inputs=input_layer, outputs=x, name='{}'.format(BASE_MODEL_NAME))
return model
model = build_cnn(config)
for i, layer in enumerate(model.layers):
print(i, layer.name)
#model.summary(line_length=150)
del model
origin_train_len = len(train_df) / 5 * 4
origin_val_len = len(train_df) / 5 * 1
train_num_steps = int(np.ceil((origin_train_len)/config['batch_size']))
val_num_steps = int(np.ceil((origin_val_len)/config['batch_size']))
print(train_num_steps, val_num_steps)
model_base_path = data_base_path
model_checkpoint_path = pth.join(model_base_path, 'checkpoint')
def get_lr_callback():
lr_start = 0.000001*10*0.5
lr_max = 0.0000005 * config['batch_size'] * 10*0.5
lr_min = 0.000001 * 10*0.5
lr_ramp_ep = 5
lr_sus_ep = 0
lr_decay = 0.8
def lrfn(epoch):
if epoch < lr_ramp_ep:
lr = (lr_max - lr_start) / lr_ramp_ep * epoch + lr_start
elif epoch < lr_ramp_ep + lr_sus_ep:
lr = lr_max
else:
lr = (lr_max - lr_min) * lr_decay**(epoch - lr_ramp_ep - lr_sus_ep) + lr_min
return lr
lr_callback = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose = False)
return lr_callback
for conv_comb, activation, base_channel, \
between_type, fc_num, batch_size \
in itertools.product(conv_comb_list, activation_list,
base_channel_list, between_type_list, fc_list,
batch_size_list):
config['conv']['conv_num'] = conv_comb
config['conv']['base_channel'] = base_channel
config['activation'] = activation
config['between_type'] = between_type
config['fc']['fc_num'] = fc_num
config['batch_size'] = batch_size
base = BASE_MODEL_NAME
base += '_resize_{}'.format(config['aug']['resize'][0])
base += '_conv_{}'.format('-'.join(map(lambda x:str(x),config['conv']['conv_num'])))
base += '_basech_{}'.format(config['conv']['base_channel'])
base += '_act_{}'.format(config['activation'])
base += '_pool_{}'.format(config['pool']['type'])
base += '_betw_{}'.format(config['between_type'])
base += '_fc_{}'.format(config['fc']['fc_num'])
base += '_zscore_{}'.format(config['is_zscore'])
base += '_batch_{}'.format(config['batch_size'])
if config['is_dropout']:
base += '_DO_'+str(config['dropout_rate']).replace('.', '')
if config['is_batchnorm']:
base += '_BN'+'_O'
else:
base += '_BN'+'_X'
model_name = base
print(model_name)
### Define dataset
dataset = tf.data.TFRecordDataset(train_tfrecord_path, compression_type='GZIP')
dataset = dataset.map(_parse_image_function, num_parallel_calls=tf.data.experimental.AUTOTUNE)
# dataset = dataset.cache()
dataset = dataset.map(map_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(resize_and_crop_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(image_aug_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.shuffle(config['buffer_size'])
dataset = dataset.batch(config['batch_size'])
dataset = dataset.map(post_process_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
val_dataset = tf.data.TFRecordDataset(val_tfrecord_path, compression_type='GZIP')
val_dataset = val_dataset.map(_parse_image_function, num_parallel_calls=tf.data.experimental.AUTOTUNE)
val_dataset = val_dataset.map(map_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
val_dataset = val_dataset.map(resize_and_crop_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
# val_dataset = val_dataset.map(image_aug_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
# val_dataset = val_dataset.shuffle(config['buffer_size'])
val_dataset = val_dataset.batch(config['batch_size'])
val_dataset = val_dataset.map(post_process_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
# val_dataset = val_dataset.cache()
val_dataset = val_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
model_path = pth.join(
model_checkpoint_path, model_name,
)
model = build_cnn(config)
# model.summary()
initial_epoch = 0
if pth.isdir(model_path) and len([_ for _ in os.listdir(model_path) if _.endswith('hdf5')]) >= 1:
model.compile(loss=config['loss'], optimizer=Adam(lr=config['learning_rate']),
metrics=['acc', 'Precision', 'Recall', 'AUC'])
model_chk_name = sorted(os.listdir(model_path))[-1]
initial_epoch = int(model_chk_name.split('-')[0])
model.load_weights(pth.join(model_path, model_chk_name))
else:
# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
metrics=['acc', 'Precision', 'Recall', 'AUC'])
PRE_TRAIN_EPOCH = 6
model.fit(
x=dataset, epochs=PRE_TRAIN_EPOCH, # train only top layers for just a few epochs.
validation_data=val_dataset, shuffle=True,
#callbacks = [checkpointer, es], #batch_size=config['batch_size']
initial_epoch=initial_epoch,
# steps_per_epoch=train_num_steps, validation_steps=val_num_steps,
verbose=1)
# at this point, the top layers are well trained and we can start fine-tuning
# convolutional layers from inception V3. We will freeze the bottom N layers
# and train the remaining top layers.
# let's visualize layer names and layer indices to see how many layers
# we should freeze:
for i, layer in enumerate(model.layers):
print(i, layer.name)
# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 249 layers and unfreeze the rest:
for layer in model.layers[:229]: # [:249]:
layer.trainable = False
for layer in model.layers[229:]: # [249:]:
layer.trainable = True
# we need to recompile the model for these modifications to take effect
# we use Adam with a low learning rate
model.compile(loss=config['loss'], optimizer=Adam(lr=config['learning_rate']),
metrics=['acc', 'Precision', 'Recall', 'AUC'])
initial_epoch = PRE_TRAIN_EPOCH
# IGNORE 4 lines below in InceptionV3
# ### Freeze first layer
# conv_list = [layer for layer in model.layers if isinstance(layer, keras.layers.Conv2D)]
# conv_list[0].trainable = False
# # conv_list[1].trainable = False
os.makedirs(model_path, exist_ok=True)
model_filename = pth.join(model_path, '{epoch:06d}-{val_loss:0.6f}-{loss:0.6f}.hdf5')
checkpointer = ModelCheckpoint(
filepath=model_filename, verbose=1,
period=1, save_best_only=True,
monitor='val_loss'
)
es = EarlyStopping(monitor='val_loss', verbose=1, patience=16) ### 16 at night, 10 general, 6 for experiment
hist = model.fit(
x=dataset, epochs=config['num_epoch'],
validation_data=val_dataset, shuffle=True,
callbacks = [get_lr_callback(), checkpointer, es], #batch_size=config['batch_size']
initial_epoch=initial_epoch,
# steps_per_epoch=train_num_steps, validation_steps=val_num_steps,
verbose=1
)
model_analysis_path = model_path.replace('checkpoint', 'analysis')
visualization_path = pth.join(model_analysis_path,'visualization')
os.makedirs(visualization_path, exist_ok=True)
print()
# clear_output()
for each_label in ['loss', 'acc', 'precision', 'recall', 'auc']:
fig, ax = plt.subplots()
ax.plot(hist.history[each_label], 'g', label='train_{}'.format(each_label))
ax.plot(hist.history['val_{}'.format(each_label)], 'r', label='val_{}'.format(each_label))
ax.set_xlabel('epoch')
ax.set_ylabel('loss')
ax.legend(loc='upper left')
if not each_label == 'loss':
plt.ylim(0, 1)
plt.show()
filename = 'learning_curve_{}'.format(each_label)
# fig.savefig(pth.join(visualization_path, filename), transparent=True)
plt.cla()
plt.clf()
plt.close('all')
np.savez_compressed(pth.join(visualization_path, 'learning_curve'),
loss=hist.history['loss'],
val_loss=hist.history['val_loss'],
acc=hist.history['acc'],
val_acc=hist.history['val_acc'],
precision=hist.history['precision'],
val_precision=hist.history['val_precision'],
recall=hist.history['recall'],
val_recall=hist.history['val_recall'],
auc=hist.history['auc'],
val_auc=hist.history['val_auc']
)
model.save(pth.join(model_path, '000000_last.hdf5'))
K.clear_session()
del(model)
model_analysis_base_path = pth.join(model_base_path, 'analysis', model_name)
with open(pth.join(model_analysis_base_path, 'config.json'), 'w') as f:
json.dump(config, f)
chk_name_list = sorted([name for name in os.listdir(model_path) if name != '000000_last.hdf5'])
for chk_name in chk_name_list[:-2]:
os.remove(pth.join(model_path, chk_name))
# clear_output()
```
### Inference
```
image_feature_description_for_test = {
'image_raw': tf.io.FixedLenFeature([], tf.string),
# 'randmark_id': tf.io.FixedLenFeature([], tf.int64),
# 'id': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function_for_test(example_proto):
return tf.io.parse_single_example(example_proto, image_feature_description_for_test)
def map_func_for_test(target_record):
img = target_record['image_raw']
img = tf.image.decode_jpeg(img, channels=3)
img = tf.dtypes.cast(img, tf.float32)
return img
def resize_and_crop_func_for_test(image):
result_image = tf.image.resize(image, config['aug']['resize'])
result_image = tf.image.random_crop(result_image, size=config['input_shape'], seed=7777) # crop the resized image, not the raw input
return result_image
def post_process_func_for_test(image):
# result_image = result_image / 255
result_image = my_model_base.preprocess_input(image)
return result_image
submission_base_path = pth.join(data_base_path, 'submission')
os.makedirs(submission_base_path, exist_ok=True)
preds = []
for conv_comb, activation, base_channel, \
between_type, fc_num, batch_size \
in itertools.product(conv_comb_list, activation_list,
base_channel_list, between_type_list, fc_list,
batch_size_list):
config['conv']['conv_num'] = conv_comb
config['conv']['base_channel'] = base_channel
config['activation'] = activation
config['between_type'] = between_type
config['fc']['fc_num'] = fc_num
config['batch_size'] = batch_size
base = BASE_MODEL_NAME
base += '_resize_{}'.format(config['aug']['resize'][0])
base += '_conv_{}'.format('-'.join(map(lambda x:str(x),config['conv']['conv_num'])))
base += '_basech_{}'.format(config['conv']['base_channel'])
base += '_act_{}'.format(config['activation'])
base += '_pool_{}'.format(config['pool']['type'])
base += '_betw_{}'.format(config['between_type'])
base += '_fc_{}'.format(config['fc']['fc_num'])
base += '_zscore_{}'.format(config['is_zscore'])
base += '_batch_{}'.format(config['batch_size'])
if config['is_dropout']:
base += '_DO_'+str(config['dropout_rate']).replace('.', '')
if config['is_batchnorm']:
base += '_BN'+'_O'
else:
base += '_BN'+'_X'
model_name = base
print(model_name)
### Define dataset
test_dataset = tf.data.TFRecordDataset(test_tfrecord_path, compression_type='GZIP')
test_dataset = test_dataset.map(_parse_image_function_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.map(map_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.map(resize_and_crop_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.batch(config['batch_size'])
test_dataset = test_dataset.map(post_process_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
model_path = pth.join(
model_checkpoint_path, model_name,
)
model = build_cnn(config)
# model.summary()
model.compile(loss=config['loss'], optimizer=Adam(lr=config['learning_rate']),
metrics=['acc', 'Precision', 'Recall', 'AUC'])
initial_epoch = 0
model_chk_name = sorted(os.listdir(model_path))[-1]
initial_epoch = int(model_chk_name.split('-')[0])
model.load_weights(pth.join(model_path, model_chk_name))
preds = model.predict(test_dataset, verbose=1)
#pred_labels = np.argmax(preds, axis=1)
#pred_probs = np.array([pred[indice] for pred, indice in zip(preds, pred_labels)])
# argmax --> top3
pred_labels = np.argsort(-preds)
submission_csv_path = pth.join(data_base_path, submission_csv_name)
submission_df = pd.read_csv(submission_csv_path)
merged_df = []
RANK_TO_SAVE = 3
for i in range(RANK_TO_SAVE):
tmp_df = submission_df.copy()
tmp_labels = pred_labels[:, i]
tmp_df['landmark_id'] = tmp_labels
tmp_df['conf'] = np.array([pred[indice] for pred, indice in zip(preds, tmp_labels)])
merged_df.append(tmp_df)
submission_df = pd.concat(merged_df)
#submission_df['landmark_id'] = pred_labels
#submission_df['conf'] = pred_probs
today_str = datetime.date.today().strftime('%Y%m%d')
result_filename = '{}.csv'.format(model_name)
submission_csv_filename = pth.join(submission_base_path, '_'.join([today_str, result_filename]))
submission_df.to_csv(submission_csv_filename, index=False)
```
# Two photon emission rate of Rb-87 atom in the free space
This code calculates the two-photon emission rates for a rubidium-87 atom in free space. The details of the physics are given in ref. [1]. This code uses many of the libraries developed at https://arc-alkali-rydberg-calculator.readthedocs.io/en/latest/detailed_doc.html [2].
[1]. Generating heralded high-dimensional hyper-entangled photons using Rydberg atoms,...
[2]. N. Sibalic, J.D. Pritchard, C.S. Adams, and K.J.Weatherill, Arc: An open-source library for calculating properties of alkali rydberg atoms, Computer Physics Communications 220, 319-331 (2017).
In this file, we have calculated the TPE rates of the Rb atom without a cavity. We have not added the contribution of continuum states in this calculation because we found that for Rb atoms this contribution is negligible. A detailed explanation is given in ref. [1].
The TPE rate for a rubidium atom in free space is given as:
\begin{align}
\Gamma &= \frac{3^2 Z^{10}}{2^{11}} R_H \alpha^6 c \Big( \frac{k_{fi}}{k_0}\Big)^5 \int_{y=0}^{1} y^3 (1-y)^3 dy \Big| \sum_m d_{fm}d_{mi} \Big( \frac{1}{y - y_{im}} + \frac{1}{1 - y - y_{im}} \Big) \Big|^2
\end{align}
Please check the Appendix B of ref. [1] for better understanding of the formula above.
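As a sanity check on the shape of the spectrum, the smooth envelope $y^3(1-y)^3$ of the integrand above integrates to the Beta function $B(4,4)=\frac{3!\,3!}{7!}=\frac{1}{140}$. A minimal numerical check, independent of the atomic data:

```python
import numpy as np

# Midpoint-rule integral of the spectral envelope y^3 (1 - y)^3 over [0, 1].
n = 200000
y = (np.arange(n) + 0.5) / n
numeric = np.sum(y**3 * (1.0 - y)**3) / n

exact = 1.0 / 140.0  # Beta(4, 4) = 3! * 3! / 7!
print(numeric, exact)
```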
```
#This program is to connect with the library
# Configure the matplotlib graphics library and configure it to show
# show figures inline in the notebook
%matplotlib inline
import matplotlib.pyplot as plt # Import library for direct plotting functions
import numpy as np # Import Numerical Python
from IPython.core.display import display, HTML #Import HTML for formatting output
# NOTE: Uncomment following lines ONLY if you are not using installation via pip
import sys,os
rootDir = '/home/sutapa/ARC-Alkali-Rydberg-Calculator-2.0.5/' # e.g. '/Users/Username/Desktop/ARC-Alkali-Rydberg-Calculator'
sys.path.insert(0,rootDir)
from arc import * #Import ARC (Alkali Rydberg Calculator)
```
The two photon rate calculation for Rb Rydberg state
```
from scipy.special import assoc_laguerre
from scipy.special import factorial
from scipy import integrate
import array as arr
import matplotlib.pyplot as plt
#This function calculates the product of the dipole matrix for the transition from (n1,l1,j1) to (m,lm,jm) and the transition from (m,lm,jm) to (n2,l2,j2)
def kappa(n1,l1,n2,l2,m,lm):
Adip1=atom.getRadialMatrixElement(n2,l2,j2,m,lm,jm)
Adip2=atom.getRadialMatrixElement(m,lm,jm,n1,l1,j1)
return(Adip1*Adip2)
def Mnr(y,ym):
fm= 1/(y+ym)-(1/(y-1-ym))
return(fm)
# Defining the atoms
atom=Rubidium87()
atom1=Hydrogen()
# Initial state
n1 = 60
l1 = 0
j1=0.5
#final state
n2= 5
l2=0
j2=0.5
#virtual state,m
lm = 1
jm=1.5
#Physical parameters
alpha = 1/137.036 # fine-structure constant
c = 2.99792458*pow(10,8)
Rh = 10973731.5685 # Rydberg constant in m^-1
coefficient=9*pow(alpha,6)*Rh*c/2**10*(abs((atom.getEnergy(n2,l2,j2)-atom.getEnergy(n1,l1,j1))/(atom1.getEnergy(2,0,0.5)-atom1.getEnergy(1,0,0.5))))**5
sumnr=0.0;
y= np.arange(0.00001,1,0.00001)
for m in range(n1,n1+100):
ym=(atom.getEnergy(m,lm,jm)-atom.getEnergy(n1,l1,j1))/(atom.getEnergy(n1,l1,j1)-atom.getEnergy(n2,l2,j2))
kap=kappa(n1,l1,n2,l2,m,lm)
sumnr=sumnr+(Mnr(y,ym)*abs(kap))
#TPE spectrum
phi= coefficient*pow(y,3)*pow((1-y),3)*(sumnr*sumnr)
#TPE rates
Anr=0.5*integrate.simps(phi,y)
print(Anr)
#Plot
plt.figure(1)
fig, axs = plt.subplots(1, 1, constrained_layout=True)
plt.plot(y,phi, color='blue', linewidth=4,label='60S to 6S')
plt.legend(loc="upper right")
plt.ylabel('$ϕ_{free}$($sec^{-1}$)', fontsize = 20)
plt.xlabel('$E/E_{if}$', fontsize = 20)
#plt.title('Rb-atom, 60S -> 6S', fontsize = 20)
plt.xticks(size = 10)
plt.yticks(size = 10)
#np.savetxt('phi_free_60D_6S.csv', phi)
#np.savetxt('y_free_60D_6S.csv',y)
#plt.savefig('phi_free_60S.pdf',dpi=100)
```
```
import prtools as pr
import numpy as np
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from jupyterthemes import jtplot
jtplot.style(theme="oceans16")
help (pr.gendath)
```
# Exercise 3.3
```
def genstdNoise(x):
return np.random.randn(x.shape[0], x.shape[1])
```
(a)\
Regularization can have quite a beneficial effect.
```
x_train = np.random.uniform(low=0.0, high=1.0, size=(100,1))
y_train = x_train + genstdNoise(x_train)
x_test = np.random.uniform(low=0.0, high=1.0,size=(10000,1))
y_test = x_test + genstdNoise(x_test)
trainData = pr.gendatr(x_train,targets=y_train)
testData = pr.gendatr(x_test,targets=y_test)
print("trainData shape: ", trainData.shape)
print("testData shape: ", testData.shape)
lamb = np.array([0.001, 0.1,1, 10, 100, 1000])
plt.figure(figsize=(18,10))
for i in range(len(lamb)):
w = pr.ridger(trainData, lamb[i])
y_hat = w(x_test)
plt.subplot(2,3,i+1)
pr.scatterr(testData)
pr.plotr(w, color="r")
plt.title("lambda="+str(lamb[i]))
print("SSE"+str(i)+":", np.sum((+y_hat - +y_test)**2))
print("w"+str(i)+":",+w)
print(" ")
```
### The squared error first decreases as $\lambda$ increases from 0.001 to 1 because $\tau$ is inversely proportional to $\lambda$, and then the squared error increases as $\lambda$ increases to 1000. Meanwhile $w$ shrinks to zero due to the large $\lambda$.
As the regularization parameter varies from 0 to 10 (for instance), the regression line approaches the x axis. This means that $w \le \tau$, where $\tau$ decreases as $\lambda$ increases. Link to 3.1(e): $w=0$.
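The shrinking effect can be reproduced without prtools via the closed-form ridge solution; a minimal NumPy sketch on data generated like the cell above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 1))
y = X[:, 0] + rng.normal(size=100)

def ridge_w(X, y, lam):
    # closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

weights = [abs(ridge_w(X, y, lam)[0]) for lam in (0.001, 1.0, 1000.0)]
print(weights)  # |w| shrinks monotonically toward zero as lambda grows
```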
(b)
### During fitting, the procedure tends to keep $w$ as small as possible, ultimately building a model in which all parameters are relatively small. Reason: models with small parameter values are generally considered simpler and adapt better to different datasets, which to some extent avoids overfitting. Consider a linear regression equation: if the parameters are large, even a small shift in the data has a large effect on the result; if the parameters are small enough, the data can shift somewhat more without mattering much. [Strong robustness to perturbations]
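The robustness argument can be made concrete for a linear model: an input perturbation of size $\delta$ shifts the prediction by $w\delta$, so small weights directly limit the damage. A trivial numeric sketch:

```python
# For a linear model y = w * x, a data shift of delta changes the prediction
# by exactly w * delta, so the sensitivity scales with the weight magnitude.
delta = 0.05
shift_large = abs(10.0 * delta)  # large weight: noticeable prediction shift
shift_small = abs(0.1 * delta)   # small (regularized) weight: tiny shift
print(shift_large, shift_small)
```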
```
lamb = np.arange(0.01, 10, 0.05)
SSE = np.zeros(len(lamb))
for i in range (len(lamb)):
w = pr.ridger(trainData, lamb[i])
y_hat = w(x_test)
SSE[i] = (np.sum((+y_hat - y_test)**2))
plt.plot(lamb,SSE)
plt.xlabel("lambda")
plt.ylabel("SSE")
plt.xticks(np.arange(2,5,0.5))
plt.show
```
The optimum for a training set size of 10 is $\lambda = 1.0$ \
The optimum for a training set size of 100 is $\lambda = 3.0$
# Exercise 3.4
See above. But cross validation will be better.
# Exercise 3.7
(a)
No, none of the entries ever becomes exactly zero; the probability of that happening is zero.
### In the limit, as $\lambda$ grows larger and larger, $w$ should shrink to zero.
```
x = np.random.randn(20,2)
y = x[:,0] + 0.2*np.random.randn(20)
data = pr.gendatr(x,y)
w = pr.ridger(data,0.01) #standard L2 regularization for different values of \lambda, from 0.001 to 1000
pr.scatterr(data)
pr.plotr(w, gridsize=5)
print(+w)
```
(b)\
In this setting, there will be a finite $\lambda$ for which at least one of the entries (the second) becomes zero. For an even larger $\lambda$, the other entry will become zero.\
Here we have two zero entries with $\lambda$=1.
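The contrast with ridge can be made exact under the simplifying assumption of an orthonormal design, where both penalized solutions have closed forms in terms of the ordinary least-squares weights:

```python
import numpy as np

# Orthonormal-design closed forms (an illustrative simplification):
#   ridge: w = w_ols / (1 + lam)                     -> shrinks, never exactly zero
#   lasso: w = sign(w_ols) * max(|w_ols| - lam, 0)   -> exactly zero at finite lam
w_ols = np.array([1.2, 0.3])
lam = 0.5

w_ridge = w_ols / (1.0 + lam)
w_lasso = np.sign(w_ols) * np.maximum(np.abs(w_ols) - lam, 0.0)
print(w_ridge)  # both entries stay nonzero
print(w_lasso)  # the small entry is soft-thresholded to exactly zero
```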
```
w = pr.lassor(data,1)
pr.scatterr(data)
pr.plotr(w, gridsize=5)
print(+w)
```
# Exercise 3.13
```
t = pr.gendath(n=(500,500))
for i in range (30):
a = pr.gendath(n=(20,20))
LDC = pr.ldc(a)
error = pr.testc(LDC.eval(t))
error1 = t * pr.ldc(a) * pr.testc() # given in pdf
print("error rate: ",error1)
```
(a)\
The training set changes every time, so the fitted classifier changes every time, and as a result the error rate changes every time.
```
a = pr.gendath(n=(20,20))
w = pr.ldc(a)
for i in range (30):
t = pr.gendath(n=(500,500))
error = pr.testc(w.eval(t))
error1 = t * w * pr.testc()
print("error rate: ",error1)
```
(b)
### Since the testing set changes every time, the error rate changes every time. The variance of the latter results is smaller than that of the former.
# Exercise 3.14 Learning Curves
```
data = pr.gendath(n=(1000,1000))
feature = +data # the unary + on a prdataset extracts the raw feature matrix
label = pr.genlab(n=(1000,1000),lab=[-1,1])
noiseFeature = np.hstack((feature,np.random.rand(2000,60)))
noiseData = pr.prdataset(noiseFeature, label)
u = [pr.nmc(), pr.ldc(), pr.qdc()]
e_nmc = pr.cleval(noiseData, u[0], trainsize=[64, 128, 256, 512], nrreps=10) # default
e_ldc = pr.cleval(noiseData, u[1], trainsize=[64, 128, 256, 512], nrreps=10)
e_qdc = pr.cleval(noiseData, u[2], trainsize=[64, 128, 256, 512], nrreps=10)
plt.legend()
plt.title("Learning Curve")
```
(a)
### The error rate of the testing curve is decreasing, and the training curve is increasing. \
### Both curves should converge in the end. Where the curves converge depends on what classifier you use. More flexible classifiers get a low asymptotic error.\
With the same amount of training data, QDA lies above LDA, which lies above NMC, indicating that the variance increases with model complexity.\
### Therefore, for the most complex classifier, training on a limited, small number of samples produces much more variable results.
(b)
### The curves intersect because a simpler classifier generally works better when sample sizes are small, while a complex classifier works better at larger sample sizes. No single classifier works best; they suit different situations.
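The qualitative behaviour above can be reproduced with a self-contained nearest-mean sketch in plain NumPy (a hypothetical two-Gaussian problem, not the Highleyman data):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n_per_class):
    # two hypothetical Gaussian classes with means (0, 0) and (2, 2)
    x0 = rng.normal(0.0, 1.0, size=(n_per_class, 2))
    x1 = rng.normal(2.0, 1.0, size=(n_per_class, 2))
    return np.vstack([x0, x1]), np.array([0] * n_per_class + [1] * n_per_class)

def nmc_error(X_tr, y_tr, X_te, y_te):
    # nearest-mean classifier: assign each point to the closer class mean
    m0 = X_tr[y_tr == 0].mean(axis=0)
    m1 = X_tr[y_tr == 1].mean(axis=0)
    pred = (np.linalg.norm(X_te - m1, axis=1)
            < np.linalg.norm(X_te - m0, axis=1)).astype(int)
    return float(np.mean(pred != y_te))

X_te, y_te = make_data(5000)
results = {n: np.mean([nmc_error(*make_data(n), X_te, y_te) for _ in range(20)])
           for n in (4, 16, 64, 256)}
print(results)  # the test error falls toward the asymptotic error as n grows
```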
(c)\
I expect the limiting behavior of the learning curves to be that training and test error converge to the same value: QDA < LDA < NMC.
# Exercise 3.15
### study learning curve for the 1NN classifier
```
data = pr.gendath(n=(500,500)) # generate Highleyman classes
feature = +data
label = pr.genlab(n=(500,500),lab=[-1,1])
noiseFeature = np.hstack((feature, np.random.rand(1000,60))) # 1000 rows and 60 columns
noiseData = pr.prdataset(noiseFeature, label)
e_nmc = pr.cleval(noiseData, pr.knnc(1), trainsize=[25, 50, 100, 200], nrreps=10)
```
#### The 1-NN classifier has a zero apparent error.
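The zero apparent error is easy to verify with a tiny self-contained 1-NN sketch (plain NumPy, independent of prtools): every training point is its own nearest neighbour, so even random labels are memorized perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = rng.integers(0, 2, size=40)  # arbitrary labels

def one_nn_predict(X_tr, y_tr, X_new):
    # label each point by its single nearest training neighbour
    d = np.linalg.norm(X_new[:, None, :] - X_tr[None, :, :], axis=2)
    return y_tr[np.argmin(d, axis=1)]

train_error = float(np.mean(one_nn_predict(X, y, X) != y))
print(train_error)  # 0.0
```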
# Exercise 3.16
```
data = pr.gendats(n=(1000,1000)) # Generate a two-class dataset A from two DIM-dimensional Gaussian distributions
feature = +data # the unary + on a prdataset extracts the raw feature matrix
label = pr.genlab(n=(1000,1000),lab=[-1,1])
noiseFeature = np.hstack((feature,np.random.rand(2000,60)))
noiseData = pr.prdataset(noiseFeature, label)
u = [pr.nmc(), pr.ldc(), pr.qdc()]
e_nmc = pr.cleval(noiseData, u[0], trainsize=[64, 128, 256, 512], nrreps=10) # default
e_ldc = pr.cleval(noiseData, u[1], trainsize=[64, 128, 256, 512], nrreps=10)
e_qdc = pr.cleval(noiseData, u[2], trainsize=[64, 128, 256, 512], nrreps=10)
plt.legend()
plt.title("Learning Curve of a simple classification data")
data = pr.gendatd(n=(1000,1000)) # Generate a two-class dataset A from two DIM-dimensional Gaussian distributions
feature = +data # the unary + on a prdataset extracts the raw feature matrix
label = pr.genlab(n=(1000,1000),lab=[-1,1])
noiseFeature = np.hstack((feature,np.random.rand(2000,60)))
noiseData = pr.prdataset(noiseFeature, label)
u = [pr.nmc(), pr.ldc(), pr.qdc()]
e_nmc = pr.cleval(noiseData, u[0], trainsize=[64, 128, 256, 512], nrreps=10) # default
e_ldc = pr.cleval(noiseData, u[1], trainsize=[64, 128, 256, 512], nrreps=10)
e_qdc = pr.cleval(noiseData, u[2], trainsize=[64, 128, 256, 512], nrreps=10)
plt.legend()
plt.title("Learning Curve of a difficult classification data")
```
#### The curves behave differently because of the differences in data distributions and in how the classifiers fit those distributions.
# Exercise 3.17
```
data = pr.gendatc(n=(1000,1000),dim=2, mu=0.0)
feature = +data
label = pr.genlab(n=(1000,1000),lab=[-1,1])
noiseFeature = np.hstack((feature,np.random.rand(2000,60)))
noiseData = pr.prdataset(noiseFeature, label)
e_nmc = pr.cleval(noiseData, pr.nmc(), trainsize=[5, 10, 20, 40], nrreps=10) # default
plt.title("Learning Curve for nmc with gendatc")
```
#### The classifier fits better, though still badly, to the data when there are fewer data points. As a result we see the not-so-well-known Dipping Phenomenon.
The phenomenon: particular classifiers attain their optimum performance at <u>a training set size which is finite</u>. \
ref: https://rdcu.be/ccxYS
# Exercise 3.19
This dataset consists of features of handwritten numerals (0-9) extracted from a collection of Dutch utility maps. 200 patterns per class (for a total of 2,000 patterns) have been digitized in binary images.\
In each file the 2000 patterns are stored in ASCII on 2000 lines. The first 200 patterns are of class 0, followed by sets of 200 patterns for each of the classes 1-9. Corresponding patterns in different feature sets (files) correspond to the same original character.
```
help (np.loadtxt)
help (pr.genlab)
kar = np.loadtxt("mfeat-kar.txt", dtype=np.float64)
label = pr.genlab(n=[200]*10, lab = [0,1,2,3,4,5,6,7,8,9])
data = pr.prdataset(kar, targets=label)
print(data)
help (pr.clevalf)
classifier = pr.qdc()
e_qdc = pr.clevalf(data, classifier,trainsize=0.4,nrreps=10) # trainsize is the percentage of data to be used
plt.legend()
plt.show()
```
(a)\
When you do not change the training set, nothing changes when you redo the experiment. In the function <b>clevalf</b> the first 4, 8, 16, 32 or 64 features are used each time, so no randomization is used here.\
(b)\
You only get the first part of the learning curve, because less data is available. When you use only 40% of the data, you tend to see only overfitting on the training set.
# Exercise 3.20
##### Generate a small data set using gendatb, say, with 10 objects per class.
```
help (pr.gendatb)
```
(a)
```
help (np.round)
help (pr.prcrossval)
data = pr.gendatb(n=(10,10),s=2.0)
knn1 = pr.knnc((1))
knn3 = pr.knnc((3))
for n in range(2,11):
# repeating the n-fold cross-validation 3 times
e_knn1 = pr.prcrossval(data, knn1, k=n, nrrep=3)
e_knn3 = pr.prcrossval(data, knn3, k=n, nrrep=3)
e1 = np.sum(e_knn1, axis=1)/e_knn1.shape[1]
e3 = np.sum(e_knn3, axis=1)/e_knn3.shape[1]
print("N= " + str(n), "\t", "mean","\t", "variance")
print("1-NN \t", np.round(np.mean(e1),3), "\t", np.round(np.var(e1),3))
print("3-NN \t", np.round(np.mean(e3),3), "\t", np.round(np.var(e3),3))
print()
```
(b,c)\
Typically you get a <b>less biased estimate of the error</b> when you <b>increase n</b>, but the <b>variance increases a bit</b> with larger n.
(d)\
For a larger dataset, the bias and the variance are a bit smaller.
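The n-fold mechanics behind `prcrossval` can be sketched in plain NumPy (a generic k-fold splitter, not prtools' actual implementation):

```python
import numpy as np

def kfold_indices(n, k, rng):
    # shuffle once, then split into k (nearly) equal folds; each fold
    # serves exactly once as the held-out test set
    return np.array_split(rng.permutation(n), k)

rng = np.random.default_rng(0)
folds = kfold_indices(20, 5, rng)
print([len(f) for f in folds])  # [4, 4, 4, 4, 4]
# larger k -> each model trains on n - n/k points (less pessimistic bias),
# but the k training sets overlap more, so the error estimates vary more
```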
# Exercise 3.22
(b) lab is the true target, whereas lab2 is the predicted target
# Exercise 3.23
```
# C(i, j) gives the number of objects that are from i, but are labeled as j.
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
```
##### (a)
```
# Load the digits data sets mfeat zer and mfeat kar.
kar = np.loadtxt("mfeat-kar.txt", dtype = np.float64)
zer = np.loadtxt("mfeat-zer.txt", dtype = np.float64)
label = pr.genlab(n=[200]*10, lab = [0,1,2,3,4,5,6,7,8,9])
data_kar = kar
data_zer = zer
print("kar set: ", data_kar.shape)
print("zer set: ", data_zer.shape)
help (train_test_split)
x_train, x_test, y_train, y_test = train_test_split(data_kar, label, test_size = 0.33, random_state = 42)
print("total test sample: ", len(x_test))
help (confusion_matrix)
qdc = QuadraticDiscriminantAnalysis().fit(x_train, y_train)
y_hat = qdc.predict(x_test)
print(confusion_matrix(y_test, y_hat, labels= [0,1,2,3,4,5,6,7,8,9]))
error = np.sum(y_hat != y_test)/ len(y_hat)
print("error rate: ",error)
x_train, x_test, y_train, y_test = train_test_split(data_zer, label, test_size = 0.33, random_state = 42)
print("total test sample: ", len(x_test))
qdc = QuadraticDiscriminantAnalysis().fit(x_train, y_train)
y_hat = qdc.predict(x_test)
print(confusion_matrix(y_test, y_hat, labels= [0,1,2,3,4,5,6,7,8,9]))
error = np.sum(y_hat != y_test)/ len(y_hat)
print("error rate: ",error)
```
#### The error using the QDA on the zer dataset is around 21% while on the kar dataset it is just 3%.
#### (b)
#### There is a lot of confusion between classes 6 and 9 in the zer dataset. That's because the zer moments are rotationally invariant.
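A minimal sketch of why rotation invariance merges 6 and 9, using sorted centroid distances as an illustrative stand-in for the Zernike-type moments (a simplification, not the actual feature extraction):

```python
import numpy as np

# A "6" rotated by 180 degrees looks like a "9"; any rotation-invariant
# descriptor assigns both shapes identical features.
pts = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 1.0]])
rotated = -pts  # 180-degree rotation about the origin

def invariant(p):
    # sorted distances from the centroid are unchanged by any rotation
    return np.sort(np.linalg.norm(p - p.mean(axis=0), axis=1))

print(np.allclose(invariant(pts), invariant(rotated)))  # True
```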
# Exercise 3.24
done

<br>
**References and Additional Resources**
<br>
> [ScikitLearn Regression Metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics)<br>
> [ScikitLearn Classification Metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-metrics)<br>
___
#### NAME:
#### STUDENT ID:
___
## Homework - Model Evaluation
It is now your turn to hone your model evaluation skills, and develop a logical and replicable machine learning model evaluation procedure. The skills you develop in this homework assignment will come in handy as you transition to more complicated models and datasets over the coming weeks.
Using one of the following toy data sets easily loaded from the ScikitLearn library, perform an end-to-end model evaluation.
> [Boston House Prices Dataset (Regression)](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston)<br>
> [Diabetes Dataset (Regression)](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes)<br>
> [Breast Cancer Wisconsin Dataset (Classification)](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html#sklearn.datasets.load_breast_cancer)<br>
> [Wine Dataset (Classification)](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_wine.html#sklearn.datasets.load_wine)<br>
<br>
**General Recommendations:**
> 1. Your code should be legible and well documented.
> 2. Your code should seek to 'tell a story' of how you logically progressed through the evaluation process. Not sure how to do such a thing? Here are [some examples](https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks)
> 3. Cite your sources in a comment line on the code you borrowed, or by referencing it in a markdown cell immediately below.
> 4. Work on asking good questions, interpreting your models, visualizing your experiments, and using visualization and clear language to explain your decisions -- effort to master these areas will pay great dividends in the long run.
```
# load libraries here or as you go
# feel free to use the visualization library of your choice
```
<br>
## Load and Preprocess Data
Using what you learned from the m180_evaluating_and_improving_machine_learning_models notebook, prepare your data for modeling.
0. Set a random seed globally for reproducibility.
1. Load and inspect your data.
2. Check for any issues that may influence your model (e.g. heteroscedasticity, data imbalance, missing values, etc.) and correct the issue(s).
3. Clearly explain how you identified the issue(s) in step 2, or create two plots: one showing how you identified the issue, the other showing the result of your correction.
___
```
# your code here (use as many cells as you need)
# your code here (use as many cells as you need)
# your code here (use as many cells as you need)
```
<br>
## Establish a Baseline Model
> 1. Pick a baseline model and fit the model.
> 2. Select the 2 metrics you will use to evaluate this baseline model against other models (e.g. RMSE, Accuracy, etc.) and print them out.
> 3. Write a brief description of why you chose the baseline model you did and comment on the metrics you will be using to compare your baseline against alternate models.
> 4. Visualize your metrics of interest (Optional for Undergrads; Required for Grad Students).
```
# your code here (use as many cells as you need)
# your code here (use as many cells as you need)
# your code here (use as many cells as you need)
```
<br>
## Model Selection
> 1. Pick two alternative models and train them (do not pre-tune the alternative models).
> 2. Compare your alternatives to the baseline model using the metrics you chose previously.
> 3. Write a brief description of why you chose the two alternative models, and elaborate on why a particular model did better on the particular data set you used, as well as how you discerned this.
> 4. Visualize your metrics of interest across all models (Optional for Undergrads; Required for Grad Students).
```
# your code here (use as many cells as you need)
# your code here (use as many cells as you need)
# your code here (use as many cells as you need)
```
<br>
## Model Evaluation and Tuning
> 1. Using the best model from your previous work, use metrics and visualizations to diagnose problems with your model.
> 2. Propose solutions to any issues with your model and implement them (e.g. high bias --> feature engineering; high variance --> regularization). Briefly explain your reasoning.
> 3. Iteratively repeat the process until you deem it appropriate to stop. Explain how you knew when to stop the iterative process.
> 4. Write a brief summary of the entire process, key observations, and conclusions. For good examples of how to create interesting and engaging jupyter notebooks, see [here](https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks)
**Note:** at this time, take care to only implement one change at a time and re-evaluate your model prior to making any additional changes. Try to adhere to an [Occam Learning](https://en.wikipedia.org/wiki/Occam_learning) approach.
```
# your code here (use as many cells as you need)
# your code here (use as many cells as you need)
# your code here (use as many cells as you need)
```
___
### Deliverables
Please submit the following via the instructed method (lecture or Syllabus):
>(1) A copy of your work, either a downloaded notebook or a pdf, by the assignment deadline
<br>
**Note:** Don't forget to restart your kernel prior to extracting your data.
>```Kernel --> Restart Kernel and Run all Cells```<br>
>```File --> Export Notebooks As --> PDF``` (or as instructed)
___
# Create Geo Map of Ontario B Corporations
author: Keanna Knebel
date: 2020-08-07
```
import pandas as pd
import numpy as np
import re
import json
# Plotly
import plotly.graph_objects as go
import plotly.express as px
with open('../creds.json') as f:
data = json.load(f)
mapbox_access_token = data['map_box_key']
df = pd.read_csv("../data/ont_bcorps.csv")
df = df[df.current_status == 'certified']
#df.drop_duplicates(subset=['company_name'], inplace=True, ignore_index=True)
df.head()
# initialize empty lists
lat = []
long = []
# format the geom column into longitude and latitude
for x in df.geom:
x = re.sub('[{}\',]', '', x)
x = x.split()
lat.append(float(x[1]))
long.append(float(x[3]))
df['lat'] = lat
df['long'] = long
df.head()
# add column for certification year
df['date_certified'] = pd.to_datetime(df['date_certified'])
df['year_certified'] = df['date_certified'].dt.year
df['date_first_certified'] = pd.to_datetime(df['date_first_certified'])
df['year_first_certified'] = df['date_first_certified'].dt.year
# replace NaN values with string 'N/A'
df.replace(np.nan, {'industry': 'N/A ', 'industry_category': 'N/A '}, inplace=True)
# Save dataframe to CSV file
df.to_csv('../data/mapped_ont_bcorps.csv', index=False)
latInitial = 44
lonInitial = -79
zoom = 5.5
opacity = 0.75
customdata = pd.DataFrame({
'Business Name': df.company_name,
'Industry': df.industry_category,
'Cert': df.date_first_certified,
'Impact Score' : df.overall_score})
fig = (go.Figure(
data=go.Scattermapbox(
lat=df['lat'],
lon=df['long'],
mode="markers",
customdata=customdata,
marker=dict(
opacity=opacity,
size=7,
color=df['overall_score'],
colorscale='YlOrRd',
showscale=True,
colorbar=dict(
title="Overall Impact Score",
titlefont=dict(
family='Open Sans',
size=16),
bgcolor="#404349",
x=1,
xanchor='left',
ticks="outside",
tickcolor="white",
tickwidth=2,)),
hovertemplate=
"<b>%{customdata[0]}</b><br>" +
"<br>Industry: %{customdata[1]}" +
"<br>Certified since: %{customdata[2]}" +
"<br>Overall Impact Score: %{customdata[3]}" +
"<extra></extra>"
),
layout=go.Layout(
margin={'l': 0, 'r': 0, 't': 0, 'b': 0},
autosize=True,
font={"color": "white"},
mapbox=dict(
accesstoken=mapbox_access_token,
center=dict(
lat=latInitial,
lon=lonInitial),
style="dark",
zoom=zoom,
bearing=0
),
)
))
fig.update_layout(
height=100)
industries = df.industry_category.unique()
counts = df.industry_category.value_counts()
ind = list(counts.index)
fig = go.Figure(
data=go.Bar(
y=ind,
x=counts,
marker_color='#19B1BA',
hovertemplate="%{x}: %{y}<extra></extra>",
orientation='h'),
layout=go.Layout(
margin={'l': 10, 'r': 10, 't': 10, 'b': 10},
template='simple_white',
annotations=[
dict(
x=xi,
y=yi,
text=yi,
xanchor="left",
yanchor="middle",
showarrow=False,
)
for xi, yi in zip(counts, ind)
],
yaxis=dict(
showticklabels=False)
))
fig.update_layout(
barmode='group',
xaxis_title="Number of Businesses",
yaxis_title="Industry",
height=350)
fig
df
```
| github_jupyter |
## Initial Processing
### Grab relevant information for bus stops for a specific route, join the data together.
```
import requests
import pandas as pd
import glob
import json
import time
import os
import platform
# from analysis_functions import *
from analysis_functions import read_in_dwell_runtime, timepoint_finder
from analysis_functions import pull_ridership_by_stop, dwell_runtime, stop_frequency_percent
with open('config.json', 'r') as f:
config = json.load(f)
SWIFTLY_API_KEY = config['DEFAULT']['SWIFTLY_API_KEY']
MSSQL_USERNAME = config['DEFAULT']['MSSQL_USERNAME']
MSSQL_PASSWORD = config['DEFAULT']['MSSQL_PASSWORD']
if platform.system() == 'Darwin':
import pymssql
connection = pymssql.connect(server='ELTDBPRD\ELTDBPRD',
user=MSSQL_USERNAME, password=MSSQL_PASSWORD, database='ACS_13')
elif platform.system() == 'Windows':
import pyodbc
connection_string = 'DRIVER={SQL Server};SERVER=ELTDBPRD\ELTDBPRD;DATABASE=ACS_13;UID=%s;PWD=%s' % (MSSQL_USERNAME, MSSQL_PASSWORD)
connection = pyodbc.connect(connection_string)
# DEBUG = True
```
## Editable Parameters
```
line_numbers = [
22, 181, 68, 23, 66, 25, 10, 200, 55, 304, 19,
64, 72, 70, 180, 62, 71, 57, 121, 168, 26, 60,
102, 27, 35, 48, 65, 32, 328, 103, 104, 522, 61,
122, 77, 73, 46, 101, 54, 88, 89, 40, 81, 52, 53,
58, 31, 63, 323, 82, 47, 120, 182, 201, 330, 140,
49, 37, 16, 13, 42, 18, 14, 17, 45, 39, 321, 185,
34, 900, 901, 902
]
# Skip VTA special lines 831, 827, 828, 826, 825, 823, 95, 12, 231, 235
days_to_consider = [2,3,4,5,9,10,11,12,16,17,18,19,23,24,25,26,30]
month_to_consider = 10
year_to_consider = 2018
date_range_to_consider = "'2017-10-01' and '2017-11-1'"
transitfeeds_url_relevant_gtfs = 'http://transitfeeds.com/p/vta/45/20170929/download'
swiftly_source_data_df = read_in_dwell_runtime(month=month_to_consider, year=year_to_consider)
timepoints = timepoint_finder(transitfeeds_url = transitfeeds_url_relevant_gtfs)
for line_number in line_numbers:
print(line_number)
rid_by_stop_df = pull_ridership_by_stop(line_number)
rid_by_stop_df.head()
df_dwell_runtime, df_stop_path_length, df_min_travel_time = dwell_runtime(swiftly_source_data_df, line_number, days_to_consider)
rid_dwell = pd.merge(pd.merge(pd.merge(rid_by_stop_df,df_dwell_runtime,how='outer'),df_stop_path_length, how='outer'),df_min_travel_time, how='outer')
stops_visited_counts, trips_sampled_count = stop_frequency_percent(connection, line_number, days_to_consider, date_range= date_range_to_consider)
del stops_visited_counts['current_route_id']
del trips_sampled_count['current_route_id']
bus_df_frequency = pd.merge(pd.merge(rid_dwell, stops_visited_counts, how="outer"),trips_sampled_count, how="outer")
# stop_frequency['percent_stopped'] = (stop_frequency['number_of_times_stopped']/stop_frequency['total_trips_sampled']).round(2)
bus_df_frequency['percent_stopped'] = (bus_df_frequency['number_of_times_stopped'].divide(bus_df_frequency['total_trips_sampled'],fill_value=0)).round(2)
bus_df_frequency['travel_speed_meters_second'] = (bus_df_frequency['stop_path_length_meters']/bus_df_frequency['travel_time_secs_mean']).round(2)
bus_df_frequency['travel_speed_miles_per_hour'] = ((bus_df_frequency['stop_path_length_meters']/bus_df_frequency['travel_time_secs_mean'])*2.23694).round(2)
bus_df_frequency['route_id']=line_number
bus_array = pd.merge(bus_df_frequency,timepoints, how='left')
bus_array.loc[bus_array['timepoint'].isnull(),'timepoint'] = 0
bus_array.to_csv("results/bus_stop_data_analysis_dwell_" + str(line_number) + ".csv",index=False)
```
| github_jupyter |
# SageMaker Inference Pipeline with Scikit Learn and XGBoost
Amazon SageMaker provides a very rich set of [built-in algorithms](https://docs.aws.amazon.com/sagemaker/latest/dg/algorithms-choose.html) for model training and development. This notebook uses the [Amazon SageMaker XGBoost Algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html) on the training dataset to perform model training. XGBoost is a popular and efficient open-source implementation of the gradient boosted trees algorithm. Gradient boosting is a supervised learning algorithm that attempts to accurately predict a target variable by combining an ensemble of estimates from a set of simpler and weaker models.
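As a hedged illustration of the gradient boosting idea described above, here is a minimal sketch on synthetic data using scikit-learn's open-source implementation (not the SageMaker built-in container; dataset and hyperparameters are illustrative):

```python
# Illustrative sketch only: gradient boosted trees on synthetic data, using
# scikit-learn's implementation rather than the SageMaker container.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Each boosting round fits a small tree to the current ensemble's errors,
# so many weak learners combine into one stronger predictor.
clf = GradientBoostingClassifier(n_estimators=50, max_depth=3, learning_rate=0.2)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

The same knobs (number of rounds, tree depth, learning rate) reappear below as SageMaker XGBoost hyperparameters.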
ML model development is an iterative process with several tasks that data scientists go through to produce an effective model that can solve a business problem. The process typically involves:
* Data exploration and analysis
* Feature engineering
* Model development
* Model training and tuning
* Model deployment
The accompanying notebook [pacs008_xgboost_local.ipynb](./pacs008_xgboost_local.ipynb) demonstrates data exploration, analysis and feature engineering, focusing on text feature engineering. This notebook uses the result of the analysis in [pacs008_xgboost_local.ipynb](./pacs008_xgboost_local.ipynb) to create a feature engineering pipeline using [SageMaker Inference Pipeline](https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html).
Here we define the ML problem to be a `binary classification` problem: predicting whether a pacs.008 XML message will be processed successfully or lead to exception processing. The model predicts `Success` i.e. 1 or `Failure` i.e. 0.
**Feature Engineering**
Data pre-processing and featurizing the dataset by incorporating standard techniques or prior knowledge is a standard mechanism to make the dataset meaningful for training. Once data has been pre-processed and transformed, it can finally be used to train an ML model using an algorithm. However, when the trained model is used for processing real time or batch prediction requests, the model receives data in a format which needs to be pre-processed (e.g. featurized) before it can be passed to the algorithm. In this notebook, we will demonstrate how you can build your ML Pipeline leveraging the SageMaker Scikit-learn container and the SageMaker XGBoost algorithm. After a model is trained, we deploy the Pipeline (data preprocessing and XGBoost) as an **Inference Pipeline** behind a **single Endpoint** for real time inference and for **batch inferences** using Amazon SageMaker Batch Transform.
We use the pacs.008 XML element `<InstrForNxtAgt><InstrInf>TEXT</InstrInf></InstrForNxtAgt>` to perform feature engineering, i.e. featurize the text into new numeric features that can be used in making predictions.
Since we featurize `InstrForNxtAgt` into numeric representations during training, we have to apply the same pre-processing to transform the text into numeric features before using the trained model to make predictions.
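A hypothetical sketch of the kind of featurizer the entry-point script might implement (the actual `pacs008_sklearn_featurizer.py` is not shown here; the column names and choice of TF-IDF are assumptions based on the features used later in this notebook):

```python
# Hypothetical featurizer sketch: one-hot encode a categorical column and
# TF-IDF featurize the free-text InstrForNxtAgt column. This is NOT the
# actual pacs008_sklearn_featurizer.py, just an illustration of the idea.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "Dbtr_PstlAdr_Ctry": ["US", "MX"],
    "InstrForNxtAgt": ["/SVC/deliver in three days", ""],
})

featurizer = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["Dbtr_PstlAdr_Ctry"]),
    # TfidfVectorizer expects a 1-D column of strings, hence the bare name
    ("txt", TfidfVectorizer(), "InstrForNxtAgt"),
])
X = featurizer.fit_transform(df)  # numeric matrix ready for XGBoost
```

Wrapping such a transformer in the SKLearn entry point is what lets the same featurization run at training time and inside the inference pipeline.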
**Inference Pipeline**
The diagram below shows how Amazon SageMaker Inference Pipeline works. It is used to deploy multi-container endpoints.

**Inference Endpoint**
The diagram below shows the places in the cross-border payment message flow where a call to ML inference endpoint can be injected to get inference from the ML model. The inference result can be used to take additional actions, including corrective actions before sending the message downstream.

**Further Reading:**
For information on Amazon SageMaker XGBoost algorithm and SageMaker Inference Pipeline visit the following references:
- [SageMaker XGBoost Algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html)
- [SageMaker Inference Pipeline](https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html)
## Basic Setup
In this step we do basic setup needed for rest of the notebook:
* Amazon SageMaker API client using boto3
* Amazon SageMaker session object
* AWS region
* AWS IAM role
```
import os
import boto3
import sagemaker
from sagemaker import get_execution_role
sm_client = boto3.Session().client('sagemaker')
sm_session = sagemaker.Session()
region = boto3.session.Session().region_name
role = get_execution_role()
print ("Notebook is running with assumed role {}".format (role))
print("Working with AWS services in the {} region".format(region))
```
### Provide S3 Bucket Name
```
# Working directory for the notebook
WORKDIR = os.getcwd()
BASENAME = os.path.dirname(WORKDIR)
print(f"WORKDIR: {WORKDIR}")
print(f"BASENAME: {BASENAME}")
# Create a directory storing local data
iso20022_data_path = 'iso20022-data'
if not os.path.exists(iso20022_data_path):
# Create a new directory because it does not exist
os.makedirs(iso20022_data_path)
# Store all prototype assets in this bucket
s3_bucket_name = 'iso20022-prototype-t3'
s3_bucket_uri = 's3://' + s3_bucket_name
# Prefix for all files in this prototype
prefix = 'iso20022'
pacs008_prefix = prefix + '/pacs008'
raw_data_prefix = pacs008_prefix + '/raw-data'
labeled_data_prefix = pacs008_prefix + '/labeled-data'
training_data_prefix = pacs008_prefix + '/training-data'
training_headers_prefix = pacs008_prefix + '/training-headers'
test_data_prefix = pacs008_prefix + '/test-data'
training_job_output_prefix = pacs008_prefix + '/training-output'
print(f"Training data with headers will be uploaded to {s3_bucket_uri + '/' + training_headers_prefix}")
print(f"Training data will be uploaded to {s3_bucket_uri + '/' + training_data_prefix}")
print(f"Test data will be uploaded to {s3_bucket_uri + '/' + test_data_prefix}")
print(f"Training job output will be stored in {s3_bucket_uri + '/' + training_job_output_prefix}")
labeled_data_location = s3_bucket_uri + '/' + labeled_data_prefix
training_data_w_headers_location = s3_bucket_uri + '/' + training_headers_prefix
training_data_location = s3_bucket_uri + '/' + training_data_prefix
test_data_location = s3_bucket_uri + '/' + test_data_prefix
print(f"Raw labeled data location = {labeled_data_location}")
print(f"Training data with headers location = {training_data_w_headers_location}")
print(f"Training data location = {training_data_location}")
print(f"Test data location = {test_data_location}")
```
## Prepare Training Dataset
1. Select training dataset from raw labeled dataset.
1. Split labeled dataset to training and test datasets.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import string
from sklearn.model_selection import train_test_split
from sklearn import ensemble, metrics, model_selection, naive_bayes
color = sns.color_palette()
%matplotlib inline
```
### Download raw labeled dataset
```
# Download labeled raw dataset from S3
s3_client = boto3.client('s3')
s3_client.download_file(s3_bucket_name, labeled_data_prefix + '/labeled_data.csv', 'iso20022-data/labeled_data.csv')
# Read the train and test dataset and check the top few lines ##
labeled_raw_df = pd.read_csv("iso20022-data/labeled_data.csv")
labeled_raw_df.head()
```
### Select features for training
```
# Training features
fts=[
'y_target',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_Dbtr_PstlAdr_Ctry',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_Cdtr_PstlAdr_Ctry',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_RgltryRptg_DbtCdtRptgInd',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_RgltryRptg_Authrty_Ctry',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_RgltryRptg_Dtls_Cd',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_InstrForNxtAgt_InstrInf',
]
# New data frame with selected features
selected_df = labeled_raw_df[fts]
selected_df.head()
# Rename columns
selected_df = selected_df.rename(columns={
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_Dbtr_PstlAdr_Ctry': 'Dbtr_PstlAdr_Ctry',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_Cdtr_PstlAdr_Ctry': 'Cdtr_PstlAdr_Ctry',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_RgltryRptg_DbtCdtRptgInd': 'RgltryRptg_DbtCdtRptgInd',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_RgltryRptg_Authrty_Ctry': 'RgltryRptg_Authrty_Ctry',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_RgltryRptg_Dtls_Cd': 'RgltryRptg_Dtls_Cd',
'Document_FIToFICstmrCdtTrf_CdtTrfTxInf_InstrForNxtAgt_InstrInf': 'InstrForNxtAgt',
})
selected_df.head()
from sklearn.preprocessing import LabelEncoder
# Assign Pandas data types.
categorical_fts=[
'Dbtr_PstlAdr_Ctry',
'Cdtr_PstlAdr_Ctry',
'RgltryRptg_DbtCdtRptgInd',
'RgltryRptg_Authrty_Ctry',
'RgltryRptg_Dtls_Cd'
]
integer_fts=[
]
numeric_fts=[
]
text_fts=[
# Leave text as object
# 'InstrForNxtAgt'
]
# Categorical features to categorical data type.
for col in categorical_fts:
selected_df[col] = selected_df[col].astype(str).astype('category')
# Integer features to int64 data type.
for col in integer_fts:
selected_df[col] = selected_df[col].astype(str).astype('int64')
# Numeric features to float64 data type.
for col in numeric_fts:
selected_df[col] = selected_df[col].astype(str).astype('float64')
# Text features to string data type.
for col in text_fts:
selected_df[col] = selected_df[col].astype(str).astype('string')
# Label encode target variable, needed by XGBoost and transformations
label_encoder = LabelEncoder()
selected_df['y_target'] = label_encoder.fit_transform(selected_df['y_target'])
selected_df.dtypes
selected_df.info()
selected_df
X_train_df, X_test_df, y_train_df, y_test_df = train_test_split(selected_df, selected_df['y_target'], test_size=0.20, random_state=299, shuffle=True)
print("Number of rows in train dataset : ",X_train_df.shape[0])
print("Number of rows in test dataset : ",X_test_df.shape[0])
X_train_df
X_test_df
## Save training and test datasets to CSV
train_data_w_headers_output_path = 'iso20022-data/train_data_w_headers.csv'
print(f'Saving training data with headers to {train_data_w_headers_output_path}')
X_train_df.to_csv(train_data_w_headers_output_path, index=False)
train_data_output_path = 'iso20022-data/train_data.csv'
print(f'Saving training data without headers to {train_data_output_path}')
X_train_df.to_csv(train_data_output_path, header=False, index=False)
test_data_output_path = 'iso20022-data/test_data.csv'
print(f'Saving test data without headers to {test_data_output_path}')
X_test_df.to_csv(test_data_output_path, header=False, index=False)
```
### Upload training and test datasets to S3 for training
```
train_input_data_location = sm_session.upload_data(
path=train_data_w_headers_output_path,
bucket=s3_bucket_name,
key_prefix=training_headers_prefix,
)
print(f'Uploaded training data with headers to: {train_input_data_location}')
train_input_data_location = sm_session.upload_data(
path=train_data_output_path,
bucket=s3_bucket_name,
key_prefix=training_data_prefix,
)
print(f'Uploaded training data without headers to: {train_input_data_location}')
test_input_data_location = sm_session.upload_data(
path=test_data_output_path,
bucket=s3_bucket_name,
key_prefix=test_data_prefix,
)
print(f'Uploaded test data without headers to: {test_input_data_location}')
```
# Feature Engineering
## Create a Scikit-learn script to train with <a class="anchor" id="create_sklearn_script"></a>
To run Scikit-learn on SageMaker, we use the `SKLearn` estimator with a script as an entry point. The training script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:
* SM_MODEL_DIR: A string representing the path to the directory to write model artifacts to. These artifacts are uploaded to S3 for model hosting.
* SM_OUTPUT_DIR: A string representing the filesystem path to write output artifacts to. Output artifacts may include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed and uploaded to S3 to the same S3 prefix as the model artifacts.
Supposing two input channels, 'train' and 'test', were used in the call to the estimator's fit() method, the following will be set, following the format SM_CHANNEL_[channel_name]:
* SM_CHANNEL_TRAIN: A string representing the path to the directory containing data in the 'train' channel
* SM_CHANNEL_TEST: Same as above, but for the 'test' channel.
A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to model_dir so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an argparse.ArgumentParser instance.
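A hedged skeleton of such a training entry point (the hyperparameter names are illustrative; `SM_MODEL_DIR` and `SM_CHANNEL_TRAIN` are the environment variables described above):

```python
# Hedged skeleton of a typical SageMaker training script. The hyperparameter
# names are illustrative; SM_MODEL_DIR / SM_CHANNEL_TRAIN are the environment
# variables SageMaker sets inside the training container.
import argparse
import os

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # Hyperparameters arrive as command-line arguments
    parser.add_argument("--max-depth", type=int, default=5)
    # Data and model locations come from SageMaker environment variables
    parser.add_argument("--model-dir",
                        default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    parser.add_argument("--train",
                        default=os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train"))
    return parser.parse_args(argv)

# Empty argv here so the sketch runs standalone; inside the container,
# SageMaker supplies the real arguments.
args = parse_args([])
# A real script would now load CSVs from args.train, fit a model, and save
# the artifact under args.model_dir so SageMaker uploads it to S3.
```

The `fit()` call below is what populates these channels and variables inside the container.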
### Create SageMaker Scikit Estimator <a class="anchor" id="create_sklearn_estimator"></a>
To run our Scikit-learn training script on SageMaker, we construct a `sagemaker.sklearn.estimator.sklearn` estimator, which accepts several constructor arguments:
* __entry_point__: The path to the Python script SageMaker runs for training and prediction.
* __role__: Role ARN
* __framework_version__: Scikit-learn version you want to use for executing your model training code.
* __instance_type__ *(optional)*: The type of SageMaker instances for training. __Note__: Because Scikit-learn does not natively support GPU training, SageMaker Scikit-learn does not currently support training on GPU instance types.
* __sagemaker_session__ *(optional)*: The session used to train on Sagemaker.
```
from sagemaker.sklearn.estimator import SKLearn
preprocessing_job_name = 'pacs008-preprocessor-xgb'
print('data preprocessing job name: ' + preprocessing_job_name)
FRAMEWORK_VERSION = "0.23-1"
source_dir = "../sklearn-transformers"
script_file = "pacs008_sklearn_featurizer.py"
sklearn_preprocessor = SKLearn(
entry_point=script_file,
source_dir=source_dir,
role=role,
framework_version=FRAMEWORK_VERSION,
instance_type="ml.c4.xlarge",
sagemaker_session=sm_session,
base_job_name=preprocessing_job_name
)
sklearn_preprocessor.fit({"train": train_input_data_location})
```
### Batch transform our training data <a class="anchor" id="preprocess_train_data"></a>
Now that our preprocessor is properly fitted, let's go ahead and preprocess our training data. Let's use batch transform to directly preprocess the raw data and store the result right back into S3.
```
# Define a SKLearn Transformer from the trained SKLearn Estimator
transformer = sklearn_preprocessor.transformer(
instance_count=1,
instance_type="ml.m5.xlarge",
assemble_with="Line",
accept="text/csv",
)
# Preprocess training input
transformer.transform(train_input_data_location, content_type="text/csv")
print("Waiting for transform job: " + transformer.latest_transform_job.job_name)
transformer.wait()
preprocessed_train = transformer.output_path
```
# Train an XGBoost Model
## Fit an XGBoost Model with the preprocessed data <a class="anchor" id="training_model"></a>
Let's take the preprocessed training data and fit an XGBoost model. SageMaker provides prebuilt algorithm containers that can be used with the Python SDK. The previous Scikit-learn job preprocessed the labeled raw pacs.008 dataset into usable training data that we can now use to fit a binary XGBoost classifier.
For more on SageMaker XGBoost algorithm see: https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html
```
from sagemaker.image_uris import retrieve
xgboost_image = sagemaker.image_uris.retrieve("xgboost", region, "1.2-2")
print(xgboost_image)
# Set job name
training_job_name = 'pacs008-xgb-training'
print('XGBoost training job name: ' + training_job_name)
# S3 bucket for storing model artifacts
training_job_output_location = s3_bucket_uri + '/' + training_job_output_prefix + '/xgboost_model'
# initialize hyperparameters
# eval_metric: error is default classification
# - error | auc | logloss
# - can pass multiple using list ["error", "auc"]
# num_round=50, required, no default
# early_stopping_rounds=e.g. 20, stop early if the validation metric doesn't improve for this many consecutive rounds
# eta=0.2, step-size shrinkage to prevent overfitting; scales new weights to make the boosting process more conservative
# max_depth=5, default is 6, max tree depth, to prevent overfitting
# alpha=3, default 0, L1 regularization term on weights, higher value means more conservative
# lambda=, default 1, L2 regularization term on weights, higher value means more conservative
# gamma= Min loss reduction needed to make a further tree partition on a leaf node; larger is more conservative and reduces overfitting
# subsample=0.7, default is 1, to prevent overfitting
# min_child_weight=6, default is 1, to stop tree parititioning, prevents overfitting
hyperparameters = {
#"objective":"multi:softprob", # probability with prediction
#"num_class": 2, # used with multi:softprob
"objective":"binary:logistic", # binary class with probability
"eval_metric": "error",
"num_round":"50",
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
}
xgb_estimator = sagemaker.estimator.Estimator(
image_uri=xgboost_image,
role=role,
hyperparameters=hyperparameters,
instance_count=1,
instance_type='ml.m5.2xlarge',
volume_size=20, # 20 GB EBS volume
input_mode="File",
output_path=training_job_output_location,
sagemaker_session=sm_session,
base_job_name=training_job_name
)
xgb_train_data = sagemaker.inputs.TrainingInput(
preprocessed_train, # set after preprocessing job completes
distribution="FullyReplicated",
content_type="text/csv",
s3_data_type="S3Prefix",
)
xgb_val_data = sagemaker.inputs.TrainingInput(
preprocessed_train, # set after preprocessing job completes
distribution="FullyReplicated",
content_type="text/csv",
s3_data_type="S3Prefix",
)
data_channels = {
"train": xgb_train_data,
#"validation": xgb_val_data
}
xgb_estimator.fit(inputs=data_channels, logs=True)
```
# Serial Inference Pipeline with Scikit preprocessor and XGBoost <a class="anchor" id="serial_inference"></a>
## Set up the inference pipeline <a class="anchor" id="pipeline_setup"></a>
Setting up a Machine Learning pipeline can be done with the Pipeline Model. This sets up a list of models in a single endpoint. We configure our pipeline model with the fitted Scikit-learn inference model (data preprocessing/feature engineering model) and the fitted XGBoost model. Deploying the model follows the standard ```deploy``` pattern in the SageMaker Python SDK.
```
from sagemaker.model import Model
from sagemaker.pipeline import PipelineModel
import boto3
from time import gmtime, strftime
timestamp_prefix = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# The two SageMaker Models: one for data preprocessing, and second for inference
scikit_learn_preprocessor_model = sklearn_preprocessor.create_model()
# Set XGBoost support to text/csv, default of application/json doesn't work with XGBoost
scikit_learn_preprocessor_model.env = {"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT":"text/csv"}
xgb_model = xgb_estimator.create_model()
model_name = "pacs008-xgboost-inference-pipeline-" + timestamp_prefix
endpoint_name = "pacs008-xgboost-inference-pipeline-ep-" + timestamp_prefix
print(f"model_name: {model_name}")
print(f"endpoint_name: {endpoint_name}")
sm_model = PipelineModel(
name=model_name, role=role, models=[scikit_learn_preprocessor_model, xgb_model]
)
sm_model.deploy(initial_instance_count=1, instance_type="ml.c4.xlarge", endpoint_name=endpoint_name)
```
### Store Model Name and Endpoint Name in Notebook Magic Store
These notebook magic store values are used in the example batch transform notebook.
```
%store model_name
%store endpoint_name
```
## Make a request to our pipeline endpoint <a class="anchor" id="pipeline_inference_request"></a>
**Inference Endpoint**
The diagram below shows the places in the cross-border payment message flow where a call to ML inference endpoint can be injected to get inference from the ML model. The inference result can be used to take additional actions, including corrective actions before sending the message downstream.

Here we just grab the first line from the test data (you'll notice that the inference python script is very particular about the ordering of the inference request data). The ```ContentType``` field configures the first container, while the ```Accept``` field configures the last container. You can also specify each container's ```Accept``` and ```ContentType``` values using environment variables.
We make our request with the payload in ```'text/csv'``` format, since that is what our script currently supports. If other formats need to be supported, this would have to be added to the ```output_fn()``` method in our entry point. Note that we set the ```Accept``` to ```text/csv```, since XGBoost requires ```text/csv``` or `libsvm` or `parquet` or `protobuf-recordio` formats. The inference output in this case is trying to predict `Success` or `Failure` of ISO20022 pacs.008 payment message using only the subset of message XML elements in the message i.e. features on which model was trained.
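Because the pipeline is strict about column order, a small helper can build these CSV payloads from named fields. The column order below is inferred from the renamed training features earlier in this notebook; it is an assumption, not an official schema:

```python
# Hedged helper: serialize a feature dict into the comma-separated ordering
# the preprocessor expects. FEATURE_ORDER mirrors the renamed training
# features above (an assumption, not an official pacs.008 schema).
FEATURE_ORDER = [
    "Dbtr_PstlAdr_Ctry", "Cdtr_PstlAdr_Ctry", "RgltryRptg_DbtCdtRptgInd",
    "RgltryRptg_Authrty_Ctry", "RgltryRptg_Dtls_Cd", "InstrForNxtAgt",
]

def to_csv_payload(features):
    # Missing keys become empty fields, matching payloads like "MX,GB,,,,"
    return ",".join(str(features.get(name, "")) for name in FEATURE_ORDER)

payload = to_csv_payload({"Dbtr_PstlAdr_Ctry": "MX", "Cdtr_PstlAdr_Ctry": "GB"})
```

This avoids hand-counting commas when constructing payloads like the ones below.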
```
# Serializer from new package
from sagemaker.serializers import CSVSerializer, JSONSerializer
from sagemaker.deserializers import CSVDeserializer, JSONDeserializer
from sagemaker.predictor import Predictor
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP
def prediction_proba(prediction):
y_pred_str = str(prediction.decode('utf-8'))
y_pred = Decimal(y_pred_str).quantize(Decimal('.0000001'), rounding=ROUND_HALF_UP)
half_decimal = Decimal(0.5).quantize(Decimal('.0000001'), rounding=ROUND_HALF_UP)
y_pred = f'Predicted: Success, Probability: {y_pred_str}' if y_pred > half_decimal else f'Predicted: Failure, Probability: {y_pred_str}'
return y_pred
# payload_1, expect: Failure
#payload_1 = "US,GB,,,,/SVC/It is to be delivered in three days. Greater than three days penalty add 2bp per day"
payload_1 = "MX,GB,,,,/SVC/It is to be delivered in four days. Greater than four days penalty add 2bp per day"
# payload_2, expect: Success
payload_2 = "MX,GB,,,,"
# payload_3, expect: Failure
payload_3 = "TH,US,,,,/SVC/It is to be delivered in four days. Greater than four days penalty add 2bp per day"
#payload_3 = "CA,US,,,,/SVC/It is to be delivered in three days. Greater than three days penalty add 2bp per day"
# payload_4, expect: Success
payload_4 = "IN,CA,DEBT,IN,00.P0006,"
# payload_5, expect: Success, Creditor Country IN
payload_5 = "GB,IN,CRED,IN,0,/REG/15.X0001 FDI in Retail"
#payload_5 = "MX,IN,CRED,IN,0,/REG/15.X0002 FDI in Agriculture"
#payload_5 = "IE,IN,CRED,IN,0,/REG/15.X0003 FDI in Transportation"
# this should fail as 15.X0009 is not a valid reg code for CRED & IN combination
#payload_5 = "MX,IN,CRED,IN,0,/REG/15.X0009 FDI Agriculture"
# payload_6, expect: Failure
payload_6 = "IE,IN,CRED,IN,0,/REG/99.C34698"
endpoint_name = 'pacs008-xgboost-inference-pipeline-ep-2021-11-25-01-32-31'
predictor = Predictor(
endpoint_name=endpoint_name, sagemaker_session=sm_session, serializer=CSVSerializer()
)
print(f"1. Expect Failure i.e. 0, {prediction_proba(predictor.predict(payload_1))}")
print(f"2. Expect Success i.e. 1, {prediction_proba(predictor.predict(payload_2))}")
print(f"3. Expect Failure i.e. 0, {prediction_proba(predictor.predict(payload_3))}")
print(f"4. Expect Success i.e. 1, {prediction_proba(predictor.predict(payload_4))}")
print(f"5. Expect Success i.e. 1, {prediction_proba(predictor.predict(payload_5))}")
print(f"6. Expect Failure i.e. 0, {prediction_proba(predictor.predict(payload_6))}")
```
# Delete Endpoint
Once we are finished with the endpoint, we clean up the resources!
```
sm_client = sm_session.boto_session.client("sagemaker")
sm_client.delete_endpoint(EndpointName=endpoint_name)
```
| github_jupyter |
```
# default_exp metrics_np
```
# Metrics NP
> API details.
```
#hide
%load_ext autoreload
%autoreload 2
from nbdev.showdoc import *
import warnings
warnings.filterwarnings("ignore")
#export
from mean_average_precision import MetricBuilder
#from mean_average_precision import MeanAveragePrecision
#from fastai.metrics import AvgMetric
from fastai.metrics import Metric
from fastai.torch_basics import *
from fastai.torch_core import *
#from functools import partial
#export
class mAP_Metric_np():
"Metric to calculate mAP for different IoU thresholds"
def __init__(self, iou_thresholds, name, remove_background_class=True):
self.__name__ = name
self.iou_thresholds = iou_thresholds
self.remove_background_class = remove_background_class
def __call__(self, preds, targs, num_classes):
if self.remove_background_class:
num_classes=num_classes-1
metric_fn = MetricBuilder.build_evaluation_metric("map_2d", async_mode=True, num_classes=num_classes)
for sample_preds, sample_targs in self.create_metric_samples(preds, targs):
metric_fn.add(sample_preds, sample_targs)
metric_batch = metric_fn.value(iou_thresholds=self.iou_thresholds,
recall_thresholds=np.arange(0., 1.01, 0.01),
mpolicy='soft')['mAP']
return metric_batch
def create_metric_samples(self, preds, targs):
pred_samples = []
for pred in preds:
res = torch.cat([pred["boxes"], pred["labels"].unsqueeze(-1), pred["scores"].unsqueeze(-1)], dim=1)
pred_np = res.detach().cpu().numpy()
if self.remove_background_class:
# first idx is background
try:
pred_np= pred_np-np.array([0,0,0,0,1,0])
except: pass
pred_samples.append(pred_np)
targ_samples = []
for targ in targs: # targs : yb[0]
targ = torch.cat([targ["boxes"],targ["labels"].unsqueeze(-1)], dim=1)
targ = torch.cat([targ, torch.zeros([targ.shape[0], 2], device=targ.device)], dim=1)
targ_np = np.array(targ.detach().cpu())
if self.remove_background_class:
# first idx is background
try:
targ_np= targ_np-np.array([0,0,0,0,1,0,0])
except: pass
targ_samples.append(targ_np)
return [s for s in zip(pred_samples, targ_samples)]
#export
class _AvgMetric_ObjectDetection(Metric):
"Average the values of `func` taking into account potential different batch sizes"
def __init__(self, func): self.func = func
def reset(self): self.total,self.count = 0.,0
def accumulate(self, learn):
bs = len(learn.xb[0])
self.total += learn.to_detach(self.func(learn.pred, *learn.yb, num_classes=len(learn.dls.vocab)))*bs
self.count += bs
@property
def value(self): return self.total/self.count if self.count != 0 else None
@property
def name(self): return self.func.func.__name__ if hasattr(self.func, 'func') else self.func.__name__
#export
def create_mAP_metric_np(iou_tresh=np.arange(0.5, 1.0, 0.05), metric_name="mAP@IoU 0.5:0.95", remove_background_class=False):
return _AvgMetric_ObjectDetection(mAP_Metric_np(iou_tresh, metric_name, remove_background_class=remove_background_class))
#export
mAP_at_IoU40_np = _AvgMetric_ObjectDetection(mAP_Metric_np(0.4, "mAP@IoU>0.4", remove_background_class=True))
mAP_at_IoU50_np = _AvgMetric_ObjectDetection(mAP_Metric_np(0.5, "mAP@IoU>0.5", remove_background_class=True))
mAP_at_IoU60_np = _AvgMetric_ObjectDetection(mAP_Metric_np(0.6, "mAP@IoU>0.6", remove_background_class=True))
mAP_at_IoU70_np = _AvgMetric_ObjectDetection(mAP_Metric_np(0.7, "mAP@IoU>0.7", remove_background_class=True))
mAP_at_IoU80_np = _AvgMetric_ObjectDetection(mAP_Metric_np(0.8, "mAP@IoU>0.8", remove_background_class=True))
mAP_at_IoU90_np = _AvgMetric_ObjectDetection(mAP_Metric_np(0.9, "mAP@IoU>0.9", remove_background_class=True))
mAP_at_IoU50_95_np = _AvgMetric_ObjectDetection(mAP_Metric_np(np.arange(0.5, 1.0, 0.05), "mAP@IoU 0.5:0.95", remove_background_class=True))
```
| github_jupyter |
```
import tqdm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from utils import split_sequence, get_apple_close_price, plot_series
apple_close_price = get_apple_close_price()
from sklearn.model_selection import train_test_split
from scipy.stats import boxcox
from scipy.special import inv_boxcox
train, test = train_test_split(apple_close_price,
test_size=0.05,
shuffle=False)
# boxcox_series, lmbda = boxcox(train.values)
# transformed_train = boxcox_series
# transformed_test = boxcox(test, lmbda=lmbda)
transformed_train = train.values
transformed_test = test.values
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
# scaled_train = scaler.fit_transform(transformed_train.reshape(-1, 1))
# scaled_test = scaler.transform(transformed_test.reshape(-1, 1))
scaled_train = scaler.fit_transform(transformed_train.reshape(-1, 1))
scaled_test = scaler.transform(transformed_test.reshape(-1, 1))
# Core layers
from keras.layers \
import Activation, Dropout, Flatten, Dense, Input, LeakyReLU, Reshape
# Recurrent layers
from keras.layers import LSTM
# Convolutional layers
from keras.layers import Conv1D, MaxPooling1D
# Normalization layers
from keras.layers import BatchNormalization
# Merge layers
from keras.layers import concatenate
# Layer wrappers
from keras.layers import Bidirectional, TimeDistributed
# Keras models
from keras.models import Model, Sequential
# Keras optimizers
from keras.optimizers import Adam, RMSprop
import keras.backend as K
import warnings
warnings.simplefilter('ignore')
look_back = 3
n_features = 1
X_train, y_train = split_sequence(scaled_train, look_back)
X_test, y_test = split_sequence(scaled_test, look_back)
def build_generator_net(look_back, n_features):
net = Sequential()
net.add(LSTM(50, input_shape=(look_back, n_features)))
net.add(LeakyReLU(alpha=0.2))
net.add(Dense(n_features))
print('Generator summary:')
net.summary()
return net
def build_discriminator_net(look_back, n_features):
net = Sequential()
net.add(Conv1D(64,
kernel_size=2,
padding='same',
input_shape=(look_back + 1, n_features))) # +1 => +target
net.add(LeakyReLU(alpha=0.2))
net.add(Flatten())
net.add(Dense(1, activation='sigmoid'))
print('Discriminator summary:')
net.summary()
return net
def build_discriminator_model(look_back, n_features, optimizer=Adam()):
dis_net = build_discriminator_net(look_back, n_features)
net = Sequential()
net.add(dis_net)
seq = Input((look_back + 1, n_features))
valid = net(seq)
model = Model(seq, valid)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
return model, dis_net
def build_adversarial_model(look_back, n_features, dm_optimizer=Adam(), am_optimizer=Adam()):
dis_model, dis_net = build_discriminator_model(look_back, n_features, optimizer=dm_optimizer)
gen_net = build_generator_net(look_back, n_features)
net = Sequential()
net.add(gen_net)
gen_input = Input((look_back, n_features))
gen_output = net(gen_input)
gen_output = Reshape((1, 1))(gen_output)
gen_output_plus_pred = concatenate([gen_input, gen_output], axis=1)
valid = dis_net(gen_output_plus_pred)
dis_net.trainable = False # We need to freeze the discriminator's weights
model = Model(gen_input, valid)
model.compile(loss='binary_crossentropy', optimizer=am_optimizer)
print('Adversarial summary:')
model.summary()
return model, dis_model, dis_net, gen_net
adv_model, dis_model, dis_net, gen_net = build_adversarial_model(look_back,
n_features,
dm_optimizer=Adam(lr=0.0001),
am_optimizer=Adam(lr=0.0001))
def get_batch(X, y, batch_idx, batch_size):
X_batch = X[batch_idx:batch_idx+batch_size]
y_batch = y[batch_idx:batch_idx+batch_size]
return X_batch, y_batch
def get_real_and_fake_samples(X, y, generator):
X_real = np.concatenate((X, y.reshape(-1, 1, 1)), axis=1)
y_pred = generator.predict(X)
X_fake = np.concatenate((X, y_pred.reshape(-1, 1, 1)), axis=1)
return X_real, X_fake
def train_GAN(X, y, adv_model, dis_model, gen_net, n_epochs=100, batch_size=100, look_back=3, n_features=1):
data_len = len(X)
n_points = look_back + 1 # look back + prediction
hist_d_loss_real = []
hist_d_loss_fake = []
hist_d_loss = []
hist_g_loss = []
for epoch in range(n_epochs):
for batch_idx in range(0, data_len, batch_size):
X_batch, y_batch = get_batch(X, y, batch_idx, batch_size)
X_real, X_fake = get_real_and_fake_samples(X_batch, y_batch, gen_net)
y_real = np.ones((X_real.shape[0], 1))
y_fake = np.zeros((X_fake.shape[0], 1))
# Train discriminator
d_loss_real = dis_model.train_on_batch(X_real, y_real)
d_loss_fake = dis_model.train_on_batch(X_fake, y_fake)
d_loss = np.add(d_loss_real, d_loss_fake) / 2
# Train generator
g_loss = adv_model.train_on_batch(X_batch, y_real)
print("Epoch %d/%d [D loss: %f] [G loss: %f]" % (epoch+1, n_epochs, d_loss, g_loss))
rnd_idx = np.random.randint(0, len(X_real))
# print('Real example: ', X_real[rnd_idx].reshape(-1,))
# print('Fake example: ', X_fake[rnd_idx].reshape(-1,))
hist_d_loss_real.append(d_loss_real)
hist_d_loss_fake.append(d_loss_fake)
hist_d_loss.append(d_loss)
hist_g_loss.append(g_loss)
return hist_d_loss_real, hist_d_loss_fake, hist_d_loss, hist_g_loss
hist_d_loss_real, hist_d_loss_fake, hist_d_loss, hist_g_loss = \
train_GAN(X_train, y_train, adv_model, dis_model, gen_net, n_epochs=100, batch_size=100)
fig, ax = plt.subplots(figsize=(15, 6))
plt.plot(hist_d_loss_real)
plt.plot(hist_d_loss_fake)
plt.plot(hist_d_loss)
plt.plot(hist_g_loss)
ax.set_xlabel('Epochs')
ax.legend(['D loss - real',
'D loss - fake',
'D loss',
'G loss'])
# walk-forward validation
def GAN_walk_forward(model, X, y, size=1, debug=True):
predictions = list()
limit_range = len(X[:size])
for t in range(0, limit_range):
x_input = X[t]
x_input = x_input.reshape(1, x_input.shape[0], x_input.shape[1])
y_output = model.predict(x_input)
predicted = y_output.reshape(1,)
expected = y[t:t+1].reshape(1,)
predictions.append(predicted)
if debug == True:
print('predicted={}, expected={}'.format(predicted, expected))
return np.array(predictions)
size = 21 # (approx. one month) and 1 day prediction by default
predictions = GAN_walk_forward(gen_net, X_test, y_test, size=size)
def revert_transformations(test, predictions):
original_y_test = scaler.inverse_transform(test)
original_predictions = scaler.inverse_transform(predictions.reshape(-1, 1))
# original_y_test = inv_boxcox(original_y_test, lmbda)
# original_predictions = inv_boxcox(original_predictions, lmbda)
return original_y_test, original_predictions
original_y_test, original_predictions = revert_transformations(y_test, predictions)
from utils import rmse, plot_walk_forward_validation
plot_walk_forward_validation(original_y_test, original_predictions, size)
print('GAN[%d days, %d day forecast] - RMSE: %.3f' % (size, 1, rmse(original_predictions, original_y_test[:size])))
# walk-forward validation by looping over the steps
def GAN_walk_forward_by_looping(model, X, y, size=1, steps=1, debug=True):
predictions = list()
limit_range = len(X[:size])
look_back = X[0].shape[0]
n_features = X[0].shape[1]
for t in range(0, limit_range, steps):
x_input = X[t]
x_input = x_input.reshape(1, look_back, n_features)
step_predictions = []
for p in range(steps):
y_output = model.predict(x_input)
# save the prediction in the sequence
out_val = y_output.item(0)
step_predictions.append(out_val)
# get rid of the first price
in_vals = x_input.flatten()[1:]
# appends the new predicted one
in_vals = np.append(in_vals, out_val)
# reshape for the next prediction
x_input = in_vals.reshape(1, look_back, n_features)
predicted = np.array(step_predictions).reshape(steps,)
expected = y[t:t+steps].reshape(steps,)
predictions.append(predicted)
if debug == True:
print('predicted={}, expected={}'.format(predicted, expected))
return np.array(predictions)
size = 21 # (approx. one month)
steps = 3 # 3 days prediction
predictions = GAN_walk_forward_by_looping(gen_net, X_test, y_test, size=size, steps=steps)
original_y_test, original_predictions = revert_transformations(y_test, predictions)
plot_walk_forward_validation(original_y_test, original_predictions, size, steps)
print('GAN[%d days, %d day forecast] - RMSE: %.3f' % (size, steps, rmse(original_predictions, original_y_test[:size])))
size = 21 # approx. one month
steps = 7 # 7 days prediction
predictions = GAN_walk_forward_by_looping(gen_net, X_test, y_test, size=size, steps=steps)
original_y_test, original_predictions = revert_transformations(y_test, predictions)
plot_walk_forward_validation(original_y_test, original_predictions, size, steps)
print('GAN[%d days, %d day forecast] - RMSE: %.3f' % (size, steps, rmse(original_predictions, original_y_test[:size])))
size = 21 # approx. one month
steps = 21 # 21 days prediction
predictions = GAN_walk_forward_by_looping(gen_net, X_test, y_test, size=size, steps=steps)
original_y_test, original_predictions = revert_transformations(y_test, predictions)
plot_walk_forward_validation(original_y_test, original_predictions, size, steps)
print('GAN[%d days, %d day forecast] - RMSE: %.3f' % (size, steps, rmse(original_predictions, original_y_test[:size])))
size = 42 # approx. two months
steps = 42 # 42 days prediction
predictions = GAN_walk_forward_by_looping(gen_net, X_test, y_test, size=size, steps=steps)
original_y_test, original_predictions = revert_transformations(y_test, predictions)
plot_walk_forward_validation(original_y_test, original_predictions, size, steps)
print('GAN[%d days, %d day forecast] - RMSE: %.3f' % (size, steps, rmse(original_predictions, original_y_test[:size])))
size = 252 # approx. one year
steps = 252 # 252 days prediction
predictions = GAN_walk_forward_by_looping(gen_net, X_test, y_test, size=size, steps=steps)
original_y_test, original_predictions = revert_transformations(y_test, predictions)
plot_walk_forward_validation(original_y_test, original_predictions, size, steps)
print('GAN[%d days, %d day forecast] - RMSE: %.3f' % (size, steps, rmse(original_predictions, original_y_test[:size])))
```
| github_jupyter |
# Cluster Analysis
This exercise sheet covers the following concepts.
- $k$-Means Clustering
- EM-Clustering
- DBSCAN Clustering
- Hierarchical Clustering
## Libraries and Data
We use the boston house price data in this exercise. The data is available as part of ```sklearn``` for [Python](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets).
Last week we explored the boston data, this week we use it for clustering. You will apply both $k$-means clustering and DB clustering to the boston data using all fourteen columns. Functions for all clustering algorithms are available in ```sklearn``` for Python. If you experience problems with the visualizations, ensure that your ```matplotlib``` version is at least 3.0.1 and your ```seaborn``` version is at least 0.9.0.
There are a couple of problems with clustering data like the boston data that you will have to solve during this exercise.
- The different features of the data are on different scales, which influences the results.
- The data has fourteen dimensions. This makes visualizing the clusters difficult. You can try a dimension reduction technique like Principal Component Analysis (PCA) to get only two dimensions or use pair-wise plots. Both have advantages and drawbacks, which you should explore as part of this exercise.
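To make the PCA-versus-pair-plot trade-off concrete, here is a minimal sketch on synthetic data (the frame, column names and `k=3` are illustrative stand-ins, not the boston data):

```python
# Illustrative sketch: two ways to visualize clusters in high-dimensional data.
# The data, column names and k=3 are made up; swap in the scaled boston frame.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(100, 4)), columns=list("ABCD"))
scaled = StandardScaler().fit_transform(data)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Option 1: reduce to two principal components and scatter-plot them.
pca2d = PCA(n_components=2).fit_transform(scaled)

# Option 2: pair-wise plots of the original features, colored by cluster:
# import seaborn as sns
# sns.pairplot(data.assign(cluster=labels), hue="cluster")
```

Option 1 gives a single plot but mixes the original features into abstract components; option 2 keeps the features interpretable but needs one panel per feature pair, which gets unwieldy at fourteen dimensions.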
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize = (16,9))
plt.rcParams['figure.dpi'] = 150
from sklearn.datasets import *
boston_dataset = load_boston()
boston= pd.DataFrame(boston_dataset.data, columns= boston_dataset.feature_names)
from sklearn.preprocessing import StandardScaler
x = StandardScaler().fit_transform(boston)
x = pd.DataFrame(x, columns=boston_dataset.feature_names)
print(boston_dataset.DESCR)
sns.distplot(x.NOX, label="new")
sns.distplot(boston.NOX, label="old")
plt.legend()
```
# Approach 1: Principal Components
```
from sklearn.decomposition import PCA
pcamodel = PCA(n_components=1) #PCA is clever, it also centers the data first
pca1 = pcamodel.fit_transform(x)
pca1.shape
sns.rugplot(pca1.flatten())
sns.swarmplot(x=pca1.flatten())
pcamodel = PCA(n_components=2) #PCA is clever, it also centers the data first
pca2 = pcamodel.fit_transform(x)
pca2.shape
pca2
pca2[:,0], pca2[:,1]
sns.scatterplot(pca2[:,0], pca2[:,1], alpha=0.3)
plt.title(f"Principal components with variance ratio [c1,c2] = {pcamodel.explained_variance_ratio_}");
plt.rcParams['figure.dpi'] = 100
X= -2 * np.random.rand(100,2)
X1 = 1 + 2 * np.random.rand(50,2)
X[50:100, :] = X1
plt.scatter(X[ : , 0], X[ :, 1], s = 50, c = "b")
plt.show()
from sklearn.cluster import KMeans
Kmean = KMeans(n_clusters=2)
Kmean.fit(X)
Kmean.cluster_centers_
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
# create dataset
X, y = make_blobs(
n_samples=150, n_features=2,
centers=3, cluster_std=0.5,
shuffle=True, random_state=0
)
# plot
plt.scatter(
X[:, 0], X[:, 1],
edgecolor='black', s=50
)
plt.show()
from sklearn.cluster import KMeans
km = KMeans(
n_clusters=3, init='random',
n_init=1, max_iter=2,
tol=1e-04, random_state=0
)
y_km = km.fit_predict(X)
y_km
plt.scatter(X[:,0], X[:,1], c=y_km, cmap=plt.cm.Paired, alpha=0.4)
plt.scatter(km.cluster_centers_[:, 0],km.cluster_centers_[:, 1],
s=250, marker='*', label='centroids',
edgecolor='black',
c=[0,1,2],cmap=plt.cm.Paired,)
XC= X.transpose()
plt.scatter(XC[0], XC[1])
# plot the 3 clusters
plt.scatter(
X[y_km == 0, 0], X[y_km == 0, 1],
s=50, c='lightgreen',
label='cluster 1'
)
plt.scatter(
X[y_km == 1, 0], X[y_km == 1, 1],
s=50, c='orange',
marker='o', edgecolor='black',
label='cluster 2'
)
plt.scatter(
X[y_km == 2, 0], X[y_km == 2, 1],
s=50, c='lightblue',
label='cluster 3'
)
# plot the centroids
plt.scatter(
km.cluster_centers_[:, 0], km.cluster_centers_[:, 1],
s=250, marker='*',
label='centroids'
)
plt.legend(scatterpoints=1)
plt.grid()
plt.show()
```
## $k$-Means Clustering
Use $k$-Means to cluster the data and find a suitable number of clusters for $k$. Use a combination of knowledge you already have about the data, visualizations, and the within-cluster sum of squares (WSS) to determine a suitable number of clusters.
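As a hedged sketch of the WSS criterion (the elbow heuristic), using synthetic blobs in place of the boston data - the cluster counts and seed are illustrative:

```python
# Elbow heuristic: compute the within-cluster sum of squares (WSS, exposed as
# KMeans.inertia_) for a range of k and look for the bend in the curve.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.7, random_state=0)

wss = {}
for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wss[k] = km.inertia_  # WSS always decreases as k grows

# plt.plot(list(wss.keys()), list(wss.values()))
# The elbow is expected near the true number of centers.
```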
## EM Clustering
(Note: EM clustering is also known as Gaussian Mixture Models and can be found in the mixture package of ```sklearn```.)
Use the EM algorithm to determine multivariate clusters in the data. Determine a suitable number of clusters using the Bayesian Information Criterion (BIC).
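A minimal sketch of BIC-based selection with `sklearn.mixture.GaussianMixture` (synthetic data; the component range is illustrative):

```python
# Fit EM / Gaussian Mixture Models for several component counts and keep the
# model with the lowest Bayesian Information Criterion (BIC).
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=400, centers=3, cluster_std=0.6, random_state=1)

bic = {}
for n in range(1, 7):
    gmm = GaussianMixture(n_components=n, covariance_type="full",
                          random_state=0).fit(X)
    bic[n] = gmm.bic(X)  # lower is better: trades fit quality vs. complexity

best_n = min(bic, key=bic.get)
```

Unlike the WSS, the BIC penalizes extra components, so it does not decrease monotonically and can simply be minimized.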
## DBSCAN Clustering
Use DBSCAN to cluster the data and find suitable values for $\epsilon$ and $minPts$. Use a combination of knowledge you already have about the data and visualizations.
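One common heuristic for $\epsilon$ is the k-distance plot: sort every point's distance to its $minPts$-th nearest neighbour and look for the knee. A sketch on synthetic data (the `eps` and `min_pts` values are illustrative):

```python
# k-distance heuristic for DBSCAN's eps, plus a fit with illustrative values.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=2)
min_pts = 5

# Sorted distance of each point to its min_pts-th neighbour (self included);
# plotting this curve and picking the knee suggests a candidate eps.
dist, _ = NearestNeighbors(n_neighbors=min_pts).fit(X).kneighbors(X)
k_dist = np.sort(dist[:, -1])

labels = DBSCAN(eps=0.6, min_samples=min_pts).fit_predict(X)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # label -1 is noise
```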
## Hierarchical Clustering
(Note: Hierarchical clustering is also known as agglomerative clustering and can be found under that name in ```sklearn```. This task requires at least ```sklearn``` version 0.22, which is still under development (October 2019). You can find guidance on how to install packages in Jupyter notebook [here](https://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/) and regarding the development version of ```sklearn``` [here](https://scikit-learn.org/stable/developers/advanced_installation.html).)
Use hierarchical clustering with single linkage to determine clusters within the housing data. Find a suitable cut-off for the clusters using a dendrogram.
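If upgrading `sklearn` is inconvenient, `scipy` offers the same single-linkage hierarchy plus the dendrogram; a sketch with an illustrative cut-off distance:

```python
# Single-linkage hierarchical clustering with SciPy; fcluster cuts the tree
# at a distance threshold read off the dendrogram (t=1.0 is illustrative).
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.5, random_state=3)

Z = linkage(X, method="single")  # (n-1) x 4 merge matrix
labels = fcluster(Z, t=1.0, criterion="distance")

# from scipy.cluster.hierarchy import dendrogram
# dendrogram(Z)  # inspect merge heights to choose the cut-off t
```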
## Compare the Clustering Results
How are the clustering results different between the algorithms? Consider, e.g., the number of clusters, the shape of clusters, general problems with using the algorithms, and the insights you get from each algorithm.
You may also use this to better understand the differences between the algorithms. For example, how are the results from EM clustering different/similar to the results of the $k$-Means clustering? Is there a relationship between the WSS and the BIC? How are the mean values of EM related to the centroids of $k$-Means? What is the relationship between the parameters for DBSCAN and the cut-off for the hierarchical clustering?
| github_jupyter |
## Step 1: Get and load the data
Go to Project Gutenberg at https://www.gutenberg.org/ebooks/24407, get all the data, and put it into your `data/recipes` folder.
```
import os
data_folder = os.path.join('data/recipes')
all_recipe_files = [os.path.join(data_folder, fname)
for fname in os.listdir(data_folder)]
documents = {}
for recipe_fname in all_recipe_files:
bname = os.path.basename(recipe_fname)
recipe_number = os.path.splitext(bname)[0]
with open(recipe_fname, 'r') as f:
documents[recipe_number] = f.read()
corpus_all_in_one = ' '.join([doc for doc in documents.values()])
print("Number of docs: {}".format(len(documents)))
print("Corpus size (char): {}".format(len(corpus_all_in_one)))
```
## Step 2: Let's tokenize
What this actually means is that we will split the raw string into a list of tokens, where a "token" essentially is a meaningful unit of text such as **words, phrases, punctuation, numbers, dates, ...**
```
from nltk.tokenize import word_tokenize
all_tokens = [token for token in word_tokenize(corpus_all_in_one)]
print("Total number of tokens: {}".format(len(all_tokens)))
```
## Step 3: Let's do a word count
We start with a simple word count using the `collections.Counter` class.
Why are we doing this?
We want to know the number of times a word occurs in the whole corpus and in how many docs it occurs.
```
from collections import Counter
total_word_freq = Counter(all_tokens)
for word, freq in total_word_freq.most_common(20):
# Let's look at the top 20 words in descending order
print("{}\t{}".format(word, freq))
```
## Step 4: Stop words
Obviously, a lot of the words above were expected - and quite boring, since `(`, `)` or the full stop is exactly what one would expect. If it were a scary novel, a lot of `!` would appear.
We call these types of words **stop words**, and they are pretty meaningless in themselves, right?
Also, you will see that there is no universal list of stop words, *and* removing them can have a desirable or undesirable effect, right?
So let's import stop words from the big and mighty nltk library
```
from nltk.corpus import stopwords
import string
print(stopwords.words('english'))
print(len(stopwords.words('english')))
print(string.punctuation)
```
### Tip: A little bit about strings and digits btw
There is a more Pythonic way to do this as well, but that's for another time. You can play a little game by creating a password generator and checking out all kinds of modules in `string` as well as `crypt` (there is `cryptography` as well).
```
string.ascii_letters
string.ascii_lowercase
string.ascii_uppercase
# How to get them all including symbols and make a cool password
import random
char_set = string.ascii_letters + string.digits + string.punctuation
print("".join(random.sample(char_set*9, 9)))
import crypt
passwd = input("Enter your email: ")
value = '$1$' + ''.join([random.choice(string.ascii_letters + string.digits) for _ in range(16)])
# print("%s" % value)
print(crypt.crypt(passwd, value))
```
OK, we got distracted a bit, so we're back 😅
**So, back to where we were...**
```
stop_list = stopwords.words('english') + list(string.punctuation)
tokens_no_stop = [token for token in all_tokens if token not in stop_list]
total_term_freq_no_stop = Counter(tokens_no_stop)
for word, freq in total_term_freq_no_stop.most_common(25):
print("{}\t{}".format(word, freq))
```
Do you see capitalized When and The?
```
print(total_term_freq_no_stop['olive'])
print(total_term_freq_no_stop['olives'])
print(total_term_freq_no_stop['Olive'])
print(total_term_freq_no_stop['Olives'])
print(total_term_freq_no_stop['OLIVE'])
print(total_term_freq_no_stop['OLIVES'])
```
## Step 5: Text Normalization
Replacing tokens with a canonical form, so we can group together different spellings / variations of the same word:
- lowercases
- stemming
- UStoGB mapping
- synonym mapping
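The stemming step is covered below with NLTK; the other mappings can be sketched as simple dictionary look-ups (the two mapping tables here are made-up samples, not a real resource):

```python
# Tiny sketch of canonical-form mapping: lowercase, then US-to-GB spelling,
# then synonym mapping. The two dictionaries are illustrative samples only.
us_to_gb = {"color": "colour", "flavor": "flavour"}
synonyms = {"aubergine": "eggplant"}

def normalize(token):
    token = token.lower()               # lowercasing (a one-way transform)
    token = us_to_gb.get(token, token)  # US-to-GB spelling mapping
    token = synonyms.get(token, token)  # synonym mapping
    return token

print([normalize(t) for t in ["Color", "Aubergine", "Olive"]])
# -> ['colour', 'eggplant', 'olive']
```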
Stemming, btw, is the process of reducing words (generally modified or derived forms) to their word stem or root form. The main goal of stemming is to reduce related words to the same stem even when the stem isn't a dictionary word.
As a simple example, lets take this for instance:
1. handsome and handsomely would be stemmed to "handsom" - so the stem does not end up being a word you know!
2. Nice, cool, awesome would be stemmed as nice, cool and awesome
- You must also be careful with one-way transformations such as lowercasing (these you should be able to improve after your training epochs, when loading the computation graph)
Let's take a deeper look at this...
```
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
all_tokens_lowercase = [token.lower() for token in all_tokens]
tokens_normalized = [stemmer.stem(token) for token in all_tokens_lowercase if token not in stop_list]
total_term_freq_normalized = Counter(tokens_normalized)
for word, freq in total_term_freq_normalized.most_common(25):
print("{}\t{}".format(word, freq))
```
Clearly you can see the effect we just discussed above, such as **"littl"** and so on...
## n-grams -- What are they?
An n-gram is a sequence of n items from a given sequence of text or speech. The items can be phonemes, syllables, letters, words or base pairs.
n-grams of text are used quite heavily in text mining and NLP tasks. They are basically sets of words that co-occur within a given sentence, typically moving one word forward. For instance, take `the dog jumps over the car`: if `N = 2` (a bi-gram), then the n-grams would be as such:
- the dog
- dog jumps
- jumps over
- over the
- the car
So, we have 5 n-grams in this case.
And if `N = 3` (tri-gram), then you have four n-grams and so on...
- the dog jumps
- dog jumps over
- jumps over the
- over the car
So, how many N-grams can be in a sentence?
If `X` is the number of words in a sentence `K`, then the number of n-grams for sentence `K` would be:
$$N_{gramsK} = X - (N - 1)$$
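The formula is easy to verify on the example sentence above (a plain-Python sketch; `nltk.ngrams`, used later, does the same sliding-window walk):

```python
# Verify N_grams = X - (N - 1) on "the dog jumps over the car" (X = 6 words).
sentence = "the dog jumps over the car".split()

def ngrams(tokens, n):
    # slide a window of length n one word forward at a time
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

bigrams = ngrams(sentence, 2)   # 6 - (2 - 1) = 5 bigrams
trigrams = ngrams(sentence, 3)  # 6 - (3 - 1) = 4 trigrams
```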
Two popular uses of N-grams:
- For building language models (unigram, bigram, trigram). Google, Yahoo, Microsoft, Amazon, Netflix, etc. use web-scale n-gram models to do things like fixing spellings, word breaking and text summarization
- For developing features for supervised machine learning models such as SVM, MaxEnt and Naive Bayes
**OK, enough lecture**, we move on to the next...
```
from nltk import ngrams
phrases = Counter(ngrams(all_tokens_lowercase, 2)) # N = 2
for phrase, freq in phrases.most_common(25):
    print(phrase, freq)
    # Sorry, I know it's more elegant to write print("{}\t{}".format(phrase, freq)), but it's less intuitive!
phrases = Counter(ngrams(tokens_no_stop, 3)) # N = 3
for phrase, freq in phrases.most_common(25):
print(phrase, freq)
```
| github_jupyter |
# Eight schools
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
The eight schools problem ([Rubin 1981](https://www.jstor.org/stable/1164617)) considers the effectiveness of SAT coaching programs conducted in parallel at eight schools. It has become a classic problem ([Bayesian Data Analysis](http://www.stat.columbia.edu/~gelman/book/), [Stan](https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started)) that illustrates the usefulness of hierarchical modeling for sharing information between exchangeable groups.
The Edward2 implementation below is an adaptation of an Edward 1.0 [tutorial](https://github.com/blei-lab/edward/blob/master/notebooks/eight_schools.ipynb).
# Imports
```
!pip install tensorflow_probability
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import edward2 as ed
import warnings
plt.style.use("ggplot")
warnings.filterwarnings('ignore')
```
# The Data
From Bayesian Data Analysis, section 5.5 (Gelman et al. 2013):
> *A study was performed for the Educational Testing Service to analyze the effects of special coaching programs for SAT-V (Scholastic Aptitude Test-Verbal) in each of eight high schools. The outcome variable in each study was the score on a special administration of the SAT-V, a standardized multiple choice test administered by the Educational Testing Service and used to help colleges make admissions decisions; the scores can vary between 200 and 800, with mean about 500 and standard deviation about 100. The SAT examinations are designed to be resistant to short-term efforts directed specifically toward improving performance on the test; instead they are designed to reflect knowledge acquired and abilities developed over many years of education. Nevertheless, each of the eight schools in this study considered its short-term coaching program to be very successful at increasing SAT scores. Also, there was no prior reason to believe that any of the eight programs was more effective than any other or that some were more similar in effect to each other than to any other.*
For each of the eight schools ($J = 8$), we have an estimated treatment effect $y_j$ and a standard error of the effect estimate $\sigma_j$. The treatment effects in the study were obtained by a linear regression on the treatment group using PSAT-M and PSAT-V scores as control variables. As there was no prior belief that any of the schools were more or less similar or that any of the coaching programs would be more effective, we can consider the treatment effects as [exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables).
```
num_schools = 8 # number of schools
treatment_effects = np.array(
[28, 8, -3, 7, -1, 1, 18, 12], dtype=np.float32) # treatment effects
treatment_stddevs = np.array(
[15, 10, 16, 11, 9, 11, 10, 18], dtype=np.float32) # treatment SE
fig, ax = plt.subplots()
plt.bar(range(num_schools), treatment_effects, yerr=treatment_stddevs)
plt.title("8 Schools treatment effects")
plt.xlabel("School")
plt.ylabel("Treatment effect")
fig.set_size_inches(10, 8)
plt.show()
```
# Model
To capture the data, we use a hierarchical normal model. It follows the generative process,
\begin{align*}
\mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \\
\log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \\
\text{for } & i=1\ldots 8:\\
& \theta_i \sim \text{Normal}\left(\text{loc}{=}\mu,\ \text{scale}{=}\tau \right) \\
& y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
\end{align*}
where $\mu$ represents the prior average treatment effect and $\tau$ controls how much variance there is between schools. The $y_i$ and $\sigma_i$ are observed. As $\tau \rightarrow \infty$, the model approaches the no-pooling model, i.e., each of the school treatment effect estimates is allowed to be more independent. As $\tau \rightarrow 0$, the model approaches the complete-pooling model, i.e., all of the school treatment effects are closer to the group average $\mu$. To restrict the standard deviation to be positive, we draw $\tau$ from a lognormal distribution (which is equivalent to drawing $\log(\tau)$ from a normal distribution).
Following [Diagnosing Biased Inference with Divergences](http://mc-stan.org/users/documentation/case-studies/divergences_and_bias.html), we transform the model above into an equivalent non-centered model:
\begin{align*}
\mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \\
\log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \\
\text{for } & i=1\ldots 8:\\
& \theta_i' \sim \text{Normal}\left(\text{loc}{=}0,\ \text{scale}{=}1 \right) \\
& \theta_i = \mu + \tau \theta_i' \\
& y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
\end{align*}
We reify this model as an Edward2 program:
```
def schools_model(num_schools, treatment_stddevs):
avg_effect = ed.Normal(loc=0., scale=10., name="avg_effect") # `mu` above
avg_stddev = ed.Normal(
loc=5., scale=1., name="avg_stddev") # `log(tau)` above
school_effects_standard = ed.Normal(
loc=tf.zeros(num_schools),
scale=tf.ones(num_schools),
name="school_effects_standard") # `theta_prime` above
school_effects = avg_effect + tf.exp(
avg_stddev) * school_effects_standard # `theta` above
treatment_effects = ed.Normal(
loc=school_effects, scale=treatment_stddevs,
name="treatment_effects") # `y` above
return treatment_effects
log_joint = ed.make_log_joint_fn(schools_model)
def target_log_prob_fn(avg_effect, avg_stddev, school_effects_standard):
"""Unnormalized target density as a function of states."""
return log_joint(
num_schools=num_schools,
treatment_stddevs=treatment_stddevs,
avg_effect=avg_effect,
avg_stddev=avg_stddev,
school_effects_standard=school_effects_standard,
treatment_effects=treatment_effects)
```
# Bayesian Inference
Given data, we perform Hamiltonian Monte Carlo (HMC) to calculate the posterior distribution over the model's parameters.
```
num_results = 5000
num_burnin_steps = 3000
states, kernel_results = tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
tf.zeros([], name='init_avg_effect'),
tf.zeros([], name='init_avg_stddev'),
tf.ones([num_schools], name='init_school_effects_standard'),
],
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.4,
num_leapfrog_steps=3))
avg_effect, avg_stddev, school_effects_standard = states
with tf.Session() as sess:
[
avg_effect_,
avg_stddev_,
school_effects_standard_,
is_accepted_,
] = sess.run([
avg_effect,
avg_stddev,
school_effects_standard,
kernel_results.is_accepted,
])
school_effects_samples = (
avg_effect_[:, np.newaxis] +
np.exp(avg_stddev_)[:, np.newaxis] * school_effects_standard_)
num_accepted = np.sum(is_accepted_)
print('Acceptance rate: {}'.format(num_accepted / num_results))
fig, axes = plt.subplots(8, 2, sharex='col', sharey='col')
fig.set_size_inches(12, 10)
for i in range(num_schools):
axes[i][0].plot(school_effects_samples[:,i])
axes[i][0].title.set_text("School {} treatment effect chain".format(i))
sns.kdeplot(school_effects_samples[:,i], ax=axes[i][1], shade=True)
axes[i][1].title.set_text("School {} treatment effect distribution".format(i))
axes[num_schools - 1][0].set_xlabel("Iteration")
axes[num_schools - 1][1].set_xlabel("School effect")
fig.tight_layout()
plt.show()
print("E[avg_effect] = {}".format(avg_effect_.mean()))
print("E[avg_stddev] = {}".format(avg_stddev_.mean()))
print("E[school_effects_standard] =")
print(school_effects_standard_[:, ].mean(0))
print("E[school_effects] =")
print(school_effects_samples[:, ].mean(0))
# Compute the 95% interval for school_effects
school_effects_low = np.array([
np.percentile(school_effects_samples[:, i], 2.5) for i in range(num_schools)
])
school_effects_med = np.array([
np.percentile(school_effects_samples[:, i], 50) for i in range(num_schools)
])
school_effects_hi = np.array([
np.percentile(school_effects_samples[:, i], 97.5)
for i in range(num_schools)
])
fig, ax = plt.subplots(nrows=1, ncols=1, sharex=True)
ax.scatter(np.array(range(num_schools)), school_effects_med, color='red', s=60)
ax.scatter(
np.array(range(num_schools)) + 0.1, treatment_effects, color='blue', s=60)
avg_effect = avg_effect_.mean()
plt.plot([-0.2, 7.4], [avg_effect, avg_effect], 'k', linestyle='--')
ax.errorbar(
np.array(range(8)),
school_effects_med,
yerr=[
school_effects_med - school_effects_low,
school_effects_hi - school_effects_med
],
fmt='none')
ax.legend(('avg_effect', 'Edward2/HMC', 'Observed effect'), fontsize=14)
plt.xlabel('School')
plt.ylabel('Treatment effect')
plt.title('Edward2 HMC estimated school treatment effects vs. observed data')
fig.set_size_inches(10, 8)
plt.show()
```
We can observe the shrinkage toward the group `avg_effect` above.
```
print("Inferred posterior mean: {0:.2f}".format(
np.mean(school_effects_samples[:,])))
print("Inferred posterior mean se: {0:.2f}".format(
np.std(school_effects_samples[:,])))
```
# Criticism
To get the posterior predictive distribution, i.e., a model of new data $y^*$ given the observed data $y$:
$$ p(y^*|y) \propto \int_\theta p(y^* | \theta)p(\theta |y)d\theta$$
we "intercept" the values of the random variables in the model to set them to the mean of the posterior distribution and sample from that model to generate new data $y^*$.
```
def interceptor(rv_constructor, *rv_args, **rv_kwargs):
    """Replaces prior on effects with empirical posterior mean from MCMC."""
    name = rv_kwargs.pop("name")
    if name == "avg_effect":
        rv_kwargs["value"] = np.mean(avg_effect_, 0)
    elif name == "avg_stddev":
        rv_kwargs["value"] = np.mean(avg_stddev_, 0)
    elif name == "school_effects_standard":
        rv_kwargs["value"] = np.mean(school_effects_standard_, 0)
    return rv_constructor(*rv_args, **rv_kwargs)
with ed.interception(interceptor):
    posterior = schools_model(
        num_schools=num_schools, treatment_stddevs=treatment_stddevs)
with tf.Session() as sess:
    posterior_predictive = sess.run(
        posterior.distribution.sample(sample_shape=(5000)))
fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
    sns.kdeplot(posterior_predictive[:, 2*i], ax=ax[0], shade=True)
    ax[0].title.set_text(
        "School {} treatment effect posterior predictive".format(2*i))
    sns.kdeplot(posterior_predictive[:, 2*i + 1], ax=ax[1], shade=True)
    ax[1].title.set_text(
        "School {} treatment effect posterior predictive".format(2*i + 1))
plt.show()
# The mean predicted treatment effects for each of the eight schools.
prediction = posterior_predictive.mean(axis=0)
```
We can look at the residuals between the treatment effects data and the predictions of the model posterior. These correspond with the plot above which shows the shrinkage of the estimated effects toward the population average.
```
treatment_effects - prediction
```
Because we have a distribution of predictions for each school, we can consider the distribution of residuals as well.
```
residuals = treatment_effects - posterior_predictive
fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
    sns.kdeplot(residuals[:, 2*i], ax=ax[0], shade=True)
    ax[0].title.set_text(
        "School {} treatment effect residuals".format(2*i))
    sns.kdeplot(residuals[:, 2*i + 1], ax=ax[1], shade=True)
    ax[1].title.set_text(
        "School {} treatment effect residuals".format(2*i + 1))
plt.show()
```
# Acknowledgements
This tutorial was originally written in Edward 1.0 ([source](https://github.com/blei-lab/edward/blob/master/notebooks/eight_schools.ipynb)). We thank all contributors to writing and revising that version.
# References
1. Donald B. Rubin. Estimation in parallel randomized experiments. Journal of Educational Statistics, 6(4):377-401, 1981.
2. Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin. Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC, 2013.
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
#for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
df = pd.read_csv(r'D:\ANKIT\GraduateAdmissions\data\Admission_Predict_Ver1.1.csv')  # raw string avoids backslash escapes
df.head()
df.columns
import matplotlib.pyplot as plt
import seaborn as sns
#plt.style.use("dark_background")
import matplotlib as mpl
mpl.rcParams.update(mpl.rcParamsDefault)
```
From the plot below we can clearly see that a CGPA **below 8** is in the danger zone. Universities seem to love students with a **higher CGPA**, making it an important factor for admissions.
However, rest assured, other factors are important too; we will talk more on that later.
```
#plt.figure(figsize=(8,10))
#sns.set_style({"figure.facecolor": "white",})
sns.scatterplot(x='CGPA',y='Chance of Admit ',data=df,size='Chance of Admit ',palette=sns.color_palette("coolwarm",5),hue='University Rating')
sns.despine()
```
* Let us now look at the **research focus**.\
Clearly, many opted for research and many didn't.
```
sns.set_style('white')
sns.distplot(df.Research,label='RRRR',kde=False,color='r')
sns.despine()
```
It's pretty clear that you need some research experience, since **most candidates with a 65% or greater chance** seem to have research experience, with a few exceptions.
```
sns.set_style('whitegrid')
sct=sns.scatterplot(x='CGPA',y='Chance of Admit ',data=df,size='Research',palette=sns.dark_palette('purple',2),hue='Research',alpha=0.7)
sct.axes.hlines(y=0.65, xmin=0, xmax=10, color='r', alpha=0.7, linestyles='--', lw=1)
sns.despine()
unique_Uni=list(set(df['University Rating'].values))
```
So there were **184** distinct CGPA values, due to various factors, namely:
* *CGPA values were not rounded to the nearest decimal place, leading to many similar scores*
* The dataset has **500** entries, which is a decent representation, but not necessarily **extremely accurate**
* There might be more variation in larger datasets due to *sheer student variety*, such as an increase in the number of students with a lower CGPA but better potential getting accepted, i.e. more density in the head and tail of the distribution.

Obviously, there are also anomalies and outliers.
```
unique_CGPA=list(set(df['CGPA'].values))
len(list(set(df['CGPA'].values)))
print(unique_CGPA)
```
Let's visualize the distribution of CGPA.
It looks like most of the population is clustered around the middle of the CGPA range.
```
sns.set_style('white')
sns.set_color_codes()
sns.distplot(df.CGPA,bins=20, color="purple")
sns.despine()
print("Mean CGPA: {:.2f}, variance of CGPA: {:.3f}".format(df.CGPA.mean(), df.CGPA.var()))
df.CGPA.mode()
```
TOEFL scores are weirdly similar
```
sns.set_style('white')
sns.set_color_codes()
sns.distplot(df['TOEFL Score'],bins=20, color="green")
sns.despine()
len(list(set(df['TOEFL Score'].values)))
sns.set_style('white')
sns.set_color_codes()
sns.distplot(df['GRE Score'],bins=15, color="red")
sns.despine()
sns.set_style('white')
sns.set_color_codes()
sns.distplot(df['LOR '],bins=5, color="red")
sns.despine()
c_1=df[(df['Research']==1) & (df['Chance of Admit ']>0.7)]
len(df[(df['Research']==1) & (df['Chance of Admit ']>0.7)])
```
Looks like roughly 300 people with a CGPA of 8 or above seem to have a 70% or higher chance
```
c_2 = df[(df['Chance of Admit ']>=0.70) & (df['CGPA']>=8.0)]  # CGPA is on a 10-point scale
len(c_2)
```
Strangely enough, the research variable is less correlated with the chance of admission than the others.
I think LORs are usually gained as a result of research experience, and therefore have a higher impact.
```
mask = np.zeros_like(df.corr().values)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
    sns.heatmap(df.corr(), annot=True, mask=mask)
import xgboost
```
# Quantum tomography for n qubits
Init state: general GHZ
Target state: 1 layer
Here is the case of $n$ qubits with $n>1$.
The state that needs to be reconstructed is the GHZ state:
$
|GHZ\rangle=\frac{1}{\sqrt{2}}(|0 \ldots 0\rangle+|1 \ldots 1\rangle)=\frac{1}{\sqrt{2}}\left[\begin{array}{c}
1 \\
0 \\
\vdots \\
1
\end{array}\right]
$
$
|GHZ\rangle\langle GHZ|=\frac{1}{2}\left[\begin{array}{cccc}
1 & 0 & \ldots & 1 \\
0 & \ldots & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
1 & 0 & \ldots & 1
\end{array}\right]
$
In general, the elements with value 1 can be lower or greater depending on $\theta$. The image below shows the circuit that constructs the GHZ state in the 4-qubit case:
<img src="../../images/general_ghz.png" width=500px/>
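As a quick sketch (assuming the convention $\cos(\theta/2)|0\ldots0\rangle + \sin(\theta/2)|1\ldots1\rangle$ for the general GHZ state; the notebook's own `qtm.nqubit.create_ghz_state` may use a different convention), the statevector can be written directly in NumPy:

```python
import numpy as np

def general_ghz_state(num_qubits, theta):
    # Assumed convention (not taken from qtm.nqubit):
    # cos(theta/2)|0...0> + sin(theta/2)|1...1>
    psi = np.zeros(2 ** num_qubits)
    psi[0] = np.cos(theta / 2)    # amplitude of |0...0>
    psi[-1] = np.sin(theta / 2)   # amplitude of |1...1>
    return psi

psi = general_ghz_state(3, np.pi / 2)
```

For $\theta = \pi/2$ this reduces to the standard GHZ state shown above.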
The reconstructed circuit will include $R_X$, $R_Z$ and $CNOT$ gates:
<img src="../../images/1layer.png"/>
```
import qiskit
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.insert(1, '../')
import qtm.base, qtm.constant, qtm.nqubit, qtm.onequbit
num_qubits = 3
thetas = np.zeros((2*num_qubits*3))
theta = np.random.uniform(0, 2*np.pi)
# Init quantum tomography n qubit
qc = qiskit.QuantumCircuit(num_qubits, num_qubits)
# qc = qtm.nqubit.create_ghz_state(qc, theta)
qc = qtm.nqubit.u_cluster_nqubit(qc, thetas)
qc.draw('mpl', scale = 6)
# Init parameters
num_qubits = 3
thetas = np.zeros((2*num_qubits*3))
theta = np.random.uniform(0, 2*np.pi)
# Init quantum tomography n qubit
qc = qiskit.QuantumCircuit(num_qubits, num_qubits)
qc = qtm.nqubit.create_ghz_state(qc)
# Minimize the loss value over 200 steps
thetas, loss_values = qtm.base.fit(
    qc, num_steps=200, thetas=thetas,
    create_circuit_func=qtm.nqubit.u_cluster_nqubit,
    grad_func=qtm.base.grad_loss,
    loss_func=qtm.base.loss_basis,
    optimizer=qtm.base.sgd,
    verbose=1
)
# Plot the loss value at each step
plt.plot(loss_values)
plt.xlabel("Step")
plt.ylabel("Loss value")
plt.show()
plt.plot(loss_values)
plt.xlabel("Step")
plt.ylabel("Loss value")
plt.savefig('ghz_init2', dpi = 600)
# Get statevector from circuit
psi = qiskit.quantum_info.Statevector.from_instruction(qc)
rho_psi = qiskit.quantum_info.DensityMatrix(psi)
psi_hat = qiskit.quantum_info.Statevector(qtm.base.get_u_hat(
    thetas=thetas,
    create_circuit_func=qtm.nqubit.u_cluster_nqubit,
    num_qubits=qc.num_qubits
))
rho_psi_hat = qiskit.quantum_info.DensityMatrix(psi_hat)
# Calculate the metrics
trace, fidelity = qtm.base.get_metrics(psi, psi_hat)
print("Trace: ", trace)
print("Fidelity: ", fidelity)
qiskit.visualization.plot_state_city(rho_psi, title = 'rho_psi')
qiskit.visualization.plot_state_city(rho_psi_hat, title = 'rho_psi_hat')
```
# <div align="center">8.2 Challenges in Neural Network Optimization</div>
---------------------------------------------------------------------
You can find me on GitHub:
> ###### [ GitHub](https://github.com/lev1khachatryan)
Optimization in general is an extremely difficult task. Traditionally, machine
learning has avoided the difficulty of general optimization by carefully designing
the objective function and constraints to ensure that the optimization problem is
convex. When training neural networks, we must confront the general non-convex
case. Even convex optimization is not without its complications. In this section,
we summarize several of the most prominent challenges involved in optimization
for training deep models.
# <div align="center">8.2.1 Ill-Conditioning</div>
---------------------------------------------------------------------
Some challenges arise even when optimizing convex functions. Of these, the most
prominent is ill-conditioning of the Hessian matrix H. This is a very general
problem in most numerical optimization, convex or otherwise.
The ill-conditioning problem is generally believed to be present in neural
network training problems. Ill-conditioning can manifest by causing SGD to get
“stuck” in the sense that even very small steps increase the cost function.
Recall that a second-order Taylor series expansion of the
cost function predicts that a gradient descent step of $−\epsilon g$ will add
<img src='asset/8_2/1.png'>
to the cost. Ill-conditioning of the gradient becomes a problem when $\frac{1}{2}\epsilon^{2} g^{T}Hg$
exceeds $\epsilon g^{T}g$. To determine whether ill-conditioning is detrimental to a neural
network training task, one can monitor the squared gradient norm $g^{T}g$ and the $g^{T}Hg$ term. In many cases, the gradient norm does not shrink significantly
throughout learning, but the $g^{T}Hg$ term grows by more than an order of magnitude.
The result is that learning becomes very slow despite the presence of a strong
gradient because the learning rate must be shrunk to compensate for even stronger
curvature. Figure 8.1 shows an example of the gradient increasing significantly
during the successful training of a neural network.
<img src='asset/8_2/2.png'>
Figure 8.1: Gradient descent often does not arrive at a critical point of any kind. In this
example, the gradient norm increases throughout training of a convolutional network used
for object detection. (Left)A scatterplot showing how the norms of individual gradient
evaluations are distributed over time. To improve legibility, only one gradient norm
is plotted per epoch. The running average of all gradient norms is plotted as a solid
curve. The gradient norm clearly increases over time, rather than decreasing as we would
expect if the training process converged to a critical point. (Right)Despite the increasing
gradient, the training process is reasonably successful. The validation set classification
error decreases to a low level.
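The diagnostic described above, comparing the squared gradient norm $g^{T}g$ with the curvature term $g^{T}Hg$, can be illustrated on a toy ill-conditioned quadratic (a sketch for intuition, not from the book):

```python
import numpy as np

# Toy ill-conditioned quadratic cost f(x) = 0.5 * x^T H x, so g = Hx and a
# gradient step of -eps*g changes the cost by exactly the two Taylor terms:
# 0.5 * eps^2 * g^T H g  -  eps * g^T g.
H = np.diag([1.0, 1000.0])      # condition number 1000
x = np.array([1.0, 1.0])
g = H @ x

eps = 0.01
predicted_change = 0.5 * eps ** 2 * (g @ H @ g) - eps * (g @ g)

def f(x):
    return 0.5 * x @ H @ x

actual_change = f(x - eps * g) - f(x)
# predicted_change > 0: even this small step *increases* the cost, because
# the curvature term g^T H g dominates the squared gradient norm g^T g.
```

Here the Taylor prediction is exact (the cost is quadratic), and the step increases the cost, exactly the "stuck" behavior described above.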
Though ill-conditioning is present in other settings besides neural network
training, some of the techniques used to combat it in other contexts are less
applicable to neural networks. For example, Newton’s method is an excellent tool
for minimizing convex functions with poorly conditioned Hessian matrices, but in
the subsequent sections we will argue that Newton’s method requires significant
modification before it can be applied to neural networks.
# <div align="center">8.2.2 Local Minima</div>
---------------------------------------------------------------------
One of the most prominent features of a convex optimization problem is that it
can be reduced to the problem of finding a local minimum. Any local minimum is guaranteed to be a global minimum. Some convex functions have a flat region at
the bottom rather than a single global minimum point, but any point within such
a flat region is an acceptable solution. When optimizing a convex function, we
know that we have reached a good solution if we find a critical point of any kind.
***With non-convex functions, such as neural nets***, it is possible to have many
local minima. Indeed, nearly any deep model is essentially guaranteed to have
an extremely large number of local minima. However, as we will see, this is not
necessarily a major problem.
Neural networks and any models with multiple equivalently parametrized latent
variables all have multiple local minima because of the model ***identifiability
problem***. A model is said to be identifiable if a sufficiently large training set can
rule out all but one setting of the model’s parameters. Models with latent variables
are often not identifiable because we can obtain equivalent models by exchanging
latent variables with each other. For example, we could take a neural network and
modify layer 1 by swapping the incoming weight vector for unit i with the incoming
weight vector for unit j, then doing the same for the outgoing weight vectors. If we
have $m$ layers with $n$ units each, then there are $n!^{m}$ ways of arranging the hidden
units. This kind of non-identifiability is known as ***weight space symmetry***.
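Weight space symmetry is easy to verify numerically: permuting the hidden units, together with their incoming and outgoing weights, leaves the network function unchanged. A minimal sketch:

```python
import numpy as np

# Toy one-hidden-layer network; permuting the hidden units (columns of W1,
# entries of b1, rows of W2) by the same permutation leaves f(x) unchanged.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6)); b1 = rng.normal(size=6)
W2 = rng.normal(size=(6, 2))
x = rng.normal(size=4)

def net(W1, b1, W2):
    return np.tanh(x @ W1 + b1) @ W2

perm = rng.permutation(6)
out = net(W1, b1, W2)
out_permuted = net(W1[:, perm], b1[perm], W2[perm, :])
```

Both calls produce identical outputs, so the two weight settings are distinct points in parameter space with exactly the same cost.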
In addition to weight space symmetry, many kinds of neural networks have
additional causes of non-identifiability. For example, in any rectified linear or
maxout network, we can scale all of the incoming weights and biases of a unit by
α if we also scale all of its outgoing weights by 1/α. This means that—if the cost
function does not include terms such as weight decay that depend directly on the
weights rather than the models’ outputs—every local minimum of a rectified linear
or maxout network lies on an (m × n)-dimensional hyperbola of equivalent local
minima.
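The ReLU scaling symmetry can be checked the same way (a toy sketch; the value of `alpha` is arbitrary):

```python
import numpy as np

# ReLU scaling symmetry: scale a unit's incoming weights and bias by alpha
# and its outgoing weights by 1/alpha; since relu(a*z) = a*relu(z) for a > 0,
# the network computes exactly the same function.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 5)); b1 = rng.normal(size=5)
W2 = rng.normal(size=(5, 3))
x = rng.normal(size=4)

def relu_net(W1, b1, W2):
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2

alpha = 3.7
out = relu_net(W1, b1, W2)
out_scaled = relu_net(alpha * W1, alpha * b1, W2 / alpha)
```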
These model identifiability issues mean that there can be an extremely large
or even uncountably infinite amount of local minima in a neural network cost
function. However, all of these local minima arising from non-identifiability are
equivalent to each other in cost function value. As a result, these local minima are
not a problematic form of non-convexity.
Local minima can be problematic if they have high cost in comparison to the
global minimum. One can construct small neural networks, even without hidden
units, that have local minima with higher cost than the global minimum (Sontag
and Sussman, 1989; Brady et al., 1989; Gori and Tesi, 1992). If local minima
with high cost are common, this could pose a serious problem for gradient-based
optimization algorithms.
It remains an open question whether there are many local minima of high cost for networks of practical interest and whether optimization algorithms encounter
them. For many years, most practitioners believed that local minima were a
common problem plaguing neural network optimization. Today, that does not
appear to be the case. The problem remains an active area of research, but experts
now suspect that, for sufficiently large neural networks, most local minima have a
low cost function value, and that it is not important to find a true global minimum
rather than to find a point in parameter space that has low but not minimal cost.
Many practitioners attribute nearly all difficulty with neural network optimization to local minima. We encourage practitioners to carefully test for specific
problems. A test that can rule out local minima as the problem is to plot the
norm of the gradient over time. If the norm of the gradient does not shrink to
insignificant size, the problem is neither local minima nor any other kind of critical
point. This kind of negative test can rule out local minima. In high dimensional
spaces, it can be very difficult to positively establish that local minima are the
problem. Many structures other than local minima also have small gradients.
# <div align="center">8.2.3 Plateaus, Saddle Points and Other Flat Regions</div>
---------------------------------------------------------------------
For many high-dimensional non-convex functions, local minima (and maxima)
are in fact rare compared to another kind of point with zero gradient: a saddle
point. Some points around a saddle point have greater cost than the saddle point,
while others have a lower cost. At a saddle point, the Hessian matrix has both
positive and negative eigenvalues. Points lying along eigenvectors associated with
positive eigenvalues have greater cost than the saddle point, while points lying
along negative eigenvalues have lower value. We can think of a saddle point as
being a local minimum along one cross-section of the cost function and a local
maximum along another cross-section.
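For example, $f(x, y) = x^{2} - y^{2}$ has a saddle at the origin, which the signs of the Hessian's eigenvalues reveal directly:

```python
import numpy as np

# f(x, y) = x^2 - y^2 has a critical point at the origin (zero gradient);
# its Hessian [[2, 0], [0, -2]] has one positive and one negative eigenvalue,
# so the origin is a minimum along x and a maximum along y: a saddle point.
H = np.array([[2.0, 0.0], [0.0, -2.0]])
eigvals = np.linalg.eigvalsh(H)   # sorted ascending
```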
Many classes of random functions exhibit the following behavior: in low-dimensional spaces, local minima are common. In higher dimensional spaces, local
minima are rare and saddle points are more common. For a function $f : \mathbb{R}^{n} \to \mathbb{R}$ of
this type, the expected ratio of the number of saddle points to local minima grows
exponentially with n. To understand the intuition behind this behavior, observe
that the Hessian matrix at a local minimum has only positive eigenvalues. The
Hessian matrix at a saddle point has a mixture of positive and negative eigenvalues.
Imagine that the sign of each eigenvalue is generated by flipping a coin. In a single
dimension, it is easy to obtain a local minimum by tossing a coin and getting heads
once. In n-dimensional space, it is exponentially unlikely that all n coin tosses will be heads.
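The coin-flip analogy can be simulated directly (an illustrative sketch): with $n$ independent random signs, the fraction of draws in which all signs are positive is about $2^{-n}$:

```python
import numpy as np

# Coin-flip model of Hessian eigenvalue signs: with n independent random
# signs, the fraction of "all positive" draws (local minima) is about 2**-n.
rng = np.random.default_rng(0)
n, trials = 10, 100_000
signs = rng.choice([-1, 1], size=(trials, n))
frac_minima = np.mean(np.all(signs > 0, axis=1))   # close to 2**-10
```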
An amazing property of many random functions is that the eigenvalues of the
Hessian become more likely to be positive as we reach regions of lower cost. In
our coin tossing analogy, this means we are more likely to have our coin come up
heads n times if we are at a critical point with low cost. This means that local
minima are much more likely to have low cost than high cost. Critical points with
high cost are far more likely to be saddle points. Critical points with extremely
high cost are more likely to be local maxima.
This happens for many classes of random functions. Does it happen for neural
networks? Baldi and Hornik (1989) showed theoretically that shallow autoencoders
(feedforward networks trained to copy their input to their output, described in
chapter 14) with no nonlinearities have global minima and saddle points but no
local minima with higher cost than the global minimum. They observed without
proof that these results extend to deeper networks without nonlinearities. The
output of such networks is a linear function of their input, but they are useful
to study as a model of nonlinear neural networks because their loss function is
a non-convex function of their parameters. Such networks are essentially just
multiple matrices composed together. Saxe et al. (2013) provided exact solutions
to the complete learning dynamics in such networks and showed that learning in
these models captures many of the qualitative features observed in the training of
deep models with nonlinear activation functions. Dauphin et al. (2014) showed
experimentally that real neural networks also have loss functions that contain very
many high-cost saddle points. Choromanska et al. (2014) provided additional
theoretical arguments, showing that another class of high-dimensional random
functions related to neural networks does so as well.
What are the implications of the proliferation of saddle points for training algorithms? For first-order optimization algorithms that use only gradient information,
the situation is unclear. The gradient can often become very small near a saddle
point. On the other hand, gradient descent empirically seems to be able to escape
saddle points in many cases. Goodfellow et al. (2015) provided visualizations of
several learning trajectories of state-of-the-art neural networks, with an example
given in figure 8.2. These visualizations show a flattening of the cost function near
a prominent saddle point where the weights are all zero, but they also show the
gradient descent trajectory rapidly escaping this region. Goodfellow et al. (2015)
also argue that continuous-time gradient descent may be shown analytically to be
repelled from, rather than attracted to, a nearby saddle point, but the situation
may be different for more realistic uses of gradient descent.
For Newton’s method, it is clear that saddle points constitute a problem.
<img src='asset/8_2/3.png'>
Figure 8.2: A visualization of the cost function of a neural network. Image adapted
with permission from Goodfellow et al. (2015). These visualizations appear similar for
feedforward neural networks, convolutional networks, and recurrent networks applied
to real object recognition and natural language processing tasks. Surprisingly, these
visualizations usually do not show many conspicuous obstacles. Prior to the success of
stochastic gradient descent for training very large models beginning in roughly 2012,
neural net cost function surfaces were generally believed to have much more non-convex
structure than is revealed by these projections. The primary obstacle revealed by this
projection is a saddle point of high cost near where the parameters are initialized, but, as
indicated by the blue path, the SGD training trajectory escapes this saddle point readily.
Most of training time is spent traversing the relatively flat valley of the cost function,
which may be due to high noise in the gradient, poor conditioning of the Hessian matrix
in this region, or simply the need to circumnavigate the tall “mountain” visible in the
figure via an indirect arcing path.
Gradient descent is designed to move “downhill” and is not explicitly designed
to seek a critical point. Newton’s method, however, is designed to solve for a
point where the gradient is zero. Without appropriate modification, it can jump
to a saddle point. The proliferation of saddle points in high dimensional spaces
presumably explains why second-order methods have not succeeded in replacing
gradient descent for neural network training. Dauphin et al. (2014) introduced a
saddle-free Newton method for second-order optimization and showed that it
improves significantly over the traditional version. Second-order methods remain
difficult to scale to large neural networks, but this saddle-free approach holds
promise if it could be scaled.
There are other kinds of points with zero gradient besides minima and saddle
points. There are also maxima, which are much like saddle points from the
perspective of optimization—many algorithms are not attracted to them, but
unmodified Newton’s method is. Maxima of many classes of random functions
become exponentially rare in high dimensional space, just like minima do.
There may also be wide, flat regions of constant value. In these locations, the
gradient and also the Hessian are all zero. Such degenerate locations pose major
problems for all numerical optimization algorithms. In a convex problem, a wide,
flat region must consist entirely of global minima, but in a general optimization
problem, such a region could correspond to a high value of the objective function.
# <div align="center">8.2.4 Cliffs and Exploding Gradients</div>
---------------------------------------------------------------------
Neural networks with many layers often have extremely steep regions resembling
cliffs, as illustrated in figure 8.3. These result from the multiplication of several
large weights together. On the face of an extremely steep cliff structure, the
gradient update step can move the parameters extremely far, usually jumping off
of the cliff structure altogether.
<img src='asset/8_2/4.png'>
Figure 8.3: The objective function for highly nonlinear deep neural networks or for
recurrent neural networks often contains sharp nonlinearities in parameter space resulting
from the multiplication of several parameters. These nonlinearities give rise to very
high derivatives in some places. When the parameters get close to such a cliff region, a
gradient descent update can catapult the parameters very far, possibly losing most of the
optimization work that had been done. Figure adapted with permission from Pascanu
et al. (2013).
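A common remedy for cliffs, known as gradient clipping, caps the size of the update before the step is taken. A minimal sketch of norm clipping, one standard variant:

```python
import numpy as np

# Gradient norm clipping: if ||g|| exceeds a threshold, rescale g to that
# norm while keeping its direction, so a cliff cannot catapult the parameters.
def clip_gradient(g, threshold):
    norm = np.linalg.norm(g)
    return g * (threshold / norm) if norm > threshold else g

g = np.array([300.0, -400.0])        # ||g|| = 500, a cliff-scale gradient
g_clipped = clip_gradient(g, 5.0)    # same direction, norm capped at 5
```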
# <div align="center">8.2.5 Long-Term Dependencies</div>
---------------------------------------------------------------------
Another difficulty that neural network optimization algorithms must overcome
arises when the computational graph becomes extremely deep. Feedforward
networks with many layers have such deep computational graphs.
For example, suppose that a computational graph contains a path that consists
of repeatedly multiplying by a matrix $W$. After $t$ steps, this is equivalent to multiplying by $W^{t}$. Suppose that $W$ has an eigendecomposition $W = V \operatorname{diag}(\lambda)V^{-1}$.
In this simple case, it is straightforward to see that
<img src='asset/8_2/5.png'>
Any eigenvalues $λ_{i}$ that are not near an absolute value of 1 will either explode if they
are greater than 1 in magnitude or vanish if they are less than 1 in magnitude. The
***vanishing and exploding gradient problem*** refers to the fact that gradients
through such a graph are also scaled according to $\operatorname{diag}(\lambda)^{t}$. Vanishing gradients
make it difficult to know which direction the parameters should move to improve
the cost function, while exploding gradients can make learning unstable. The cliff
structures described earlier that motivate gradient clipping are an example of the
exploding gradient phenomenon.
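Under the stated assumption that $W$ is diagonalizable, this scaling is easy to verify numerically (an illustrative sketch):

```python
import numpy as np

# With W = V diag(lam) V^{-1}, repeated multiplication gives
# W^t = V diag(lam)^t V^{-1}: eigenvalues above 1 explode, below 1 vanish.
rng = np.random.default_rng(1)
V = rng.normal(size=(3, 3))
lam = np.array([1.5, 1.0, 0.5])
W = V @ np.diag(lam) @ np.linalg.inv(V)

t = 50
Wt_power = np.linalg.matrix_power(W, t)          # repeated multiplication
Wt_eig = V @ np.diag(lam ** t) @ np.linalg.inv(V)  # eigendecomposition view
```

After 50 steps the $\lambda = 1.5$ component has grown by roughly nine orders of magnitude while the $\lambda = 0.5$ component is numerically negligible.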
The repeated multiplication by W at each time step described here is very
similar to the power method algorithm used to find the largest eigenvalue of
a matrix W and the corresponding eigenvector. From this point of view it is
not surprising that $x^{T}W^{t}$ will eventually discard all components of $x$ that are
orthogonal to the principal eigenvector of W.
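This power-method intuition can be checked with a diagonal $W$, where the principal eigenvector is simply the first coordinate axis (a toy sketch):

```python
import numpy as np

# Power-method intuition: repeatedly applying W (and renormalizing) kills off
# every component of x orthogonal to the principal eigenvector. Here W is
# diagonal, so that eigenvector is the first coordinate axis.
W = np.diag([2.0, 0.9, 0.5])
x = np.ones(3)
for _ in range(40):
    x = W @ x
    x /= np.linalg.norm(x)
```

After 40 iterations the components along the smaller eigenvalues have decayed by a factor of about $(0.9/2)^{40}$, leaving essentially only the principal direction.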
Recurrent networks use the same matrix W at each time step, but feedforward
networks do not, so even very deep feedforward networks can largely avoid the
vanishing and exploding gradient problem (Sussillo, 2014).
```
this_notebook_name = "SagittalSpineSegmentationStudy-256"
# Update this folder name for your computer
local_data_folder = r"c:\Data\SagittalSpineSegmentationStudy"
overwrite_existing_data_files = False
# All results and output will be archived with this timestamp
import datetime
save_timestamp = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
print("Save timestamp: {}".format(save_timestamp))
# Learning parameters
import numpy as np
num_extra_layers = 1
ultrasound_size = 128
num_classes = 2
num_epochs = 300
batch_size = 64
max_learning_rate = 0.02
min_learning_rate = 0.00001
regularization_rate = 0.0001
filter_multiplier = 8
class_weights = np.array([0.1, 0.9])
learning_rate_decay = (max_learning_rate - min_learning_rate) / num_epochs
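# Illustrative sketch (hypothetical helper, not part of the original notebook):
# the linear schedule above, evaluated at a given epoch and floored at the
# minimum learning rate. Defaults mirror the constants defined above.
def lr_at_epoch(epoch, lr_max=0.02, lr_min=0.00001, n_epochs=300):
    decay = (lr_max - lr_min) / n_epochs
    return max(lr_min, lr_max - epoch * decay)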
# Training data augmentation parameters
max_shift_factor = 0.12
max_rotation_angle = 10
max_zoom_factor = 1.1
min_zoom_factor = 0.8
# Evaluation parameters
acceptable_margin_mm = 1.0
mm_per_pixel = 1.0
roc_thresholds = [0.9, 0.8, 0.7, 0.65, 0.6, 0.55, 0.5, 0.45, 0.4, 0.35, 0.3, 0.25, 0.2, 0.15, 0.1,
0.08, 0.06, 0.04, 0.02, 0.01,
0.008, 0.006, 0.004, 0.002, 0.001]
'''
Provide NxM numpy array to schedule cross validation
N rounds of validation will be performed, leaving out M patients in each for validation data
All values should be valid patient IDs, or negative. Negative values are ignored.
Example 1: a leave-one-out cross validation with 3 patients would look like this:
validation_schedule_patient = np.array([[0],[1],[2]])
Example 2: a leave-two-out cross validation on 10 patients would look like this:
validation_schedule_patient = np.array([[0,1],[2,3],[4,5],[6,7],[8,9]])
Example 3: leave-one-out cross validation with 3 patients, then training on all available data (no validation):
validation_schedule_patient = np.array([[0],[1],[2],[-1]])
'''
validation_schedule_patient = np.array([[1], [3], [6], [7]])
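import numpy as np  # already imported above; repeated so this sketch stands alone

# Illustrative sketch (hypothetical helper, not part of the original notebook):
# build a leave-m-out schedule like the docstring examples programmatically.
def leave_m_out_schedule(patient_ids, m, train_on_all=False):
    ids = list(patient_ids)
    rounds = [ids[i:i + m] for i in range(0, len(ids), m)]
    rounds = [r + [-1] * (m - len(r)) for r in rounds]  # pad a short last round
    if train_on_all:
        rounds.append([-1] * m)  # extra round with no validation patients
    return np.array(rounds)

# e.g. leave_m_out_schedule(range(10), 2) reproduces Example 2 above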
# Parameter overrides for faster debugging; comment these out for a full run
roc_thresholds = [0.8, 0.6, 0.4, 0.2, 0.1, 0.01, 0.001]
num_epochs = 5
import os
import sys
from random import sample
from pathlib import Path
from ipywidgets import IntProgress
from IPython.display import display, HTML
import girder_client
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
import ultrasound_batch_generator as generator
import evaluation_metrics
tf.config.experimental.set_memory_growth(tf.config.experimental.list_physical_devices('GPU')[0], True)
# Import aigt modules
parent_folder = os.path.dirname(os.path.abspath(os.curdir))
sys.path.append(parent_folder)
import Models.segmentation_unet as unet
import utils
# Creating standard folders to save data and logs
data_arrays_fullpath, notebooks_save_fullpath, results_save_fullpath, models_save_fullpath, val_data_fullpath =\
utils.create_standard_project_folders(local_data_folder)
# Fetching Girder data
girder_url = "https://pocus.cs.queensu.ca/api/v1"
# data_csv_file = "VerdureAxial.csv"
from girder_api_key import girder_key
'''
ultrasound_arrays_by_patients, segmentation_arrays_by_patients =\
utils.load_girder_data(data_csv_file, data_arrays_fullpath, girder_url, girder_key=girder_key)
print(str(len(ultrasound_arrays_by_patients)) + ", " + str(len(segmentation_arrays_by_patients)))
'''
data_csv_file = "QueensSagittal.csv"
ultrasound_arrays_by_patients, segmentation_arrays_by_patients =\
utils.load_girder_data(data_csv_file, data_arrays_fullpath, girder_url, girder_key=girder_key)
'''
data_csv_file = "VerdureSagittal.csv"
ultrasound_arrays_by_patients3, segmentation_arrays_by_patients3 =\
utils.load_girder_data(data_csv_file, data_arrays_fullpath, girder_url, girder_key=girder_key)
ultrasound_arrays_by_patients += ultrasound_arrays_by_patients2
ultrasound_arrays_by_patients += ultrasound_arrays_by_patients3
segmentation_arrays_by_patients += segmentation_arrays_by_patients2
segmentation_arrays_by_patients += segmentation_arrays_by_patients3
'''
n_patients = len(ultrasound_arrays_by_patients)
# print(str(len(ultrasound_arrays_by_patients2)) + ", " + str(len(ultrasound_arrays_by_patients3)))
# print(str(len(segmentation_arrays_by_patients2)) + ", " + str(len(segmentation_arrays_by_patients3)))
for i in range(n_patients):
print("Patient {} has {} ultrasounds and {} segmentations".format(
i, ultrasound_arrays_by_patients[i].shape[0], segmentation_arrays_by_patients[i].shape[0]))
# Prepare validation rounds
print(n_patients)
if np.max(validation_schedule_patient) > (n_patients - 1):
raise Exception("Patient ID cannot be greater than {}".format(n_patients - 1))
num_validation_rounds = len(validation_schedule_patient)
print("Planning {} rounds of validation".format(num_validation_rounds))
for i in range(num_validation_rounds):
print("Validation on patients {} in round {}".format(validation_schedule_patient[i], i))
# Print training parameters, to archive them together with the notebook output.
time_sequence_start = datetime.datetime.now()
print("Timestamp for saved files: {}".format(save_timestamp))
print("\nTraining parameters")
print("Number of epochs: {}".format(num_epochs))
print("Step size maximum: {}".format(max_learning_rate))
print("Step size decay: {}".format(learning_rate_decay))
print("Batch size: {}".format(batch_size))
print("Regularization rate: {}".format(regularization_rate))
print("")
print("Saving validation predictions in: {}".format(val_data_fullpath))
print("Saving models in: {}".format(models_save_fullpath))
# ROC data will be saved in these containers
val_best_metrics = dict()
val_fuzzy_metrics = dict()
val_aurocs = np.zeros(num_validation_rounds)
val_best_thresholds = np.zeros(num_validation_rounds)
# Perform validation rounds
for val_round_index in range(num_validation_rounds):
# Prepare data arrays
train_ultrasound_data = np.zeros(
[0,
ultrasound_arrays_by_patients[0].shape[1],
ultrasound_arrays_by_patients[0].shape[2],
ultrasound_arrays_by_patients[0].shape[3]])
train_segmentation_data = np.zeros(
[0,
segmentation_arrays_by_patients[0].shape[1],
segmentation_arrays_by_patients[0].shape[2],
segmentation_arrays_by_patients[0].shape[3]])
val_ultrasound_data = np.zeros(
[0,
ultrasound_arrays_by_patients[0].shape[1],
ultrasound_arrays_by_patients[0].shape[2],
ultrasound_arrays_by_patients[0].shape[3]])
val_segmentation_data = np.zeros(
[0,
segmentation_arrays_by_patients[0].shape[1],
segmentation_arrays_by_patients[0].shape[2],
segmentation_arrays_by_patients[0].shape[3]])
for patient_index in range(n_patients):
if patient_index not in validation_schedule_patient[val_round_index]:
train_ultrasound_data = np.concatenate((train_ultrasound_data,
ultrasound_arrays_by_patients[patient_index]))
train_segmentation_data = np.concatenate((train_segmentation_data,
segmentation_arrays_by_patients[patient_index]))
else:
val_ultrasound_data = np.concatenate((val_ultrasound_data,
ultrasound_arrays_by_patients[patient_index]))
val_segmentation_data = np.concatenate((val_segmentation_data,
segmentation_arrays_by_patients[patient_index]))
n_train = train_ultrasound_data.shape[0]
n_val = val_ultrasound_data.shape[0]
print("\n*** Leave-one-out round # {}".format(val_round_index))
print(" Training on {} images, validating on {} images...".format(n_train, n_val))
val_segmentation_data_onehot = tf.keras.utils.to_categorical(val_segmentation_data, num_classes)
# Create and train model
model = unet.segmentation_unet_128(ultrasound_size, num_classes, num_extra_layers, filter_multiplier, regularization_rate)
model.compile(
optimizer=tf.keras.optimizers.Adam(lr=max_learning_rate, decay=learning_rate_decay),
loss=unet.weighted_categorical_crossentropy(class_weights),
metrics=["accuracy"]
)
# model.summary()
training_generator = generator.UltrasoundSegmentationBatchGenerator(
train_ultrasound_data,
train_segmentation_data[:, :, :, 0],
batch_size,
(ultrasound_size, ultrasound_size),
max_shift_factor=max_shift_factor,
min_zoom_factor=min_zoom_factor,
max_zoom_factor=max_zoom_factor,
max_rotation_angle=max_rotation_angle
)
training_time_start = datetime.datetime.now()
if n_val > 0:
training_log = model.fit_generator(
training_generator,
validation_data=(val_ultrasound_data, val_segmentation_data_onehot),
epochs=num_epochs,
verbose=0)
else:
training_log = model.fit_generator(training_generator, epochs=num_epochs, verbose=0)
training_time_stop = datetime.datetime.now()
# Print training log
print(" Training time: {}".format(training_time_stop-training_time_start))
# Plot training loss and metrics
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
axes[0].plot(training_log.history['loss'], 'bo--')
if n_val > 0:
axes[0].plot(training_log.history['val_loss'], 'ro-')
axes[0].set(xlabel='Epochs (n)', ylabel='Loss')
if n_val > 0:
axes[0].legend(['Training loss', 'Validation loss'])
axes[1].plot(training_log.history['accuracy'], 'bo--')
if n_val > 0:
axes[1].plot(training_log.history['val_accuracy'], 'ro-')
axes[1].set(xlabel='Epochs (n)', ylabel='Accuracy')
if n_val > 0:
axes[1].legend(['Training accuracy', 'Validation accuracy'])
fig.tight_layout()
# Archive trained model with unique filename based on notebook name and timestamp
model_file_name = this_notebook_name + "_model-" + str(val_round_index) + "_" + save_timestamp + ".h5"
model_fullname = os.path.join(models_save_fullpath, model_file_name)
model.save(model_fullname)
# Predict on validation data
if n_val > 0:
y_pred_val = model.predict(val_ultrasound_data)
# Saving predictions for further evaluation
val_prediction_filename = save_timestamp + "_prediction_" + str(val_round_index) + ".npy"
val_prediction_fullname = os.path.join(val_data_fullpath, val_prediction_filename)
np.save(val_prediction_fullname, y_pred_val)
# Validation results
vali_metrics_dicts, vali_best_threshold_index, vali_area = evaluation_metrics.compute_roc(
roc_thresholds, y_pred_val, val_segmentation_data, acceptable_margin_mm, mm_per_pixel)
val_fuzzy_metrics[val_round_index] = evaluation_metrics.compute_evaluation_metrics(
y_pred_val, val_segmentation_data, acceptable_margin_mm, mm_per_pixel)
val_best_metrics[val_round_index] = vali_metrics_dicts[vali_best_threshold_index]
val_aurocs[val_round_index] = vali_area
val_best_thresholds[val_round_index] = roc_thresholds[vali_best_threshold_index]
# Printing total time of this validation round
print("\nTotal round time: {}".format(datetime.datetime.now() - training_time_start))
print("")
time_sequence_stop = datetime.datetime.now()
print("\nTotal training time: {}".format(time_sequence_stop - time_sequence_start))
# Arrange results in tables
metric_labels = [
"AUROC",
"best thresh",
"best TP",
"best FP",
"best recall",
"best precis",
"fuzzy recall",
"fuzzy precis",
"fuzzy Fscore"
]
results_labels = []
for label in metric_labels:
results_labels.append("Vali " + label)
results_df = pd.DataFrame(columns = results_labels)
for i in range(num_validation_rounds):
if i in val_best_metrics.keys():
results_df.loc[i] = [
val_aurocs[i],
val_best_thresholds[i],
val_best_metrics[i][evaluation_metrics.TRUE_POSITIVE_RATE],
val_best_metrics[i][evaluation_metrics.FALSE_POSITIVE_RATE],
val_best_metrics[i][evaluation_metrics.RECALL],
val_best_metrics[i][evaluation_metrics.PRECISION],
val_fuzzy_metrics[i][evaluation_metrics.RECALL],
val_fuzzy_metrics[i][evaluation_metrics.PRECISION],
val_fuzzy_metrics[i][evaluation_metrics.FSCORE]
]
display(results_df)
print("\nAverages")
results_means_df = results_df.mean()
display(results_means_df)
# Print the last ROC curve for visual verification that we catch the optimal point
n = len(roc_thresholds)
roc_x = np.zeros(n)
roc_y = np.zeros(n)
for i in range(n):
roc_x[i] = vali_metrics_dicts[i][evaluation_metrics.FALSE_POSITIVE_RATE]
roc_y[i] = vali_metrics_dicts[i][evaluation_metrics.SENSITIVITY]
# print("Threshold = {0:4.2f} False pos rate = {1:4.2f} Sensitivity = {2:4.2f}"
# .format(roc_thresholds[i], roc_x[i], roc_y[i]))
plt.figure()
plt.ylim(-0.01, 1.01)
plt.xlim(-0.01, 1.01)
plt.plot(roc_x, roc_y, color='darkred', lw=2)
# Save results table
csv_filename = this_notebook_name + "_" + save_timestamp + ".csv"
csv_fullname = os.path.join(results_save_fullpath, csv_filename)
results_df.to_csv(csv_fullname)
print("Results saved to: {}".format(csv_fullname))
# Display sample results
num_vali = val_ultrasound_data.shape[0]
num_show = 3
if num_vali < num_show:
num_show = 0
num_col = 4
indices = [i for i in range(num_vali)]
sample_indices = sample(indices, num_show)
sample_indices = [0, 4, 6]
threshold = 0.5
# Uncomment for comparing the same images
# sample_indices = [105, 195, 391, 133, 142]
fig = plt.figure(figsize=(18, num_show*5))
for i in range(num_show):
a0 = fig.add_subplot(num_show, num_col, i*num_col+1)
img0 = a0.imshow(np.flipud(val_ultrasound_data[sample_indices[i], :, :, 0].astype(np.float32)))
a0.set_title("Ultrasound #{}".format(sample_indices[i]))
a1 = fig.add_subplot(num_show, num_col, i*num_col+2)
img1 = a1.imshow(np.flipud(val_segmentation_data_onehot[sample_indices[i], :, :, 1]), vmin=0.0, vmax=1.0)
a1.set_title("Segmentation #{}".format(sample_indices[i]))
c = fig.colorbar(img1, fraction=0.046, pad=0.04)
a2 = fig.add_subplot(num_show, num_col, i*num_col+3)
img2 = a2.imshow(np.flipud(y_pred_val[sample_indices[i], :, :, 1]), vmin=0.0, vmax=1.0)
a2.set_title("Prediction #{}".format(sample_indices[i]))
c = fig.colorbar(img2, fraction=0.046, pad=0.04)
a3 = fig.add_subplot(num_show, num_col, i*num_col+4)
img3 = a3.imshow((np.flipud(y_pred_val[sample_indices[i], :, :, 1]) > threshold), vmin=0.0, vmax=1.0)
c = fig.colorbar(img3, fraction=0.046, pad=0.04)
a3.set_title("Thresholded #{}".format(sample_indices[i]))
# Save notebook so all output is archived by the next cell
from IPython.display import Javascript
script = '''
require(["base/js/namespace"],function(Jupyter) {
Jupyter.notebook.save_checkpoint();
});
'''
Javascript(script)
# Export HTML copy of this notebook
notebook_file_name = this_notebook_name + "_" + save_timestamp + ".html"
notebook_fullname = os.path.join(notebooks_save_fullpath, notebook_file_name)
os.system("jupyter nbconvert --to html " + this_notebook_name + " --output " + notebook_fullname)
print("Notebook saved to: {}".format(notebook_fullname))
```
# Content:
1. [Data modeling](#1.-Data-modeling)
2. [Polynomial fitting explained using a quick implementation](#2.-Polynomial-fitting-explained-using-a-quick-implementation)
3. [Polyfit and Polyval](#3.-Polyfit-and-Polyval)
4. [Numpy's polyfit, polyval, and poly1d](#4.-Numpy's-polyfit,-polyval,-and-poly1d)
## 1. Data modeling






#### _Note:_ Instead of inverting the matrix, we will solve the equation on the left-hand side directly using a linear solver.
Since $\left[ {\bf X}^{\rm T}{\bf X}\right]$ is a symmetric matrix, we can use Cholesky decomposition.
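Before moving on, here is a quick numerical sanity check (a sketch, with made-up sample points) that $\left[ {\bf X}^{\rm T}{\bf X}\right]$ really is symmetric and positive definite, which is what makes Cholesky decomposition applicable:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 5.0])
X = np.vander(x, 2, increasing=True)      # degree-1 Vandermonde matrix
A = X.T @ X
print(np.allclose(A, A.T))                # symmetric?
print(np.all(np.linalg.eigvalsh(A) > 0))  # positive definite?
```

Both checks print `True` here, so `cho_factor`/`cho_solve` below are safe to use.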
## 2. Polynomial fitting explained using a quick implementation
Let's write a general program to fit a set of $N$ points to a $D$ degree polynomial.
### Vandermonde matrix
The first step is to calculate the Vandermonde matrix. Let's calculate it for a set of x-values. Suppose we want to fit 4 points to a straight line; then $N=4$ and $D=1$. However, remember that in Python indices start at 0, so we have to assign the variables accordingly.
```
import numpy as np
x = np.array([1, 2, 3, 5],float)
N = x.shape[0]
D=1
# Initialize the X-matrix
X = np.ones([N,D+1]) # Note that we are using D+1 here
# Add columns x, x^2, ..., x^D to build the Vandermonde matrix
#X[:,1]=x[:]
#X[:,2]=x[:]**2
#X[:,3]=x[:]**3
for i in range(1,D+1): # Note that we are using D+1 here
X[:,i]=x[:]**i
print(X)
```
Even though it is easy to calculate the Vandermonde matrix ourselves, note that NumPy already has a function for it. We can check that our result agrees with NumPy's:
```
np.vander(x, D+1,increasing=True) #If the last argument is not given, you get the orders of columns reversed!
```
### Now let's solve a problem
Let's use the known form of a parabola, say $y=-0.4x^2$. We can sample some points of $x$ and fit to the known values of $y$. After fitting the data, we can check if the polynomial coefficients come out as expected.
```
x=np.arange(-5, 6, 1, float) # start, stop, step, dtype
print("x-vector is:\n", x)
y=-0.4*x**2
print("y-vector is:\n", y)
D=2 # for a parabola
X=np.vander(x, D+1, increasing=True) # V is the Vandermonde matrix.
print(X)
XT=np.transpose(X)
A=np.matmul(XT,X)
print(A)
```
Now, all we have to do is solve ${\bf A}{\bf c}={\bf b}$, where ${\bf b}={\bf X}^{\rm T}{\bf y}$.
```
import numpy as np
from scipy.linalg import cho_factor, cho_solve
b=np.matmul(XT,y)
c=np.zeros(D+1,float)
L, low = cho_factor(A)
c = cho_solve((L, low), b)
print('\nThe solution is\n')
print(c)
```
We see that the coefficients of the $x^0$ and $x^1$ terms are 0.0. For the quadratic term ($x^2$), the coefficient is $-0.4$, matching the parabola we started with. Now, suppose you want to find the value of the function at a new value of $x$: all you have to do is evaluate the polynomial.
```
xnew=0.5
ynew=c[0]*xnew**0 + c[1]*xnew**1 + c[2]*xnew**2
print("Value of y at x=", xnew, " is ", ynew)
```
The result is what is expected $y(0.5)=-0.4 \times 0.5^2=-0.1$.
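As a cross-check (a sketch using NumPy rather than our hand-written evaluation), `np.polynomial.polynomial.polyval` expects coefficients in the same lowest-order-first convention:

```python
import numpy as np

c = np.array([0.0, 0.0, -0.4])  # fitted coefficients, lowest order first
print(np.polynomial.polynomial.polyval(0.5, c))  # prints -0.1
```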
## 3. Polyfit and Polyval
What we have done so far is to fit a set of points to a polynomial (polyfit) and evaluate the polynomial at new points (polyval). We can write general functions for these two steps.
```
def chol(A,b):
from scipy.linalg import cho_factor, cho_solve
D=b.shape[0]
c=np.zeros(D,float)
L, low = cho_factor(A)
c = cho_solve((L, low), b)
return c
def polyfit(x,y,D):
'''
Fits a given set of data x,y to a polynomial of degree D
'''
import numpy as np
X=np.vander(x, D+1, increasing=True)
XT=np.transpose(X)
A=np.matmul(XT,X)
b=np.matmul(XT,y)
c=chol(A,b)
return(c)
#=== Let's fit to a parabola
x=np.arange(-5, 6, 1, float)
y=-0.4*x**2
D=2 # for a parabola
c=polyfit(x,y,D)
for i in range(D+1):
print("coefficient of x^",i," is ",c[i])
```
Now, let's see what happens if we fit the same data to a higher-degree polynomial.
```
D=5
c=polyfit(x,y,D)
for i in range(D+1):
print("coefficient of x^",i," is ",c[i])
```
Only the quadratic term survives, all other coefficients are zero! How nice!
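That claim is easy to verify numerically; here is a sketch using NumPy's own `polyfit` (which returns the highest-order coefficient first) on the same parabola:

```python
import numpy as np

x = np.arange(-5, 6, 1, dtype=float)
y = -0.4 * x**2
c = np.polyfit(x, y, 5)  # degree-5 fit; c[0] is the x^5 coefficient
print(np.isclose(c[3], -0.4))                        # the x^2 term
print(np.allclose(np.delete(c, 3), 0.0, atol=1e-6))  # everything else ~ 0
```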
To evaluate the polynomial, i.e., the estimated values of y, one can write another function, called polyval.
```
def polyval(a,x):
'''
Determines the value of the polynomial using x and the coefficient vector a
'''
import numpy as np
D=a.shape[0]
N=x.shape
y=np.zeros(N)
for i in range(D):
y=y+a[i]*x**i
return(y)
xnew=np.array([-0.5,0.5]) # we will make the new x-values as an array
ynew=polyval(c,xnew)
print(ynew)
```
## 4. Numpy's polyfit, polyval, and poly1d
Again, since we have learned the basics of polynomial fitting _from scratch_, we can use NumPy's built-in routines for production runs. But before that, we need to test whether NumPy's results agree with our own values!
```
x=np.arange(-5, 6, 1, float)
y=-0.4*x**2
D=4 # some polynomial degree
c=np.polyfit(x, y, D)
xnew=np.array([-0.5,0.5])
ynew=np.polyval(c,xnew)
print("Estimated value of y at new points of x is: \n",ynew)
```
There's also a cool function in numpy to print the polynomial as an expression.
```
p = np.poly1d(c)
print(p)
```
In the next class we will learn how to quantify the accuracy of fitting.
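As a small preview, one common way to quantify accuracy is the root-mean-square error of the residuals; here is a sketch with made-up noise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(-5, 6, 1, dtype=float)
y = -0.4 * x**2 + rng.normal(0, 0.1, x.shape)  # noisy samples of the parabola
c = np.polyfit(x, y, 2)
rmse = np.sqrt(np.mean((y - np.polyval(c, x))**2))
print("RMSE:", rmse)
```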
# Send Data from Bronze Table to Silver Telemetry Table
To run this notebook, import it into Azure Synapse and attach it to an Apache Spark Pool.
Choose the "Small" Node Size, and choose "3" as the Number of Nodes.
Be sure to run the "rate-streaming-to-bronze" Notebook beforehand, to ensure there is data to pull from.
```
%%spark
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
```
## Configure the Storage Account (to read from)
Replace the value `<storageAccountName>` with the name of the storage account where the Bronze Delta Table data is stored.
```
%%spark
val storageAccountName = "<storageAccountName>"
val bronzeDataLocation: String = "abfss://datalake@"+storageAccountName+".dfs.core.windows.net/bronzeSynapse"
val silverDataLocation: String = "abfss://datalake@"+storageAccountName+".dfs.core.windows.net/silverSynapse/Telemetry"
```
## Read the Data
Here the data is read from the `bronzeDataLocation` specified in the previous cell, which is configured using the value entered for `storageAccountName`.
```
%%spark
var bronzeDF = spark.readStream.format("delta").load(bronzeDataLocation)
```
## Parse the Body and Split into Columns
The schema of the DataFrame is configured to match the schema of the Silver Telemetry Table. The body is parsed and split into columns.
```
%%spark
val silverSchema: StructType = new StructType().
add("VehicleId", StringType).
add("EngineTemp", IntegerType).
add("BatteryVoltage", DoubleType).
add("DaysSinceLastServicing", IntegerType).
add("Mileage", IntegerType)
var silverDF = bronzeDF.where("Properties.topic == 'Telemetry'")
silverDF = silverDF.withColumn("Body", col("Body").cast(StringType)) // cast the "body" column to StringType
silverDF = silverDF.withColumn("JsonBody", get_json_object(col("Body"), "$")) // extracts JSON object from the "body" column
silverDF = silverDF.withColumn("SilverSchemaFields", from_json(col("JsonBody"), silverSchema)) // returns a struct value with the given JSON string and schema
silverDF = silverDF.select(
col("ProcessedTimestamp"),
col("ProcessedDate"),
col("ProcessedHour"),
col("UserId"),
col("Properties"),
col("SilverSchemaFields.*")
)
silverDF = silverDF.drop("Body")
silverDF = silverDF.drop("JsonBody")
silverDF = silverDF.drop("SilverSchemaFields")
silverDF.printSchema()
```
## Write Data to Silver Telemetry Table
```
%%spark
var silverTelemetryQuery = silverDF.writeStream.format("delta").
outputMode("append").
partitionBy("ProcessedDate", "ProcessedHour").
option("checkpointLocation", silverDataLocation + "/checkpoint").
start(silverDataLocation)
```
## Viewing the Data
```
%%spark
var silverViewDF = spark.read.format("delta").load(silverDataLocation)
silverViewDF.orderBy(col("ProcessedTimestamp").desc).show()
%%spark
silverViewDF.printSchema()
%%spark
display(silverViewDF.orderBy(col("ProcessedTimestamp").desc))
```
A crash course in
<b><font size=44px><center> Surviving Titanic</center></font></b>
<img src='http://4.media.bustedtees.cvcdn.com/f/-/bustedtees.d6ab8f8f-a63a-45fd-acac-142e2c22.gif' width=400>
<center> (with numpy and matplotlib)</center>
---
This notebook's gonna teach you to use the basic data science stack for python: jupyter, numpy, matplotlib and sklearn.
### Part I: Jupyter notebooks in a nutshell
* You are reading this line in a jupyter notebook.
* A notebook consists of cells. A cell can contain either code or hypertext.
* This cell contains hypertext. The next cell contains code.
* You can __run a cell__ with code by selecting it (click) and pressing `Ctrl + Enter` to execute the code and display output (if any).
* If you're running this on a device with no keyboard, ~~you are doing it wrong~~ use topbar (esp. play/stop/restart buttons) to run code.
* Behind the curtains, there's a python interpreter that runs that code and remembers anything you defined.
Run these cells to get started
```
a = 5
print(a * 2)
```
* `Ctrl + S` to save changes (or use the button that looks like a floppy disk)
* Top menu -> Kernel -> Interrupt (or Stop button) if you want it to stop running cell midway.
* Top menu -> Kernel -> Restart (or cyclic arrow button) if interrupt doesn't fix the problem (you will lose all variables).
* For shortcut junkies like us: Top menu -> Help -> Keyboard Shortcuts
* More: [Hacker's guide](http://arogozhnikov.github.io/2016/09/10/jupyter-features.html), [Beginner's guide'](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/), [Datacamp tutorial](https://www.datacamp.com/community/tutorials/tutorial-jupyter-notebook)
Now __the most important feature__ of jupyter notebooks for this course:
* if you're typing something, press `Tab` to see automatic suggestions, use arrow keys + enter to pick one.
* if you move your cursor inside some function and press `Shift + Tab`, you'll get a help window. `Shift + (Tab, Tab)` will expand it.
```
# run this first
import math
# place your cursor at the end of the unfinished line below to find a function
# that computes arctangent from two parameters (should have 2 in it's name)
# once you chose it, press shift + tab + tab(again) to see the docs
math.atan2 # <---
```
### Part II: Loading data with Pandas
Pandas is a library that helps you load the data, prepare it and perform some lightweight analysis. The god object here is the `pandas.DataFrame` - a 2d table with batteries included.
In the cell below we use it to read the data on the infamous titanic shipwreck.
__please keep running all the code cells as you read__
```
import pandas as pd
data = pd.read_csv("train.csv", index_col='PassengerId') # this yields a pandas.DataFrame
# Selecting rows
head = data[:10]
head #if you leave an expression at the end of a cell, jupyter will "display" it automatically
```
#### About the data
Here's some of the columns
* Name - a string with person's full name
* Survived - 1 if a person survived the shipwreck, 0 otherwise.
* Pclass - passenger class. Pclass == 3 is cheap'n'cheerful, Pclass == 1 is for moneybags.
* Sex - a person's gender (in those good ol' times when there were just 2 of them)
* Age - age in years, if available
* Sibsp - number of siblings on a ship
* Parch - number of parents on a ship
* Fare - ticket cost
* Embarked - port where the passenger embarked
* C = Cherbourg; Q = Queenstown; S = Southampton
```
# table dimensions
print("len(data) = ",len(data))
print("data.shape = ",data.shape)
# select a single row
print(data.loc[4])
# select a single column.
ages = data["Age"]
print(ages[:10]) # alternatively: data.Age
# select several columns and rows at once
data.loc[5:10, ("Fare", "Pclass")] # alternatively: data[["Fare","Pclass"]].loc[5:10]
```
## Your turn:
```
# select passengers number 13 and 666 - did they survive?
data.query('PassengerId == 13 or PassengerId == 666')
# compute the overall survival rate (what fraction of passengers survived the shipwreck)
data.query('Survived == 1').shape[0] / data.shape[0]
```
Pandas also has some basic data analysis tools. For one, you can quickly display statistical aggregates for each column using `.describe()`
```
data.describe()
```
Some columns contain __NaN__ values - this means that there is no data there. For example, passenger `#5` has unknown age. To simplify the future data analysis, we'll replace NaN values by using pandas `fillna` function.
_Note: we do this so easily because it's a tutorial. In general, you should think twice before modifying data like this._
```
data.iloc[5]
data['Age'] = data['Age'].fillna(value=data['Age'].mean())
data['Fare'] = data['Fare'].fillna(value=data['Fare'].mean())
data.iloc[5]
```
More pandas:
* A neat [tutorial](http://pandas.pydata.org/) from pydata
* Official [tutorials](https://pandas.pydata.org/pandas-docs/stable/tutorials.html), including this [10 minutes to pandas](https://pandas.pydata.org/pandas-docs/stable/10min.html#min)
* Bunch of cheat sheets awaits just one google query away from you (e.g. [basics](http://blog.yhat.com/static/img/datacamp-cheat.png), [combining datasets](https://pbs.twimg.com/media/C65MaMpVwAA3v0A.jpg) and so on).
### Part III: Numpy and vectorized computing
Almost any machine learning model requires some computational heavy lifting usually involving linear algebra problems. Unfortunately, raw python is terrible at this because each operation is interpreted at runtime.
So instead, we'll use `numpy` - a library that lets you run blazing fast computation with vectors, matrices and other tensors. Again, the god object here is `numpy.ndarray`:
```
import numpy as np
a = np.array([1,2,3,4,5])
b = np.array([5,4,3,2,1])
print("a = ",a)
print("b = ",b)
# math and boolean operations can be applied to each element of an array
print("a + 1 =", a + 1)
print("a * 2 =", a * 2)
print("a == 2", a == 2)
# ... or corresponding elements of two (or more) arrays
print("a + b =",a + b)
print("a * b =",a * b)
# Your turn: compute half-products of a and b elements (halves of products)
print("(a * b)/2 =",(a * b)/2)
# compute the elementwise quotient of squared a and (b plus 1)
print("a² / (b+1) =", a**2 / (b+1))
```
### How fast is it, harry?

Let's compare computation time for python and numpy
* Two arrays of 10^6 elements
* first - from 0 to 1 000 000
* second - from 99 to 1 000 099
* Computing:
* elemwise sum
* elemwise product
* square root of first array
* sum of all elements in the first array
```
%%time
# ^-- this "magic" measures and prints cell computation time
# Option I: pure python
arr_1 = range(1000000)
arr_2 = range(99,1000099)
a_sum = []
a_prod = []
sqrt_a1 = []
for i in range(len(arr_1)):
a_sum.append(arr_1[i]+arr_2[i])
a_prod.append(arr_1[i]*arr_2[i])
sqrt_a1.append(arr_1[i]**0.5)
arr_1_sum = sum(arr_1)
%%time
# Option II: start from python, convert to numpy
arr_1 = range(1000000)
arr_2 = range(99,1000099)
arr_1, arr_2 = np.array(arr_1) , np.array(arr_2)
a_sum = arr_1 + arr_2
a_prod = arr_1 * arr_2
sqrt_a1 = arr_1 ** .5
arr_1_sum = arr_1.sum()
%%time
# Option III: pure numpy
arr_1 = np.arange(1000000)
arr_2 = np.arange(99,1000099)
a_sum = arr_1 + arr_2
a_prod = arr_1 * arr_2
sqrt_a1 = arr_1 ** .5
arr_1_sum = arr_1.sum()
```
If you want more serious benchmarks, take a look at [this](http://brilliantlywrong.blogspot.ru/2015/01/benchmarks-of-speed-numpy-vs-all.html).
There's also a bunch of pre-implemented operations including logarithms, trigonometry, vector/matrix products and aggregations.
```
a = np.array([1,2,3,4,5])
b = np.array([5,4,3,2,1])
print("numpy.sum(a) = ", np.sum(a))
print("numpy.mean(a) = ", np.mean(a))
print("numpy.min(a) = ", np.min(a))
print("numpy.argmin(b) = ", np.argmin(b)) # index of minimal element
print("numpy.dot(a,b) = ", np.dot(a, b)) # dot product. Also used for matrix/tensor multiplication
print("numpy.unique(['male','male','female','female','male']) = ", np.unique(['male','male','female','female','male']))
# and tons of other stuff. see http://bit.ly/2u5q430 .
```
The important part: all this functionality works with dataframes:
```
print("Max ticket price: ", np.max(data["Fare"]))
print("\nThe guy who paid the most:\n", data.loc[data["Fare"].idxmax()])
# your code: compute mean passenger age and the oldest guy on the ship
print("Mean passenger age: ", np.mean(data["Age"]))
print("\nThe oldest guy on the ship:\n", data.loc[data["Age"].idxmax()])
print("Boolean operations")
print('a = ', a)
print('b = ', b)
print("a > 2", a > 2)
print("numpy.logical_not(a>2) = ", np.logical_not(a>2))
print("numpy.logical_and(a>2,b>2) = ", np.logical_and(a > 2,b > 2))
print("numpy.logical_or(a>2,b<3) = ", np.logical_or(a > 2, b < 3))
print("\n shortcuts")
print("~(a > 2) = ", ~(a > 2)) #logical_not(a > 2)
print("(a > 2) & (b > 2) = ", (a > 2) & (b > 2)) #logical_and
print("(a > 2) | (b < 3) = ", (a > 2) | (b < 3)) #logical_or
```
The final numpy feature we'll need is indexing: selecting elements from an array.
Aside from python indexes and slices (e.g. a[1:4]), numpy also allows you to select several elements at once.
```
a = np.array([0, 1, 4, 9, 16, 25])
ix = np.array([1,2,3])
print("a = ", a)
print("Select by element index")
print("a[[1,2,3]] = ", a[ix])
print("\nSelect by boolean mask")
print("a[a > 5] = ", a[a > 5]) # select all elementts in a that are greater than 5
print("(a % 2 == 0) =",a % 2 == 0) # True for even, False for odd
print("a[a % 2 == 0] =", a[a % 2 == 0]) # select all elements in a that are even
print("data[(data['Age'] < 18) & (data['Sex'] == 'male')] = (below)") # select male children
data[(data['Age'] < 18) & (data['Sex'] == 'male')]
```
### Your turn
Use numpy and pandas to answer a few questions about data
```
# who on average paid more for their ticket, men or women?
mean_fare_men = np.mean(data[(data['Sex'] == 'male')]["Fare"])
mean_fare_women = np.mean(data[(data['Sex'] == 'female')]["Fare"])
print(mean_fare_men, mean_fare_women)
# who is more likely to survive: a child (<18 yo) or an adult?
child_survival_rate = np.mean(data[(data['Age'] < 18 )]["Survived"])
adult_survival_rate = np.mean(data[(data['Age'] >= 18 )]["Survived"])
print(child_survival_rate, adult_survival_rate)
```
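By the way, per-group aggregates like these can be computed more idiomatically with `groupby`; here is a sketch on a tiny hypothetical stand-in frame (on the real data it would be `data.groupby('Sex')['Fare'].mean()`):

```python
import pandas as pd

# hypothetical mini-version of the Titanic frame, for illustration only
toy = pd.DataFrame({
    "Sex":  ["male", "female", "male", "female"],
    "Fare": [7.25, 71.28, 8.05, 53.10],
})
print(toy.groupby("Sex")["Fare"].mean())
```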
# Part IV: plots and matplotlib
Using python to visualize the data is covered by yet another library: `matplotlib`.
Just like python itself, matplotlib has an awesome tendency of keeping simple things simple while still allowing you to write complicated stuff with convenience (e.g. super-detailed plots or custom animations).
```
import matplotlib.pyplot as plt
%matplotlib inline
# ^-- this "magic" tells all future matplotlib plots to be drawn inside notebook and not in a separate window.
# line plot
plt.plot([0,1,2,3,4,5],[0,1,4,9,16,25])
#scatter-plot
plt.scatter([0,1,2,3,4,5],[0,1,4,9,16,25])
plt.show() # show the first plot and begin drawing next one
# draw a scatter plot with custom markers and colors
plt.scatter([1,1,2,3,4,4.5],[3,2,2,5,15,24],
c = ["red","blue","orange","green","cyan","gray"], marker = "x")
# without .show(), several plots will be drawn on top of one another
plt.plot([0,1,2,3,4,5],[0,1,4,9,16,25],c = "black")
# adding more sugar
plt.title("Conspiracy theory proven!!!")
plt.xlabel("Per capita alcohol consumption")
plt.ylabel("# Layers in state of the art image classifier")
# fun with correlations: http://bit.ly/1FcNnWF
# histogram - showing data density
plt.hist([0,1,1,1,2,2,3,3,3,3,3,4,4,5,5,5,6,7,7,8,9,10])
plt.show()
plt.hist([0,1,1,1,2,2,3,3,3,3,3,4,4,5,5,5,6,7,7,8,9,10], bins = 5)
# plot a histogram of age and a histogram of ticket fares on separate plots
plt.hist(data["Age"])
plt.show()
plt.hist(data["Fare"])
plt.show()
#bonus: use tab shift-tab to see if there is a way to draw a 2D histogram of age vs fare.
# make a scatter plot of passenger age vs ticket fare
plt.scatter(data["Age"], data["Fare"], c=data["Sex"].replace(["female", "male"], [0, 1]))
plt.show()
# kudos if you add separate colors for men and women
```
* Extended [tutorial](https://matplotlib.org/2.0.2/users/pyplot_tutorial.html)
* A [cheat sheet](http://bit.ly/2koHxNF)
* Other libraries for more sophisticated stuff: [Plotly](https://plot.ly/python/) and [Bokeh](https://bokeh.pydata.org/en/latest/)
### Part V (final): machine learning with scikit-learn
<img src='https://imgs.xkcd.com/comics/machine_learning.png' width=320px>
Scikit-learn is _the_ tool for simple machine learning pipelines.
It's a single library that unites a whole bunch of models under the common interface:
* __Create:__ `model = sklearn.whatever.ModelNameHere(parameters_if_any)`
* __Train:__ `model.fit(X,y)`
* __Predict:__ `model.predict(X_test)`
It also contains utilities for feature extraction, quality estimation or cross-validation.
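For instance, cross-validation is a single call; here is a sketch on a built-in toy dataset (for Titanic you would pass your `features` and `answers` instead):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0), X, y, cv=5)
print("CV accuracy: %.2f ± %.2f" % (scores.mean(), scores.std()))
```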
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
features = data[["Fare", "SibSp"]].copy()
answers = data["Survived"]
model = RandomForestClassifier(n_estimators=100)
model.fit(features[:-100], answers[:-100])
test_predictions = model.predict(features[-100:])
print("Test accuracy:", accuracy_score(answers[-100:], test_predictions))
```
Final quest: add more features to achieve accuracy of at least 0.80
__Hint:__ for string features like "Sex" or "Embarked" you will have to compute some kind of numeric representation.
For example, 1 if male and 0 if female or vice versa
__Hint II:__ you can use `model.feature_importances_` to get a hint on how much the model relied on each of your features.
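The string-to-number trick from the hint can be sketched like this, with a toy DataFrame standing in for the Titanic `data` (the column values here are illustrative, not the real dataset):

```python
import pandas as pd

# toy stand-in for the Titanic frame
toy = pd.DataFrame({
    "Sex": ["male", "female", "female", "male"],
    "Embarked": ["S", "C", "S", "Q"],
})

# 1 if male, 0 if female, as the hint suggests
toy["SexCode"] = (toy["Sex"] == "male").astype(int)

# one integer per port; values missing from the dict would become NaN
toy["EmbarkedCode"] = toy["Embarked"].map({"S": 0, "C": 1, "Q": 2})

print(toy["SexCode"].tolist())       # [1, 0, 0, 1]
print(toy["EmbarkedCode"].tolist())  # [0, 1, 0, 2]
```

The resulting numeric columns can be appended to `features` and fed straight into the `RandomForestClassifier` above.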
* Sklearn [tutorials](http://scikit-learn.org/stable/tutorial/index.html)
* Sklearn [examples](http://scikit-learn.org/stable/auto_examples/index.html)
* SKlearn [cheat sheet](http://scikit-learn.org/stable/_static/ml_map.png)
Okay, what we learned: to survive a shipwreck, your best bet is to be an underage girl with parents on the ship. Try that next time you find yourself in a shipwreck.
# Introduction
This example demonstrates using a network pretrained on ImageNet for classification. The model used was converted from the VGG_CNN_S model (http://arxiv.org/abs/1405.3531) in [Caffe's Model Zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo).
For details of the conversion process, see the example notebook "Using a Caffe Pretrained Network - CIFAR10".
### License
The model is licensed for non-commercial use only
### Download the model (393 MB)
```
!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg_cnn_s.pkl
```
# Setup
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import lasagne
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.layers import MaxPool2DLayer as PoolLayer
from lasagne.layers import LocalResponseNormalization2DLayer as NormLayer
from lasagne.utils import floatX
```
### Define the network
```
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
net['conv1'] = ConvLayer(net['input'], num_filters=96, filter_size=7, stride=2, flip_filters=False)
net['norm1'] = NormLayer(net['conv1'], alpha=0.0001) # caffe has alpha = alpha * pool_size
net['pool1'] = PoolLayer(net['norm1'], pool_size=3, stride=3, ignore_border=False)
net['conv2'] = ConvLayer(net['pool1'], num_filters=256, filter_size=5, flip_filters=False)
net['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2, ignore_border=False)
net['conv3'] = ConvLayer(net['pool2'], num_filters=512, filter_size=3, pad=1, flip_filters=False)
net['conv4'] = ConvLayer(net['conv3'], num_filters=512, filter_size=3, pad=1, flip_filters=False)
net['conv5'] = ConvLayer(net['conv4'], num_filters=512, filter_size=3, pad=1, flip_filters=False)
net['pool5'] = PoolLayer(net['conv5'], pool_size=3, stride=3, ignore_border=False)
net['fc6'] = DenseLayer(net['pool5'], num_units=4096)
net['drop6'] = DropoutLayer(net['fc6'], p=0.5)
net['fc7'] = DenseLayer(net['drop6'], num_units=4096)
net['drop7'] = DropoutLayer(net['fc7'], p=0.5)
net['fc8'] = DenseLayer(net['drop7'], num_units=1000, nonlinearity=lasagne.nonlinearities.softmax)
output_layer = net['fc8']
```
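As a sanity check on the stack above, the spatial sizes can be traced by hand. Convolutions here use floor arithmetic, while `ignore_border=False` pooling rounds partial windows up; under that convention (which I'm assuming matches Theano's pooling), the sizes come out 109 → 37 → 33 → 17 → 6, so `fc6` sees 512·6·6 = 18432 features:

```python
import math

def conv_out(size, k, stride=1, pad=0):
    # floor((size + 2*pad - k) / stride) + 1
    return (size + 2 * pad - k) // stride + 1

def pool_out(size, k, stride):
    # ignore_border=False keeps partial windows, so round up
    return math.ceil((size - k) / stride) + 1

s = conv_out(224, k=7, stride=2)  # conv1 -> 109
s = pool_out(s, k=3, stride=3)    # pool1 -> 37
s = conv_out(s, k=5)              # conv2 -> 33
s = pool_out(s, k=2, stride=2)    # pool2 -> 17
s = conv_out(s, k=3, pad=1)       # conv3 -> 17 (pad=1 preserves size)
s = conv_out(s, k=3, pad=1)       # conv4 -> 17
s = conv_out(s, k=3, pad=1)       # conv5 -> 17
s = pool_out(s, k=3, stride=3)    # pool5 -> 6
print(s, 512 * s * s)             # 6 18432
```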
### Load the model parameters and metadata
```
import pickle
model = pickle.load(open('vgg_cnn_s.pkl', 'rb'))
CLASSES = model['synset words']
MEAN_IMAGE = model['mean image']
lasagne.layers.set_all_param_values(output_layer, model['values'])
```
# Trying it out
### Get some test images
We'll download the ILSVRC2012 validation URLs and pick a few at random
```
import urllib
index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:5]
```
### Helper to fetch and preprocess images
```
import io
import skimage.transform
def prep_image(url):
ext = url.split('.')[-1]
im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)
# Resize so smallest dim = 256, preserving aspect ratio
h, w, _ = im.shape
if h < w:
im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
else:
im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)
# Central crop to 224x224
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
# Shuffle axes to c01
im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
# Convert to BGR
im = im[::-1, :, :]
im = im - MEAN_IMAGE
return rawim, floatX(im[np.newaxis])
```
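The axis shuffle and BGR flip in `prep_image` are easy to get wrong, so here is the same manipulation on a tiny synthetic image in pure NumPy (no downloading), just to show what each step does; the 2x2 crop stands in for the 224x224 one, and no mean image is involved:

```python
import numpy as np

# tiny fake RGB image: height 4, width 6, channels last (h, w, c)
im = np.arange(4 * 6 * 3, dtype=np.float32).reshape(4, 6, 3)

# central crop, same slicing pattern as the 224x224 crop above
h, w, _ = im.shape
crop = im[h // 2 - 1:h // 2 + 1, w // 2 - 1:w // 2 + 1]

# shuffle axes (h, w, c) -> (c, h, w), i.e. "c01"
chw = np.swapaxes(np.swapaxes(crop, 1, 2), 0, 1)

# RGB -> BGR: reverse the channel axis, which is now axis 0
bgr = chw[::-1, :, :]

print(chw.shape)                    # (3, 2, 2)
print(np.allclose(bgr[0], chw[2]))  # True: blue channel is now first
```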
### Process test images and print top 5 predicted labels
```
for url in image_urls:
try:
rawim, im = prep_image(url)
prob = np.array(lasagne.layers.get_output(output_layer, im, deterministic=True).eval())
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(rawim.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(250, 70 + n * 20, '{}. {}'.format(n+1, CLASSES[label]), fontsize=14)
except IOError:
print('bad url: ' + url)
```
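The `[-1:-6:-1]` slice in the loop above is the part worth pausing on: `np.argsort` returns indices in ascending order, so reading the last five backwards yields the top-5 classes, highest probability first. A toy check with a made-up probability vector:

```python
import numpy as np

prob = np.array([0.05, 0.40, 0.10, 0.25, 0.02, 0.08, 0.03, 0.07])

# ascending-order indices, then take the last five in reverse
top5 = np.argsort(prob)[-1:-6:-1]

print(top5.tolist())  # [1, 3, 2, 5, 7]
```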