# Hidden Markov Model Demo
A Hidden Markov Model (HMM) is one of the simpler graphical models available in _SSM_. This notebook demonstrates creating and sampling from an HMM using SSM, and fitting an HMM to synthetic data. A full treatment of HMMs is beyond the scope of this notebook, but there are many good resources. [Stanford's CS228 Lecture Notes](https://ermongroup.github.io/cs228-notes/) provide a good introduction to HMMs and other graphical models. [Pattern Recognition and Machine Learning](http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf) by Christopher Bishop covers HMMs and how the EM algorithm is used to fit them to data.
The goal of these notebooks is to introduce state-space models to practitioners who have some familiarity with them, but who may not have used these models in practice before. As such, we've included a few exercises to try as you make your way through the notebooks.
## 1. Setup
The line `import ssm` imports the package for use. Here, we have also imported a few other packages for plotting.
```
import autograd.numpy as np
import autograd.numpy.random as npr
npr.seed(0)
import ssm
from ssm.util import find_permutation
from ssm.plots import gradient_cmap, white_to_color_cmap
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style("white")
sns.set_context("talk")
color_names = [
"windows blue",
"red",
"amber",
"faded green",
"dusty purple",
"orange"
]
colors = sns.xkcd_palette(color_names)
cmap = gradient_cmap(colors)
# Specify whether or not to save figures
save_figures = True
```
## 2. Create an HMM
An HMM consists of a hidden state variable, $z$, which can take on one of $K$ values (for our purposes, HMMs will always have discrete states), along with a set of transition probabilities for how the hidden state evolves over time.
In other words, we have $z_t \in \{1, \ldots, K\}$, where $z_t = k$ denotes that the hidden variable is in state $k$ at time $t$.
The key assumption in an HMM is that only the most recent state affects the next state. In mathematical terms:
$$
p(z_t \mid z_{t-1}, z_{t-2}, \ldots, z_1) = p(z_t \mid z_{t-1})
$$
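To make the Markov property concrete, here is a minimal numpy sketch (independent of SSM; the 3-state transition matrix values are made up) that samples a chain in which each state depends only on its predecessor:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up 3-state transition matrix: row i gives p(z_t | z_{t-1} = i).
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def sample_chain(A, T, z0=0, rng=rng):
    """Sample a length-T Markov chain; z_t depends only on z_{t-1}."""
    z = np.empty(T, dtype=int)
    z[0] = z0
    for t in range(1, T):
        z[t] = rng.choice(len(A), p=A[z[t - 1]])
    return z

z = sample_chain(A, T=100)
```

Because the diagonal entries are large, the sampled chain tends to stay in each state for several steps before switching.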
In an HMM, we don't observe the state itself. Instead, we get a noisy observation of the state at each time step according to some observation model. We'll use $x_t$ to denote the observation at time step $t$. The observation can be a vector or scalar. We'll use $D$ to refer to the dimensionality of the observation. A few of the supported observation models are:
1. **Gaussian**: Each discrete state $z_t = k$ is associated with a $D$-dimensional mean $\mu_k$ and covariance matrix $\Sigma_k$. Each observation $x_t$ comes from a Gaussian distribution centered at the associated mean, with the corresponding covariance.
2. **Student's T**: Same as Gaussian, but the observations come from a Student's-T Distribution.
3. **Bernoulli**: Each element of the $D$-dimensional observation is a Bernoulli (binary) random variable. The discrete state $z_t = k$ determines the probability that each element of the observation is nonzero.
_Note: SSM supports many other observation models for HMMs. We are in the process of creating full standalone documentation to describe them. For now, the best way to learn about SSM's other functionality is to look at the source code. The observation models are described in `observations.py`._
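To make the Gaussian case concrete, here is a minimal numpy sketch (all parameter values are made up, and the state sequence is fixed rather than sampled) of drawing observations given a state sequence:

```python
import numpy as np

rng = np.random.default_rng(1)

K, D, T = 3, 2, 50
mus = rng.normal(size=(K, D))          # one D-dimensional mean per discrete state
Sigmas = np.stack([np.eye(D)] * K)     # identity covariance for each state

# Assume some state sequence z (here just cycling through the states).
z = np.arange(T) % K

# Gaussian observation model: x_t ~ N(mu_{z_t}, Sigma_{z_t}).
x = np.array([rng.multivariate_normal(mus[zt], Sigmas[zt]) for zt in z])
```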
In the example below, we create an instance of the HMM with 5 discrete states and 2-dimensional observations. We store our HMM instance in a variable called `true_hmm` with this line:
`true_hmm = ssm.HMM(K, D, observations="gaussian")`
We then manually set the means for each latent state to make them farther away (this makes them easier to visualize).
`true_hmm.observations.mus = 3 * np.column_stack((np.cos(thetas), np.sin(thetas)))`
Here we are modifying the `observations` instance associated with the HMM we created above. We could also change the covariance, but for now we're leaving it with the default (identity covariance).
```
# Set the parameters of the HMM
time_bins = 200 # number of time bins
num_states = 5 # number of discrete states
obs_dim = 2 # dimensionality of observation
# Make an HMM
true_hmm = ssm.HMM(num_states, obs_dim, observations="gaussian")
# Manually tweak the means to make them farther apart
thetas = np.linspace(0, 2 * np.pi, num_states, endpoint=False)
true_hmm.observations.mus = 3 * np.column_stack((np.cos(thetas), np.sin(thetas)))
```
## 3. Sample from the HMM
We draw samples from an HMM using the `sample` method:
`true_states, obs = true_hmm.sample(time_bins)`.
This returns a tuple $(z, x)$ of the latent states and observations, respectively.
In this case, `true_states` will be an array of size $(200,)$ because it contains the discrete state $z_t$ across $200$ time-bins. `obs` will be an array of size $(200, 2)$ because it contains the observations across $200$ time bins, and each observation is two dimensional.
We have specified the number of time-steps by passing `time_bins` as the argument to the `sample` method.
In the next line, we retrieve the log-likelihood of the data we observed:
`true_ll = true_hmm.log_probability(obs)`
This gives the log-probability of the observed data under the true model. In the next section, when we fit an HMM to the data we generated, the true log-likelihood will be helpful for determining whether our fitting algorithm succeeded.
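Under the hood, this log-likelihood is computed with the forward algorithm. The sketch below is an illustrative hand-rolled version for a toy two-state problem, not SSM's implementation; all parameter values and emission likelihoods are made up.

```python
import numpy as np

def hmm_log_likelihood(pi, A, log_B):
    """Forward algorithm in log space.

    pi: (K,) initial state distribution; A: (K, K) transition matrix;
    log_B: (T, K) array of log p(x_t | z_t = k) for the observed data.
    """
    log_alpha = np.log(pi) + log_B[0]
    for t in range(1, len(log_B)):
        # log-sum-exp over the previous state, written out for clarity
        m = log_alpha.max()
        log_alpha = m + np.log(np.exp(log_alpha - m) @ A) + log_B[t]
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

# Toy check: 2 states, 3 time steps, made-up emission likelihoods.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
log_B = np.log(np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]]))
ll = hmm_log_likelihood(pi, A, log_B)
```

For such a small problem the result can be checked against a brute-force sum over all $K^T$ state sequences.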
```
# Sample some data from the HMM
true_states, obs = true_hmm.sample(time_bins)
true_ll = true_hmm.log_probability(obs)
# Plot the observation distributions
lim = .85 * abs(obs).max()
XX, YY = np.meshgrid(np.linspace(-lim, lim, 100), np.linspace(-lim, lim, 100))
data = np.column_stack((XX.ravel(), YY.ravel()))
input = np.zeros((data.shape[0], 0))
mask = np.ones_like(data, dtype=bool)
tag = None
lls = true_hmm.observations.log_likelihoods(data, input, mask, tag)
```
Below, we plot the samples obtained from the HMM, color-coded according to the underlying state. The solid curves show regions of equal probability density around each mean. The thin gray lines trace the latent variable as it transitions from one state to another.
```
plt.figure(figsize=(6, 6))
for k in range(num_states):
plt.contour(XX, YY, np.exp(lls[:,k]).reshape(XX.shape), cmap=white_to_color_cmap(colors[k]))
plt.plot(obs[true_states==k, 0], obs[true_states==k, 1], 'o', mfc=colors[k], mec='none', ms=4)
plt.plot(obs[:,0], obs[:,1], '-k', lw=1, alpha=.25)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Observation Distributions")
if save_figures:
plt.savefig("hmm_1.pdf")
```
Below, we visualize each component of the observation variable as a time series. The colors correspond to the latent state. The dotted lines represent the "true" values of the observation variable (the mean) while the solid lines are the actual observations sampled from the HMM.
```
# Plot the data and the smoothed data
lim = 1.05 * abs(obs).max()
plt.figure(figsize=(8, 6))
plt.imshow(true_states[None,:],
aspect="auto",
cmap=cmap,
vmin=0,
vmax=len(colors)-1,
extent=(0, time_bins, -lim, (obs_dim)*lim))
Ey = true_hmm.observations.mus[true_states]
for d in range(obs_dim):
plt.plot(obs[:,d] + lim * d, '-k')
plt.plot(Ey[:,d] + lim * d, ':k')
plt.xlim(0, time_bins)
plt.xlabel("time")
plt.yticks(lim * np.arange(obs_dim), ["$x_{}$".format(d+1) for d in range(obs_dim)])
plt.title("Simulated data from an HMM")
plt.tight_layout()
if save_figures:
plt.savefig("hmm_2.pdf")
```
### Exercise 3.1: Change the observation model
Try changing the observation model to Bernoulli and visualizing the sampled data. You'll need to create a new HMM object with Bernoulli observations. Then, use the `sample` method to sample from it. Visualizing the mean vectors and contours makes sense for Gaussian observations, but might not be the best way to visualize Bernoulli observations.
```
# Your code here: create an HMM with Bernoulli observations
# ---------------------------------------------------------
```
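A possible starting point, sketched with plain numpy rather than SSM so the exercise stays open-ended (all parameter values are made up, and the `observations="bernoulli"` key mentioned in the comment is an assumption about SSM's API):

```python
import numpy as np

rng = np.random.default_rng(2)
K, D, T = 3, 4, 100

# Per-state Bernoulli probabilities: ps[k, d] = p(x_d = 1 | z = k).
ps = rng.uniform(0.1, 0.9, size=(K, D))
A = np.full((K, K), 0.1) + 0.7 * np.eye(K)   # sticky transitions; rows sum to 1

# Sample a state sequence from the Markov chain.
z = np.empty(T, dtype=int)
z[0] = 0
for t in range(1, T):
    z[t] = rng.choice(K, p=A[z[t - 1]])

# Binary observations: each element is an independent coin flip.
x = (rng.uniform(size=(T, D)) < ps[z]).astype(int)

# With SSM itself this would presumably look like:
#   hmm = ssm.HMM(K, D, observations="bernoulli"); z, x = hmm.sample(T)
# A raster plot (e.g. plt.imshow(x.T)) is one sensible way to view binary data.
```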
## 4. Fit an HMM to synthetic data
This is all fine, but so far we haven't done anything that useful. It's far more interesting to learn an HMM from data. In the following cells, we'll use the synthetic data we generated above to fit an HMM from scratch. This is done in the following lines:
`hmm = ssm.HMM(num_states, obs_dim, observations="gaussian")`
`hmm_lls = hmm.fit(obs, method="em", num_iters=N_iters)`
In the first line, we create a new HMM instance called `hmm` with a Gaussian observation model, as in the previous case. Because we haven't specified anything else, the transition probabilities and observation means will be initialized randomly (in the cell below we additionally pass `init_method="kmeans"` to initialize from a k-means clustering of the data). In the next line, we use the `fit` method to learn the transition probabilities and observation means from data. We set the method to `em` (expectation-maximization) and specify the maximum number of iterations used to fit the data. The `fit` method returns a numpy array containing the log-likelihood of the data at each iteration. We then plot this and see that the EM algorithm quickly converges.
```
data = obs # Treat observations generated above as synthetic data.
N_iters = 50
hmm = ssm.HMM(num_states, obs_dim, observations="gaussian")
hmm_lls = hmm.fit(obs, method="em", num_iters=N_iters, init_method="kmeans")
plt.plot(hmm_lls, label="EM")
plt.plot([0, N_iters], true_ll * np.ones(2), ':k', label="True")
plt.xlabel("EM Iteration")
plt.ylabel("Log Probability")
plt.legend(loc="lower right")
plt.show()
```
The below cell is a bit subtle. In the first section, we sampled from the HMM and stored the resulting latent states in a variable called `true_states`.
Now, we are treating our observations from the previous section as data, and seeing whether we can infer the true state given only the observations. However, there is no guarantee that the states we learn correspond to the original states from the true HMM. In order to account for this, we need to find a permutation of the states of our new HMM so that they align with the states of the true HMM from the prior section. This is done in the following two lines:
`most_likely_states = hmm.most_likely_states(obs)`
`hmm.permute(find_permutation(true_states, most_likely_states))`
In the first line, we use the `most_likely_states` method to infer the most likely latent states given the observations. In the second line, we call the `find_permutation` function to find the permutation that best matches the true states. We then use the `permute` method on our `hmm` instance to permute its states accordingly.
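Conceptually, this matching step finds the relabeling of inferred states that maximizes agreement with the true sequence. The brute-force sketch below illustrates the idea for small $K$ (SSM's `find_permutation` solves the same matching problem, presumably with a more efficient assignment solver):

```python
import numpy as np
from itertools import permutations

def best_permutation(true_z, inferred_z, K):
    """Return the relabeling of inferred states that best matches true_z."""
    best, best_hits = None, -1
    for perm in permutations(range(K)):
        relabeled = np.asarray(perm)[inferred_z]   # perm[i] = new label for state i
        hits = (relabeled == true_z).sum()
        if hits > best_hits:
            best, best_hits = perm, hits
    return best

# Toy example: inferred states are the true states with labels 0 and 1 swapped.
true_z = np.array([0, 0, 1, 1, 2])
inferred_z = np.array([1, 1, 0, 0, 2])
perm = best_permutation(true_z, inferred_z, K=3)   # → (1, 0, 2)
```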
```
# Find a permutation of the states that best matches the true and inferred states
most_likely_states = hmm.most_likely_states(obs)
hmm.permute(find_permutation(true_states, most_likely_states))
```
Below, we plot the inferred states ($z_{\mathrm{inferred}}$) and the true states ($z_{\mathrm{true}}$) over time. We see that the two match very closely, but not exactly. The model sometimes has difficulty inferring the state if we only observe that state for a very short time.
```
# Plot the true and inferred discrete states
hmm_z = hmm.most_likely_states(data)
plt.figure(figsize=(8, 4))
plt.subplot(211)
plt.imshow(true_states[None,:], aspect="auto", cmap=cmap, vmin=0, vmax=len(colors)-1)
plt.xlim(0, time_bins)
plt.ylabel("$z_{\\mathrm{true}}$")
plt.yticks([])
plt.subplot(212)
plt.imshow(hmm_z[None,:], aspect="auto", cmap=cmap, vmin=0, vmax=len(colors)-1)
plt.xlim(0, time_bins)
plt.ylabel("$z_{\\mathrm{inferred}}$")
plt.yticks([])
plt.xlabel("time")
plt.tight_layout()
```
An HMM can also be used to smooth data (once its parameters are learned) by computing the mean observation under the posterior distribution of latent states.
Let's say, for example, that during time steps 0 to 10 the model estimates a 0.3 probability of being in state 1, and a 0.7 probability of being in state 2, given the observations $x$.
Mathematically, that's saying we've computed the following probabilities:
$$
p(z=1 \mid X) = 0.3\\
p(z=2 \mid X) = 0.7
$$
The smoothed observations would then be $0.3 \mu_1 + 0.7 \mu_2$, where $\mu_i$ is the mean of the observations in state $i$.
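As a quick arithmetic check of that formula (the two means below are made up), the smoothed value is just a posterior-weighted average of the state means:

```python
import numpy as np

mus = np.array([[1.0, 0.0],    # mu_1
                [0.0, 2.0]])   # mu_2
posterior = np.array([0.3, 0.7])     # p(z=1 | X), p(z=2 | X)
smoothed = posterior @ mus           # 0.3 * mu_1 + 0.7 * mu_2
# smoothed == [0.3, 1.4]
```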
In the cell below, we use `hmm.smooth(obs)` to smooth the data this way. The orange and blue lines show the smoothed data, and the black lines show the original noisy observations.
```
# Use the HMM to "smooth" the data
hmm_x = hmm.smooth(obs)
plt.figure(figsize=(8, 4))
plt.plot(obs + 3 * np.arange(obs_dim), '-k', lw=2)
plt.plot(hmm_x + 3 * np.arange(obs_dim), '-', lw=2)
plt.xlim(0, time_bins)
plt.ylabel("$x$")
# plt.yticks([])
plt.xlabel("time")
```
### 4.1. Visualize the Transition Matrices
The dynamics of the hidden state in an HMM are specified by the transition probabilities $p(z_t \mid z_{t-1})$. It's standard to pack these probabilities into a stochastic matrix $A$ where $A_{ij} = p(z_t = j \mid z_{t-1} = i)$.
In SSM, we can access the transition matrix using `hmm.transitions.transition_matrix`. In the following two lines, we retrieve the transition matrices for both the true HMM and the HMM we learned from the data, and compare them visually.
```
true_transition_mat = true_hmm.transitions.transition_matrix
learned_transition_mat = hmm.transitions.transition_matrix
fig = plt.figure(figsize=(8, 4))
plt.subplot(121)
im = plt.imshow(true_transition_mat, cmap='gray')
plt.title("True Transition Matrix")
plt.subplot(122)
im = plt.imshow(learned_transition_mat, cmap='gray')
plt.title("Learned Transition Matrix")
cbar_ax = fig.add_axes([0.95, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
plt.show()
```
### Exercise 4.2: Distribution of State Durations
Derive the theoretical distribution over state durations. Do the state durations we observe ($z_{\mathrm{true}}$ in section 4) match the theory? If you're stuck, imagine that the system starts in state $1$, i.e., $z_1 = 1$. What's the probability that $z_2 = 1$? From here, you might be able to work forwards in time.
When done, check if your derivation matches what we find in the section below.
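(Partial spoiler for the exercise above.) Because each self-transition happens independently with probability $A_{kk}$, the duration spent in state $k$ is geometrically distributed: $p(d) = A_{kk}^{\,d-1}(1 - A_{kk})$, with mean $1/(1 - A_{kk})$. A quick simulation check with numpy (the self-transition probability is made up):

```python
import numpy as np

rng = np.random.default_rng(3)
p_stay = 0.9
n = 100_000

# numpy's geometric(p) counts trials until the first "leave" event,
# which is exactly the state duration under self-transition prob. p_stay.
durations = rng.geometric(1 - p_stay, size=n)

# The empirical mean should approach 1 / (1 - p_stay) = 10.
emp_mean = durations.mean()
```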
### 4.3: Visualize State Durations
```
true_state_list, true_durations = ssm.util.rle(true_states)
inferred_state_list, inferred_durations = ssm.util.rle(hmm_z)
# Rearrange the lists of durations to be a nested list where
# the nth inner list is a list of durations for state n
true_durs_stacked = []
inf_durs_stacked = []
for s in range(num_states):
true_durs_stacked.append(true_durations[true_state_list == s])
inf_durs_stacked.append(inferred_durations[inferred_state_list == s])
fig = plt.figure(figsize=(8, 4))
plt.hist(true_durs_stacked, label=['state ' + str(s) for s in range(num_states)])
plt.xlabel('Duration')
plt.ylabel('Frequency')
plt.legend()
plt.title('Histogram of True State Durations')
fig = plt.figure(figsize=(8, 4))
plt.hist(inf_durs_stacked, label=['state ' + str(s) for s in range(num_states)])
plt.xlabel('Duration')
plt.ylabel('Frequency')
plt.legend()
plt.title('Histogram of Inferred State Durations')
plt.show()
```
### Exercise 4.4: Fit an HMM using more data
We see that the above histograms do not match each other as closely as we might expect. They also don't match the theoretical distribution of durations all that closely (see Exercise 4.2). Part of the reason for this is that we have sampled a relatively small number of time steps.
Try modifying the `time_bins` variable to sample for more time-steps (say 2000 or so).
Then, re-run the analysis above. Because of the larger time frame, some of the plots above may become hard to read, but the histogram of durations should more closely match what we expect.
### Exercise 4.5: Mismatched Observations
Imagine a scenario where the true data comes from an HMM with Student's T observations, but you fit an HMM with Gaussian observations. What might you expect to happen?
You can try simulating this: modify the code in Section 2 so that we create an HMM with Student's T observations. Then re-run the cells in Section 4, which will fit an HMM with Gaussian observations to the observed data. What do you see?
---
_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._
---
# Merging Dataframes
```
import pandas as pd
df = pd.DataFrame([{'Name': 'Chris', 'Item Purchased': 'Sponge', 'Cost': 22.50},
{'Name': 'Kevyn', 'Item Purchased': 'Kitty Litter', 'Cost': 2.50},
{'Name': 'Filip', 'Item Purchased': 'Spoon', 'Cost': 5.00}],
index=['Store 1', 'Store 1', 'Store 2'])
df
df['Date'] = ['December 1', 'January 1', 'mid-May']
df
df['Delivered'] = True
df
df['Feedback'] = ['Positive', None, 'Negative']
df
adf = df.reset_index()
adf['Date'] = pd.Series({0: 'December 1', 2: 'mid-May'})
adf
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR'},
{'Name': 'Sally', 'Role': 'Course liasion'},
{'Name': 'James', 'Role': 'Grader'}])
staff_df = staff_df.set_index('Name')
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business'},
{'Name': 'Mike', 'School': 'Law'},
{'Name': 'Sally', 'School': 'Engineering'}])
student_df = student_df.set_index('Name')
print(staff_df.head())
print()
print(student_df.head())
pd.merge(staff_df, student_df, how='outer', left_index=True, right_index=True)
pd.merge(staff_df, student_df, how='inner', left_index=True, right_index=True)
pd.merge(staff_df, student_df, how='left', left_index=True, right_index=True)
pd.merge(staff_df, student_df, how='right', left_index=True, right_index=True)
staff_df = staff_df.reset_index()
student_df = student_df.reset_index()
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR', 'Location': 'State Street'},
{'Name': 'Sally', 'Role': 'Course liasion', 'Location': 'Washington Avenue'},
{'Name': 'James', 'Role': 'Grader', 'Location': 'Washington Avenue'}])
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business', 'Location': '1024 Billiard Avenue'},
{'Name': 'Mike', 'School': 'Law', 'Location': 'Fraternity House #22'},
{'Name': 'Sally', 'School': 'Engineering', 'Location': '512 Wilson Crescent'}])
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
staff_df = pd.DataFrame([{'First Name': 'Kelly', 'Last Name': 'Desjardins', 'Role': 'Director of HR'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'Role': 'Course liasion'},
{'First Name': 'James', 'Last Name': 'Wilde', 'Role': 'Grader'}])
student_df = pd.DataFrame([{'First Name': 'James', 'Last Name': 'Hammond', 'School': 'Business'},
{'First Name': 'Mike', 'Last Name': 'Smith', 'School': 'Law'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'School': 'Engineering'}])
staff_df
student_df
pd.merge(staff_df, student_df, how='inner', left_on=['First Name','Last Name'], right_on=['First Name','Last Name'])
```
# Idiomatic Pandas: Making Code Pandorable
```
import pandas as pd
df = pd.read_csv('census.csv')
df
(df.where(df['SUMLEV']==50)
.dropna()
.set_index(['STNAME','CTYNAME'])
.rename(columns={'ESTIMATESBASE2010': 'Estimates Base 2010'}))
df = df[df['SUMLEV']==50]
df.set_index(['STNAME','CTYNAME'], inplace=True)
df.rename(columns={'ESTIMATESBASE2010': 'Estimates Base 2010'})
import numpy as np
def min_max(row):
data = row[['POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']]
return pd.Series({'min': np.min(data), 'max': np.max(data)})
df.apply(min_max, axis=1)
import numpy as np
def min_max(row):
data = row[['POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']]
row['max'] = np.max(data)
row['min'] = np.min(data)
return row
df.apply(min_max, axis=1)
rows = ['POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']
df.apply(lambda x: np.max(x[rows]), axis=1)
```
# Group by
```
import pandas as pd
import numpy as np
df = pd.read_csv('census.csv')
df = df[df['SUMLEV']==50]
df
%%timeit -n 10
for state in df['STNAME'].unique():
avg = np.average(df.where(df['STNAME']==state).dropna()['CENSUS2010POP'])
print('Counties in state ' + state + ' have an average population of ' + str(avg))
%%timeit -n 10
for group, frame in df.groupby('STNAME'):
avg = np.average(frame['CENSUS2010POP'])
print('Counties in state ' + group + ' have an average population of ' + str(avg))
df.head()
df = df.set_index('STNAME')
def fun(item):
if item[0]<'M':
return 0
if item[0]<'Q':
return 1
return 2
for group, frame in df.groupby(fun):
print('There are ' + str(len(frame)) + ' records in group ' + str(group) + ' for processing.')
df = pd.read_csv('census.csv')
df = df[df['SUMLEV']==50]
df.groupby('STNAME').agg({'CENSUS2010POP': np.average})
print(type(df.groupby(level=0)['POPESTIMATE2010','POPESTIMATE2011']))
print(type(df.groupby(level=0)['POPESTIMATE2010']))
(df.set_index('STNAME').groupby(level=0)['CENSUS2010POP']
.agg({'avg': np.average, 'sum': np.sum}))
(df.set_index('STNAME').groupby(level=0)['POPESTIMATE2010','POPESTIMATE2011']
.agg({'avg': np.average, 'sum': np.sum}))
(df.set_index('STNAME').groupby(level=0)['POPESTIMATE2010','POPESTIMATE2011']
.agg({'POPESTIMATE2010': np.average, 'POPESTIMATE2011': np.sum}))
```
# Scales
```
df = pd.DataFrame(['A+', 'A', 'A-', 'B+', 'B', 'B-', 'C+', 'C', 'C-', 'D+', 'D'],
index=['excellent', 'excellent', 'excellent', 'good', 'good', 'good', 'ok', 'ok', 'ok', 'poor', 'poor'])
df.rename(columns={0: 'Grades'}, inplace=True)
df
df['Grades'].astype('category').head()
grades = df['Grades'].astype('category',
categories=['D', 'D+', 'C-', 'C', 'C+', 'B-', 'B', 'B+', 'A-', 'A', 'A+'],
ordered=True)
grades.head()
grades > 'C'
df = pd.read_csv('census.csv')
df = df[df['SUMLEV']==50]
df = df.set_index('STNAME').groupby(level=0)['CENSUS2010POP'].agg({'avg': np.average})
pd.cut(df['avg'],10)
```
# Pivot Tables
```
#http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64
df = pd.read_csv('cars.csv')
df.head()
df.pivot_table(values='(kW)', index='YEAR', columns='Make', aggfunc=np.mean)
df.pivot_table(values='(kW)', index='YEAR', columns='Make', aggfunc=[np.mean,np.min], margins=True)
```
# Date Functionality in Pandas
```
import pandas as pd
import numpy as np
```
### Timestamp
```
pd.Timestamp('9/1/2016 10:05AM')
```
### Period
```
pd.Period('1/2016')
pd.Period('3/5/2016')
```
### DatetimeIndex
```
t1 = pd.Series(list('abc'), [pd.Timestamp('2016-09-01'), pd.Timestamp('2016-09-02'), pd.Timestamp('2016-09-03')])
t1
type(t1.index)
```
### PeriodIndex
```
t2 = pd.Series(list('def'), [pd.Period('2016-09'), pd.Period('2016-10'), pd.Period('2016-11')])
t2
type(t2.index)
```
### Converting to Datetime
```
d1 = ['2 June 2013', 'Aug 29, 2014', '2015-06-26', '7/12/16']
ts3 = pd.DataFrame(np.random.randint(10, 100, (4,2)), index=d1, columns=list('ab'))
ts3
ts3.index = pd.to_datetime(ts3.index)
ts3
pd.to_datetime('4.7.12', dayfirst=True)
```
### Timedeltas
```
pd.Timestamp('9/3/2016')-pd.Timestamp('9/1/2016')
pd.Timestamp('9/2/2016 8:10AM') + pd.Timedelta('12D 3H')
```
### Working with Dates in a Dataframe
```
dates = pd.date_range('10-01-2016', periods=9, freq='2W-SUN')
dates
df = pd.DataFrame({'Count 1': 100 + np.random.randint(-5, 10, 9).cumsum(),
'Count 2': 120 + np.random.randint(-5, 10, 9)}, index=dates)
df
df.index.weekday_name
df.diff()
df.resample('M').mean()
df['2017']
df['2016-12']
df['2016-12':]
df.asfreq('W', method='ffill')
import matplotlib.pyplot as plt
%matplotlib inline
df.plot()
```
```
%matplotlib inline
```
# Visualizing Part-of-Speech Tagging with Yellowbrick
This notebook is a sample of the text visualizations that yellowbrick provides, in particular a feature that enables visual part-of-speech tagging, which can be used to make decisions about text normalization and vectorization.
```
import os
import sys
# Modify the path
sys.path.append("..")
import yellowbrick as yb
```
## First let's grab a few documents to experiment with
```
texts = {
'nursery_rhyme' : '''Baa, baa, black sheep,
Have you any wool?
Yes, sir, yes, sir,
Three bags full;
One for the master,
And one for the dame,
And one for the little boy
Who lives down the lane.''',
'algebra' : '''Algebra (from Arabic "al-jabr" meaning
"reunion of broken parts") is one of the
broad parts of mathematics, together with
number theory, geometry and analysis. In
its most general form, algebra is the study
of mathematical symbols and the rules for
manipulating these symbols; it is a unifying
thread of almost all of mathematics.''',
'french_silk' : '''In a small saucepan, combine sugar and eggs
until well blended. Cook over low heat, stirring
constantly, until mixture reaches 160° and coats
the back of a metal spoon. Remove from the heat.
Stir in chocolate and vanilla until smooth. Cool
to lukewarm (90°), stirring occasionally. In a small
bowl, cream butter until light and fluffy. Add cooled
chocolate mixture; beat on high speed for 5 minutes
or until light and fluffy. In another large bowl,
beat cream until it begins to thicken. Add
confectioners' sugar; beat until stiff peaks form.
Fold into chocolate mixture. Pour into crust. Chill
for at least 6 hours before serving. Garnish with
whipped cream and chocolate curls if desired. '''
}
##########################################################################
# Imports
##########################################################################
from yellowbrick.text.base import TextVisualizer
##########################################################################
# PosTagVisualizer
##########################################################################
class PosTagVisualizer(TextVisualizer):
"""
A part-of-speech tag visualizer colorizes text to enable
the user to visualize the proportions of nouns, verbs, etc.
and to use this information to make decisions about text
normalization (e.g. stemming vs lemmatization) and
vectorization.
Parameters
----------
ax : matplotlib Axes, default: None
The axes to draw the visualization on.
cmap : dict
ANSI colormap
kwargs : dict
Pass any additional keyword arguments to the super class.
These parameters can be influenced later on in the visualization
process, but can and should be set as early as possible.
"""
def __init__(self, ax=None, **kwargs):
"""
Initializes the base frequency distributions with many
of the options required in order to make this
visualization work.
"""
super(PosTagVisualizer, self).__init__(ax=ax, **kwargs)
# TODO: hard-coding in the ANSI colormap for now.
# Can we let the user reset the colors here?
self.COLORS = {
'white' : "\033[0;37m{}\033[0m",
'yellow' : "\033[0;33m{}\033[0m",
'green' : "\033[0;32m{}\033[0m",
'blue' : "\033[0;34m{}\033[0m",
'cyan' : "\033[0;36m{}\033[0m",
'red' : "\033[0;31m{}\033[0m",
'magenta' : "\033[0;35m{}\033[0m",
'black' : "\033[0;30m{}\033[0m",
'darkwhite' : "\033[1;37m{}\033[0m",
'darkyellow' : "\033[1;33m{}\033[0m",
'darkgreen' : "\033[1;32m{}\033[0m",
'darkblue' : "\033[1;34m{}\033[0m",
'darkcyan' : "\033[1;36m{}\033[0m",
'darkred' : "\033[1;31m{}\033[0m",
'darkmagenta': "\033[1;35m{}\033[0m",
'darkblack' : "\033[1;30m{}\033[0m",
None : "\033[0;0m{}\033[0m"
}
self.TAGS = {
'NN' : 'green',
'NNS' : 'green',
'NNP' : 'green',
'NNPS' : 'green',
'VB' : 'blue',
'VBD' : 'blue',
'VBG' : 'blue',
'VBN' : 'blue',
'VBP' : 'blue',
'VBZ' : 'blue',
'JJ' : 'red',
'JJR' : 'red',
'JJS' : 'red',
'RB' : 'cyan',
'RBR' : 'cyan',
'RBS' : 'cyan',
'IN' : 'darkwhite',
'POS' : 'darkyellow',
'PRP' : 'magenta',
'PRP$' : 'magenta',
'DT' : 'black',
'CC' : 'black',
'CD' : 'black',
'WDT' : 'black',
'WP' : 'black',
'WP$' : 'black',
'WRB' : 'black',
'EX' : 'yellow',
'FW' : 'yellow',
'LS' : 'yellow',
'MD' : 'yellow',
'PDT' : 'yellow',
'RP' : 'yellow',
'SYM' : 'yellow',
'TO' : 'yellow',
'None' : 'off'
}
def colorize(self, token, color):
"""
Colorize text
Parameters
----------
token : str
A str representation of the token to colorize
color : str
The name of the ANSI color to apply to the token
"""
return self.COLORS[color].format(token)
def transform(self, tagged_tuples):
"""
The transform method transforms the raw text input for the
part-of-speech tagging visualization. It requires that
documents be in the form of (token, tag) tuples, as produced by `nltk.pos_tag`.
Parameters
----------
tagged_tuples : list of tuples
A list of (token, tag) tuples
Text documents must be tokenized and tagged before passing to transform()
"""
self.tagged = [
(self.TAGS.get(tag),tok) for tok, tag in tagged_tuples
]
from nltk.corpus import wordnet as wn
from nltk import pos_tag, word_tokenize
# Tokenize the text
for label,text in texts.items():
tokens = word_tokenize(text)
tagged = pos_tag(tokens)
visualizer = PosTagVisualizer()
visualizer.transform(tagged)
print(' '.join((visualizer.colorize(token, color) for color, token in visualizer.tagged)))
print('\n')
```
| github_jupyter |
##### Copyright 2021 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Feature Engineering using TFX Pipeline and TensorFlow Transform
***Transform input data and train a model with a TFX pipeline.***
Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click "Run in Google Colab".
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/penguin_tft">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/penguin_tft.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/penguin_tft.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/penguin_tft.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
</table></div>
In this notebook-based tutorial, we will create and run a TFX pipeline
to ingest raw input data and preprocess it appropriately for ML training.
This notebook is based on the TFX pipeline we built in
[Data validation using TFX Pipeline and TensorFlow Data Validation Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_tfdv).
If you have not read that one yet, you should read it before proceeding with
this notebook.
You can increase the predictive quality of your data and/or reduce
dimensionality with feature engineering. One of the benefits of using TFX is
that you will write your transformation code once, and the resulting transforms
will be consistent between training and serving in
order to avoid training/serving skew.
We will add a `Transform` component to the pipeline. The Transform component is
implemented using the
[tf.transform](https://www.tensorflow.org/tfx/transform/get_started) library.
Please see
[Understanding TFX Pipelines](https://www.tensorflow.org/tfx/guide/understanding_tfx_pipelines)
to learn more about various concepts in TFX.
## Set Up
We first need to install the TFX Python package and download
the dataset which we will use for our model.
### Upgrade Pip
To avoid upgrading Pip in a system when running locally,
check to make sure that we are running in Colab.
Local systems can of course be upgraded separately.
```
try:
import colab
!pip install --upgrade pip
except:
pass
```
### Install TFX
```
!pip install -U tfx
```
### Did you restart the runtime?
If you are using Google Colab, the first time that you run
the cell above, you must restart the runtime by clicking
the "RESTART RUNTIME" button above or using the "Runtime >
Restart runtime ..." menu. This is because of the way that
Colab loads packages.
Check the TensorFlow and TFX versions.
```
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
```
### Set up variables
There are some variables used to define a pipeline. You can customize these
variables as you want. By default all output from the pipeline will be
generated under the current directory.
```
import os
PIPELINE_NAME = "penguin-transform"
# Output directory to store artifacts generated from the pipeline.
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where created models from the pipeline will be exported.
SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)
from absl import logging
logging.set_verbosity(logging.INFO) # Set default logging level.
```
### Prepare example data
We will download the example dataset for use in our TFX pipeline. The dataset
we are using is
[Palmer Penguins dataset](https://allisonhorst.github.io/palmerpenguins/articles/intro.html).
However, unlike previous tutorials which used an already preprocessed dataset,
we will use the **raw** Palmer Penguins dataset.
Because the TFX ExampleGen component reads inputs from a directory, we need
to create a directory and copy the dataset to it.
```
import urllib.request
import tempfile
DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory.
_data_path = 'https://storage.googleapis.com/download.tensorflow.org/data/palmer_penguins/penguins_size.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlretrieve(_data_path, _data_filepath)
```
Take a quick look at what the raw data looks like.
```
!head {_data_filepath}
```
There are some entries with missing values which are represented as `NA`.
We will just delete those entries in this tutorial.
```
!sed -i '/\bNA\b/d' {_data_filepath}
!head {_data_filepath}
```
You should be able to see seven features which describe penguins. We will use
the same set of features as the previous tutorials - 'culmen_length_mm',
'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g' - and will predict
the 'species' of a penguin.
**The only difference will be that the input data is not preprocessed.** Note
that we will not use other features like 'island' or 'sex' in this tutorial.
### Prepare a schema file
As described in
[Data validation using TFX Pipeline and TensorFlow Data Validation Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_tfdv),
we need a schema file for the dataset. Because the dataset is different from the one in the previous tutorial, we would need to generate the schema again. In this tutorial, we will skip those steps and just use a prepared schema file.
```
import shutil
SCHEMA_PATH = 'schema'
_schema_uri = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/schema/raw/schema.pbtxt'
_schema_filename = 'schema.pbtxt'
_schema_filepath = os.path.join(SCHEMA_PATH, _schema_filename)
os.makedirs(SCHEMA_PATH, exist_ok=True)
urllib.request.urlretrieve(_schema_uri, _schema_filepath)
```
This schema file was created with the same pipeline as in the previous tutorial
without any manual changes.
## Create a pipeline
TFX pipelines are defined using Python APIs. We will add a `Transform`
component to the pipeline we created in the
[Data Validation tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_tfdv).
A Transform component requires input data from an `ExampleGen` component and
a schema from a `SchemaGen` component, and produces a "transform graph". The
output will be used in a `Trainer` component. Transform can optionally
produce "transformed data" in addition, which is the materialized data after
transformation.
However, we will transform data during training in this tutorial without
materialization of the intermediate transformed data.
One thing to note is that we need to define a Python function,
`preprocessing_fn`, to describe how input data should be transformed. This is
similar to the Trainer component, which also requires user code for model
definition.
### Write preprocessing and training code
We need to define two Python functions: one for Transform and one for Trainer.
#### preprocessing_fn
The Transform component will find a function named `preprocessing_fn` in the
given module file, as we did for the `Trainer` component. You can also specify a
specific function using the
[`preprocessing_fn` parameter](https://github.com/tensorflow/tfx/blob/142de6e887f26f4101ded7925f60d7d4fe9d42ed/tfx/components/transform/component.py#L113)
of the Transform component.
In this example, we will do two kinds of transformation. For continuous numeric
features like `culmen_length_mm` and `body_mass_g`, we will normalize these
values using the
[tft.scale_to_z_score](https://www.tensorflow.org/tfx/transform/api_docs/python/tft/scale_to_z_score)
function. For the label feature, we need to convert string labels into numeric
index values. We will use
[`tf.lookup.StaticHashTable`](https://www.tensorflow.org/api_docs/python/tf/lookup/StaticHashTable)
for conversion.
To identify transformed fields easily, we append a `_xf` suffix to the
transformed feature names.
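As background, `tft.scale_to_z_score` standardizes a feature to zero mean and unit variance, computed over the full dataset. A minimal pure-Python sketch of that computation (illustrative only, not TFT's implementation; the `body_mass_g` values below are made up):

```python
import statistics

def scale_to_z_score(values):
    """Standardize values to zero mean and unit variance (population statistics)."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return [(v - mean) / std for v in values]

body_mass_g = [3750.0, 3800.0, 4675.0, 5400.0]  # hypothetical raw values
scaled = scale_to_z_score(body_mass_g)
print(scaled)
```

The key difference in TFT is that the mean and variance are computed in a full pass over the training data and baked into the transform graph, so serving uses exactly the same statistics.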
#### run_fn
The model itself is almost the same as in the previous tutorials, but this time
we will transform the input data using the transform graph from the Transform
component.
One more important difference compared to the previous tutorial is that we now
export a model for serving which includes not only the computation graph of the
model, but also the transform graph for preprocessing, which is generated by the
Transform component. We need to define a separate function which will be used
to serve incoming requests. Note that the same function
`_apply_preprocessing` is used for both the training data and
serving requests.
```
_module_file = 'penguin_utils.py'
%%writefile {_module_file}
from typing import List, Text
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_metadata.proto.v0 import schema_pb2
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
# Specify features that we will use.
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# NEW: Transformed features will have '_xf' suffix.
def _transformed_name(key):
return key + '_xf'
# NEW: TFX Transform will call this function.
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature.
"""
outputs = {}
# Uses features defined in _FEATURE_KEYS only.
for key in _FEATURE_KEYS:
# tft.scale_to_z_score computes the mean and variance of the given feature
# and scales the output based on the result.
outputs[_transformed_name(key)] = tft.scale_to_z_score(inputs[key])
# For the label column we provide the mapping from string to index.
# We could instead use `tft.compute_and_apply_vocabulary()` in order to
# compute the vocabulary dynamically and perform a lookup.
# Since in this example there are only 3 possible values, we use a hard-coded
# table for simplicity.
table_keys = ['Adelie', 'Chinstrap', 'Gentoo']
initializer = tf.lookup.KeyValueTensorInitializer(
keys=table_keys,
values=tf.cast(tf.range(len(table_keys)), tf.int64),
key_dtype=tf.string,
value_dtype=tf.int64)
table = tf.lookup.StaticHashTable(initializer, default_value=-1)
outputs[_transformed_name(_LABEL_KEY)] = table.lookup(inputs[_LABEL_KEY])
return outputs
# NEW: This function will apply the same transform operation to training data
# and serving requests.
def _apply_preprocessing(raw_features, tft_layer):
transformed_features = tft_layer(raw_features)
if _LABEL_KEY in raw_features:
transformed_label = transformed_features.pop(_transformed_name(_LABEL_KEY))
return transformed_features, transformed_label
else:
return transformed_features, None
# NEW: This function creates a handler function which takes a serialized
# tf.Example, preprocesses it, and runs inference with it.
def _get_serve_tf_examples_fn(model, tf_transform_output):
# We must save the tft_layer to the model to ensure its assets are kept and
# tracked.
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def serve_tf_examples_fn(serialized_tf_examples):
# Expected input is a string which is serialized tf.Example format.
feature_spec = tf_transform_output.raw_feature_spec()
# Because input schema includes unnecessary fields like 'species' and
# 'island', we filter feature_spec to include required keys only.
required_feature_spec = {
k: v for k, v in feature_spec.items() if k in _FEATURE_KEYS
}
parsed_features = tf.io.parse_example(serialized_tf_examples,
required_feature_spec)
# Preprocess parsed input with transform operation defined in
# preprocessing_fn().
transformed_features, _ = _apply_preprocessing(parsed_features,
model.tft_layer)
# Run inference with ML model.
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
tf_transform_output: A TFTransformOutput.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
dataset = data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(batch_size=batch_size),
schema=tf_transform_output.raw_metadata.schema)
transform_layer = tf_transform_output.transform_features_layer()
def apply_transform(raw_features):
return _apply_preprocessing(raw_features, transform_layer)
return dataset.map(apply_transform).repeat()
def _build_keras_model() -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [
keras.layers.Input(shape=(1,), name=_transformed_name(f))
for f in _FEATURE_KEYS
]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
tf_transform_output,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
tf_transform_output,
batch_size=_EVAL_BATCH_SIZE)
model = _build_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# NEW: Save a computation graph including transform layer.
signatures = {
'serving_default': _get_serve_tf_examples_fn(model, tf_transform_output),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
```
Now you have completed all of the preparation steps to build a TFX pipeline.
### Write a pipeline definition
We define a function to create a TFX pipeline. A `Pipeline` object
represents a TFX pipeline, which can be run using one of the pipeline
orchestration systems that TFX supports.
```
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
schema_path: str, module_file: str, serving_model_dir: str,
metadata_path: str) -> tfx.dsl.Pipeline:
"""Implements the penguin pipeline with TFX."""
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Computes statistics over data for visualization and example validation.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# Import the schema.
schema_importer = tfx.dsl.Importer(
source_uri=schema_path,
artifact_type=tfx.types.standard_artifacts.Schema).with_id(
'schema_importer')
# Performs anomaly detection based on statistics and data schema.
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_importer.outputs['result'])
# NEW: Transforms input data using preprocessing_fn in the 'module_file'.
transform = tfx.components.Transform(
examples=example_gen.outputs['examples'],
schema=schema_importer.outputs['result'],
materialize=False,
module_file=module_file)
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
# NEW: Pass transform_graph to the trainer.
transform_graph=transform.outputs['transform_graph'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
statistics_gen,
schema_importer,
example_validator,
transform, # NEW: Transform component was added to the pipeline.
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
```
## Run the pipeline
We will use `LocalDagRunner` as in the previous tutorial.
```
tfx.orchestration.LocalDagRunner().run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
schema_path=SCHEMA_PATH,
module_file=_module_file,
serving_model_dir=SERVING_MODEL_DIR,
metadata_path=METADATA_PATH))
```
You should see "INFO:absl:Component Pusher is finished." if the pipeline
finished successfully.
The pusher component pushes the trained model to the `SERVING_MODEL_DIR` which
is the `serving_model/penguin-transform` directory if you did not change
the variables in the previous steps. You can see the result from the file
browser in the left-side panel in Colab, or using the following command:
```
# List files in created model directory.
!find {SERVING_MODEL_DIR}
```
You can also check the signature of the generated model using the
[`saved_model_cli` tool](https://www.tensorflow.org/guide/saved_model#show_command).
```
!saved_model_cli show --dir {SERVING_MODEL_DIR}/$(ls -1 {SERVING_MODEL_DIR} | sort -nr | head -1) --tag_set serve --signature_def serving_default
```
Because we defined `serving_default` with our own `serve_tf_examples_fn`
function, the signature shows that it takes a single string.
This string is a serialized tf.Example which will be parsed with the
[tf.io.parse_example()](https://www.tensorflow.org/api_docs/python/tf/io/parse_example)
function as we defined earlier (learn more about tf.Example [here](https://www.tensorflow.org/tutorials/load_data/tfrecord)).
We can load the exported model and try some inferences with a few examples.
```
# Find a model with the latest timestamp.
model_dirs = (item for item in os.scandir(SERVING_MODEL_DIR) if item.is_dir())
model_path = max(model_dirs, key=lambda i: int(i.name)).path
loaded_model = tf.keras.models.load_model(model_path)
inference_fn = loaded_model.signatures['serving_default']
# Prepare an example and run inference.
features = {
'culmen_length_mm': tf.train.Feature(float_list=tf.train.FloatList(value=[49.9])),
'culmen_depth_mm': tf.train.Feature(float_list=tf.train.FloatList(value=[16.1])),
'flipper_length_mm': tf.train.Feature(int64_list=tf.train.Int64List(value=[213])),
'body_mass_g': tf.train.Feature(int64_list=tf.train.Int64List(value=[5400])),
}
example_proto = tf.train.Example(features=tf.train.Features(feature=features))
examples = example_proto.SerializeToString()
result = inference_fn(examples=tf.constant([examples]))
print(result['output_0'].numpy())
```
The third element, which corresponds to the 'Gentoo' species, is expected to be
the largest of the three.
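The model outputs raw logits (we compiled it with `from_logits=True`), so to read the printed vector as class probabilities you can apply a softmax. A small sketch using hypothetical logits (actual values will differ between runs):

```python
import math

def softmax(logits):
    """Convert raw logits into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [-1.2, 0.3, 4.5]  # hypothetical model output for one example
probs = softmax(logits)
predicted = max(range(len(probs)), key=lambda i: probs[i])
print(probs, predicted)    # index 2 corresponds to 'Gentoo'
```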
## Next steps
If you want to learn more about the Transform component, see the
[Transform Component guide](https://www.tensorflow.org/tfx/guide/transform).
You can find more resources on https://www.tensorflow.org/tfx/tutorials.
Please see
[Understanding TFX Pipelines](https://www.tensorflow.org/tfx/guide/understanding_tfx_pipelines)
to learn more about various concepts in TFX.
```
# for use in tutorial and development; do not include this `sys.path` change in production:
import sys ; sys.path.insert(0, "../")
```
# Vector embedding with `gensim`
Let's use deep learning, via a technique called *embedding*, to analyze the relatedness of the labels used for recipe ingredients.
Among the most closely related ingredients:
* Some are very close synonyms and should be consolidated to improve data quality
* Others are ingredients that frequently pair with the query, which is useful for recommendations
On the one hand, this approach is quite helpful for analyzing the NLP annotations that go into a knowledge graph.
On the other hand it can be used along with [`SKOS`](https://www.w3.org/2004/02/skos/) or similar vocabularies for ontology-based discovery within the graph, e.g., for advanced search UI.
## Curating annotations
We'll be working with the labels for ingredients that go into our KG.
Looking at the raw data, there are many cases where slightly different spellings are being used for the same entity.
As a first step let's define a list of synonyms to substitute, prior to running the vector embedding.
This will help produce better quality results.
Note that this kind of work comes under the general heading of *curating annotations*, which is where we spend so much of our time in KG work.
It's similar to how *data preparation* is ~80% of the workload for data science teams, and for good reason.
```
SYNONYMS = {
"pepper": "black pepper",
"black pepper": "black pepper",
"egg": "egg",
"eggs": "egg",
"vanilla": "vanilla",
"vanilla extract": "vanilla",
"flour": "flour",
"all-purpose flour": "flour",
"onions": "onion",
"onion": "onion",
"carrots": "carrot",
"carrot": "carrot",
"potatoes": "potato",
"potato": "potato",
"tomatoes": "tomato",
"fresh tomatoes": "tomato",
"fresh tomato": "tomato",
"garlic": "garlic",
"garlic clove": "garlic",
"garlic cloves": "garlic",
}
```
## Analyze ingredient labels from 250K recipes
```
import csv
MAX_ROW = 250000 # 231638
max_context = 0
min_context = 1000
recipes = []
vocab = set()
with open("../dat/all_ind.csv", "r") as f:
reader = csv.reader(f)
next(reader, None) # remove file header
for i, row in enumerate(reader):
id = row[0]
ind_set = set()
# substitute synonyms
for ind in set(eval(row[3])):
if ind in SYNONYMS:
ind_set.add(SYNONYMS[ind])
else:
ind_set.add(ind)
if len(ind_set) > 1:
recipes.append([id, ind_set])
vocab.update(ind_set)
max_context = max(max_context, len(ind_set))
min_context = min(min_context, len(ind_set))
if i > MAX_ROW:
break
print("max context: {} unique ingredients per recipe".format(max_context))
print("min context: {} unique ingredients per recipe".format(min_context))
print("vocab size", len(list(vocab)))
```
Since we've performed this data preparation work, let's use `pickle` to save this larger superset of the recipes dataset to the `tmp.pkl` file:
```
import pickle
pickle.dump(recipes, open("tmp.pkl", "wb"))
recipes[:3]
```
Then we can restore the pickled Python data structure for use later in other use cases.
The output shows the first few entries, to illustrate the format.
Now reshape this data into a vector of vectors of ingredients per recipe, to use for training a [*word2vec*](https://arxiv.org/abs/1301.3781) vector embedding model:
```
vectors = [
[
ind
for ind in ind_set
]
for id, ind_set in recipes
]
vectors[:3]
```
We'll use the [`Word2Vec`](https://radimrehurek.com/gensim/models/word2vec.html) implementation in the `gensim` library (i.e., *deep learning*) to train an embedding model.
This approach tends to work best if the training data has at least 100K rows.
Let's also show how to serialize the *word2vec* results, saving them to the `tmp.w2v` file so they could be restored later for other use cases.
NB: there is work in progress which will replace `gensim` with `pytorch` instead.
```
import gensim
MIN_COUNT = 2
model_path = "tmp.w2v"
model = gensim.models.Word2Vec(vectors, min_count=MIN_COUNT, window=max_context)
model.save(model_path)
```
The `get_related()` function takes any ingredient as input, using the embedding model to find the most similar other ingredients – along with calculating [`levenshtein`](https://github.com/toastdriven/pylev) edit distances (string similarity) among these labels. Then it calculates *percentiles* for both metrics in [`numpy`](https://numpy.org/) and returns the results as a [`pandas`](https://pandas.pydata.org/) DataFrame.
```
import numpy as np
import pandas as pd
import pylev
def term_ratio (target, description):
d_set = set(description.split(" "))
num_inter = len(d_set.intersection(target))
return num_inter / float(len(target))
def get_related (model, query, target, n=20, granularity=100):
"""return a DataFrame of the closely related items"""
try:
bins = np.linspace(0, 1, num=granularity, endpoint=True)
v = sorted(
model.wv.most_similar(positive=[query], topn=n),
key=lambda x: x[1],
reverse=True
)
df = pd.DataFrame(v, columns=["ingredient", "similarity"])
s = df["similarity"]
quantiles = s.quantile(bins, interpolation="nearest")
df["sim_pct"] = np.digitize(s, quantiles) - 1
df["levenshtein"] = [ pylev.levenshtein(d, query) / len(query) for d in df["ingredient"] ]
s = df["levenshtein"]
quantiles = s.quantile(bins, interpolation="nearest")
df["lev_pct"] = granularity - np.digitize(s, quantiles)
df["term_ratio"] = [ term_ratio(target, d) for d in df["ingredient"] ]
return df
except KeyError:
return pd.DataFrame(columns=["ingredient", "similarity", "percentile"])
```
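Under the hood, `model.wv.most_similar()` ranks candidates by cosine similarity between embedding vectors. A minimal sketch of that metric (illustrative, not gensim's implementation; the sample vectors are made up):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # ~1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # 0.0 (orthogonal)
```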
Let's try this with `dried basil` as the ingredient to query, and review the top `50` most similar other ingredients returned as the DataFrame `df`:
```
pd.set_option("display.max_rows", None)
target = set([ "basil" ])
df = get_related(model, "dried basil", target, n=50)
df
```
Note how some of the most similar items, based on vector embedding, are *synonyms* or special forms of our query `dried basil` ingredient: `dried basil leaves`, `dry basil`, `dried sweet basil leaves`, etc. These tend to rank high in terms of levenshtein distance too.
Let's plot the similarity measures:
```
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use("ggplot")
df["similarity"].plot(alpha=0.75, rot=0)
plt.show()
```
Notice the inflection points at approximately `0.56` and again at `0.47` in that plot.
We could use some statistical techniques (e.g., clustering) to segment the similarities into a few groups:
* highest similarity – potential synonyms for the query
* mid-range similarity – potential [hypernyms and hyponyms](https://en.wikipedia.org/wiki/Hyponymy_and_hypernymy) for the query
* long-tail similarity – other ingredients that pair well with the query
In this example, below a threshold of the 75th percentile for vector embedding similarity, the related ingredients are less about being synonyms and more about other foods that pair well with basil.
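A simple threshold-based sketch of that segmentation, using the inflection points noted above and hypothetical similarity scores (real scores would come from the `df["similarity"]` column):

```python
def segment_similarities(sims, hi=0.56, lo=0.47):
    """Split similarity scores into synonym / hypernym-hyponym / pairing buckets."""
    synonyms = [s for s in sims if s >= hi]
    hyper_hypo = [s for s in sims if lo <= s < hi]
    pairings = [s for s in sims if s < lo]
    return synonyms, hyper_hypo, pairings

sims = [0.91, 0.72, 0.60, 0.50, 0.48, 0.41, 0.33]  # hypothetical scores
synonyms, hyper_hypo, pairings = segment_similarities(sims)
print(synonyms, hyper_hypo, pairings)
```

A clustering method (e.g., 1-D k-means or a percentile cutoff) would pick the thresholds from the data instead of hard-coding them.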
Let's define another function `rank_related()` which ranks the related ingredients based on a combination of these two metrics.
This uses a cheap approximation of a [*pareto archive*](https://www.cs.bham.ac.uk/~jdk/multi/) for the ranking, which comes in handy for recommender systems and custom search applications that must combine multiple ranking metrics:
```
from kglab import root_mean_square
def rank_related (df):
df2 = df.copy(deep=True)
df2["related"] = df2.apply(lambda row: root_mean_square([ row[2], row[4] ]), axis=1)
return df2.sort_values(by=["related"], ascending=False)
df = rank_related(df)
df
```
Notice how the "synonym" cases tend to move up to the top now,
while the "pairs well with" cases sit in the lower half of the ranked list: `fresh mushrooms`, `italian turkey sausage`, `cooked spaghetti`, `white kidney beans`, etc.
```
df.loc[ (df["related"] >= 50) & (df["term_ratio"] > 0) ]
```
---
## Exercises
**Exercise 1:**
Build a report for a *human-in-the-loop* reviewer, using the `rank_related()` function while iterating over `vocab` to make algorithmic suggestions for possible synonyms.
**Exercise 2:**
How would you make algorithmic suggestions for a reviewer about which ingredients could be related to a query, e.g., using the `skos:broader` and `skos:narrower` relations in the [`skos`](https://www.w3.org/2004/02/skos/) vocabulary to represent *hypernyms* and *hyponyms* respectively?
This could extend the KG to provide a kind of thesaurus about recipe ingredients.
```
import nltk
```
# 1. Sentence Segmentation
```
sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
paragraph = "The first time I heard that song was in Hawaii on radio. I was just a kid, and loved it very much! What a fantastic song!"
sentences = sent_tokenizer.tokenize(paragraph)
sentences
```
# 2. Tokenizing Sentences into Words
```
from nltk.tokenize import WordPunctTokenizer
sentence = "Are you old enough to remember Michael Jackson attending the Grammys \
with Brooke Shields and Webster sat on his lap during the show?"
words = WordPunctTokenizer().tokenize(sentence)
words
text = 'That U.S.A. poster-print costs $12.40...'
pattern = r"""(?x) # set flag to allow verbose regexps
(?:[A-Z]\.)+ # abbreviations, e.g. U.S.A.
|\d+(?:\.\d+)?%? # numbers, incl. currency and percentages
|\w+(?:[-']\w+)* # words w/ optional internal hyphens/apostrophe
|\.\.\. # ellipsis
|(?:[.,;"'?():-_`]) # special characters with meanings
"""
nltk.regexp_tokenize(text, pattern)
```
# Tokenize and tag some text:
```
sentence = """At eight o'clock on Thursday morning
... Arthur didn't feel very good."""
tokens = nltk.word_tokenize(sentence)
tokens
tagged = nltk.pos_tag(tokens)
tagged
```
# Display a parse tree:
```
entities = nltk.chunk.ne_chunk(tagged)
entities
from nltk.corpus import treebank
t = treebank.parsed_sents('wsj_0001.mrg')[0]
t.draw()
from nltk.book import *
text1
text1.concordance("monstrous")
text1.similar("monstrous")
text2.common_contexts(["monstrous","very"])
text3.generate('luck')
text3.count('smote') / len(text3)
len(text3) / len(set(text3))
```
# Stemming and Grouping
```
from pandas import DataFrame
import pandas as pd
d = ['pets insurance','pets insure','pet insurance','pet insur','pet insurance"','pet insu']
df = DataFrame(d)
df.columns = ['Words']
df
# Regular-expression tokenizer that strips punctuation and other special characters
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from nltk.stem.porter import *
stemmer = PorterStemmer()
wnl = WordNetLemmatizer()
tokenizer = nltk.RegexpTokenizer(r'\w+')
df["Stemming Words"] = ""
df["Count"] = 1
j = 0
while (j <= 5):
    for word in word_tokenize(df["Words"][j]): # tokenize
df["Stemming Words"][j] = df["Stemming Words"][j] + " " + stemmer.stem(word) # stemming
j=j + 1
df
wnl.lemmatize('left')
tokenizer.tokenize( ' pets insur ')
uniqueWords = df.groupby(['Stemming Words'], as_index = False).sum()
uniqueWords
# Levenshtein edit distance; there are many different ways to compute distance
from nltk.metrics import edit_distance
minDistance = 0.8
distance = -1
lastWord = ""
j = 0
while (j < 1):
lastWord = uniqueWords["Stemming Words"][j]
distance = edit_distance(uniqueWords["Stemming Words"][j], uniqueWords["Stemming Words"][j + 1])
if (distance > minDistance):
uniqueWords["Stemming Words"][j] = uniqueWords["Stemming Words"][j + 1]
j += 1
uniqueWords
uniqueWords = uniqueWords.groupby(['Stemming Words'], as_index = False).sum()
uniqueWords
```
# Stop Word Removal
```
from nltk.corpus import stopwords
stoplist = stopwords.words('english')
text = "This is just a test"
cleanwordlist = [word for word in text.split() if word not in stoplist]
print(cleanwordlist)
from nltk.metrics import edit_distance
print(edit_distance("rain", "rainbow"))
# 4.4 Different parser types
# 4.4.1 Recursive descent parser
# 4.4.2 Shift-reduce parser
# 4.4.3 Chart parser
# 4.4.4 Regular expression parser
import nltk
from nltk.chunk.regexp import *
chunk_rules = ChunkRule("<.*>+", "chunk everything")
reg_parser = RegexpParser('''
NP: {<DT>? <JJ>* <NN>*} # NP
P: {<IN>} # Preposition
V: {<V.*>} # Verb
PP: {<P> <NP>} # PP -> P NP
VP: {<V> <NP|PP>*} # VP -> V (NP|PP)*
''')
test_sent = "Mr. Obama played a big role in the Health insurance bill"
test_sent_pos = nltk.pos_tag(nltk.word_tokenize(test_sent))
paresed_out = reg_parser.parse(test_sent_pos)
print(paresed_out)
# 4.5 Dependency parsing (DP)
# Probabilistic, projective dependency parser
from nltk.parse.stanford import StanfordParser
# https://nlp.stanford.edu/software/stanford-parser-full-2017-06-09.zip
english_parser = StanfordParser()
english_parser.raw_parse_sents(("this is the english parser test",))
%pwd
```
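The `edit_distance("rain", "rainbow")` call above returns the Levenshtein distance, which can be sketched as the classic dynamic-programming recurrence (a pure-Python version, not NLTK's implementation):

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[-1]

print(levenshtein("rain", "rainbow"))    # 3
print(levenshtein("kitten", "sitting"))  # 3
```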
# Text Classification
```
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import csv
def preprocessing(text):
#text = text.decode("utf8")
# tokenize into words
tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
# remove stopwords
stop = stopwords.words('english')
tokens = [token for token in tokens if token not in stop]
# remove words less than three letters
tokens = [word for word in tokens if len(word) >= 3]
# lower capitalization
tokens = [word.lower() for word in tokens]
# lemmatize
lmtzr = WordNetLemmatizer()
tokens = [lmtzr.lemmatize(word) for word in tokens]
preprocessed_text = ' '.join(tokens)
return preprocessed_text
sms = open('./Machine-Learning-with-R-datasets-master/SMSSpamCollection.txt', encoding='utf8') # check the structure of this file!
sms_data = []
sms_labels = []
csv_reader = csv.reader(sms, delimiter = '\t')
for line in csv_reader:
    # adding the label (ham/spam)
sms_labels.append(line[0])
# adding the cleaned text We are calling preprocessing method
sms_data.append(preprocessing(line[1]))
sms.close()
# 6.3 Sampling
import sklearn
import numpy as np
trainset_size = int(round(len(sms_data)*0.70))
# chosen for a 70:30 train/test split.
print('The training set size for this classifier is ' + str(trainset_size) + '\n')
x_train = np.array([''.join(el) for el in sms_data[0: trainset_size]])
y_train = np.array([el for el in sms_labels[0: trainset_size]])
x_test = np.array([''.join(el) for el in sms_data[trainset_size:len(sms_data)]])
y_test = np.array([el for el in sms_labels[trainset_size:len(sms_labels)]])
print(x_train)
print(y_train)
from sklearn.feature_extraction.text import CountVectorizer
sms_exp = []
for line in sms_data:
sms_exp.append(preprocessing(line))
vectorizer = CountVectorizer(min_df = 1, encoding='utf-8')
X_exp = vectorizer.fit_transform(sms_exp)
print("||".join(vectorizer.get_feature_names_out()))  # get_feature_names() on scikit-learn < 1.0
print(X_exp.toarray())
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df = 2, ngram_range=(1, 2),
stop_words = 'english', strip_accents = 'unicode', norm = 'l2')
X_train = vectorizer.fit_transform(x_train)
X_test = vectorizer.transform(x_test)
# 6.3.1 Naive Bayes
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
clf = MultinomialNB().fit(X_train, y_train)
y_nb_predicted = clf.predict(X_test)
print(y_nb_predicted)
print('\n confusion_matrix \n')
#cm = confusion_matrix(y_test, y_pred)
cm = confusion_matrix(y_test, y_nb_predicted)
print(cm)
print('\n Here is the classification report:')
print(classification_report(y_test, y_nb_predicted))
feature_names = vectorizer.get_feature_names_out()  # get_feature_names() on scikit-learn < 1.0
coefs = clf.coef_  # removed in scikit-learn >= 1.2; use clf.feature_log_prob_ there
intercept = clf.intercept_
coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
n = 10
top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
for (coef_1, fn_1), (coef_2, fn_2) in top:
print('\t%.4f\t%-15s\t\t%.4f\t%-15s' %(coef_1, fn_1, coef_2, fn_2))
# 6.3.2 Decision Tree
from sklearn import tree
clf = tree.DecisionTreeClassifier().fit(X_train.toarray(), y_train)
y_tree_predicted = clf.predict(X_test.toarray())
print(y_tree_predicted)
print('\n Here is the classification report:')
print(classification_report(y_test, y_tree_predicted))
# 6.3.3 Stochastic Gradient Descent (SGD)
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix
clf = SGDClassifier(alpha = 0.0001, max_iter=50).fit(X_train, y_train)  # n_iter was renamed to max_iter in newer scikit-learn
y_pred = clf.predict(X_test)
print('\n Here is the classification report:')
print(classification_report(y_test, y_pred))
print(' \n confusion_matrix \n')
cm = confusion_matrix(y_test, y_pred)
print(cm)
# 6.3.4 Logistic Regression
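# The logistic regression section above was left without code in the
# original. A minimal, self-contained sketch on a toy corpus follows; in
# this notebook you would instead fit on the existing X_train/y_train
# TF-IDF matrices, like the other classifiers.
from sklearn.feature_extraction.text import TfidfVectorizer as _Tfidf
from sklearn.linear_model import LogisticRegression
toy_texts = ["free prize winner claim now", "are we meeting for lunch",
             "win cash prize now", "see you at lunch tomorrow"]
toy_labels = ["spam", "ham", "spam", "ham"]
_vec = _Tfidf()
X_toy = _vec.fit_transform(toy_texts)
lr_clf = LogisticRegression(max_iter=1000).fit(X_toy, toy_labels)
print(lr_clf.predict(_vec.transform(["claim your free prize"])))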
# 6.3.5 Support Vector Machine
from sklearn.svm import LinearSVC
svm_classifier = LinearSVC().fit(X_train, y_train)
y_svm_predicted = svm_classifier.predict(X_test)
print('\n Here is the classification report:')
print(classification_report(y_test, y_svm_predicted))
cm = confusion_matrix(y_test, y_svm_predicted)
print(cm)
# 6.4 Random Forest
from sklearn.ensemble import RandomForestClassifier
RF_clf = RandomForestClassifier(n_estimators=10).fit(X_train, y_train)
predicted = RF_clf.predict(X_test)
print('\n Here is the classification report:')
print(classification_report(y_test, predicted))
cm = confusion_matrix(y_test, predicted)
print(cm)
# 6.5 Text Clustering
# K-means
from sklearn.cluster import KMeans, MiniBatchKMeans
from collections import defaultdict
true_k = 5
km = KMeans(n_clusters = true_k, init='k-means++', max_iter=100, n_init= 1)
kmini = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1, init_size=1000, batch_size=1000, verbose=2)
km_model = km.fit(X_train)
kmini_model = kmini.fit(X_train)
print("For K-mean clustering ")
clustering = defaultdict(list)
for idx, label in enumerate(km_model.labels_):
clustering[label].append(idx)
print("For K-mean Mini batch clustering ")
clustering = defaultdict(list)
for idx, label in enumerate(kmini_model.labels_):
clustering[label].append(idx)
# 6.6 Topic Modeling
# https://pypi.python.org/pypi/gensim#downloads
import gensim
from gensim import corpora, models, similarities
from itertools import chain
import nltk
from nltk.corpus import stopwords
from operator import itemgetter
import re
documents = [document for document in sms_data]
stoplist = stopwords.words('english')
texts = [[word for word in document.lower().split() if word not in stoplist] for document in documents]
print(texts)
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
tfidf = models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]
lsi = models.LsiModel(corpus_tfidf, id2word = dictionary, num_topics = 100)
# print(lsi.print_topics(20))
n_topics = 5
lda = models.LdaModel(corpus_tfidf, id2word = dictionary, num_topics = n_topics)
for i in range(0, n_topics):
temp = lda.show_topic(i, 10)
terms = []
for term in temp:
terms.append(str(term[0]))
print("Top 10 terms for topic #" + str(i) + ": " + ",".join(terms))
```
# Getting Started with gensim
```
raw_corpus = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
# Create a set of frequent words
stoplist = set('for a of the and to in'.split(' '))
# Lowercase each document, split it by white space and filter out stopwords
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in raw_corpus]
texts
# Count word frequencies
from collections import defaultdict
frequency = defaultdict(int)
# print(frequency['ddddd']) # 0 default
for text in texts:
for token in text:
frequency[token] += 1
frequency
# Only keep words that appear more than once
processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]
processed_corpus
'''
Before proceeding, we want to associate each word in the corpus with a unique integer ID.
We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
'''
from gensim import corpora
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
print(dictionary.token2id)
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
new_vec
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
bow_corpus
from gensim import models
# train the model
tfidf = models.TfidfModel(bow_corpus)
# transform the "system minors" string
tfidf[dictionary.doc2bow("system minors".lower().split())]
```
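As far as we understand gensim's defaults, `TfidfModel` weights each term by its raw frequency times `log2(N / df)` and then L2-normalizes the vector. A small pure-Python sketch of that computation (the document counts below are hypothetical, chosen to mirror the toy corpus):
```
import math

# Hypothetical corpus statistics: 9 documents; "system" appears in 3 of
# them, "minors" in 2.
num_docs = 9
doc_freq = {"system": 3, "minors": 2}

def idf(term):
    # gensim's default global weight: log2(total_docs / doc_freq)
    return math.log2(num_docs / doc_freq[term])

# Bag-of-words for the query "system minors": each term occurs once.
raw = {term: 1 * idf(term) for term in ("system", "minors")}
norm = math.sqrt(sum(w * w for w in raw.values()))  # L2 norm
tfidf_vec = {term: w / norm for term, w in raw.items()}
print(tfidf_vec)
```
The rarer term ("minors") ends up with the larger weight, which matches the intuition behind the transform.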
This notebook shows how to calculate all the angles. There are two helper functions plus a final calculation step. The <code>filter_sensor_points_to_cube_id</code> function returns only the sensor points that correspond to one HSI cube, which significantly reduces the search space and makes the process faster. The second function, <code>get_closest_sensor_point</code>, returns a csv file that attaches the information from the closest sensor point to each raster pixel. The angles are calculated outside the functions; however, they can be converted into a function to form a loop over a number of HSI cubes.
```
import os
import glob
import numpy as np
import pandas as pd
import geopandas as gpd
from pyproj import Transformer
from shapely.geometry import Point
from scipy.spatial import distance
from pvlib import solarposition
from tqdm.notebook import tqdm
# Define the working directory
os.chdir('YOUR WORKING DIRECTORY')
def filter_sensor_points_to_cube_id(sensor_filename, raster_filename):
"""
This function filters out only the sensor coordinates that align with
the cube boundary. It relies on the corresponding frame timestamps for
the cube, which are stored in the file "frameindex_cubeid.txt". The
corresponding UTC time of each timestamp is stored in the "imu_gps.txt"
file. Based on that, only the in-between coordinates are selected and
returned.
Input:
- sensor_filename: (str) path of the sensor filename as ASCII format
- raster_filename: (str) path of the raster filename as csv format
"""
#######################################################################
# Read the sensor coordinates
sensor = pd.read_csv(sensor_filename, sep="\"", header=None)
# Rename the columns
sensor.columns = ['Time', 'Lat_v', 'Lon_v']
# Insert new columns for X and Y
sensor.insert(3, "X_v", 0)
sensor.insert(4, "Y_v", 0)
# Convert lat lon to X and Y in UTM 15N
transformer = Transformer.from_crs(4326, 32615)
Xv, Yv = transformer.transform(sensor.iloc[:, 1], sensor.iloc[:, 2])
sensor.loc[:, 'X_v'] = Xv
sensor.loc[:, 'Y_v'] = Yv
# Convert the time string to a timestamp column
sensor['Time_UTC'] = pd.to_datetime(sensor['Time'])
# Drop the string time column
sensor.drop(columns=['Time'], inplace=True)
#######################################################################
#######################################################################
# Get the cubeid from the raster_filename
cube_id = os.path.basename(raster_filename).split('_')[1]
# Generate the frame filename
frame_filename = os.path.join(os.path.dirname(raster_filename),
f'frameIndex_{cube_id}.txt')
# Generate the imu+gps filename
imu_filename = os.path.join(os.path.dirname(raster_filename),
'imu_gps.txt')
# Read frame and imu_gps files as df
frame = pd.read_csv(frame_filename, sep="\t", header=0)
imu = pd.read_csv(imu_filename, sep="\t", header=0,
parse_dates=['Gps_UTC_Date&Time'])
#######################################################################
#######################################################################
# Get the starting and ending frame timestamp
start_frame = frame.iloc[0, -1]
end_frame = frame.iloc[-1, -1]
# Get the closest starting timestamp date
start_imu = pd.DatetimeIndex(imu.iloc[(imu['Timestamp']-start_frame).abs().argsort()[:1], 7])
# Subtract a 20 s offset
start_imu = start_imu - pd.to_timedelta(20, unit='s')
# Format as a string; note %m (month), not %M (minute), in the date part
start_imu = start_imu.strftime('%Y-%m-%d %H:%M:%S')[0]
# Get the closest ending timestamp date
end_imu = pd.DatetimeIndex(imu.iloc[(imu['Timestamp']-end_frame).abs().argsort()[:1], 7])
# Subtract a 16 s offset
end_imu = end_imu - pd.to_timedelta(16, unit='s')
# Format as a string
end_imu = end_imu.strftime('%Y-%m-%d %H:%M:%S')[0]
# Filter the sensor df
sensor_filter = sensor[(sensor['Time_UTC'] >= start_imu) & (sensor['Time_UTC'] <= end_imu)]
#######################################################################
return sensor_filter
def get_closest_sensor_point(raster_filename, sensor_filename):
# Read the raster point csv file
raster = pd.read_csv(raster_filename, index_col=0)
# Split the rasters into 4 different parts
raster1, raster2, raster3, raster4 = np.array_split(raster, 4)
# Delete the raster
del raster
# Read the sensor shapefile
sensor = filter_sensor_points_to_cube_id(sensor_filename, raster_filename)
# Take only the X, Y
#sensor = sensor[['X_m', 'Y_m', 'Z']]
# Give the observations a new id
sensor['sensor_id'] = np.arange(0, sensor.shape[0])
# Create an empty list to hold the processed dfs
raster_sensor = []
count = 0
# Loop through every df
for raster_split in [raster1, raster2, raster3, raster4]:
# Get the X and Y from each dataframe
R = raster_split[['X_r', 'Y_r']].values
V = sensor[['X_v', 'Y_v']].values
# Calculate the distance
dist = distance.cdist(R, V, 'euclidean')
# Calculate the minimum distance index
argmin_dist = np.argmin(dist, axis=1)
# Add the minimum sensor index to raster
raster_split['sensor_id'] = argmin_dist
# Join the sensor information to the raster split
raster_split_sensor = raster_split.join(sensor.set_index('sensor_id'), on='sensor_id')
# Add the df to the list
raster_sensor.append(raster_split_sensor)
print(f"Part {count+1} done")
count = count + 1
# Create a pandas dataframe from the list of dfs
raster_sensor = pd.concat(raster_sensor)
# Drop the sensor_id
raster_sensor.drop(columns=['sensor_id'], inplace=True)
return raster_sensor
# Define the filenames
sensor_filename = "./Data/imu_gps.txt"
raster_filename = "./Data/raw_0_rd_rf_or_pr_warp.csv"
```
Get the closest sensor points in a dataframe.
```
%%time
raster_sensor = get_closest_sensor_point(raster_filename, sensor_filename)
# View the dataframe
raster_sensor
```
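Inside <code>get_closest_sensor_point</code>, <code>distance.cdist</code> materializes the full raster-by-sensor distance matrix, which is why the raster is split into four parts. A <code>scipy.spatial.cKDTree</code> query is a memory-friendly alternative; a minimal sketch (the coordinates below are made up):
```
import numpy as np
from scipy.spatial import cKDTree

# Stand-ins for the raster pixel coordinates (R) and the filtered sensor
# point coordinates (V) used above.
R = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
V = np.array([[1.0, 1.0], [9.0, 1.0]])

# Build the tree once over the sensor points, then query every raster pixel
# for its nearest sensor point, without building the full distance matrix.
tree = cKDTree(V)
dist, sensor_id = tree.query(R)
print(sensor_id)
```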
Calculate the VZA (Viewing Zenith Angle), then the solar angles (SZA, SAA).
```
def calculate_sensor_zenith_angle(R, V):
# 50.0 m is the assumed flight altitude of the sensor above the ground
return 90 - np.degrees(np.arctan(50.0 / np.linalg.norm(R - V, axis=1)))
# Add the sza into the dataframe
raster_sensor['VZA'] = calculate_sensor_zenith_angle(raster_sensor.iloc[:, 0:2].values,
raster_sensor.iloc[:, 6:8].values)
# Get the datetime index and localize it
datetime_idx = pd.DatetimeIndex(raster_sensor['Time_UTC']).tz_convert('America/Chicago')
# The local timezone should be changed based on the location
# Equation of time
equation_of_time = solarposition.equation_of_time_spencer71(datetime_idx.dayofyear).values
# Hour angle in degrees
ha = solarposition.hour_angle(datetime_idx,
raster_sensor['Lon_r'].values,
equation_of_time)
# Solar declination in radians
declination = solarposition.declination_cooper69(datetime_idx.dayofyear).values
# Solar zenith angle in radians
sza = solarposition.solar_zenith_analytical(np.radians(raster_sensor['Lat_r']),
np.radians(ha),
declination).values
# Solar azimuth angle in radians
saa = solarposition.solar_azimuth_analytical(np.radians(raster_sensor['Lat_r']),
np.radians(ha), declination, sza)
# Add the SZA and SAA to the dataframe, convert it to degrees
raster_sensor['SZA'] = np.degrees(sza)
raster_sensor['SAA'] = np.degrees(saa)
```
Function to calculate VAA (Viewing Azimuth Angle)
```
def calculate_viewing_azimuth_angle(X_v, Y_v, X_r, Y_r):
V = np.array([X_v, Y_v])
R = np.array([X_r, Y_r])
a = np.array([ 0., 100.])
b = V - R
unit_a = a/np.linalg.norm(a)
unit_b = b/np.linalg.norm(b)
dot_prod = np.dot(unit_a, unit_b)
return np.degrees(np.arccos(np.clip(dot_prod, -1.0, 1.0)))
```
Apply the VAA function to each row in the dataframe. Lambda function method was found to be the most efficient way.
```
%%time
raster_sensor.loc[:, 'VAA'] = raster_sensor.apply(lambda row: calculate_viewing_azimuth_angle(row['X_v'],
row['Y_v'],
row['X_r'],
row['Y_r'],), axis=1)
# Check the dataframe
raster_sensor
```
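The row-wise apply above works, but for large rasters a fully vectorized NumPy version of the same geometry is typically much faster; a sketch (the sanity-check points are made up):
```
import numpy as np

def calculate_vaa_vectorized(X_v, Y_v, X_r, Y_r):
    # Same geometry as calculate_viewing_azimuth_angle above (angle between
    # the north-pointing vector and the raster-to-sensor vector), computed
    # for whole arrays at once instead of row by row.
    b = np.stack([X_v - X_r, Y_v - Y_r], axis=1)
    unit_b = b / np.linalg.norm(b, axis=1, keepdims=True)
    # the dot product with the unit north vector (0, 1) is just the y component
    return np.degrees(np.arccos(np.clip(unit_b[:, 1], -1.0, 1.0)))

# Sanity check: a sensor due north of the pixel gives 0 degrees, due east 90.
print(calculate_vaa_vectorized(np.array([0.0, 100.0]),
                               np.array([100.0, 0.0]),
                               np.zeros(2), np.zeros(2)))
```
In the notebook this could replace the `apply` call with a single vectorized assignment over the `X_v`, `Y_v`, `X_r`, `Y_r` columns.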
Convert the csv rows into an ESRI shapefile.
```
def convert_to_shape(df, X, Y, out_path):
geometry = [Point(xy) for xy in zip(df[X], df[Y])]
point_gdf = gpd.GeoDataFrame(df, crs="EPSG:32615", geometry=geometry)
point_gdf.to_file(out_path)
convert_to_shape(raster_sensor.drop(columns=['Time_UTC']), 'X_r', 'Y_r', r"F:\danforthstudy\temp\angle_test.shp")
```
The shapefile was interpolated using the ArcGIS Natural Neighbor (https://pro.arcgis.com/en/pro-app/latest/tool-reference/spatial-analyst/natural-neighbor.htm) tool. While doing the interpolation, 0.1 m was used as the raster resolution. The outputs for the given dataset are added in the <code>Outputs</code> directory.
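If an open-source alternative to the ArcGIS tool is preferred, <code>scipy.interpolate.griddata</code> provides linear barycentric interpolation over a Delaunay triangulation. It is not the Natural Neighbor method, so results will differ slightly; the sketch below uses made-up points:
```
import numpy as np
from scipy.interpolate import griddata

# Made-up scattered angle samples standing in for the shapefile points.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([10.0, 20.0, 30.0, 40.0])  # e.g. VAA in degrees

# Interpolate onto a regular 0.1-unit grid, analogous to the 0.1 m raster.
grid_x, grid_y = np.mgrid[0:1:11j, 0:1:11j]
grid_z = griddata(points, values, (grid_x, grid_y), method='linear')
print(grid_z[5, 5])  # interpolated value at the grid center
```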
<img src="https://drive.google.com/uc?id=1E_GYlzeV8zomWYNBpQk0i00XcZjhoy3S" width="100"/>
# DSGT Bootcamp Week 1: Introduction and Environment Setup
# Learning Objectives
1. Gain an understanding of Google Colab
2. Introduction to team project
3. Gain an understanding of Kaggle
4. Download and prepare dataset
5. Install dependencies
6. Gain an understanding of the basics of Python
7. Gain an understanding of the basics of GitHub / Git
# Google Colab
#### Google Colab is a cell-based Python text editor that allows you to run small snippets of code at a time. This is useful for data science; we can divide our work into manageable sections and run / debug independently of each other.
#### Colab lets you store and access data from your Google Drive account (freeing up local storage) and lets you use Google's servers for computing (allowing you to parse bigger datasets).
#### Any given cell will either be a **code cell** or a **markdown cell** (formatted text, like this one). We won't need to focus on using Markdown cells, because you can just use comments in code cells to convey any text that might be important.
---
# Basic Commands:
#### All of these commands assume that you have a cell highlighted. To do so, simply click on any cell.
#### `shift + enter (return)`: Runs the current cell, and highlights the next cell
#### `alt (option) + enter (return)`: Runs the current cell, and creates a new cell
#### `Top Bar -> Runtime -> Run all`: Runs entire notebook
#### `+Code or +Text`: Adds a code or text cell below your highlighted cell
### For more information, check out the resources at the end!
---
<img src="https://www.kaggle.com/static/images/site-logo.png" alt="kaggle-logo-LOL"/>
# Introducing Kaggle
#### [Kaggle](https://kaggle.com) is an online 'practice tool' that helps you become a better data scientist. They have various data science challenges, tutorials, and resources to help you improve your skillset.
#### For this bootcamp, we'll be trying to predict trends using the Kaggle Titanic Data Set. This dataset models variables related to the passengers and victims of the Titanic sinking incident. By the end of this bootcamp, you'll submit your machine learning model to the leaderboards and see how well it performs compared to others worldwide.
#### For more information on Kaggle, check out the resources section.
# Accessing the Titanic Dataset
#### To speed up the data download process, we've placed the data in this Google Folder where everyone will be making their notebooks. Let's go over the steps needed to import data into Google Colab.
**PLEASE READ THE STEPS!!!**
1. Go to your Google Drive (drive.google.com) and check **"Shared with me"**
2. Search for a folder named **"Spring 2021 Bootcamp Material"**
3. Enter the **Spring 2021 Bootcamp Material** folder, click the name of the folder (**Spring 2021 Bootcamp Material**) on the bar at the top of the folder to create a drop-down and select **"Add shortcut to Drive"**
4. Select **"My Drive"** and hit **"Add Shortcut"**
5. Enter the **Spring 2021 Bootcamp Material** folder you just made, and navigate to the **"Participants"** subfolder
6. Make a new folder within Participants in the format **"FirstName LastName"**.
7. Return to Google Colab.
8. Go to **"File -> Save a copy in Drive"**. Rename the file to **"firstname-lastname-week1.ipynb"**. It will be placed into a folder named **"Colab Notebooks"** in your Google Drive.
9. Move **"firstname-lastname-week1.ipynb"** to your **Participant** folder within Google Drive.
10. Return to Google Colab.
11. Hit the folder image on the left bar to expand the file system.
12. Hit **"Mount Drive"** to allow Colab to access your files. Click the link and copy the code provided into the textbox and hit Enter.
```
from google.colab import drive
drive.mount('/content/drive')
# This cell should appear once you hit "Mount Drive". Press Shift + Enter to run it.
from google.colab import drive
drive.mount('/content/drive')
"""
You can use the following commands in Colab code cells.
Type "%pwd" to list the folder you are currently in and "%ls" to list subfolders. Use "%cd [subfolder]"
to change your current directory into where the data is.
"""
%cd 'drive'
%ls
# Move into one subfolder ("change directory")
%cd 'drive'
%ls
# Move into a nested subfolder
%cd 'MyDrive/Spring 2021 Bootcamp Material/Participants/Data'
%pwd
```
As you can see here, we've now located our runtime at "../Participants/Data" where WICData.csv is located. This is the dataset for the WIC Program. For now, understand how you navigate the file system to move towards where your data is.
**Note:** The above code cells could also have simply been
`cd drive/MyDrive/Spring 2021 Bootcamp Material/Participants/Data`
It was done one step at a time to show the process of exploring a file system you might not be familiar with. If you know the file path before hand, you can move multiple subfolders at once.
# Project Presentation
Link to Google Slides: [Slides](https://docs.google.com/presentation/d/1QzomRX5kpJTKuy9j2JFvCo0siBEtkPNvacur2ZAxCiI/edit?usp=sharing)
# Read Data with Pandas
#### `!pip install` adds libraries (things that add more functionality to Python) to this environment as opposed to your machine.
#### We'll worry about importing and using these libraries later. For now, let's just make sure your environment has them installed.
#### Applied Data Science frequently uses core libraries to avoid "reinventing the wheel". One of these is pandas!
```
!pip install pandas
import pandas as pd #pd is the standard abbreviation for the library
```
#### Now that we're in the correct folder, we can use pandas to take a sneak peek at the data. Don't worry about these commands -- we'll cover them next week!
```
df = pd.read_csv("titanic_test.csv")
df.head()
```
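#### Beyond `head()`, a few other one-line inspection calls are useful when meeting a new dataset. The frame below is a tiny made-up stand-in (the real notebook inspects `titanic_test.csv` instead):
```
import pandas as pd

# A tiny stand-in frame with a deliberate missing value.
demo = pd.DataFrame({"Pclass": [1, 3, 2], "Age": [22.0, 38.0, None]})
print(demo.shape)         # (rows, columns)
print(demo.dtypes)        # column data types
print(demo.isna().sum())  # missing values per column
```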
# Introduction to the Python Programming Language
### **Why do we use Python?**
- Easy to read and understand
- Lots of libraries for Data Science
- One of the most popular languages for Data Science (alongside R)
# Primer on Variables, If Statements, and Loops
```
#You can create a variable by using an "=" sign. The value on the right gets
#assigned to the variable name of the left.
a = 5
b = 15
print(a + b)
c = "Data Science "
d = "is fun!"
print(c + d)
#If statements allow you to run certain lines of code based on certain conditions.
if (c + d) == "Data Science is fun!":
print("Correct!")
else: # this section is only triggered if (c + d) doesn't equal "Data Science is fun!"
print("False!")
#For loops are used to perform an action a fixed amount of times, or to go through each element in a list or string
for index in range(0, a):
print('DSGT')
#In this block of code, c+d is treated as a list of letters, with letter serving
#as each individual character as the for loop iterates through the string.
for letter in c + d:
print(letter)
```
# Lists, Tuples, and Dictionaries
```
# Let's start by creating a list (otherwise known as an array)
c = ["a", "b", "c"]
# We can retrieve an element by accessing its position in the array.
# Position counting starts at 0 in Python.
print("The 1st item in the array is " + c[0])
# Lists can have more than one type of element!
c[1] = 23
print(c)
# Tuples are lists but they don't like change
tup = ("car", True, 4)
# tup[2] = 5  # would raise a TypeError: tuples are immutable
# Dictionaries are key-value mappings (insertion-ordered since Python 3.7)
d = {"Data Science": "Fun", "GPA": 4, "Best Numbers": [3, 4]}
# We can get values by looking up their corresponding key
print(d["Data Science"])
# We can also reassign the value of keys
d["Best Numbers"] = [99, 100]
# And add keys
d["Birds are Real"] = False
#We can also print out all the key value pairs
print(d)
```
## Functions
```
# Functions help improve code reusability and readability.
# You can define a series of steps and then use them multiple times.
def add(a, b):
sum = a + b
return sum
print(add(2, 4))
print(add(4, 7))
print(add(3 * 4, 6))
```
**Note**: A lot of Data Science is dependent on having a solid foundation in Python. If you aren't currently familiar, we *highly recommend* spending some time learning (tutorials available in resources). Otherwise, using the libraries and parsing data in a foreign language may make things rather difficult.
## **Introduction to the Version Control and GitHub**
What is Version Control?
* Tracks changes in computer files
* Coordinates work between multiple developers
* Allows you to revert back at any time
* Can have local & remote repositories
What is GitHub?
* cloud-based Git repository hosting service
* Allows users to use Git for version control
* **Git** is a command line tool
* **GitHub** is a web-based graphical user interface
# Set Up
If you do not already have Git on your computer, use the following link to install it:
[Install Git](https://git-scm.com/downloads)
**Setting Up a Repo**
* $git config
* $git config --global user.name "YOUR_NAME"
* $git config --global user.email "YOUR_EMAIL"
**Create a Repo**
* $git init
* $git clone [URL]
**You can use https://github.gatech.edu with:**
* YOUR_NAME = your username that you log into GitHub with
* YOUR_EMAIL = your email that you log into GitHub with
**GitHub GUI**
One person in each team will create a "New Repository" on GitHub. Once they add team members to the repo, anyone can clone the project to their local device using "$git clone [URL]" .
# Steps for Using Git
1. Check that you are up to date with Remote Repo -- **git fetch**
* check status -- **git status**
* if not up to date, pull down changes -- **git pull**
2. Make changes to code
3. Add all changes to the "stage" -- **git add .**
4. Commit any changes you want to make -- **git commit -m [message]**
5. Update the Remote Repo with your changes -- **git push**
**Summary**
3 stage process for making commits (after you have made a change):
1. ADD
2. COMMIT
3. PUSH
# Branching
By default when you create your project you will be on Master - however, it is good practice to have different branches for different features, people etc.
* To see all local branches -- **git branch**
* To create a branch -- **git branch [BRANCHNAME]**
* To move to a branch -- **git checkout [BRANCHNAME]**
* To create a new branch **and** move to it -- **git checkout -b [BRANCHNAME]**
# Merging
Merging allows you to carry the changes in one branch over to another branch. Github is your best friend for this - you can open and resolve merge conflicts through the GUI very easily. However, it is also good to know how to do it manually, in the event that you are unable to resolve conflicts.
**Manual Steps**
1. $git checkout [NAME_OF_BRANCH_TO_MERGE_INTO]
2. $git merge [NAME_OF_BRANCH_TO_BRING_IN]
# Helpful Resources
#### [Colab Overview](https://colab.research.google.com/notebooks/basic_features_overview.ipynb)
#### [Kaggle Courses](https://www.kaggle.com/learn/overview)
#### [Kaggle](https://www.kaggle.com/)
#### [Intro Python](https://pythonprogramming.net/introduction-learn-python-3-tutorials/)
#### [Pandas Documentation](https://pandas.pydata.org/docs/)
#### [Pandas Cheatsheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)
#### [Github Tutorial](https://guides.github.com/activities/hello-world/)
## Figure 12
Similar to [Figure 5](https://github.com/EdwardJKim/astroclass/blob/master/paper/notebooks/figure05/purity_mag_integrated.ipynb)
but for the reduced training set.
```
from __future__ import division, print_function, unicode_literals
%matplotlib inline
import numpy as np
from scipy.special import gammaln
from scipy.integrate import quad
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.neighbors import KernelDensity
plt.rc('legend', fontsize=10)
truth_train = np.loadtxt('../../data/truth_train.dat')
truth_test = np.loadtxt('../../data/truth_test.dat')
mask_w1_train = np.loadtxt('../../data/vvds_w1_train.mask').astype(bool)
mag_i_train = np.loadtxt('../../data/mag_i.train.dat')
mag_i_test = np.loadtxt('../../data/mag_i.test.dat')
tpc_test = np.loadtxt('../../data/w1_22_0_tpc_test.mlz', unpack=True, usecols=(2,))
som_test = np.loadtxt('../../data/w1_22_0_som_test.mlz', unpack=True, usecols=(2,))
hbc_all = np.loadtxt('../../data/w1_22_0_median.hbc', unpack=True, usecols=(0,))
hbc_cv = hbc_all[:-len(truth_test)]
hbc_test = hbc_all[-len(truth_test):]
bmc_test = np.loadtxt('../../data/w1_22_0.bmc')
# read in FLUX_RADIUS and MAG_i and make a classification
def morph_class(magnitude, half_radius, cut=[0, 25, 1.0, 3.0]):
point_source = ((magnitude > cut[0]) & (magnitude < cut[1]) &
(half_radius > cut[2]) & (half_radius < cut[3]))
return point_source.astype(int)  # np.int was removed from NumPy
mag_i_lower = 17
mag_i_upper = 21.0
r_h_lower = 1.4
r_h_upper = 2.8
r_h_test = np.loadtxt('../../data/flux_radius.test.dat')
mag_i_test = np.loadtxt('../../data/mag_i.test.dat')
morph_test = morph_class(mag_i_test, r_h_test, cut=[mag_i_lower, mag_i_upper, r_h_lower, r_h_upper])
hist_bins = np.arange(17, 25.5, 1)
# http://inspirehep.net/record/669498/files/fermilab-tm-2286.PDF
def calceff(N, k, conf=0.683, tol=1.0e-3, step=1.0e-3, a0=None, dx0=None, output=True):
epsilon = k / N
if a0 is None:
a0 = epsilon
if dx0 is None:
dx0 = step
bins = np.arange(0, 1 + step, step)
def get_log_p(N, k):
p = gammaln(N + 2) - gammaln(k + 1) - gammaln(N - k + 1) + k * np.log(bins) + (N - k) * np.log(1 - bins)
return p
alpha = np.arange(0, a0, step)
beta = np.arange(epsilon, 1, step)
log_p = get_log_p(N, k)
def func(x):
i = np.argmin(np.abs(bins - x))
return np.exp(log_p[i])
found = False
area_best = 1
alpha_best = alpha[-1]
beta_best = 1.0
dxs = np.arange(dx0, 1, step)
for ix, dx in enumerate(dxs):
for ia, a in enumerate(alpha[::-1]):
b = a + dx
#a = min(a, b)
#b = max(a, b)
if b > 1 or b < epsilon:
break
area, err = quad(func, a, b)
#print(area, a, b)
if np.abs(area - conf) < tol:
area_best = area
alpha_best = a
beta_best = b
found = True
break
if area > conf:
# go back a step, recalculate with smaller step
alpha_best, beta_best, area_best = calceff(N, k, step=0.8*step, a0=a + step, dx0=dx - step, output=False)
found = True
# exit the inner for loop for a
break
# exit the outer for loop for dx
if found:
break
if output:
print("Done. N = {0}, k = {1}, area: {2:.3f}, alpha: {3:.4f}, beta: {4:.4f}"
"".format(N, k, area_best, alpha_best, beta_best, step))
return alpha_best, beta_best, area_best
def calc_completeness_purity(truth, classif, mag, p_cut=0.001, bins=np.arange(16, 26, 0.5)):
'''
'''
bins = bins[1:]
result = {}
g_comp_bin = np.zeros(len(bins))
g_pur_bin = np.zeros(len(bins))
s_comp_bin = np.zeros(len(bins))
s_pur_bin = np.zeros(len(bins))
g_pur_lower_bin = np.zeros(len(bins))
g_pur_upper_bin = np.zeros(len(bins))
s_pur_upper_bin = np.zeros(len(bins))
s_pur_lower_bin = np.zeros(len(bins))
for i, b in enumerate(bins):
# true galaxies classified as stars
mask = (mag > -90) & (mag < b)
gs_bin = ((classif[mask] >= p_cut) & (truth[mask] == 0)).sum().astype(float)  # plain float: np.float was removed from NumPy
# true galaxies classified as galaxies
gg_bin = ((classif[mask] < p_cut) & (truth[mask] == 0)).sum().astype(float)
# true stars classified as galaxies
sg_bin = ((classif[mask] < p_cut) & (truth[mask] == 1)).sum().astype(float)
# true stars classified as stars
ss_bin = ((classif[mask] >= p_cut) & (truth[mask] == 1)).sum().astype(float)
# galaxy completeness
g_comp_bin[i] = gg_bin / (gg_bin + gs_bin)
# galaxy purity
g_pur_bin[i] = gg_bin / (gg_bin + sg_bin)
# star completeness
s_comp_bin[i] = ss_bin / (ss_bin + sg_bin)
s_pur_bin[i] = ss_bin / (ss_bin + gs_bin)
print("Calculating galaxy purity interval for {0}...".format(b))
g_pur_err = calceff(gg_bin + sg_bin, gg_bin)
g_pur_lower_bin[i] = g_pur_err[0]
g_pur_upper_bin[i] = g_pur_err[1]
print("Calculating star purity interval for {0}...".format(b))
s_pur_err = calceff(ss_bin + gs_bin, ss_bin)
s_pur_lower_bin[i] = s_pur_err[0]
s_pur_upper_bin[i] = s_pur_err[1]
result['galaxy_completeness'] = g_comp_bin
result['galaxy_purity'] = g_pur_bin
result['galaxy_purity_lower'] = g_pur_lower_bin
result['galaxy_purity_upper'] = g_pur_upper_bin
result['star_completeness'] = s_comp_bin
result['star_purity'] = s_pur_bin
result['star_purity_lower'] = s_pur_lower_bin
result['star_purity_upper'] = s_pur_upper_bin
return result
def find_purity_at(truth_test, clf, step=0.001, gc=None, gp=None, sc=None, sp=None):
print("Finding the threshold value...")
if sum(x is not None for x in (gc, gp, sc, sp)) != 1:
raise Exception('Specify exactly one of gc, gp, sc, or sp.')
pbin = np.arange(0, 1, step)
pure_all = np.zeros(len(pbin))
comp_all = np.zeros(len(pbin))
for i, p in enumerate(pbin):
# true galaxies classified as stars
gs = ((clf >= p) & (truth_test == 0)).sum()
# true galaxies classified as galaxies
gg = ((clf < p) & (truth_test == 0)).sum()
# true stars classified as galaxies
sg = ((clf < p) & (truth_test == 1)).sum()
# true stars classified as stars
ss = ((clf >= p) & (truth_test == 1)).sum()
if gc is not None or gp is not None:
if gg == 0 and sg == 0:
pure_all[i] = np.nan
else:
pure_all[i] = gg / (gg + sg)
if gg == 0 and gs == 0:
comp_all[i] = np.nan
else:
comp_all[i] = gg / (gg + gs)
if sc is not None or sp is not None:
if ss == 0 and sg == 0:
comp_all[i] = np.nan
else:
comp_all[i] = ss / (ss + sg)
if ss == 0 and gs == 0:
pure_all[i] = np.nan
else:
pure_all[i] = ss / (ss + gs)
if gc is not None:
ibin = np.argmin(np.abs(comp_all - gc))
return pbin[ibin], pure_all[ibin]
if gp is not None:
ibin = np.argmin(np.abs(pure_all - gp))
return pbin[ibin], comp_all[ibin]
if sc is not None:
ibin = np.argmin(np.abs(comp_all - sc))
return pbin[ibin], pure_all[ibin]
if sp is not None:
ibin = np.argmin(np.abs(pure_all - sp))
return pbin[ibin], comp_all[ibin]
morph = calc_completeness_purity(truth_test, morph_test, mag_i_test, p_cut=0.5, bins=hist_bins)
bmc_p_cut, _ = find_purity_at(truth_test, bmc_test, gc=0.9964, step=0.0001)
bmc_mg = calc_completeness_purity(truth_test, bmc_test, mag_i_test, p_cut=bmc_p_cut, bins=hist_bins)
bmc_p_cut, _ = find_purity_at(truth_test, bmc_test, sc=0.7145, step=0.0001)
bmc_ms = calc_completeness_purity(truth_test, bmc_test, mag_i_test, p_cut=bmc_p_cut, bins=hist_bins)
tpc_p_cut, _ = find_purity_at(truth_test, tpc_test, gc=0.9964, step=0.0001)
tpc_mg = calc_completeness_purity(truth_test, tpc_test, mag_i_test, p_cut=tpc_p_cut, bins=hist_bins)
tpc_p_cut, _ = find_purity_at(truth_test, tpc_test, sc=0.7145, step=0.0001)
tpc_ms = calc_completeness_purity(truth_test, tpc_test, mag_i_test, p_cut=tpc_p_cut, bins=hist_bins)
p = sns.color_palette()
sns.set_style("ticks")
fig = plt.figure(figsize=(6, 6))
ax0 = plt.subplot2grid((6, 3), (0, 0), colspan=3, rowspan=3)
ax1 = plt.subplot2grid((6, 3), (3, 0), colspan=3, rowspan=3)
plt.setp(ax0.get_xticklabels(), visible=False)
x_offset = 0.1
ax0.errorbar(hist_bins[1:], bmc_mg['galaxy_purity'],
yerr=[bmc_mg['galaxy_purity'] - bmc_mg['galaxy_purity_lower'],
bmc_mg['galaxy_purity_upper'] - bmc_mg['galaxy_purity']],
label='BMC', ls='-', marker='o', markersize=4)
ax0.errorbar(hist_bins[1:] - x_offset, tpc_mg['galaxy_purity'],
yerr=[tpc_mg['galaxy_purity'] - tpc_mg['galaxy_purity_lower'],
tpc_mg['galaxy_purity_upper'] - tpc_mg['galaxy_purity']],
label='TPC', ls='--', marker='o', markersize=4)
ax0.errorbar(hist_bins[1:] + x_offset, morph['galaxy_purity'],
yerr=[morph['galaxy_purity'] - morph['galaxy_purity_lower'],
morph['galaxy_purity_upper'] - morph['galaxy_purity']],
label='Morphology', ls='--', marker='o', markersize=4)
ax0.legend(loc='lower right')
ax0.set_xlim(17.5, 24.5)
ax0.set_ylim(0.875, 1.005)
ax0.set_ylabel(r'$p_g\left(c_g=0.9964\right)$', fontsize=12)
ax1.errorbar(hist_bins[1:], bmc_ms['star_purity'],
yerr=[bmc_ms['star_purity'] - bmc_ms['star_purity_lower'],
bmc_ms['star_purity_upper'] - bmc_ms['star_purity']],
label='BMC', ls='-', marker='o', markersize=4)
ax1.errorbar(hist_bins[1:] - x_offset, tpc_ms['star_purity'],
yerr=[tpc_ms['star_purity'] - tpc_ms['star_purity_lower'],
tpc_ms['star_purity_upper'] - tpc_ms['star_purity']],
label='TPC', ls='--', marker='o', markersize=4)
ax1.errorbar(hist_bins[1:] + x_offset, morph['star_purity'],
yerr=[morph['star_purity'] - morph['star_purity_lower'],
morph['star_purity_upper'] - morph['star_purity']],
label='Morphology', ls='--', marker='o', markersize=4)
ax1.set_ylabel(r'$p_s\left(c_s=0.7145\right)$', fontsize=12)
ax1.set_xlim(17.5, 24.5)
ax1.set_ylim(0.55, 1.05)
ax1.set_yticks([0.6, 0.7, 0.8, 0.9, 1.0])
ax1.set_xlabel(r'$i$ (mag)')
plt.savefig('../../figures/purity_mag_cut_integrated.pdf')
plt.show()
```
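The completeness and purity definitions encoded above can be sanity-checked on a toy confusion table; the counts below are made up purely for illustration, using the same `gg`/`gs`/`sg`/`ss` naming as `find_purity_at`.

```python
# Hypothetical counts: true galaxies classified as galaxies / as stars,
# true stars classified as stars / as galaxies.
gg, gs = 90, 10
ss, sg = 45, 5

galaxy_completeness = gg / (gg + gs)  # fraction of true galaxies recovered
galaxy_purity = gg / (gg + sg)        # true-galaxy fraction of galaxy calls
star_completeness = ss / (ss + sg)
star_purity = ss / (ss + gs)

print(galaxy_completeness, galaxy_purity)  # 0.9 0.947...
print(star_completeness, star_purity)      # 0.9 0.818...
```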
```
Copyright 2021 IBM Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
# Support Vector Machine on Avazu Dataset
## Background
This data was used in a competition on click-through rate prediction jointly hosted by Avazu and Kaggle in 2014. The participants were asked to learn a model from the first 10 days of advertising logs and predict the click probability for the impressions on the 11th day.
## Source
The raw dataset can be obtained directly from the [Kaggle competition](https://www.kaggle.com/c/avazu-ctr-prediction/).
In this example, we download the pre-processed dataset from the [LIBSVM dataset repository](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/).
## Goal
The goal of this notebook is to illustrate how Snap ML can accelerate training of a support vector machine model on this dataset.
## Code
```
cd ../../
CACHE_DIR='cache-dir'
import numpy as np
import time
from datasets import Avazu
from sklearn.svm import LinearSVC
from snapml import SupportVectorMachine as SnapSupportVectorMachine
from sklearn.metrics import accuracy_score as score
dataset= Avazu(cache_dir=CACHE_DIR)
X_train, X_test, y_train, y_test = dataset.get_train_test_split()
print("Number of examples: %d" % (X_train.shape[0]))
print("Number of features: %d" % (X_train.shape[1]))
print("Number of classes: %d" % (len(np.unique(y_train))))
# the dataset is highly imbalanced
labels, sizes = np.unique(y_train, return_counts=True)
print("%6.2f %% of the training transactions belong to class 0" % (sizes[0]*100.0/(sizes[0]+sizes[1])))
print("%6.2f %% of the training transactions belong to class 1" % (sizes[1]*100.0/(sizes[0]+sizes[1])))
model = LinearSVC(loss="hinge", class_weight="balanced", fit_intercept=False, random_state=42)
t0 = time.time()
model.fit(X_train, y_train)
t_fit_sklearn = time.time()-t0
score_sklearn = score(y_test, model.predict(X_test))
print("Training time (sklearn): %6.2f seconds" % (t_fit_sklearn))
print("Accuracy score (sklearn): %.4f" % (score_sklearn))
model = SnapSupportVectorMachine(n_jobs=4, class_weight="balanced", fit_intercept=False, random_state=42)
t0 = time.time()
model.fit(X_train, y_train)
t_fit_snapml = time.time()-t0
score_snapml = score(y_test, model.predict(X_test))
print("Training time (snapml): %6.2f seconds" % (t_fit_snapml))
print("Accuracy score (snapml): %.4f" % (score_snapml))
speed_up = t_fit_sklearn/t_fit_snapml
score_diff = (score_snapml-score_sklearn)/score_sklearn
print("Speed-up: %.1f x" % (speed_up))
print("Relative diff. in score: %.4f" % (score_diff))
```
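Both models above pass `class_weight="balanced"` to compensate for the heavy class imbalance. scikit-learn documents this heuristic as weighting each class by `n_samples / (n_classes * np.bincount(y))`; a minimal sketch of that computation:

```python
import numpy as np

def balanced_class_weights(y):
    # scikit-learn's "balanced" heuristic: rarer classes get larger weights
    classes, counts = np.unique(y, return_counts=True)
    weights = len(y) / (len(classes) * counts)
    return dict(zip(classes, weights))

y = np.array([0, 0, 0, 1])  # 75% / 25% imbalance
print(balanced_class_weights(y))  # class 0 -> 2/3, class 1 -> 2.0
```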
## Disclaimer
Performance results always depend on the hardware and software environment.
Information regarding the environment that was used to run this notebook are provided below:
```
import utils
environment = utils.get_environment()
for k,v in environment.items():
print("%15s: %s" % (k, v))
```
## Record Statistics
Finally, we record the environment and performance statistics for analysis outside of this standalone notebook.
```
import scrapbook as sb
sb.glue("result", {
'dataset': dataset.name,
'n_examples_train': X_train.shape[0],
'n_examples_test': X_test.shape[0],
'n_features': X_train.shape[1],
'n_classes': len(np.unique(y_train)),
'model': type(model).__name__,
'score': score.__name__,
't_fit_sklearn': t_fit_sklearn,
'score_sklearn': score_sklearn,
't_fit_snapml': t_fit_snapml,
'score_snapml': score_snapml,
'score_diff': score_diff,
'speed_up': speed_up,
**environment,
})
```
## YUV color space
Colors in images can be encoded in different ways. Most well known is perhaps the RGB-encoding, in which the image consists of a Red, Green, and Blue channel. However, there are many other encodings, which sometimes have arisen for historical reasons or to better comply with properties of human perception. The YUV color space has arisen in order to better deal with transmission or compression artifacts; when using YUV instead of RGB, these artifacts will less easily be detected by humans. YUV consists of one luma component (Y) and two chrominance (color) components (UV).
Many cameras used in robotics directly output YUV-encoded images. Although these images can be converted to RGB, this conversion costs computation time, so it is better to work directly in YUV-space. The YUV color space is aptly explained on <A HREF="https://en.wikipedia.org/wiki/YUV" TARGET="_blank">Wikipedia</A>. It also contains an image on the U and V axes, for a value of Y$=0.5$. However, using this image for determining thresholds on U and V for color detection may lead to suboptimal results.
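For reference, the conversion from RGB to analog YUV combines one luma signal with two color-difference signals. A minimal sketch, assuming RGB values scaled to $[0, 1]$ and the standard BT.601 luma weights (the same luma weights OpenCV's YUV conversion is based on):

```python
import numpy as np

def rgb_to_yuv(rgb):
    # rgb: array of shape (..., 3) with channels in [0, 1]
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma
    u = 0.492 * (b - y)                    # chrominance: blue minus luma
    v = 0.877 * (r - y)                    # chrominance: red minus luma
    return np.stack([y, u, v], axis=-1)

print(rgb_to_yuv(np.array([1.0, 1.0, 1.0])))  # white: Y=1, U=0, V=0
```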
<font color='red'><B>Exercise 1</B></font>
Generate slices of the YUV space below, with the help of the script `YUV_slices.py` <A HREF="https://github.com/guidoAI/YUV_notebook/blob/master/YUV_slices.py" TARGET="_blank">(link to file)</A>. You can change the number of slices (`n_slices`) and the height and width (`H`, `W`) of the generated images.
1. Why can determining thresholds at Y$=0.5$ lead to suboptimal results?
2. What U and V thresholds would you set for detecting orange? And for green?
3. Can you think of a better way than setting a threshold on U and V for determining if a pixel belongs to a certain color?
```
%matplotlib inline
import YUV_slices as YUV
n_slices = 5;
YUV.generate_slices_YUV(n_slices);
```
## Color filtering
The code below loads an image and filters the colors.
```
%matplotlib inline
import cv2;
import numpy as np;
import matplotlib.pyplot as plt
def filter_color(image_name = 'DelFly_tulip.jpg', y_low = 50, y_high = 200, \
u_low = 120, u_high = 130, v_low = 120, v_high = 130, resize_factor=1):
im = cv2.imread(image_name);
im = cv2.resize(im, (int(im.shape[1]/resize_factor), int(im.shape[0]/resize_factor)));
YUV = cv2.cvtColor(im, cv2.COLOR_BGR2YUV);
Filtered = np.zeros([YUV.shape[0], YUV.shape[1]]);
for y in range(YUV.shape[0]):
for x in range(YUV.shape[1]):
if(YUV[y,x,0] >= y_low and YUV[y,x,0] <= y_high and \
YUV[y,x,1] >= u_low and YUV[y,x,1] <= u_high and \
YUV[y,x,2] >= v_low and YUV[y,x,2] <= v_high):
Filtered[y,x] = 1;
plt.figure();
RGB = cv2.cvtColor(im, cv2.COLOR_BGR2RGB);
plt.imshow(RGB);
plt.title('Original image');
plt.figure()
plt.imshow(Filtered);
plt.title('Filtered image');
```
<font color='red'><B>Exercise 2</B></font>
Please answer the questions of this exercise, by changing and running the code block below. Note that Y, U, and V are all in the range $[0, 255]$.
1. Can you find an easy way to make the code run faster, while still being able to evaluate if your filter works?
2. Can you filter the colors, so that only the tulip remains?
3. Can you filter the colors, so that only the stem remains?
```
filter_color(y_low = 50, y_high = 200, u_low = 60, u_high = 160, v_low = 60, v_high = 160);
```
## Answers
Exercise 1:
1. Colors and color regions are different for different Y values. What is orange at one value of Y can be a different color (e.g., red) at another value of Y.
2. Green: low U, low V (e.g., [0,120], [0,120]). Orange: low U, relatively high V (e.g., [0,120], [160,220])
3. Include the Y in the selection of the threshold, e.g., as in a look-up table (different U and V depending on Y), or by determining a prototype pixel for each different "color" and when classifying a pixel, determining which prototype is closest in YUV-space.
Exercise 2:
1. Set the resize_factor to a factor larger than 1. Setting it to 4 makes the filtering faster, while it is still possible to evaluate the success of the filter.
2. ``y_low = 50, y_high = 200, u_low = 0, u_high = 120, v_low = 160, v_high = 220``
3. ``y_low = 50, y_high = 200, u_low = 0, u_high = 120, v_low = 0, v_high = 120``
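Beyond resizing, the per-pixel loop in `filter_color` can also be replaced by a vectorized NumPy mask, which is typically orders of magnitude faster. A sketch, assuming `yuv` is the H x W x 3 array produced by `cv2.cvtColor(im, cv2.COLOR_BGR2YUV)`:

```python
import numpy as np

def filter_color_vectorized(yuv, y_low, y_high, u_low, u_high, v_low, v_high):
    # Boolean mask: True where a pixel lies inside all three channel ranges
    mask = ((yuv[..., 0] >= y_low) & (yuv[..., 0] <= y_high) &
            (yuv[..., 1] >= u_low) & (yuv[..., 1] <= u_high) &
            (yuv[..., 2] >= v_low) & (yuv[..., 2] <= v_high))
    return mask.astype(float)  # 1.0 where the pixel passes all thresholds

# Tiny 1x2 "image": first pixel inside the tulip ranges, second outside
demo = np.array([[[100, 80, 180], [100, 80, 100]]])
print(filter_color_vectorized(demo, 50, 200, 0, 120, 160, 220))  # [[1. 0.]]
```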
```
a=5
print(a)
import numpy as np
np.pi
import pandas as pd
data = pd.read_csv("./data_SNat/BRGM_Mayotte_2018.txt",delimiter = "\t")
print(data.head())
import matplotlib.pyplot as plt
plt.plot(data['sec'],data['mag'])
plt.show()
from Codes_Graphes.OmoriUtsu import GraphOmoriUtsu,RegressionOU,RegressionOU_foreshock
a, b, idx_ms = GraphOmoriUtsu(data, 3600*24*4, 1, 1, 0)
print(a)
print(b)
print(idx_ms)
gap = 3600*24*4
nb_gap = int((data['sec'].max() - data['sec'].min()) // gap)
nt = []
magmax = []
title = 'Earthquake number and maximal magnitude per day'
for i in range(nb_gap + 1):
dfbis = data[((i * gap + data['sec'].min()) <= data['sec']) & (((i + 1) * gap + data['sec'].min()) > data['sec'])]
nt.append(len(dfbis))
magmax.append(dfbis['mag'].max())
print(magmax)
print(nt)
data = pd.read_csv("./data_SNat/CDSA_SeulementEssaimSaintes_2004-2005.txt",delimiter = "\t")
print(data.head())
print(data.iloc[[1]] )
first = data[data['sec'] == data['sec'].min()]
print(float(first['mag']))
GraphOmoriUtsu(data, 3600*24*13, 1, 1, 0)
GraphOmoriUtsu(data, 3600*24*8, 1, 1, 0)
def findFirstSeisme(dataset):
firstSeisme = dataset[dataset['sec'] == dataset['sec'].min()]
return firstSeisme
def getDeltaT(dataset,currentLine):
timeDF = dataset['sec']
currentTime = currentLine['sec']
deltaT = timeDF.apply(lambda x : int(x)-int(currentLine['sec']))
return deltaT
#Test 1
deltaT = getDeltaT(data, data[data['sec'] == data['sec'].min()])
print(max(deltaT))
print(float(deltaT.iloc[[0]])<=float(deltaT.iloc[[0]]))
#print(data[data['sec'] == data['sec'].max()]['sec'])
#print(data['sec'][0])
#print(data[data['sec'] == data['sec'].min()]['sec'])
from math import sin, cos, sqrt, atan2, radians
def getDist(lat1, lon1, lat2, lon2):
# Haversine distance in km; input coordinates are in degrees
R = 6373.0
lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat/2)**2 + cos(lat1)*cos(lat2)*sin(dlon/2)**2
c = 2*atan2(sqrt(a),sqrt(1-a))
distance = R*c
return distance
def getDeltaDist(dataset,currentLine):
coordDF = dataset[['p0','p1']].copy()
currentCoord = currentLine[['p0','p1']]
dists = []
for i in range(len(coordDF)):
dists.append(getDist(float(coordDF['p0'].iloc[i]), float(coordDF['p1'].iloc[i]), float(currentCoord['p0']), float(currentCoord['p1'])))
coordDF['dist'] = dists
return coordDF['dist']
#Test 1
print(data[['p0','p1','p2']])
def getDeltaMag(dataset, currentLine):
magDF = dataset['mag']
currentMag = currentLine['mag']
deltaMag = magDF.apply(lambda x : abs(float(x)-float(currentLine['mag'])))
return deltaMag
def getCorrectChilds(dataset, currentSeisme,maxDeltaT, maxDeltaDist, maxDeltaMag):
deltaT = getDeltaT(dataset,currentSeisme)
deltaDist = getDeltaDist(dataset,currentSeisme)
deltaMag = getDeltaMag(dataset,currentSeisme)
correctChilds = []
for i in range(min(len(deltaT),len(deltaDist),len(deltaMag))):
if (0<float(deltaT.iloc[[i]])<=maxDeltaT*1.3 and maxDeltaT/1.3<=float(deltaT.iloc[[i]]) and maxDeltaDist/1.3<=float(deltaDist.iloc[[i]])<=maxDeltaDist*1.3 and maxDeltaMag/1.3<=float(deltaMag.iloc[[i]])<=maxDeltaMag*1.3):
child = [dataset.iloc[[i]],float(deltaT.iloc[[i]]),float(deltaDist.iloc[[i]]),float(deltaMag.iloc[[i]])]
correctChilds.append(child)
return correctChilds
def getCorrectChildsInit(dataset, currentSeisme,maxDeltaT, maxDeltaDist, maxDeltaMag):
deltaT = getDeltaT(dataset,currentSeisme)
deltaDist = getDeltaDist(dataset,currentSeisme)
deltaMag = getDeltaMag(dataset,currentSeisme)
correctChilds = []
for i in range(min(len(deltaT),len(deltaDist),len(deltaMag))):
if (0<float(deltaT.iloc[[i]])<=maxDeltaT and float(deltaDist.iloc[[i]])<=maxDeltaDist and float(deltaMag.iloc[[i]])<=maxDeltaMag):
child = [dataset.iloc[[i]],float(deltaT.iloc[[i]]),float(deltaDist.iloc[[i]]),float(deltaMag.iloc[[i]])]
correctChilds.append(child)
return correctChilds
def DFS(root,dataset,maxDeltaT, maxDeltaDist, maxDeltaMag):
if root.children == []:
print("end of branch")
else :
for child in root.children:
childChilds = getCorrectChilds(dataset,child.data[0],child.data[1], child.data[2], child.data[3])
for j in range(len(childChilds)):
child.children.append(Tree(childChilds[j]))
DFS(child,dataset,child.data[1], child.data[2], child.data[3])
class Tree:
def __init__(self, data):
self.children = []
self.data = data
print("new tree "+str(data))
def getLongestBranch(self):
print("oO")
tree = Tree(data[data['sec'] == data['sec'].min()])
root = tree
tree.children.append(Tree(data[data['sec'] == data['sec'].max()]))
tree.children.append(5)
print(tree.data)
print(tree.children[0].data)
print(root.children[0].data)
!pip install anytree
from anytree import Node, RenderTree
udo = Node(data[data['sec'] == data['sec'].min()])
marc = Node("Marc", parent=udo)
lian = Node("Lian", parent=marc)
dan = Node("Dan", parent=udo)
jet = Node("Jet", parent=dan)
jan = Node("Jan", parent=dan)
joe = Node("Joe", parent=dan)
for pre, fill, node in RenderTree(udo):
print("%s%s" % (pre, node.name))
!pip install graphviz
from anytree.exporter import DotExporter
# graphviz needs to be installed for the next line!
DotExporter(udo).to_picture("udo.png")
# HERE we base ourselves on 3 deltas, which is pretty restrictive
# MAYBE normalising and multiplying them is a better option
def createTree(maxDeltaT, maxDeltaDist, maxDeltaMag, dataset):
firstSeisme = findFirstSeisme(dataset)
root = Tree(firstSeisme)
#First Step (not really essential but better to understand the Deep First Search)
correctChilds = getCorrectChildsInit(dataset, firstSeisme,maxDeltaT, maxDeltaDist, maxDeltaMag)
for j in range(len(correctChilds)):
root.children.append(Tree(correctChilds[j]))
#Recursion Step => DFS
DFS(root,dataset,maxDeltaT, maxDeltaDist, maxDeltaMag)
return root
tree = createTree(10000000, 2000000, 10000000, data)
print(tree.data)
print(getCorrectChilds(data, findFirstSeisme(data),20000,10,10)[0][3])
```
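The aftershock sequences analysed above are conventionally described by the modified Omori (Omori-Utsu) law, $n(t) = K / (c + t)^p$, which `GraphOmoriUtsu` presumably fits. A minimal sketch of the law itself (the `K`, `c`, `p` values below are illustrative, not fitted to this data):

```python
import numpy as np

def omori_rate(t, K=100.0, c=0.1, p=1.1):
    # Modified Omori law: aftershock rate decays as a power law of
    # elapsed time t since the mainshock.
    return K / (c + t)**p

t_days = np.array([0.5, 1.0, 4.0, 13.0])  # the gap sizes above are in days
rates = omori_rate(t_days)
print(rates)  # monotonically decaying aftershock rate
```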
```
import os
os.environ['CASTLE_BACKEND'] = 'pytorch'
from collections import OrderedDict
import warnings
import numpy as np
import networkx as nx
import ges
from castle.common import GraphDAG
from castle.metrics import MetricsDAG
from castle.datasets import IIDSimulation, DAG
from castle.algorithms import PC, ICALiNGAM, GOLEM
import matplotlib.pyplot as plt
# Mute warnings - for the sake of presentation clarity
# Should be removed for real-life applications
warnings.simplefilter('ignore')
```
# Causal Discovery in Python
Over the last decade, causal inference gained a lot of traction in academia and in the industry. Causal models can be immensely helpful in various areas – from marketing to medicine and from finance to cybersecurity. To make these models work, we need not only data as in traditional machine learning, but also a causal structure. Traditional way to obtain the latter is through well-designed experiments. Unfortunately, experiments can be tricky – difficult to design, expensive or unethical. Causal discovery (also known as structure learning) is an umbrella term that describes several families of methods aiming at discovering causal structure from observational data. During the talk, we will review the basics of causal inference and introduce the concept of causal discovery. Next, we will discuss differences between various approaches to causal discovery. Finally, we will see a series of practical examples of causal discovery using Python.
## Installing the environment
* Using **Conda**:
`conda env create --file econml-dowhy-py38.yml`
* Installing `gcastle` only:
`pip install gcastle==1.0.3rc3`
```
def get_n_undirected(g):
total = 0
for i in range(g.shape[0]):
for j in range(g.shape[0]):
if (g[i, j] == 1) and (g[i, j] == g[j, i]):
total += .5
return total
```
## PC algorithm
**PC algorithm** starts with a **fully connected** graph and then performs a series of steps to remove edges, based on graph independence structure. Finally, it tries to orient as many edges as possible.
Figure 1 presents a visual representation of these steps.
<br><br>
<img src="img/glymour_et_al_pc.jpg">
<br>
<figcaption><center><b>Figure 1. </b>Original graph and PC algorithm steps. (Glymour et al., 2019)</center></figcaption>
<br>
Interested in more details?
[Glymour et al. - Review of Causal Discovery Methods Based on Graphical Models (2019)](https://www.frontiersin.org/articles/10.3389/fgene.2019.00524/full)
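Under the hood, PC's edge-removal step relies on conditional-independence tests; for roughly Gaussian data a common choice is partial correlation computed from regression residuals. A minimal sketch of such a test (an illustration, not gcastle's internal implementation):

```python
import numpy as np

def partial_corr(x, y, z):
    # Partial correlation of x and y given a single conditioning variable z:
    # regress both on z, then correlate the residuals.
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
a = rng.normal(size=5000)
b = a + 0.1 * rng.normal(size=5000)   # chain: a -> b -> c
c = b + 0.1 * rng.normal(size=5000)
print(abs(partial_corr(a, c, b)) < 0.1)  # a independent of c given b
```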
```
# Let's implement this structure
x = np.random.randn(1000)
y = np.random.randn(1000)
z = x + y + .1 * np.random.randn(1000)
w = .7 * z + .1 * np.random.randn(1000)
# To matrix
pc_dataset = np.vstack([x, y, z, w]).T
# Sanity check
pc_dataset, pc_dataset.shape
# Build the model
pc = PC()
pc.learn(pc_dataset)
pc.causal_matrix
# Get learned graph
learned_graph = nx.DiGraph(pc.causal_matrix)
# Relabel the nodes
MAPPING = {k: v for k, v in zip(range(4), ['X', 'Y', 'Z', 'W'])}
learned_graph = nx.relabel_nodes(learned_graph, MAPPING, copy=True)
# Plot the graph
nx.draw(
learned_graph,
with_labels=True,
node_size=1800,
font_size=18,
font_color='white'
)
```
## Let's do some more discovery!
### Generate datasets
We'll use a [scale-free](https://en.wikipedia.org/wiki/Scale-free_network) model to generate graphs.
Then we'll use three different causal models on this graph:
* linear Gaussian
* linear exp
* non-linear quadratic
```
# Data simulation, simulate true causal dag and train_data.
true_dag = DAG.scale_free(n_nodes=10, n_edges=15, seed=18)
DATA_PARAMS = {
'linearity': ['linear', 'nonlinear'],
'distribution': {
'linear': ['gauss', 'exp'],
'nonlinear': ['quadratic']
}
}
datasets = {}
for linearity in DATA_PARAMS['linearity']:
for distr in DATA_PARAMS['distribution'][linearity]:
datasets[f'{linearity}_{distr}'] = IIDSimulation(
W=true_dag,
n=2000,
method=linearity,
sem_type=distr)
# Sanity check
datasets
plt.figure(figsize=(16, 8))
for i, dataset in enumerate(datasets):
X = datasets[dataset].X
plt.subplot(4, 2, i + 1)
plt.hist(X[:, 0], bins=100)
plt.title(dataset)
plt.axis('off')
plt.subplot(4, 2, i + 5)
plt.scatter(X[:, 8], X[:, 4], alpha=.3)
plt.title(dataset)
plt.axis('off')
plt.subplots_adjust(hspace=.7)
plt.show()
```
### Visualize the true graph
```
nx.draw(
nx.DiGraph(true_dag),
node_size=1800,
alpha=.7,
pos=nx.circular_layout(nx.DiGraph(true_dag))
)
GraphDAG(true_dag)
plt.show()
```
## Method comparison
```
methods = OrderedDict({
'PC': PC,
'GES': ges,
'LiNGAM': ICALiNGAM,
'GOLEM': GOLEM
})
%%time
results = {}
for k, dataset in datasets.items():
print(f'************* Current dataset: {k}\n')
X = dataset.X
results[k] = {}
for method in methods:
if method not in ['GES', 'CORL']:
print(f'Method: {method}')
# Fit the model
if method == 'GOLEM':
model = methods[method](num_iter=2.5e4)
else:
model = methods[method]()
model.learn(X)
pred_dag = model.causal_matrix
elif method == 'GES':
print(f'Method: {method}')
# Fit the model
pred_dag, _ = methods[method].fit_bic(X)
# Get n undir edges
n_undir = get_n_undirected(pred_dag)
# Plot results
GraphDAG(pred_dag, true_dag, 'result')
mt = MetricsDAG(pred_dag, true_dag)
print(f'FDR: {mt.metrics["fdr"]}')
print(f'Recall: {mt.metrics["recall"]}')
print(f'Precision: {mt.metrics["precision"]}')
print(f'F1 score: {mt.metrics["F1"]}')
print(f'No. of undir. edges: {n_undir}\n')
print('-' * 50, '\n')
results[k][method] = pred_dag
print('\n')
```
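The `MetricsDAG` numbers printed above can be illustrated on a toy adjacency pair; here is a sketch of directed edge-level precision and recall (an illustration only, since gcastle's exact metric definitions also account for undirected edges):

```python
import numpy as np

def edge_precision_recall(pred, true):
    # Compare directed edges of two adjacency matrices, no partial credit
    tp = int(np.sum((pred == 1) & (true == 1)))
    fp = int(np.sum((pred == 1) & (true == 0)))
    fn = int(np.sum((pred == 0) & (true == 1)))
    return tp / (tp + fp), tp / (tp + fn)

true = np.array([[0, 1], [0, 0]])   # one true edge: 0 -> 1
pred = np.array([[0, 1], [1, 0]])   # recovers it, adds a spurious 1 -> 0
print(edge_precision_recall(pred, true))  # (0.5, 1.0)
```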
# Estimating pi ($\pi$) using the quantum phase estimation algorithm
## 1. A quick overview of the [quantum phase estimation algorithm](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
Quantum Phase Estimation (QPE) is a quantum algorithm that forms the building block of many more complex quantum algorithms. At its core, QPE solves a fairly straightforward problem: given an operator $U$ and a quantum state $\vert\psi\rangle$ that is an eigenstate of $U$ with $U\vert\psi\rangle = \exp\left(2 \pi i \theta\right)\vert\psi\rangle$, can we obtain an estimate of $\theta$?
The answer is yes. The QPE algorithm gives us $2^n\theta$, where $n$ is the number of qubits we use to estimate the phase $\theta$.
## 2. Estimating $\pi$
In this demo, we choose $$U = p(\theta), \quad \vert\psi\rangle = \vert1\rangle,$$ where $$ p(\theta) = \begin{bmatrix} 1 & 0 \\ 0 & \exp(i\theta) \end{bmatrix} $$ is one of the quantum gates available in Qiskit, and $$p(\theta)\vert1\rangle = \exp(i\theta)\vert1\rangle.$$
Choosing the phase of our gate to be $\theta = 1$, we can find $\pi$ using the following two relations:
1. At the output of the QPE algorithm, we measure an estimate of $2^n\theta$. Then $\theta = \text{measured} / 2^n$
2. From the definition of the $p(\theta)$ gate above, we know that $2\pi\theta = 1 \Rightarrow \pi = 1 / 2\theta$
Combining these two relations, $\pi = 1 / \left(2 \times (\text{measured}/2^n)\right)$.
For a detailed understanding of the QPE algorithm, refer to the dedicated chapter in the Qiskit Textbook at [qiskit.org/textbook](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html).
## 3. Time to write some code
We start by importing the necessary libraries.
```
## import the necessary tools for our work
from IPython.display import clear_output
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
import matplotlib.pyplot as plotter
from qiskit.tools.monitor import job_monitor
# Visualisation settings
import seaborn as sns, operator
sns.set_style("dark")
pi = np.pi
```
The `qft_dagger` function computes the inverse quantum Fourier transform. For a detailed understanding of this algorithm, see the dedicated chapter in the [Qiskit Textbook](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html).
```
## Code for inverse Quantum Fourier Transform
## adapted from Qiskit Textbook at
## qiskit.org/textbook
def qft_dagger(circ_, n_qubits):
"""n-qubit QFTdagger the first n qubits in circ"""
for qubit in range(int(n_qubits/2)):
circ_.swap(qubit, n_qubits-qubit-1)
for j in range(0,n_qubits):
for m in range(j):
circ_.cp(-np.pi/float(2**(j-m)), m, j)
circ_.h(j)
```
The next function, `qpe_pre`, prepares the initial state for the estimation. Note that the initial state is created by applying a Hadamard gate to every qubit except the last one, and setting the last qubit to $\vert1\rangle$.
```
## Code for initial state of Quantum Phase Estimation
## adapted from Qiskit Textbook at qiskit.org/textbook
## Note that the starting state is created by applying
## H on the first n_qubits, and setting the last qubit to |psi> = |1>
def qpe_pre(circ_, n_qubits):
circ_.h(range(n_qubits))
circ_.x(n_qubits)
for x in reversed(range(n_qubits)):
for _ in range(2**(n_qubits-1-x)):
circ_.cp(1, n_qubits-1-x, n_qubits)
```
Next we write a quick `run_job` function to run a quantum circuit and return the results.
```
## Run a Qiskit job on either hardware or simulators
def run_job(circ, backend, shots=1000, optimization_level=0):
t_circ = transpile(circ, backend, optimization_level=optimization_level)
qobj = assemble(t_circ, shots=shots)
job = backend.run(qobj)
job_monitor(job)
return job.result().get_counts()
```
Then load your IBMQ account to use the cloud simulator or real quantum devices.
```
## Load your IBMQ account if
## you'd like to use the cloud simulator or real quantum devices
my_provider = IBMQ.load_account()
simulator_cloud = my_provider.get_backend('ibmq_qasm_simulator')
device = my_provider.get_backend('ibmq_16_melbourne')
simulator = Aer.get_backend('qasm_simulator')
```
Finally, we put everything together in the `get_pi_estimate` function, which uses `n_qubits` qubits to obtain an estimate of $\pi$.
```
## Function to estimate pi
## Summary: using the notation in the Qiskit textbook (qiskit.org/textbook),
## do quantum phase estimation with the 'phase' operator U = p(theta) and |psi> = |1>
## such that p(theta)|1> = exp(2 x pi x i x theta)|1>
## By setting theta = 1 radian, we can solve for pi
## using 2^n x 1 radian = most frequently measured count = 2 x pi
def get_pi_estimate(n_qubits):
# create the circuit
circ = QuantumCircuit(n_qubits + 1, n_qubits)
# create the input state
qpe_pre(circ, n_qubits)
# apply a barrier
circ.barrier()
# apply the inverse fourier transform
qft_dagger(circ, n_qubits)
# apply a barrier
circ.barrier()
# measure all but the last qubits
circ.measure(range(n_qubits), range(n_qubits))
# run the job and get the results
counts = run_job(circ, backend=simulator, shots=10000, optimization_level=0)
# print(counts)
# get the count that occurred most frequently
max_counts_result = max(counts, key=counts.get)
max_counts_result = int(max_counts_result, 2)
# solve for pi from the measured counts
theta = max_counts_result/2**n_qubits
return (1./(2*theta))
```
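Before running the quantum version, the relation $\pi = 1 / \left(2 \times (\text{measured}/2^n)\right)$ can be sanity-checked classically, assuming an ideal QPE that returns the integer nearest to $2^n\theta$ with $\theta = 1/(2\pi)$ (the phase encoded by `p(1)`):

```python
import math

def ideal_pi_estimate(n_qubits):
    # theta is the QPE phase encoded by the p(1) gate
    theta = 1 / (2 * math.pi)
    measured = round(2**n_qubits * theta)  # most probable QPE outcome
    return 1 / (2 * (measured / 2**n_qubits))

for n in (4, 8, 12):
    print(n, ideal_pi_estimate(n))  # converges toward pi as n grows
```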
Now run the `get_pi_estimate` function with different numbers of qubits and print the estimates.
```
# estimate pi using different numbers of qubits
nqs = list(range(2,12+1))
pi_estimates = []
for nq in nqs:
thisnq_pi_estimate = get_pi_estimate(nq)
pi_estimates.append(thisnq_pi_estimate)
print(f"{nq} qubits, pi ≈ {thisnq_pi_estimate}")
```
And plot all the results.
```
plotter.plot(nqs, [pi]*len(nqs), '--r')
plotter.plot(nqs, pi_estimates, '.-', markersize=12)
plotter.xlim([1.5, 12.5])
plotter.ylim([1.5, 4.5])
plotter.legend(['$\pi$', 'estimate of $\pi$'])
plotter.xlabel('Number of qubits', fontdict={'size':20})
plotter.ylabel('$\pi$ and estimate of $\pi$', fontdict={'size':20})
plotter.tick_params(axis='x', labelsize=12)
plotter.tick_params(axis='y', labelsize=12)
plotter.show()
import qiskit
qiskit.__qiskit_version__
```
```
%pylab inline
import numpy as np
import seaborn as sns
from tqdm import tqdm
from laika.lib.coordinates import LocalCoord, ecef2geodetic
# A practical way to confirm the accuracy of laika's processing
# is by downloading some observation data from a CORS station
# and confirming our position estimate to the known position
# of the station.
# We begin by initializing an AstroDog
from laika import AstroDog
dog = AstroDog()
# Building this cache takes forever, so just copy it from the repo
from shutil import copyfile
import os
cache_directory = '/tmp/gnss/cors_coord/'
try:
os.mkdir('/tmp/gnss/')
except:
pass
try:
os.mkdir(cache_directory)
except:
pass
copyfile('cors_station_positions', cache_directory + 'cors_station_positions')
from datetime import datetime
from laika.gps_time import GPSTime
from laika.downloader import download_cors_station
from laika.rinex_file import RINEXFile
from laika.dgps import get_station_position
# We can use helper functions in laika to download the station's observation
# data for a certain time and its known exact position.
station_name = 'slac'
time = GPSTime.from_datetime(datetime(2018,1,7))
slac_rinex_obs_file = download_cors_station(time, station_name, dog.cache_dir)
obs_data = RINEXFile(slac_rinex_obs_file)
slac_exact_postition = get_station_position('slac')
import laika.raw_gnss as raw
# Now we have the observation data for the station we can process
# it with the astrodog.
rinex_meas_grouped = raw.read_rinex_obs(obs_data)
rinex_corr_grouped = []
for meas in tqdm(rinex_meas_grouped):
proc = raw.process_measurements(meas, dog=dog)
corr = raw.correct_measurements(meas, slac_exact_postition, dog=dog)
rinex_corr_grouped.append(corr)
# Using laika's WLS solver we can now calculate position
# fixes for every epoch (every 30s) over 24h.
ests = []
for corr in tqdm(rinex_corr_grouped[:]):
fix, _ = raw.calc_pos_fix(corr)
ests.append(fix)
ests = array(ests)
# Now we plot the error of every fix in NED (North, East, Down)
# coordinates, we see clearly that the Down axis is noisier,
# this is to be expected as the VDOP is generally much larger
# than the HDOP.
conv = LocalCoord.from_ecef(slac_exact_postition)
errors_ned = conv.ecef2ned(ests[:,:3])
figsize(10,10)
plot(errors_ned[:,2], label='Down');
plot(errors_ned[:,1], label='East');
plot(errors_ned[:,0], label='North');
title('Error of station position estimated by C1C signal', fontsize=25);
ylim(-10,10);
xlabel('Epoch (#)', fontsize=15);
ylabel('Error (m)', fontsize=15);
legend(fontsize=15);
# The error of the median position compared to the true
# position is ~0.5m with this technique. This is about what
# we would expect. Without using carrier phase measurements
# we do not expect better results.
print('The error of the median position is NED:')
print(np.median(errors_ned, axis=0))
# Out of curiosity we repeat the previous experiment, but we use
# the C2P signal, just to make sure we get similar results. And while
# we're at it, let's compare the residuals of GLONASS and GPS.
from laika.helpers import get_constellation
ests, errs, residuals_gps, residuals_glonass = [], [], [], []
for corr in tqdm(rinex_corr_grouped[:]):
sol = raw.calc_pos_fix(corr, signal='C2P')
residuals_gps.append(raw.pr_residual([c for c in corr if get_constellation(c.prn) == 'GPS'], signal='C2P')(np.append(slac_exact_postition, sol[0][3:])))
residuals_glonass.append(raw.pr_residual([c for c in corr if get_constellation(c.prn) == 'GLONASS'], signal='C2P')(np.append(slac_exact_postition, sol[0][3:])))
ests.append(sol[0])
ests = array(ests)
# We plot the error again and see similar results,
# this gives us confidence that the different signals
# are being processed correctly.
conv = LocalCoord.from_ecef(slac_exact_postition)
errors_ned = conv.ecef2ned(ests[:,:3])
figsize(10,10)
plot(errors_ned[:,2], label='Down');
plot(errors_ned[:,1], label='East');
plot(errors_ned[:,0], label='North');
title('Error of station position estimated from C2P signal', fontsize=25);
ylim(-10,10);
xlabel('Epoch (#)',fontsize=15);
ylabel('Error (m)',fontsize=15);
legend(fontsize=15);
print('The error of the median position in NED is:')
print(np.median(errors_ned, axis=0))
# When we look at the residual distributions of GLONASS
# and GPS, we see that the distributions are very similar,
# showing that both constellations provide correct
# and good signal. Interestingly, there is a non-trivial negative
# bias on both constellations. This bias must be attributed to
# something that is not satellite or constellation specific, since
# it is equally present on GLONASS and GPS. It is probably caused
# by modelling errors in the ionosphere/troposphere or by multipath.
sns.distplot(np.concatenate(residuals_gps)[np.isfinite(np.concatenate(residuals_gps))], label='GPS');
sns.distplot(np.concatenate(residuals_glonass)[np.isfinite(np.concatenate(residuals_glonass))],label='GLONASS');
xlim(-7, 7);
xlabel('Pseudorange residual (m)',fontsize=15);
ylabel('Probability',fontsize=15);
title('Distribution of GLONASS vs GPS residual on C2P', fontsize=25);
legend(fontsize=15);
```
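The shared negative bias noted in the comments above can be quantified directly as the median of the finite residuals. This is a minimal, self-contained sketch; the arrays here merely stand in for `np.concatenate(residuals_gps)` and `np.concatenate(residuals_glonass)` from the cells above, and the numbers are illustrative only:

```python
import numpy as np

def residual_bias(residuals):
    """Median of the finite pseudorange residuals, in meters."""
    r = np.asarray(residuals, dtype=float)
    return float(np.median(r[np.isfinite(r)]))

# Illustrative values only -- in the notebook these would be the
# concatenated GPS and GLONASS residual arrays computed above.
gps_like = [-1.2, -0.8, -1.0, np.nan, -0.9]
glonass_like = [-1.1, -0.9, np.nan, -1.0, -0.8]

print(residual_bias(gps_like), residual_bias(glonass_like))
```

A similar median on both constellations points to a shared model error rather than a satellite-specific one.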
# T81-558: Applications of Deep Neural Networks
**Module 6: Convolutional Neural Networks (CNN) for Computer Vision**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 6 Material
* Part 6.1: Image Processing in Python [[Video]](https://www.youtube.com/watch?v=4Bh3gqHkIgc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_06_1_python_images.ipynb)
* Part 6.2: Keras Neural Networks for Digits and Fashion MNIST [[Video]](https://www.youtube.com/watch?v=-SA8BmGvWYE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_06_2_cnn.ipynb)
* Part 6.3: Implementing a ResNet in Keras [[Video]](https://www.youtube.com/watch?v=qMFKsMeE6fM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_06_3_resnet.ipynb)
* Part 6.4: Using Your Own Images with Keras [[Video]](https://www.youtube.com/watch?v=VcFja1fUNSk&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_06_4_keras_images.ipynb)
* **Part 6.5: Recognizing Multiple Images with YOLO Darknet** [[Video]](https://www.youtube.com/watch?v=oQcAKvBFli8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_06_5_yolo.ipynb)
```
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return f"{h}:{m:>02}:{s:>05.2f}"
```
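As a quick sanity check of the formatting helper above (redefined here so the snippet is self-contained), 3661.5 elapsed seconds is one hour, one minute, and one and a half seconds:

```python
def hms_string(sec_elapsed):
    # Identical to the helper above: hours, zero-padded minutes,
    # and seconds with two decimal places.
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return f"{h}:{m:>02}:{s:>05.2f}"

print(hms_string(3661.5))  # -> 1:01:01.50
```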
# Part 6.5: Recognizing Multiple Images with Darknet
Convolutional neural networks are great at recognizing and classifying a single item that is centered in an image. However, as humans we are able to recognize many items in our field of view in real time. It is very useful to be able to recognize multiple items in a single image. One of the most advanced means of doing this is YOLO DarkNet (not to be confused with the Internet [Darknet](https://en.wikipedia.org/wiki/Darknet)). YOLO is an acronym for You Only Look Once, which speaks to the efficiency of the algorithm.
* Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). [You only look once: Unified, real-time object detection](https://arxiv.org/abs/1506.02640). In *Proceedings of the IEEE conference on computer vision and pattern recognition* (pp. 779-788).
The following image shows YOLO tagging in action.

It is also possible to run YOLO on live video streams. The following frame is from the YouTube Video for this module.

As you can see it is classifying many things in this video. My collection of books behind me is adding considerable "noise", as DarkNet tries to classify every book behind me. If you watch the video you will note that it is less than perfect. The coffee mug that I pick up gets classified as a cell phone and at times a remote. The small yellow object behind me on the desk is actually a small toolbox. However, it gets classified as a book at times and a remote at other times. Currently this algorithm classifies each frame on its own. More accuracy could be gained by using multiple images together. Consider when you see an object coming towards you, if it changes angles, you might form a better opinion of what it was. If that same object now changes to an unfavorable angle, you still know what it is, based on previous information.
### How Does DarkNet/YOLO Work?
YOLO begins by resizing the image to an $S \times S$ grid. A single convolutional neural network is run against this grid; it predicts bounding boxes and what might be contained by those boxes. Each bounding box also carries a confidence for the item it believes the box contains. This is a regular convolutional network, just like we've seen previously. The only difference is that a YOLO CNN outputs a number of predicted bounding boxes. At a high level this can be seen in the following diagram.

The output of the YOLO convolutional neural network is essentially a multiple regression. The following values are generated for each of the bounding boxes.
* **x** - The x-coordinate of the center of a bounding rectangle.
* **y** - The y-coordinate of the center of a bounding rectangle.
* **w** - The width of each bounding rectangle.
* **h** - The height of each bounding rectangle.
* **labels** - The relative probabilities of each of the labels (1 value for each label)
* **confidence** - The confidence in this rectangle.
The output layer of a Keras neural network is a Tensor. In the case of YOLO, this output tensor is 3D and is of the following dimensions.
$ S \times S \times (B \cdot 5 + C) $
The constants in the above expression are:
* *S* - The dimensions of the YOLO grid that is overlaid across the source image.
* *B* - The number of potential bounding rectangles generated for each grid cell.
* *C* - The number of class labels that there are.
The value 5 in the above expression is simply the count of non-label components of each bounding rectangle ($x$, $y$, $w$, $h$, $confidence$).
Because there are $S^2 \cdot B$ total potential bounding rectangles, the image will get very full. Because of this it is important to drop all rectangles below some threshold of confidence. This is demonstrated by the image below.
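As a concrete check of the expressions above, the classic YOLO configuration uses $S=7$, $B=2$, and $C=20$ (the PASCAL VOC classes), giving a $7 \times 7 \times 30$ output tensor and 98 candidate boxes. A small sketch (the function names here are ours, not part of any YOLO library):

```python
def yolo_output_shape(S, B, C):
    """Shape of the YOLO output tensor: S x S x (B*5 + C).

    Each of the B boxes per grid cell carries 5 values
    (x, y, w, h, confidence), plus C class probabilities
    shared by the cell.
    """
    return (S, S, B * 5 + C)

def total_boxes(S, B):
    """Total candidate rectangles before confidence thresholding."""
    return S * S * B

print(yolo_output_shape(7, 2, 20))  # -> (7, 7, 30)
print(total_boxes(7, 2))            # -> 98
```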

The actual structure of the convolutional neural network behind YOLO is relatively simple and is shown in the following image. Because there is only one convolutional neural network, and it "only looks once," the performance is not impacted by how many objects are detected.

The following image shows some additional recognitions being performed by a YOLO.

### Using DarkFlow in Python
To make use of DarkFlow you have several options:
* **[DarkNet](https://pjreddie.com/darknet/yolo/)** - The original implementation of YOLO, written in C.
* **[DarkFlow](https://github.com/thtrieu/darkflow)** - Python package that implements YOLO in Python, using TensorFlow.
DarkFlow can be used from the command line. This allows videos to be produced from existing videos. This is how the YOLO videos used in the class module video were created.
It is also possible to call DarkFlow directly from Python. The following code performs a classification of the image of my dog and me in the kitchen from above.
### Running DarkFlow (YOLO) from Google CoLab
Make sure you create the following folders on your Google drive and download yolo.weights, coco.names, and yolo.cfg into the correct locations. See the helper script below to set this up.
'/content/drive/My Drive/projects/yolo':
bin cfg
'/content/drive/My Drive/projects/yolo/bin':
yolo.weights
'/content/drive/My Drive/projects/yolo/cfg':
coco.names yolo.cfg
```
!git clone https://github.com/thtrieu/darkflow.git
!pip install ./darkflow/
# Note, if you are using Google CoLab, this can be used to mount your drive to load YOLO config and weights.
from google.colab import drive
drive.mount('/content/drive')
# The following helper script will create a projects/yolo folder for you
# and download the needed files.
!mkdir -p /content/drive/My\ Drive/projects
!mkdir -p /content/drive/My\ Drive/projects/yolo
!mkdir -p /content/drive/My\ Drive/projects/yolo/bin
!mkdir -p /content/drive/My\ Drive/projects/yolo/cfg
!wget https://raw.githubusercontent.com/thtrieu/darkflow/master/cfg/coco.names -O /content/drive/My\ Drive/projects/yolo/cfg/coco.names
!wget https://raw.githubusercontent.com/thtrieu/darkflow/master/cfg/yolo.cfg -O /content/drive/My\ Drive/projects/yolo/cfg/yolo.cfg
!wget https://pjreddie.com/media/files/yolov2.weights -O /content/drive/My\ Drive/projects/yolo/bin/yolo.weights
```
### Running DarkFlow (YOLO) Locally
If you wish to run YOLO from your own computer you will need to pip install cython and then follow the instructions [here](https://github.com/thtrieu/darkflow).
### Running DarkFlow (YOLO)
Regardless of which path you take above (Google CoLab or Local) you will run this code to continue. Make sure to uncomment the correct **os.chdir** command below.
```
from darkflow.net.build import TFNet
import cv2
import numpy as np
import requests
import os
from scipy import misc
from io import BytesIO
from urllib.request import urlopen
from PIL import Image, ImageFile
os.chdir('/content/drive/My Drive/projects/yolo') # Google CoLab
#os.chdir('/Users/jheaton/projects/darkflow') # Local
# For GPU (Google CoLab)
options = {"model": "./cfg/yolo.cfg", "load": "./bin/yolo.weights", "threshold": 0.1, "gpu": 1.0}
# For CPU
#options = {"model": "./cfg/yolo.cfg", "load": "./bin/yolo.weights", "threshold": 0.1}
tfnet = TFNet(options)
# Read image to classify
url = "https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/cook.jpg"
response = requests.get(url)
img = Image.open(BytesIO(response.content))
img.load()
result = tfnet.return_predict(np.asarray(img))
for row in result:
print(row)
```
# Generate a YOLO Tagged Image
DarkFlow does not contain a built-in "boxing function" for images. However, it is not difficult to create one using the results provided above. The following code demonstrates this process.
```
def box_image(img, pred):
array = np.asarray(img)
for result in pred:
top_x = result['topleft']['x']
top_y = result['topleft']['y']
bottom_x = result['bottomright']['x']
bottom_y = result['bottomright']['y']
confidence = int(result['confidence'] * 100)
label = f"{result['label']} {confidence}%"
        if confidence > 30:  # confidence is now a percentage, so threshold at 30%
array = cv2.rectangle(array, (top_x, top_y), (bottom_x, bottom_y), (255,0,0), 3)
array = cv2.putText(array, label, (top_x, top_y-5), cv2.FONT_HERSHEY_COMPLEX_SMALL ,
0.45, (0, 255, 0), 1, cv2.LINE_AA)
return Image.fromarray(array, 'RGB')
boxed_image = box_image(img, result)
boxed_image
```
# Module 6 Assignment
You can find the first assignment here: [assignment 6](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb)
# Union and intersection of rankers
Let's build a pipeline using union `|` and intersection `&` operators.
```
%load_ext autoreload
%autoreload 2
from cherche import data, rank, retrieve
from sentence_transformers import SentenceTransformer
```
The first step is to define the corpus on which we will perform the neural search. The towns dataset contains about a hundred documents, all of which have four attributes, an `id`, the `title` of the article, the `url` and the content of the `article`.
```
documents = data.load_towns()
documents[:4]
```
We start by creating a retriever whose mission is to quickly filter the documents. This retriever matches the query against the documents using the title and content of the article, selected via the `on` parameter.
```
retriever = retrieve.TfIdf(key="id", on=["title", "article"], documents=documents, k = 30)
```
## Union
We will use a ranker composed of the union of two pre-trained models.
```
ranker = (
rank.Encoder(
key = "id",
on = ["title", "article"],
encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2").encode,
k = 5,
path = "encoder.pkl"
) |
rank.Encoder(
key = "id",
on = ["title", "article"],
encoder = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-cos-v1").encode,
k = 5,
path = "second_encoder.pkl"
)
)
search = retriever + ranker
search.add(documents)
search("Paris football")
search("speciality Lyon")
```
We can automatically map document identifiers to their content.
```
search += documents
search("Paris football")
search("speciality Lyon")
```
## Intersection
```
retriever = retrieve.Lunr(key = "id", on = ["title", "article"], documents = documents, k = 30)
```
We will build a ranker consisting of two different pre-trained models combined with the intersection operator `&`. The pipeline will only propose the documents returned by the retriever that are also selected by both rankers.
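The set semantics can be illustrated with plain Python (a toy sketch, not cherche's internals, which also merge scores): the union operator keeps documents proposed by either model, while the intersection keeps only those proposed by both.

```python
# Toy illustration of | and & over ranked document ids.
ranker_a = [{"id": 1}, {"id": 2}, {"id": 3}]
ranker_b = [{"id": 2}, {"id": 3}, {"id": 4}]

ids_a = {d["id"] for d in ranker_a}
ids_b = {d["id"] for d in ranker_b}

union = ids_a | ids_b         # documents proposed by either ranker
intersection = ids_a & ids_b  # documents proposed by both rankers

print(sorted(union), sorted(intersection))
```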
```
ranker = (
rank.Encoder(
key = "id",
on = ["title", "article"],
encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2").encode,
k = 5,
path = "encoder.pkl"
) &
rank.Encoder(
key = "id",
on = ["title", "article"],
encoder = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-cos-v1").encode,
k = 5,
path = "second_encoder.pkl"
)
)
search = retriever + ranker
search.add(documents)
search("Paris football")
search("speciality Lyon")
```
We can automatically map document identifiers to their content.
```
search += documents
search("Paris football")
search("speciality Lyon")
```
## <small>
Copyright (c) 2017-21 Andrew Glassner
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
</small>
# Deep Learning: A Visual Approach
## by Andrew Glassner, https://glassner.com
### Order: https://nostarch.com/deep-learning-visual-approach
### GitHub: https://github.com/blueberrymusic
------
### What's in this notebook
This notebook is provided to help you work with Keras and TensorFlow. It accompanies the bonus chapters for my book. The code is in Python3, using the versions of libraries as of April 2021.
Note that I've included the output cells in this saved notebook, but Jupyter doesn't save the variables or data that were used to generate them. To recreate any cell's output, evaluate all the cells from the start up to that cell. A convenient way to experiment is to first choose "Restart & Run All" from the Kernel menu, so that everything's been defined and is up to date. Then you can experiment using the variables, data, functions, and other stuff defined in this notebook.
## Bonus Chapter 3 - Notebook 4: Synthetic Data
We'll create our own data with a program (that is, we'll make synthetic data)
and train a system to classify it.
Note that training on synthetic data is often a risky proposition if that data
is meant to stand in for real data (that is, measurements in the world). The
problem is that it is all but impossible to generate data that has all of the
variety of real data, and has that variety in the right proportions. Getting it
wrong means we're baking those errors into our system, and if that system is
used in ways that can affect people's lives, such as getting loans or school
admissions or medical diagnoses, then our errors can change their lives. Be
very wary of ever training on synthetic data that's supposed to stand in for
real-world data, or will be used in a system that can affect any living thing.
```
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.constraints import MaxNorm
from tensorflow.keras.optimizers import Adam, SGD, RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.utils import Sequence
from tensorflow.keras.utils import to_categorical
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.model_selection import GridSearchCV
import numpy as np
import matplotlib.pyplot as plt
import numpy as np
random_seed = 42
np.random.seed(random_seed)
image_size = 64
image_width = image_height = image_size
number_of_classes = 5
from tensorflow.keras import backend as keras_backend
keras_backend.set_image_data_format('channels_last')
# Workaround for Keras issues on Mac computers (you can comment this
# out if you're not on a Mac, or not having problems)
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
# Make a File_Helper for saving and loading files.
save_files = True
import os, sys, inspect
current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
sys.path.insert(0, os.path.dirname(current_dir)) # path to parent dir
from DLBasics_Utilities import File_Helper
file_helper = File_Helper(save_files)
# get MNIST data to show a block of transformed images
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train = keras_backend.cast_to_floatx(X_train)
X_test = keras_backend.cast_to_floatx(X_test)
# Use just one image
X_train = np.reshape(8 * [X_train[5]], (8, 28, 28, 1))
y_train = 8 * [y_train[5]]
image_generator = ImageDataGenerator(rotation_range=100, horizontal_flip=True)
image_generator.fit(X_train)
for X_batch, y_batch in image_generator.flow(X_train, y_train, batch_size=8, seed=42):
for i in range(0, 8):
plt.subplot(2, 4, i+1)
plt.imshow(X_batch[i].reshape(28, 28), cmap='gray')
plt.xticks([],[])
plt.yticks([],[])
break
plt.tight_layout()
file_helper.save_figure('MNIST-2-IDG')
plt.show()
def plot_accuracy_and_loss(history, plot_title, filename):
xs = range(len(history.history['accuracy']))
plt.figure(figsize=(10,3))
plt.subplot(1, 2, 1)
plt.plot(xs, history.history['accuracy'], label='train')
plt.plot(xs, history.history['val_accuracy'], label='test')
plt.legend(loc='lower left')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.title(plot_title+', Accuracy')
plt.subplot(1, 2, 2)
plt.plot(xs, history.history['loss'], label='train')
plt.plot(xs, history.history['val_loss'], label='test')
plt.legend(loc='upper left')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.title(plot_title+', Loss')
#plt.tight_layout()
file_helper.save_figure(filename)
plt.show()
# Make synthetic data. Use random numbers to move the
# points around a little so they're all different.
# For some reason I thought of this as "wubbling."
import numpy as np
from numpy.random import randint, uniform
from tensorflow.keras.preprocessing.image import img_to_array
import cv2
import math
def makeSyntheticImage():
# Create a black image
half_size = int(image_size/2.0)
img = np.zeros((image_size, image_size, 3), np.uint8)
img_type = randint(0, number_of_classes)
if img_type == 0: # circle
cx = 32
cy = 32
r = half_size * uniform(.6, .9)
cv2.circle(img, (wub(cx), wub(cy)), int(wub(r)), (255,255,255), 2)
elif img_type == 1: # plus sign
cv2.line(img, (wub(32), wub(10)),(wub(32), wub(54)), (255,255,255), 2)
cv2.line(img, (wub(10), wub(32)),(wub(60), wub(32)), (255,255,255), 2)
elif img_type == 2: # three lines
cv2.line(img,(wub(15), wub(10)), (wub(15), wub(54)), (255,255,255), 2)
cv2.line(img,(wub(33), wub(10)), (wub(33), wub(54)), (255,255,255), 2)
cv2.line(img,(wub(51), wub(10)), (wub(51), wub(54)), (255,255,255), 2)
elif img_type == 3: # Z
x1 = wub(54)
y1 = wub(10)
x2 = wub(10)
y2 = wub(54)
cv2.line(img, (wub(10), wub(10)), (x1,y1), (255,255,255), 2)
cv2.line(img, (x1, y1), (x2, y2), (255, 255, 255), 2)
cv2.line(img, (x2, y2), (wub(54), wub(54)), (255, 255, 255), 2)
else: # U
x1 = wub(10)
y1 = wub(54)
x2 = wub(54)
y2 = wub(54)
cv2.line(img, (wub(10), wub(10)), (x1,y1), (255,255,255), 2)
cv2.line(img, (x1, y1), (x2, y2), (255, 255, 255), 2)
cv2.line(img, (x2, y2), (wub(54), wub(10)), (255, 255, 255), 2)
sample = img_to_array(img)
sample = sample[:,:,0]/255.0
sample = sample.reshape((sample.shape[0], sample.shape[1], 1))
return (sample, img_type)
# create a little wubble (a uniform, or symmetrical, wobble)
def wub(p):
range = 5
return randint(p-range, p+range+1)
# Show a grid of random synthetic images
np.random.seed(5)
num_rows = 5
num_columns = 10
plt.figure(figsize=(10,6))
for y in range(num_rows):
for x in range(num_columns):
index = (y*num_columns)+x
plt.subplot(num_rows, num_columns, 1 + index)
(img, label) = makeSyntheticImage()
img = img.reshape(64, 64)
plt.imshow(img, cmap=plt.get_cmap('gray'))
plt.xticks([],[])
plt.yticks([],[])
plt.tight_layout()
file_helper.save_figure('synthetic-demo')
plt.show()
```
Build the ImageDataGenerator
Code adapted from
https://towardsdatascience.com/implementing-custom-data-generators-in-keras-de56f013581c
```
class SyntheticImageGenerator(Sequence):
def __init__(self, batch_size=32, batches_per_epoch=10):
self.batch_size = batch_size
self.batchs_per_epoch = batches_per_epoch
self.on_epoch_end()
def on_epoch_end(self):
pass # Maybe we'll want something here one day
def __len__(self):
# The number of batches per epoch
return self.batchs_per_epoch
def __getitem__(self, index):
# Generate and return one batch of data
X, y = self.__get_data()
return X,y
def __get_data(self):
X = []
y = []
for b in range(self.batch_size):
# Make one piece of data, append it to the list
img, label = makeSyntheticImage()
X.append(img)
one_hot_label = [0] * 5
one_hot_label[label] = 1
y.append(one_hot_label)
X = np.array(X)
y = np.array(y)
return X, y
# Make a dataset so that we have something to test again when fitting,
# and come up with a validation accuracy and loss.
def make_dataset(number_of_images):
X = np.zeros(shape=(number_of_images, image_height, image_width, 1))
y = np.zeros(shape=(number_of_images), dtype='uint8')
for i in range(number_of_images):
(sample, label) = makeSyntheticImage()
X[i] = sample
y[i] = label
return (X, y)
# A little routine to set up and run the learning process
def generator_run_and_report(model, plot_title, filename, epochs,
batch_size, verbosity):
np.random.seed(random_seed)
# make validation data
(X_test, y_test) = make_dataset(10*batch_size)
y_test = to_categorical(y_test, number_of_classes)
data_generator = SyntheticImageGenerator(batch_size=batch_size, batches_per_epoch=10)
history = model.fit(data_generator,
steps_per_epoch=None,
epochs=epochs, verbose=verbosity,
validation_data=(X_test, y_test)
)
plot_accuracy_and_loss(history, plot_title, filename)
return history
# build and return our little CNN
def make_model():
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', padding='same',
input_shape=(image_height, image_width, 1)))
model.add(Flatten())
model.add(Dense(number_of_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=["accuracy"])
return model
np.random.seed(random_seed)
model = make_model()
# steps_per_epoch / batch_size must be an integer (power of 2?) or we get a warning
history = generator_run_and_report(model, 'Synthetic CNN', 'Synthetic-CNN',
epochs=100, batch_size=64,
verbosity=1)
```
```
import os
from os.path import join, dirname
import cv2
import lmdb
import pickle
import matplotlib.pyplot as plt
import numpy as np
dataset_name = 'rscd'
data_path = join('/home/zhong/Dataset',dataset_name)
lmdb_path = join('/home/zhong/Dataset',dataset_name+'_lmdb')
for dataset_part in ['train', 'valid', 'test']:
path = join(data_path, dataset_part)
seqs = os.listdir(join(path, 'global'))
seqs.sort()
seqs_info = {}
length = 0
for i in range(len(seqs)):
seq_info = {}
seq_info['seq'] = seqs[i]
seq_imgs = os.listdir(join(path,'global',seqs[i]))
seq_imgs = [item for item in seq_imgs if item.endswith('.png')]
length_temp = len(seq_imgs)
seq_info['length'] = length_temp
length += length_temp
seqs_info[i] = seq_info
seqs_info['length'] = length
seqs_info['num'] = len(seqs)
save_path = join(lmdb_path,'{}_info_{}.pkl'.format(dataset_name, dataset_part))
os.makedirs(dirname(save_path), exist_ok=True)
f = open(save_path, 'wb')
pickle.dump(seqs_info, f)
f.close()
for dataset_label in [dataset_part, '{}_gt'.format(dataset_part)]:
for i in range(seqs_info['num']):
env = lmdb.open(join(lmdb_path, '{}_{}'.format(dataset_name, dataset_label)), map_size=1099511627776)
txn = env.begin(write=True)
if dataset_label.endswith('gt'):
subpath = join(path, 'global', seqs_info[i]['seq'])
else:
subpath = join(path, 'rolling', seqs_info[i]['seq'])
imgs = [item for item in os.listdir(subpath) if item.endswith('.png')]
nums = [int(img.split('.')[0]) for img in imgs]
nums.sort()
gap = nums[0]-0
for img in imgs:
img_path = join(subpath, img)
seq_idx = i
frame_idx = int(img.split('.')[0])-gap
key = '%03d_%08d' % (seq_idx, frame_idx)
data = cv2.imread(img_path)
txn.put(key=key.encode(), value=data)
txn.commit()
env.close()
import os
from os.path import join, dirname
import cv2
import lmdb
import pickle
import matplotlib.pyplot as plt
import numpy as np
# test lmdb dataset
H,W,C = 480,640,3
dataset_name = 'rscd'
lmdb_path = join('/home/zhong/Dataset',dataset_name+'_lmdb')
# tarin set
env = lmdb.open(join(lmdb_path, '{}_train'.format(dataset_name)), map_size=1099511627776)
env_gt = lmdb.open(join(lmdb_path, '{}_train_gt'.format(dataset_name)), map_size=1099511627776)
txn = env.begin()
txn_gt = env_gt.begin()
seq = 49
frame = 49
key = '{:03d}_{:08d}'.format(seq, frame)
test = txn.get(key.encode())
test = np.frombuffer(test, dtype='uint8')
test = test.reshape(H,W,C)
test_gt = txn_gt.get(key.encode())
test_gt = np.frombuffer(test_gt, dtype='uint8')
test_gt = test_gt.reshape(H,W,C)
plt.imshow(test[:,:,::-1])
plt.figure()
plt.imshow(test_gt[:,:,::-1])
plt.show()
env.close()
env_gt.close()
# valid set
env = lmdb.open(join(lmdb_path, '{}_valid'.format(dataset_name)), map_size=1099511627776)
env_gt = lmdb.open(join(lmdb_path, '{}_valid_gt'.format(dataset_name)), map_size=1099511627776)
txn = env.begin()
txn_gt = env_gt.begin()
seq = 14
frame = 49
key = '{:03d}_{:08d}'.format(seq, frame)
test = txn.get(key.encode())
test = np.frombuffer(test, dtype='uint8')
test = test.reshape(H,W,C)
test_gt = txn_gt.get(key.encode())
test_gt = np.frombuffer(test_gt, dtype='uint8')
test_gt = test_gt.reshape(H,W,C)
plt.imshow(test[:,:,::-1])
plt.figure()
plt.imshow(test_gt[:,:,::-1])
plt.show()
env.close()
env_gt.close()
# test set
env = lmdb.open(join(lmdb_path, '{}_test'.format(dataset_name)), map_size=1099511627776)
env_gt = lmdb.open(join(lmdb_path, '{}_test_gt'.format(dataset_name)), map_size=1099511627776)
txn = env.begin()
txn_gt = env_gt.begin()
seq = 14
frame = 49
key = '{:03d}_{:08d}'.format(seq, frame)
test = txn.get(key.encode())
test = np.frombuffer(test, dtype='uint8')
test = test.reshape(H,W,C)
test_gt = txn_gt.get(key.encode())
test_gt = np.frombuffer(test_gt, dtype='uint8')
test_gt = test_gt.reshape(H,W,C)
plt.imshow(test[:,:,::-1])
plt.figure()
plt.imshow(test_gt[:,:,::-1])
plt.show()
env.close()
env_gt.close()
```
```
#This function takes the raw data and cleans it
def data_clean(data):
print("Data shape before cleaning:" + str(np.shape(data)))
#Change the data type of any column if necessary.
print("Now it will print only those columns with non-numeric values")
print(data.select_dtypes(exclude=[np.number]))
    #Now dropping those columns whose values are entirely zero
data= data.loc[:, (data != 0).any(axis=0)]
#Now dropping those columns with NAN values entirely
data=data.dropna(axis=1, how='all')
data=data.dropna(axis=0, how='all')
    #Keep track of the columns which are excluded by the NaN and zero-column operations above
print("Data shape after cleaning:" + str(np.shape(data)))
return data
#This function imputes the missing values with feature (column) means
def data_impute(data):
    #Separating out the NAME column and the ACTIVITY column because they are not features to be normalized.
data_input=data.drop(['ACTIVITY', 'NAME'], axis=1)
data_labels= data.ACTIVITY
data_names = data.NAME
    #Imputing the missing values with feature mean values
    #(note: sklearn's Imputer is deprecated; modern sklearn provides
    # sklearn.impute.SimpleImputer, which always imputes per column)
    fill_NaN = Imputer(missing_values=np.nan, strategy='mean', axis=1)
Imputed_Data_input = pd.DataFrame(fill_NaN.fit_transform(data_input))
print(np.shape(Imputed_Data_input))
print("Data shape after imputation:" + str(np.shape(Imputed_Data_input)))
return Imputed_Data_input, data_labels, data_names
#This function is to normalize features
def data_norm(Imputed_Data_input,data_labels,data_names):
    #Calculating the mean and STD of the imputed input data set
Imputed_Data_input_mean=Imputed_Data_input.mean()
Imputed_Data_input_std=Imputed_Data_input.std()
#z-score normalizing the whole input data:
Imputed_Data_input_norm = (Imputed_Data_input - Imputed_Data_input_mean)/Imputed_Data_input_std
#Adding names and labels to the data again
frames = [data_names,data_labels, Imputed_Data_input_norm]
full_data_norm = pd.concat(frames,axis=1)
return full_data_norm
#This function gives the train-test split
#(sklearn.cross_validation was removed; train_test_split now lives in model_selection)
from sklearn.model_selection import train_test_split as sk_train_test_split
def data_split(full_data_norm, test_size):
full_data_norm_input=full_data_norm.drop(['ACTIVITY', 'NAME'], axis=1)
target_attribute = full_data_norm['ACTIVITY']
    # We call the train set train_cv because a part of it will be used for cross-validation
train_cv_x, test_x, train_cv_y, test_y = sk_train_test_split(full_data_norm_input, target_attribute, test_size=test_size, random_state=55)
return train_cv_x, test_x, train_cv_y, test_y
#Optimizing drop_out and threshold with 3-fold cross-validation
def hybrid_model_opt():
class fs(TransformerMixin, BaseEstimator):
def __init__(self, n_estimators=1000, threshold='1.7*mean'):
self.ss=None
self.n_estimators = n_estimators
self.x_new = None
            self.threshold = threshold
def fit(self, X, y):
m = ExtraTreesClassifier(n_estimators=self.n_estimators, random_state=0)
m.fit(X,y)
            self.ss = SelectFromModel(m, threshold=self.threshold, prefit=True)
return self
def transform(self, X):
self.x_new=self.ss.transform(X)
global xx
xx=self.x_new.shape[1]
return self.x_new
def nn_model_opt(dropout_rate=0.5,init_mode='uniform', activation='relu'):
#n_x_new=xx # this is the number of features selected for current iteration
np.random.seed(200000)
model_opt = Sequential()
model_opt.add(Dense(xx,input_dim=xx ,kernel_initializer='he_normal', activation='relu'))
model_opt.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
model_opt.add(Dropout(dropout_rate))
model_opt.add(Dense(1,kernel_initializer='he_normal', activation='sigmoid'))
model_opt.compile(loss='binary_crossentropy',optimizer='adam', metrics=['binary_crossentropy'])
return model_opt
clf=KerasClassifier(build_fn=nn_model_opt, epochs=250, batch_size=3000, verbose=0)
hybrid_model = Pipeline([('fs', fs()),('clf', clf)])
return hybrid_model
#Getting feature importances of all the features using the extra-trees classifier only
def feature_imp(train_cv_x,train_cv_y):
m = ExtraTreesClassifier(n_estimators=1000 )
m.fit(train_cv_x,train_cv_y)
importances = m.feature_importances_
return importances, m
def selected_feature_names(m, thr, train_cv_x):
sel = SelectFromModel(m,threshold=thr ,prefit=True)
feature_idx = sel.get_support()
feature_name = train_cv_x.columns[feature_idx]
feature_name =pd.DataFrame(feature_name )
return feature_name
def train_test_feature_based_selection(feature_name,train_cv_x,train_cv_y,test_x,test_y ):
feature_name=feature_name.T
feature_name.columns = feature_name.iloc[0]
feature_name=feature_name.reindex(feature_name.index.drop(0))
train_selected_x=train_cv_x[train_cv_x.columns.intersection(feature_name.columns)]
test_selected_x=test_x[test_x.columns.intersection(feature_name.columns)]
train_selected_x=train_selected_x.to_numpy()
test_selected_x=test_selected_x.to_numpy()
train_selected_y=train_cv_y.to_numpy()
test_selected_y=test_y.to_numpy()
return train_selected_x, train_selected_y, test_selected_x, test_selected_y
def model_nn_final(train_selected_x, train_selected_y, test_selected_x, test_selected_y, x, drop_out):
model_final = Sequential()
#n_x_new=train_selected_x.shape[1]
n_x_new=train_selected_x.shape[1]
model_final.add(Dense(n_x_new, input_dim=n_x_new, kernel_initializer ='he_normal', activation='sigmoid'))
model_final.add(Dense(10, kernel_initializer='he_normal', activation='sigmoid'))
model_final.add(Dropout(drop_out))
model_final.add(Dense(1, kernel_initializer='he_normal', activation='sigmoid'))
model_final.compile(loss='binary_crossentropy', optimizer='adam', metrics=['binary_crossentropy'])
seed = 7000
np.random.seed(seed)
model_final.fit(train_selected_x, train_selected_y, epochs=250, batch_size=1064)
pred_test = model_final.predict(test_selected_x)
auc_test = roc_auc_score(test_selected_y, pred_test)
print ("AUROC_test: " + str(auc_test))
print(" ")
model_json = model_final.to_json()
with open(str(x)+"_model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model_final.save_weights(str(x)+"_model.h5")
print("Saved model to disk")
print(" ")
return pred_test
```
## 1) Loading all packages needed
```
from keras.callbacks import ModelCheckpoint
from keras import backend as K
from keras import optimizers
from keras.layers import Dense
from keras.layers import Dense, Dropout
from keras.models import Sequential
from keras.wrappers.scikit_learn import KerasClassifier
from pandas import ExcelFile
from pandas import ExcelWriter
from PIL import Image
from scipy import ndimage
from scipy.stats import randint as sp_randint
from sklearn.base import BaseEstimator
from sklearn.base import TransformerMixin
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn import datasets
from sklearn import metrics
from sklearn import pipeline
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import PredefinedSplit
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample
from tensorflow.python.framework import ops
import h5py
import keras
import matplotlib.pyplot as plt
import numpy as np
import openpyxl
import pandas as pd
import scipy
import tensorflow as tf
import xlsxwriter
%load_ext autoreload
%matplotlib inline
```
## 2) Loading the data
The "NAME" Column is for naming the molecule. The "ACTIVITY" column is the Activity of molecule. Rest of the columns shows the features.
```
data = pd.read_excel(r'full_data.xlsx')
data
```
## 3) Cleaning the data
Removing NaN values from the data. Other attributes can also be added here to clean the data as required. After executing this function, only those columns that contain non-numeric values will be displayed. If non-numeric values appear in numeric feature columns, they should be treated before going further. The function also prints the data shape before and after cleaning.
```
#Cleaning the data
data= data_clean(data)
```
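For reference, a minimal sketch of what a cleaning step like `data_clean` might do — dropping all-empty rows and flagging non-numeric feature columns. The column names and logic below are illustrative, not the notebook's actual implementation:

```python
import pandas as pd

def data_clean_sketch(df):
    """Drop all-NaN rows and report feature columns that contain non-numeric values."""
    print("shape before cleaning:", df.shape)
    df = df.dropna(how="all")
    feature_cols = [c for c in df.columns if c not in ("NAME", "ACTIVITY")]
    # A column is flagged if coercing it to numeric introduces new NaNs
    flagged = [c for c in feature_cols
               if pd.to_numeric(df[c], errors="coerce").isna().sum() > df[c].isna().sum()]
    print("shape after cleaning:", df.shape)
    return df, flagged

demo = pd.DataFrame({"NAME": ["m1", "m2"], "ACTIVITY": [1, 0],
                     "f1": [0.5, "bad"], "f2": [1.0, 2.0]})
cleaned, flagged = data_clean_sketch(demo)
print(flagged)  # ['f1']
```

Flagged columns should be inspected and repaired before imputation and normalization.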
## 4) Imputing the missing data
Imputing the missing values in each feature column with the mean of that feature.
```
#imputing the missing values
Imputed_Data_input, data_labels, data_names=data_impute(data)
```
## 5) Normalizing the data
Z-score normalizing the feature columns and re-attaching the "NAME" and "ACTIVITY" columns.
```
#Normalizing the data
full_data_norm=data_norm(Imputed_Data_input, data_labels, data_names)
```
## 6) Splitting the data
```
#Splitting the data into train and test
test_size=0.30
train_cv_x, test_x, train_cv_y, test_y=data_split(full_data_norm, test_size)
```
## 7) Hybrid Model optimization
Currently, only two variables are optimized (drop_out and threshold). The optimization search can be extended as required. 3-fold cross-validation is used in a random-search setting.
```
xx=0 #This variable stores the number of features selected
hybrid_model=hybrid_model_opt() #calling the hybrid model for optimization
#Defining the two important parameters of the hybrid model to be optimized using random CV search
param_grid= {'fs__threshold': ['0.08*mean','0.09*mean','0.10*mean','0.2*mean','0.3*mean','0.4*mean','0.5*mean','0.6*mean','0.7*mean','0.8*mean','0.9*mean','1*mean','1.1*mean','1.2*mean','1.3*mean','1.4*mean','1.5*mean','1.6*mean','1.7*mean','1.8*mean','1.9*mean','2.0*mean','2.1*mean','2.2*mean','2.3*mean'],
'clf__dropout_rate': [0.1, 0.2, 0.3, 0.4, 0.5,0.6,0.7,0.8,0.9]}
#Random CV search
grid = RandomizedSearchCV(estimator=hybrid_model, param_distributions=param_grid,n_iter = 1,scoring='roc_auc',cv = 3 , n_jobs=1)
opt_result = grid.fit(train_cv_x, train_cv_y)
#Printing the optimization results
print("Best: %f using %s" % (opt_result.best_score_, opt_result.best_params_))
means = opt_result.cv_results_['mean_test_score']
stds = opt_result.cv_results_['std_test_score']
params = opt_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
```
## 8) Gini_importances
```
#getting the importances of all the features
importances, m =feature_imp(train_cv_x,train_cv_y)
```
## 9) Feature names
```
#getting the features names of the selected features based on optimized threshold
feature_name=selected_feature_names(m, opt_result.best_params_["fs__threshold"], train_cv_x)
```
## 10) Saving the gini-importances and selected feature names
```
#saving the gini-importance of all the features
writer = pd.ExcelWriter('importances.xlsx',engine='xlsxwriter')
pd.DataFrame(importances).to_excel(writer,sheet_name='importances')
writer.save()
#Saving features names which are selected on the basis of optimized threshold
writer = pd.ExcelWriter('feature_name.xlsx',engine='xlsxwriter')
pd.DataFrame(feature_name).to_excel(writer,sheet_name='feature_name')
writer.save()
```
## 11) Feature selection in train and test
```
#Selection of train and test features based on optimized value of threshold
train_selected_x, train_selected_y, test_selected_x, test_selected_y=train_test_feature_based_selection(feature_name,train_cv_x,train_cv_y,test_x,test_y )
```
## 12) Saving the test set based on the selected feature columns
```
#Saving the selected test set
writer = pd.ExcelWriter('test_selected.xlsx',engine='xlsxwriter')
pd.DataFrame(test_selected_x).to_excel(writer,sheet_name='test_selected_x')
pd.DataFrame(test_selected_y).to_excel(writer,sheet_name='test_selected_y')
writer.save()
```
## 13) Final prediction based on ensembling.
This will also save each model in the ensemble average along with its weight matrix.
```
# At this point, we have obtained the optimized hyperparameter values and selected the features in train and test based on
#the optimized threshold value of the feature-selection module of the hybrid framework
ensemb=4 #Number of ensembling average
pred_test=[] #To store the individual model test prediction
pred_test_final=np.zeros((test_selected_x.shape[0],1)) # To store the final test prediction after ensembling
#As per the above number of ensemble, the models will be saved in the directory
for x in range(ensemb):
pred_test.append(model_nn_final(train_selected_x, train_selected_y, test_selected_x, test_selected_y, x, opt_result.best_params_["clf__dropout_rate"]))
pred_test_final=pred_test[x]+pred_test_final
#ensemble averaging
pred_test_final=pred_test_final/ensemb
#Final Accuracy
auc_test_final = roc_auc_score(test_selected_y, pred_test_final)
print(auc_test_final)
```
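The ensemble step above is a plain average of the per-model predicted probabilities; the averaging logic in isolation looks like this (a minimal sketch, separate from the notebook's loop):

```python
import numpy as np

def ensemble_average(pred_list):
    """Average per-model probability predictions of shape (n_samples, 1)."""
    total = np.zeros_like(pred_list[0], dtype=float)
    for p in pred_list:
        total += p
    return total / len(pred_list)

preds = [np.array([[0.2], [0.8]]), np.array([[0.4], [0.6]])]
avg = ensemble_average(preds)
print(avg)  # [[0.3], [0.7]]
```

Averaging the probabilities of several independently initialized networks reduces the variance of the final prediction, which is why the ensembled AUROC is typically at least as good as the individual models'.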
```
# Copyright 2019 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
<img src="img/nvidia_logo.png" style="width: 90px; float: right;">
# QA Inference on BERT using TensorRT
## 1. Overview
Bidirectional Encoder Representations from Transformers (BERT) is a method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks.
The original paper can be found here: https://arxiv.org/abs/1810.04805.
### 1.a Learning objectives
This notebook demonstrates:
- Inference on the Question Answering (QA) task with a BERT Base/Large model
- The use of fine-tuned NVIDIA BERT models
- Use of a BERT model with TensorRT (TRT)
## 2. Requirements
Please refer to the ReadMe file
## 3. BERT Inference: Question Answering
We can run inference on a fine-tuned BERT model for tasks like Question Answering.
Here we use a BERT model fine-tuned on the [SQuAD 2.0 Dataset](https://rajpurkar.github.io/SQuAD-explorer/), which contains 100,000+ question-answer pairs on 500+ articles combined with over 50,000 new, unanswerable questions.
### 3.a Paragraph and Queries
The paragraph and the questions can be customized by changing the text below:
#### Paragraph:
```
paragraph_text = "The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished landing the first humans on the Moon from 1969 to 1972. First conceived during Dwight D. Eisenhower's administration as a three-man spacecraft to follow the one-man Project Mercury which put the first Americans in space, Apollo was later dedicated to President John F. Kennedy's national goal of landing a man on the Moon and returning him safely to the Earth by the end of the 1960s, which he proposed in a May 25, 1961, address to Congress. Project Mercury was followed by the two-man Project Gemini. The first manned flight of Apollo was in 1968. Apollo ran from 1961 to 1972, and was supported by the two-man Gemini program which ran concurrently with it from 1962 to 1966. Gemini missions developed some of the space travel techniques that were necessary for the success of the Apollo missions. Apollo used Saturn family rockets as launch vehicles. Apollo/Saturn vehicles were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three manned missions in 1973-74, and the Apollo-Soyuz Test Project, a joint Earth orbit mission with the Soviet Union in 1975."
```
#### Question:
```
question_text = "What project put the first Americans into space?"
#question_text = "What year did the first manned Apollo flight occur?"
#question_text = "What President is credited with the original notion of putting Americans in space?"
#question_text = "Who did the U.S. collaborate with on an Earth orbit mission in 1975?"
```
In this example we ask our BERT model questions related to the following paragraph:
**The Apollo Program**
_"The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished landing the first humans on the Moon from 1969 to 1972. First conceived during Dwight D. Eisenhower's administration as a three-man spacecraft to follow the one-man Project Mercury which put the first Americans in space, Apollo was later dedicated to President John F. Kennedy's national goal of landing a man on the Moon and returning him safely to the Earth by the end of the 1960s, which he proposed in a May 25, 1961, address to Congress. Project Mercury was followed by the two-man Project Gemini. The first manned flight of Apollo was in 1968. Apollo ran from 1961 to 1972, and was supported by the two-man Gemini program which ran concurrently with it from 1962 to 1966. Gemini missions developed some of the space travel techniques that were necessary for the success of the Apollo missions. Apollo used Saturn family rockets as launch vehicles. Apollo/Saturn vehicles were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three manned missions in 1973-74, and the Apollo-Soyuz Test Project, a joint Earth orbit mission with the Soviet Union in 1975."_
The questions and relative answers expected are shown below:
- **Q1:** "What project put the first Americans into space?"
- **A1:** "Project Mercury"
- **Q2:** "What program was created to carry out these projects and missions?"
- **A2:** "The Apollo program"
- **Q3:** "What year did the first manned Apollo flight occur?"
- **A3:** "1968"
- **Q4:** "What President is credited with the original notion of putting Americans in space?"
- **A4:** "John F. Kennedy"
- **Q5:** "Who did the U.S. collaborate with on an Earth orbit mission in 1975?"
- **A5:** "Soviet Union"
- **Q6:** "How long did Project Apollo run?"
- **A6:** "1961 to 1972"
- **Q7:** "What program helped develop space travel techniques that Project Apollo used?"
- **A7:** "Gemini Mission"
- **Q8:** "What space station supported three manned missions in 1973-1974?"
- **A8:** "Skylab"
## Data Preprocessing
Let's convert the paragraph and the question to BERT input with the help of the tokenizer:
```
import data_processing as dp
import tokenization
#Large
#tokenizer = tokenization.FullTokenizer(vocab_file="./data/uncased_L-24_H-1024_A-16/vocab.txt", do_lower_case=True)
#Base
tokenizer = tokenization.FullTokenizer(vocab_file="./data/uncased_L-12_H-768_A-12/vocab.txt", do_lower_case=True)
# The maximum number of tokens for the question. Questions longer than this will be truncated to this length.
max_query_length = 64
# When splitting up a long document into chunks, how much stride to take between chunks.
doc_stride = 128
# The maximum total input sequence length after WordPiece tokenization.
# Sequences longer than this will be truncated, and sequences shorter than this will be padded.
max_seq_length = 384
# Extract tokens from the paragraph
doc_tokens = dp.convert_doc_tokens(paragraph_text)
# Extract features from the paragraph and question
features = dp.convert_examples_to_features(doc_tokens, question_text, tokenizer, max_seq_length, doc_stride, max_query_length)
```
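When the tokenized paragraph and question exceed `max_seq_length`, `convert_examples_to_features` produces overlapping windows separated by `doc_stride` tokens. A simplified, self-contained sketch of that sliding-window idea (ignoring the question and special tokens, and not the actual `data_processing` implementation):

```python
def sliding_window_chunks(tokens, max_len, stride):
    """Split a token list into windows of at most max_len, advancing by stride."""
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return chunks

toks = list(range(10))
print(sliding_window_chunks(toks, max_len=4, stride=2))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

The overlap ensures every token appears in at least one window with enough surrounding context for the model to score it.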
## TensorRT Inference
```
import tensorrt as trt
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
import ctypes
nvinfer = ctypes.CDLL("libnvinfer_plugin.so", mode = ctypes.RTLD_GLOBAL)
cm = ctypes.CDLL("./build/libcommon.so", mode = ctypes.RTLD_GLOBAL)
pg = ctypes.CDLL("./build/libbert_plugins.so", mode = ctypes.RTLD_GLOBAL)
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
import time
# For this example we are going to use batch size 1
max_batch_size = 1
# Load the Large BERT Engine
# with open("./bert_python.engine", "rb") as f, \
# trt.Runtime(TRT_LOGGER) as runtime, \
# runtime.deserialize_cuda_engine(f.read()) as engine, \
# engine.create_execution_context() as context:
# Load the Base BERT Engine
with open("./bert_python_base.engine", "rb") as f, \
trt.Runtime(TRT_LOGGER) as runtime, \
runtime.deserialize_cuda_engine(f.read()) as engine, \
engine.create_execution_context() as context:
print("List engine binding:")
for binding in engine:
print(" - {}: {}, Shape {}, {}".format(
"Input" if engine.binding_is_input(binding) else "Output",
binding,
engine.get_binding_shape(binding),
engine.get_binding_dtype(binding)))
def binding_nbytes(binding):
return trt.volume(engine.get_binding_shape(binding)) * engine.get_binding_dtype(binding).itemsize
# Allocate device memory for inputs and outputs.
d_inputs = [cuda.mem_alloc(binding_nbytes(binding)) for binding in engine if engine.binding_is_input(binding)]
h_output = cuda.pagelocked_empty(tuple(engine.get_binding_shape(3)), dtype=np.float32)
d_output = cuda.mem_alloc(h_output.nbytes)
# Create a stream in which to copy inputs/outputs and run inference.
stream = cuda.Stream()
print("\nRunning Inference...")
eval_start_time = time.time()
# Copy inputs (using the first feature window produced during preprocessing;
# assumes `features[0]` exposes dict-style access as below)
input_features = features[0]
cuda.memcpy_htod_async(d_inputs[0], input_features["input_ids"], stream)
cuda.memcpy_htod_async(d_inputs[1], input_features["segment_ids"], stream)
cuda.memcpy_htod_async(d_inputs[2], input_features["input_mask"], stream)
# Run inference
context.execute_async(bindings=[int(d_inp) for d_inp in d_inputs] + [int(d_output)], stream_handle=stream.handle)
# Transfer predictions back from GPU
cuda.memcpy_dtoh_async(h_output, d_output, stream)
# Synchronize the stream
stream.synchronize()
eval_time_elapsed = time.time() - eval_start_time
```
## Data Post-Processing
Now that we have the inference results, let's extract the actual answer to our question.
```
start_logits = h_output[:, 0]
end_logits = h_output[:, 1]
# The total number of n-best predictions to generate in the nbest_predictions.json output file
n_best_size = 20
# The maximum length of an answer that can be generated. This is needed
# because the start and end predictions are not conditioned on one another
max_answer_length = 30
(prediction, nbest_json, scores_diff_json) = \
dp.get_predictions(doc_tokens, features, \
start_logits, end_logits, n_best_size, max_answer_length)
print("-----------------------------")
print("Running Inference in {:.3f} Sentences/Sec".format(1.0/eval_time_elapsed))
print("-----------------------------")
print("Answer: '{}'".format(prediction))
print("with prob: {:.3f}%".format(nbest_json[0]['probability']*100.0))
```
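Internally, prediction selection amounts to searching for the (start, end) token pair that maximizes the sum of the start and end logits, subject to `end >= start` and the `max_answer_length` limit. A brute-force sketch of that search (a simplification, not the actual `get_predictions` implementation):

```python
import numpy as np

def best_span(start_logits, end_logits, max_answer_length):
    """Return (start, end) maximizing start_logits[s] + end_logits[e]
    with s <= e < s + max_answer_length."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_answer_length, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

s_log = [0.1, 2.0, 0.3]
e_log = [0.0, 0.5, 3.0]
print(best_span(s_log, e_log, max_answer_length=2))  # (1, 2)
```

The real post-processing additionally keeps the `n_best_size` top spans, maps token indices back to the original text, and converts logits to probabilities.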
<!-- :Author: Arthur Goldberg <Arthur.Goldberg@mssm.edu> -->
<!-- :Date: 2020-07-13 -->
<!-- :Copyright: 2020, Karr Lab -->
<!-- :License: MIT -->
# DE-Sim tutorial
DE-Sim is an open-source, object-oriented, discrete-event simulation (OO DES) tool implemented in Python.
DE-Sim makes it easy to build and simulate discrete-event models.
This page introduces the basic concepts of discrete-event modeling and teaches you how to build and simulate discrete-event models with DE-Sim.
## Installation
Use `pip` to install `de_sim`.
```
!pip install de_sim
```

## DE-Sim model of a one-dimensional random walk
<font size="4">Three steps: define an event message class; define a simulation object class; and build and run a simulation.</font>
### 1: Create an event message class by subclassing [`EventMessage`](https://docs.karrlab.org/de_sim/master/source/de_sim.html#de_sim.event_message.EventMessage).
<font size="4">Each DE-Sim event contains an event message that provides data to the simulation object which executes the event.
The random walk model sends event messages that contain the value of a random step.</font>
```
import de_sim
class RandomStepMessage(de_sim.EventMessage):
"An event message class that stores the value of a random walk step"
step_value: float
```

### 2: Subclass `SimulationObject` to define a simulation object class
<font size="4">
Simulation objects are like threads: a simulation's scheduler decides when to execute them, and their execution is suspended when they have no work to do.
But a DES scheduler schedules simulation objects to ensure that events occur in simulation-time order. More precisely, the fundamental invariant of discrete-event simulation is:
<br>
<br>
1. All events in a simulation are executed in non-decreasing time order.
By guaranteeing this behavior, the DE-Sim scheduler ensures that causality relationships between events are respected.
This invariant has two consequences:
1. All synchronization between simulation objects is controlled by the simulation times of events.
2. Each simulation object executes its events in non-decreasing time order.
The Python classes that generate and handle simulation events are simulation object classes, subclasses of `SimulationObject` which uses a custom class creation method that gives special meaning to certain methods and attributes.
Below, we define a simulation object class that models a random walk which randomly selects the time delay between steps, and illustrates all key features of `SimulationObject`.
</font>
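The invariant can be illustrated with a toy event queue: a priority queue keyed by event time pops events in non-decreasing time order regardless of the order in which they were scheduled. A minimal sketch (not DE-Sim's actual scheduler):

```python
import heapq

class TinyScheduler:
    """Execute scheduled events in non-decreasing simulation-time order."""
    def __init__(self):
        self.queue = []
        self.counter = 0  # tie-breaker for events scheduled at the same time

    def schedule(self, time, action):
        heapq.heappush(self.queue, (time, self.counter, action))
        self.counter += 1

    def run(self):
        executed_times = []
        while self.queue:
            time, _, action = heapq.heappop(self.queue)
            executed_times.append(time)
            action(time)
        return executed_times

sched = TinyScheduler()
for t in (3.0, 1.0, 2.0):
    sched.schedule(t, lambda now: None)
print(sched.run())  # [1.0, 2.0, 3.0]
```

Because actions can themselves call `schedule` with a future time, this structure is sufficient to drive an entire discrete-event simulation.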
```
import random
class RandomWalkSimulationObject(de_sim.SimulationObject):
" A 1D random walk model, with random delays between steps "
def __init__(self, name):
super().__init__(name)
def init_before_run(self):
" Initialize before a simulation run; called by the simulator "
self.position = 0
self.history = {'times': [0],
'positions': [0]}
self.schedule_next_step()
def schedule_next_step(self):
" Schedule the next event, which is a step "
# A step moves -1 or +1 with equal probability
step_value = random.choice([-1, +1])
# The time between steps is 1 or 2, with equal probability
delay = random.choice([1, 2])
# Schedule an event `delay` in the future for this object
# The event contains a `RandomStepMessage` with `step_value=step_value`
self.send_event(delay, self, RandomStepMessage(step_value))
def handle_step_event(self, event):
" Handle a step event "
# Update the position and history
self.position += event.message.step_value
self.history['times'].append(self.time)
self.history['positions'].append(self.position)
self.schedule_next_step()
# `event_handlers` contains pairs that map each event message class
# received by this simulation object to the method that handles
# the event message class
event_handlers = [(RandomStepMessage, handle_step_event)]
# messages_sent registers all message types sent by this object
messages_sent = [RandomStepMessage]
```
<font size="4">
DE-Sim simulation objects employ special methods and attributes:
<br>
* Special `SimulationObject` methods:
1. **`init_before_run`** (optional): immediately before a simulation run, the simulator calls each simulation object’s `init_before_run` method. In this method simulation objects can send initial events and perform other initializations.
2. **`send_event`**: `send_event(delay, receiving_object, event_message)` schedules an event to occur `delay` time units in the future at simulation object `receiving_object`. `event_message` must be an [`EventMessage`](https://docs.karrlab.org/de_sim/master/source/de_sim.html#de_sim.event_message.EventMessage) instance. An event can be scheduled for any simulation object in a simulation.
The event will be executed at its scheduled simulation time by an event handler in the simulation object `receiving_object`.
The `event` parameter in the handler will be the scheduled event, which contains `event_message` in its `message` attribute.
3. **event handlers**: Event handlers have the signature `event_handler(self, event)`, where `event` is a simulation event. A subclass of `SimulationObject` must define at least one event handler, as illustrated by `handle_step_event` above.
<br>
<br>
* Special `SimulationObject` attributes:
1. **`event_handlers`**: a simulation object can receive arbitrarily many types of event messages, and implement arbitrarily many event handlers. The attribute `event_handlers` contains an iterator over pairs that map each event message class received to the event handler which handles the event message class.
2. **`time`**: `time` is a read-only attribute that always equals the current simulation time.
</font>

### 3: Execute a simulation by creating and initializing a [`Simulator`](https://docs.karrlab.org/de_sim/master/source/de_sim.html#de_sim.simulator.Simulator), and running the simulation.
<font size="4">
The `Simulator` class simulates models.
Its `add_object` method adds a simulation object to the simulator.
Each object in a simulation must have a unique `name`.
The `initialize` method, which calls each simulation object’s `init_before_run` method, must be called before a simulation starts.
At least one simulation object in a simulation must schedule an initial event--otherwise the simulation cannot start.
More generally, a simulation with no events to execute will terminate.
Finally, `run` simulates a model. It takes the maximum time of a simulation run. `run` also takes several optional configuration arguments.
</font>
```
# Create a simulator
simulator = de_sim.Simulator()
# Create a random walk simulation object and add it to the simulation
random_walk_sim_obj = RandomWalkSimulationObject('rand_walk')
simulator.add_object(random_walk_sim_obj)
# Initialize the simulation
simulator.initialize()
# Run the simulation until time 10
max_time = 10
simulator.run(max_time)
# Plot the random walk
import matplotlib.pyplot as plt
import matplotlib.ticker as plticker
fig, ax = plt.subplots()
loc = plticker.MultipleLocator(base=1.0)
ax.yaxis.set_major_locator(loc)
plt.step(random_walk_sim_obj.history['times'],
random_walk_sim_obj.history['positions'],
where='post')
plt.xlabel('Time')
plt.ylabel('Position')
plt.show()
```
<font size="4">
This example runs a simulation for `max_time` time units, and plots the random walk’s trajectory.
This trajectory illustrates two key characteristics of discrete-event models. First, the state changes at discrete times.
Second, since the state does not change between instantaneous events, the trajectory of any state variable is a step function.
</font>
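Both characteristics can be verified directly on a recorded history: the event times must be non-decreasing, and each event changes the position by exactly one step. A self-contained sketch that rebuilds a small history the same way the model does and checks those invariants:

```python
import random

def check_step_trajectory(times, positions):
    """Verify the two step-function properties of a 1D random-walk history."""
    assert all(t0 <= t1 for t0, t1 in zip(times, times[1:])), \
        "times must be non-decreasing"
    assert all(abs(p1 - p0) == 1 for p0, p1 in zip(positions, positions[1:])), \
        "each event moves the walker by exactly one step"
    return True

# Build a small history with the same random choices the model uses
random.seed(0)
times, positions = [0], [0]
for _ in range(20):
    times.append(times[-1] + random.choice([1, 2]))
    positions.append(positions[-1] + random.choice([-1, +1]))
print(check_step_trajectory(times, positions))  # True
```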
## DE-Sim example with multiple object instances
<font size="4">
We show a DE-Sim implementation of the parallel hold (PHOLD) model, which is frequently used to benchmark parallel DES simulators.
<br>
<br>
We illustrate these DE-Sim features:
* Use multiple [`EventMessage`](https://docs.karrlab.org/de_sim/master/source/de_sim.html#de_sim.event_message.EventMessage) types
* Run multiple instances of a simulation object type
* Simulation objects scheduling events for each other
</font>
```
""" Messages for the PHOLD benchmark for parallel discrete-event simulators """
import random
class MessageSentToSelf(de_sim.EventMessage):
"A message that's sent to self"
class MessageSentToOtherObject(de_sim.EventMessage):
"A message that's sent to another PHold simulation object"
class InitMsg(de_sim.EventMessage):
"An initialization message"
MESSAGE_TYPES = [MessageSentToSelf, MessageSentToOtherObject, InitMsg]
class PholdSimulationObject(de_sim.SimulationObject):
""" Run a PHOLD simulation """
def __init__(self, name, args):
self.args = args
super().__init__(name)
def init_before_run(self):
self.send_event(random.expovariate(1.0), self, InitMsg())
@staticmethod
def record_event_header():
print('\t'.join(('Sender', 'Send', "Receiver",
'Event', 'Message type')))
print('\t'.join(('', 'time', '', 'time', '')))
def record_event(self, event):
record_format = '{}\t{:.2f}\t{}\t{:.2f}\t{}'
print(record_format.format(event.sending_object.name,
event.creation_time,
event.receiving_object.name,
self.time,
type(event.message).__name__))
def handle_simulation_event(self, event):
""" Handle a simulation event """
# Record this event
self.record_event(event)
# Schedule an event
if random.random() < self.args.frac_self_events or \
self.args.num_phold_objects == 1:
receiver = self
else:
# Send the event to another randomly selected object
obj_index = random.randrange(self.args.num_phold_objects - 1)
if int(self.name) <= obj_index:
obj_index += 1
receiver = self.simulator.simulation_objects[str(obj_index)]
if receiver == self:
message_type = MessageSentToSelf
else:
message_type = MessageSentToOtherObject
self.send_event(random.expovariate(1.0), receiver, message_type())
event_handlers = [(sim_msg_type, 'handle_simulation_event') \
for sim_msg_type in MESSAGE_TYPES]
messages_sent = MESSAGE_TYPES
```
<font size="4">
The PHOLD model runs multiple instances of `PholdSimulationObject`.
`create_and_run` creates the objects and adds them to the simulator.
Each `PholdSimulationObject` object is initialized with `args`, an object that defines two attributes used by all objects:
* `args.num_phold_objects`: the number of PHOLD objects running
* `args.frac_self_events`: the fraction of events sent to self
At time 0, each PHOLD object schedules an `InitMsg` event for itself that occurs after a random exponential time delay with mean = 1.0.
The `handle_simulation_event` method handles all events.
Each event schedules one more event.
A random value in [0, 1) is used to decide whether to schedule the event for itself (with probability `args.frac_self_events`) or for another PHOLD object.
If the event is scheduled for another PHOLD object, the handler obtains a reference to that object with `receiver = self.simulator.simulation_objects[str(obj_index)]`.
The attribute `self.simulator` always references the running simulator, and `self.simulator.simulation_objects` is a dictionary that maps simulation object names to simulation objects.
</font>
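The index adjustment in `handle_simulation_event` is a standard trick for sampling uniformly over all objects except oneself: draw from `num_objects - 1` values and shift the draw past one's own index. In isolation:

```python
import random

def random_other_index(self_index, num_objects):
    """Uniformly pick an index in [0, num_objects) other than self_index."""
    idx = random.randrange(num_objects - 1)
    if self_index <= idx:
        idx += 1
    return idx

random.seed(1)
picks = {random_other_index(2, 5) for _ in range(200)}
print(sorted(picks))  # [0, 1, 3, 4] — every index except 2
```

Draws below `self_index` map to themselves, and draws at or above it shift up by one, so each of the other `num_objects - 1` indices is chosen with equal probability.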
<font size="4">
Each event is printed by `record_event`.
It accesses the DE-Sim `Event` object that is passed to all event handlers.
`de_sim.event.Event` contains five useful fields:
* `sending_object`: the object that created and sent the event
* `creation_time`: the simulation time when the event was created (a.k.a. its *send time*)
* `receiving_object`: the object that received the event
* `event_time`: the simulation time when the event must execute (a.k.a. its *receive time*)
* `message`: the [`EventMessage`](https://docs.karrlab.org/de_sim/master/source/de_sim.html#de_sim.event_message.EventMessage) carried by the event
However, rather than use the event's `event_time`, `record_event` uses `self.time` to report the simulation time when the event is being executed, as they are always equal.
</font>

### Execute the simulation
<font size="4">
Run a short simulation, and print all events:
</font>
```
def create_and_run(args):
# create a simulator
simulator = de_sim.Simulator()
# create simulation objects, and send each one an initial event message to self
for obj_id in range(args.num_phold_objects):
phold_obj = PholdSimulationObject(str(obj_id), args)
simulator.add_object(phold_obj)
# run the simulation
simulator.initialize()
PholdSimulationObject.record_event_header()
event_num = simulator.simulate(args.max_time).num_events
print("Executed {} events.\n".format(event_num))
from argparse import Namespace
args = Namespace(max_time=2,
frac_self_events=0.3,
num_phold_objects=6)
create_and_run(args)
```
# First Steps with Huggingface
```
from IPython.display import display, Markdown
with open('../../doc/env_variables_setup.md', 'r') as fh:
content = fh.read()
display(Markdown(content))
```
## Import Packages
Try to avoid `pip install` in the notebook, as it can break dependencies in the environment.
```
# only running this cell leads to problems when kernel has not been restarted
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.python.data.ops import dataset_ops
from tensorboard.backend.event_processing import event_accumulator
from absl import logging
from datetime import datetime
import os
import shutil
import numpy as np
from tqdm import tqdm
import re
#from transformers import *
from transformers import (BertTokenizer,
TFBertForSequenceClassification,
TFBertModel,
TFBertForPreTraining,
glue_convert_examples_to_features,
glue_processors,)
# local packages
import preprocessing.preprocessing as pp
import importlib
importlib.reload(pp);
```
### To Do:
- extend to other language models like gpt-2
- find out how to attach additional layers to the architecture
- find out at which point multilingualism can be introduced
## Define Paths
```
data_dir = os.environ.get('PATH_DATASETS')
if data_dir is None:
    print('missing PATH_DATASETS')
print(data_dir)
```
## 1. Loading the IMDb Dataset from Tensorflow
```
#import tensorflow_datasets as tfds
#from ipywidgets import IntProgress
train_data, validation_data, test_data = tfds.load(name="imdb_reviews",
data_dir=data_dir,
split=('train[:60%]', 'train[60%:]', 'test'),
as_supervised=True)
# trying to extract the info requires loading the data without splitting it
data_ex, data_ex_info = tfds.load(name="imdb_reviews",
data_dir=data_dir,
as_supervised=True,
with_info=True)
```
## 2. Exploring the Dataset
### 2.1. Getting a feeling of the data structure of the IMDb data
```
print(type(train_data))
# splitting features and labels up into separate objects and creating a batch with 10 entries
train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))
train_examples_batch[:2]
train_labels_batch
```
Converting the `tf.Tensor` objects into NumPy arrays makes them easier to handle in the functions that follow, which is why it is done here.
```
train_examples_batch_np = tfds.as_numpy(train_examples_batch)
train_labels_batch_np = tfds.as_numpy(train_labels_batch)
data_ex_info
data_ex.keys()
data_ex['test']
data_ex_info.features
```
### 2.2. Experimenting with the Data Structure
```
# load as numpy
train_data_np = tfds.as_numpy(train_data)
#, validation_data_np, test_data_np
print(type(train_data_np))
# this data structure is a generator, but we need a tuple of strings / integers
# getting a sense of the structure inside the generator
for index, entry in enumerate(train_data_np):
if index < 10:
print(entry)
else:
break
# checking the data type of the main dataset
train_data
# different way of getting the entries
list(train_data.take(3).as_numpy_iterator())[0][0]
```
### 2.3. Cleaning
The data still contains non-word structures like \<br />\<br /> and \\ which have to be removed.
```
REPLACE_NO_SPACE = re.compile(r"[.;:!\'?,\"()\[\]]")
REPLACE_WITH_SPACE = re.compile(r"(<br\s*/><br\s*/>)|(\-)|(\/)")
REPLACE_NO_SPACE
#np.array(list(data_ex['train'].as_numpy_iterator()))
for line in np.array(list(train_data.as_numpy_iterator())):
print(line[0].decode("utf-8"))#.lower())
break
def preprocess_reviews(reviews):
#reviews = [REPLACE_NO_SPACE.sub("", line[0].decode("utf-8").lower()) for line in np.array(list(reviews.as_numpy_iterator()))]
reviews = [REPLACE_WITH_SPACE.sub(" ", line[0].decode("utf-8")) for line in np.array(list(reviews.as_numpy_iterator()))]# for line in reviews]
return reviews
reviews_train_clean = preprocess_reviews(train_data)
reviews_test_clean = preprocess_reviews(test_data)
for index, entry in enumerate(reviews_train_clean):
if index < 10:
print(entry)
else:
break
```
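As a quick sanity check of the `REPLACE_WITH_SPACE` pattern, here is a stand-alone example on an invented review (the sample text is made up for illustration):

```python
import re

# the cleaning pattern, as a raw string
REPLACE_WITH_SPACE = re.compile(r"(<br\s*/><br\s*/>)|(\-)|(\/)")

sample = "Great movie!<br /><br />A must-see classic - truly one-of-a-kind."
cleaned = REPLACE_WITH_SPACE.sub(" ", sample)
print(cleaned)
```

The `<br /><br />` tags, hyphens and slashes are all replaced with single spaces, while punctuation and capitalization stay intact.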
*Is it problematic that full stops got replaced?*
Yes, so that step was taken out again, and capital letters were left in as well.
*What about stopwords?*
BERT was trained on full sentences and relies on the words before and after each token, so eliminating stopwords would interfere with that.
### 2.4. Examining the Distribution of Labels
```
labels_train = [int(line[1].decode("utf-8")) for line in np.array(list(train_data.as_numpy_iterator()))]
labels_valid = [int(line[1].decode("utf-8")) for line in np.array(list(validation_data.as_numpy_iterator()))]
type(labels_train[0])
# label 1 marks a positive review in the tfds IMDb dataset
share_positive = sum(labels_train)/len(labels_train)
print(share_positive)
```
### 2.5. Comparisons to the MRPC Dataset
```
# testing the way the original code works by importing the other dataset
data_original, info_original = tfds.load('glue/mrpc', data_dir=data_dir, with_info=True)
info_original
info_original.features
print(type(data_original['train']))
print(type(train_data))
print(data_original['train'])
print(train_data)
```
### 2.6. Statistical Analysis
```
len_element = []
longest_sequences = []
for index, element in enumerate(train_data.as_numpy_iterator()):
len_element.append(len(element[0]))
if len(element[0])>7500:
longest_sequences.append(element[0])
continue
else:
continue
len(longest_sequences)
import statistics as st
print("Longest sequence: {:7}".format(max(len_element)))
print("Shortest sequence: {:7}".format(min(len_element)))
print("Average: {:10.{prec}f}".format(st.mean(len_element), prec=2))
print("Standard deviation: {:10.{prec}f}".format(st.stdev(len_element), prec=2))
# plot the distribution of the length of the sequences
import matplotlib.pyplot as plt
_ = plt.hist(len_element, bins='auto')
plt.title("Histogram of the sequence length")
plt.show()
```
Given the relatively large mean sequence length, a max_length of 512 may not capture enough of each review; increasing it to 1024 is worth considering, although this also increases the computation time.
*Is it an option to choose a relatively small max_length and still get good results?*
*Kick out outliers?*
```
# what do those really long sequences look like?
longest_sequences[1]
```
This little exploration shows that the longest sequences are simply really long summaries of the plot coupled with a recommendation of whether or not to watch the movie. We should experiment with taking just the beginning and the end of each sequence, or even better: snipping out parts in the middle, since the beginning and the end are somewhat summaries of the sentiment.
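One way to experiment with this idea is a head-and-tail truncation helper that keeps the start and the end of an over-long token sequence and drops the middle. This is only a sketch; the `max_length` and `head` parameters and the split sizes are illustrative choices, not values from this notebook:

```python
def head_tail_truncate(tokens, max_length=512, head=128):
    """Keep the first `head` tokens and the last `max_length - head` tokens,
    dropping the middle of over-long sequences."""
    if len(tokens) <= max_length:
        return tokens
    tail = max_length - head
    return tokens[:head] + tokens[-tail:]

tokens = [f"tok{i}" for i in range(1000)]
truncated = head_tail_truncate(tokens, max_length=512, head=128)
print(len(truncated))  # 512
```

The same helper can be applied to token id lists after tokenization.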
# Sentiment Classification with SentimentNet
`GPU` `CPU` `Advanced` `Natural Language Processing` `End-to-End`
[Run in ModelArts](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvdHV0b3JpYWxzL3poX2NuL21pbmRzcG9yZV9zZW50aW1lbnRuZXQuaXB5bmI=&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) [Download the notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/tutorials/zh_cn/mindspore_sentimentnet.ipynb) [Download sample code](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/tutorials/zh_cn/mindspore_sentimentnet.py) [View source on Gitee](https://gitee.com/mindspore/docs/blob/master/tutorials/source_zh_cn/intermediate/text/sentimentnet.ipynb)
## Overview
Sentiment classification is a subset of the text classification problem in natural language processing and one of its most fundamental applications. It is the process of analyzing and reasoning about subjective, emotionally colored text, i.e. determining whether the speaker's attitude leans positive or negative.
Usually we divide sentiment into three categories: positive, negative and neutral. Although there is no shortage of "deadpan" reviews, most of the time only positive and negative examples are used for training, and the dataset used below is a good example of this.
A typical reference dataset for the traditional text topic classification problem is [20 Newsgroups](http://qwone.com/~jason/20Newsgroups/), which consists of 20 groups of news data containing about 20,000 news documents.
Some categories in its topic list are fairly similar; for example, comp.sys.ibm.pc.hardware and comp.sys.mac.hardware are both about computer hardware and therefore closely related. Other topic categories are essentially unrelated, such as misc.forsale and soc.religion.christian.
As far as the network itself is concerned, the network structures for text topic classification and sentiment classification are broadly similar. Once you know how to build a sentiment classification network, it is easy to construct a similar one and, with a little parameter tuning, apply it to a text topic classification task.
On the business side, however, text topic classification analyzes the objective content a text discusses, whereas sentiment classification extracts whether the text supports a certain viewpoint. Take the sentence "Forrest Gump is wonderful: the theme is clear and the pacing flows well." Topic classification would assign it to the category "movies", while sentiment classification has to work out whether the attitude of this review is positive or negative.
Compared with traditional text topic classification, sentiment classification is simpler and more practical. Reasonably high-quality datasets can be collected from common shopping and movie websites, and it easily delivers value in a business domain. For example, combined with domain context, one can automatically analyze what a specific type of customer thinks of the current product, analyze sentiment per topic and per user type for targeted handling, and even recommend products on that basis to raise conversion rates and commercial returns.
In specialized domains, some non-polar words also clearly express a user's sentiment. When downloading and using an app, for instance, "it keeps freezing" and "the download is too slow" express negative sentiment; in the stock domain, "bullish" and "bull market" express positive sentiment. So essentially we want the model to mine such special expressions in a vertical domain and feed them to the sentiment classification system as polarity words:
$\text{vertical-domain polarity words} = \text{general polarity words} + \text{domain-specific polarity words}$
Depending on the granularity of the text being processed, sentiment analysis can be studied at the word, phrase, sentence, paragraph and document levels. Here we use the paragraph level as an example: the input is a paragraph and the output is whether the review is positive or negative.
Next, we take IMDB movie review sentiment classification as an example to experience MindSpore applied to natural language processing.
> This notebook runs on GPU/CPU environments.
## Overall Workflow
1. Preparation.
2. Load the dataset and process the data.
3. Define the network.
4. Define the optimizer and the loss function.
5. Train the network on the data to produce a model.
6. With the model in hand, use the validation dataset to check its accuracy.
## Preparation
### Download the dataset
This walkthrough uses the IMDB movie review dataset as the experimental data.
1. Download the IMDB movie review dataset.
Below are examples of a negative and a positive review.
| Review | Label |
|:---|:---:|
| "Quitting" may be as much about exiting a pre-ordained identity as about drug withdrawal. As a rural guy coming to Beijing, class and success must have struck this young artist face on as an appeal to separate from his roots and far surpass his peasant parents' acting success. Troubles arise, however, when the new man is too new, when it demands too big a departure from family, history, nature, and personal identity. The ensuing splits, and confusion between the imaginary and the real and the dissonance between the ordinary and the heroic are the stuff of a gut check on the one hand or a complete escape from self on the other. | Negative |
| This movie is amazing because the fact that the real people portray themselves and their real life experience and do such a good job it's like they're almost living the past over again. Jia Hongsheng plays himself an actor who quit everything except music and drugs struggling with depression and searching for the meaning of life while being angry at everyone especially the people who care for him most. | Positive |
Extract the downloaded dataset into the `datasets` directory under the current working directory. Since the dataset contains many files, extraction takes about 15 minutes. The sample code below downloads the dataset and extracts it to the specified location.
```
import os
import requests
import tarfile
import zipfile
from tqdm import tqdm
requests.packages.urllib3.disable_warnings()
def download_dataset(url, target_path):
"""Download and extract the dataset"""
if not os.path.exists(target_path):
os.makedirs(target_path)
download_file = url.split("/")[-1]
if not os.path.exists(download_file):
res = requests.get(url, stream=True, verify=False)
if download_file.split(".")[-1] not in ["tgz", "zip", "tar", "gz"]:
download_file = os.path.join(target_path, download_file)
with open(download_file, "wb") as f:
for chunk in res.iter_content(chunk_size=512):
if chunk:
f.write(chunk)
if download_file.endswith("zip"):
z = zipfile.ZipFile(download_file, "r")
z.extractall(path=target_path)
z.close()
if download_file.endswith(".tar.gz") or download_file.endswith(".tar") or download_file.endswith(".tgz"):
with tarfile.open(download_file) as t:
names = t.getnames()
for name in tqdm(names):
t.extract(name, target_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/aclImdb_v1.tar.gz", "./datasets")
```
2. Download the GloVe files
Download and extract the GloVe files into the `datasets` directory under the current working directory, and add the new line shown below at the beginning of each GloVe file. It means: read 400,000 words in total, each represented by a 300-dimensional word vector.
```
def add_first_line(file, line):
with open(file, "r+") as f:
data = f.read()
f.seek(0, 0)
f.write(line+data)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/glove.6B.zip", "./datasets/glove")
os.makedirs("./preprocess", exist_ok=True)
os.makedirs("./ckpt", exist_ok=True)
glove_f = [os.path.join("./datasets/glove", i) for i in os.listdir("./datasets/glove")]
[add_first_line(i, "400000 300\n") for i in glove_f]
```
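The prepended `400000 300` line is the header of the word2vec text format (vocabulary size, then vector dimension) that `gensim.models.KeyedVectors.load_word2vec_format` parses first; raw GloVe files ship without it. Here is a self-contained sketch of the same prepend trick on a tiny invented two-word file (no gensim dependency; the vectors are made up):

```python
import os
import tempfile

def add_first_line(file, line):
    # same prepend trick as above: read everything, rewind, write header + data
    with open(file, "r+") as f:
        data = f.read()
        f.seek(0, 0)
        f.write(line + data)

# a toy 2-word, 3-dimensional "GloVe" file (values invented)
path = os.path.join(tempfile.mkdtemp(), "toy_glove.txt")
with open(path, "w") as f:
    f.write("hello 0.1 0.2 0.3\nworld 0.4 0.5 0.6\n")

add_first_line(path, "2 3\n")  # header: <vocab size> <vector dim>

with open(path) as f:
    lines = f.read().splitlines()
print(lines[0])  # 2 3
```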
3. Create an empty directory named `preprocess` in the current working directory; it will store the files produced when the IMDB dataset is converted to the MindRecord format during preprocessing. The current working directory structure now looks like this.
```text
.
├── aclImdb_v1.tar.gz
├── ckpt
├── datasets
│ ├── aclImdb
│ │ ├── imdbEr.txt
│ │ ├── imdb.vocab
│ │ ├── README
│ │ ├── test
│ │ └── train
│ └── glove
│ ├── glove.6B.100d.txt
│ ├── glove.6B.200d.txt
│ ├── glove.6B.300d.txt
│ └── glove.6B.50d.txt
├── glove.6B.zip
├── nlp_application.ipynb
└── preprocess
```
### Choose the evaluation metric
As a typical classification problem, sentiment classification can be evaluated in the same way as ordinary classification tasks. Common metrics such as accuracy, precision, recall and the F-beta score can all serve as references.
$Accuracy = \text{correctly classified samples} \; / \; \text{total samples}$
$Precision = \text{true positives} \; / \; \text{all samples predicted positive}$
$Recall = \text{true positives} \; / \; \text{all samples actually positive}$
$F1 = (2 * Precision * Recall) \; / \; (Precision + Recall)$
In the IMDB dataset the numbers of positive and negative samples do not differ much, so accuracy alone is a reasonable measure of the classifier.
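As a worked example of these formulas, assume a hypothetical confusion matrix with 40 true positives, 10 false positives, 45 true negatives and 5 false negatives (the counts are invented for illustration):

```python
tp, fp, tn, fn = 40, 10, 45, 5

accuracy = (tp + tn) / (tp + fp + tn + fn)   # 85 / 100 = 0.85
precision = tp / (tp + fp)                   # 40 / 50 = 0.8
recall = tp / (tp + fn)                      # 40 / 45, about 0.889
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, precision, round(recall, 3), round(f1, 3))  # 0.85 0.8 0.889 0.842
```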
### Choose the network
We use the LSTM-based SentimentNet network for this natural language processing task.
> LSTM (Long Short-Term Memory) is a recurrent neural network well suited to processing and predicting important events separated by very long intervals and delays in a time series.
### Configure the run and the SentimentNet parameters
1. Use the `parser` module to pass in the information needed for the run.
- `preprocess`: whether to preprocess the dataset; off by default.
- `aclimdb_path`: path where the dataset is stored.
- `glove_path`: path where the GloVe files are stored.
- `preprocess_path`: output folder of the preprocessed dataset.
- `ckpt_path`: CheckPoint file path.
- `pre_trained`: a CheckPoint file to preload.
- `device_target`: selects the GPU or CPU environment.
2. Before training, the necessary information must be configured, including the environment, execution mode, backend and hardware.
Run the code below to configure the parameters required for training (for detailed interface configuration, see the `context.set_context` API documentation on the MindSpore website).
```
import argparse
from mindspore import context
from easydict import EasyDict as edict
# LSTM CONFIG
lstm_cfg = edict({
'num_classes': 2,
'learning_rate': 0.1,
'momentum': 0.9,
'num_epochs': 10,
'batch_size': 64,
'embed_size': 300,
'num_hiddens': 100,
'num_layers': 2,
'bidirectional': True,
'save_checkpoint_steps': 390,
'keep_checkpoint_max': 10
})
cfg = lstm_cfg
parser = argparse.ArgumentParser(description='MindSpore LSTM Example')
parser.add_argument('--preprocess', type=str, default='false', choices=['true', 'false'],
help='whether to preprocess data.')
parser.add_argument('--aclimdb_path', type=str, default="./datasets/aclImdb",
help='path where the dataset is stored.')
parser.add_argument('--glove_path', type=str, default="./datasets/glove",
help='path where the GloVe is stored.')
parser.add_argument('--preprocess_path', type=str, default="./preprocess",
help='path where the pre-process data is stored.')
parser.add_argument('--ckpt_path', type=str, default="./models/ckpt/nlp_application",
help='the path to save the checkpoint file.')
parser.add_argument('--pre_trained', type=str, default=None,
help='the pretrained checkpoint file path.')
parser.add_argument('--device_target', type=str, default="GPU", choices=['GPU', 'CPU'],
help='the target device to run, support "GPU", "CPU". Default: "GPU".')
args = parser.parse_args(['--device_target', 'GPU', '--preprocess', 'true'])
context.set_context(
mode=context.GRAPH_MODE,
save_graphs=False,
device_target=args.device_target)
print("Current context loaded:\n mode: {}\n device_target: {}".format(context.get_context("mode"), context.get_context("device_target")))
```
Install the `gensim` dependency package.
```
!pip install gensim
```
## Data Processing
### Preprocess the dataset
Run the dataset preprocessing:
- Define the `ImdbParser` class to parse the text dataset, including encoding, tokenizing, aligning and processing the raw GloVe data so that it fits the network structure.
- Define the `convert_to_mindrecord` function to convert the dataset into the MindRecord format that MindSpore can read. In the `_convert_to_mindrecord` function, `weight.txt` is the weight parameter file generated automatically during preprocessing.
- Call `convert_to_mindrecord` to run the preprocessing.
```
import os
from itertools import chain
import numpy as np
import gensim
from mindspore.mindrecord import FileWriter
class ImdbParser():
"""
Parse the raw dataset into features and labels, following this pipeline:
sentence->tokenized->encoded->padding->features
"""
def __init__(self, imdb_path, glove_path, embed_size=300):
self.__segs = ['train', 'test']
self.__label_dic = {'pos': 1, 'neg': 0}
self.__imdb_path = imdb_path
self.__glove_dim = embed_size
self.__glove_file = os.path.join(glove_path, 'glove.6B.' + str(self.__glove_dim) + 'd.txt')
# properties
self.__imdb_datas = {}
self.__features = {}
self.__labels = {}
self.__vacab = {}
self.__word2idx = {}
self.__weight_np = {}
self.__wvmodel = None
def parse(self):
"""
Parse the imdb data
"""
self.__wvmodel = gensim.models.KeyedVectors.load_word2vec_format(self.__glove_file)
for seg in self.__segs:
self.__parse_imdb_datas(seg)
self.__parse_features_and_labels(seg)
self.__gen_weight_np(seg)
def __parse_imdb_datas(self, seg):
"""
Load the data from the raw text files
"""
data_lists = []
for label_name, label_id in self.__label_dic.items():
sentence_dir = os.path.join(self.__imdb_path, seg, label_name)
for file in os.listdir(sentence_dir):
with open(os.path.join(sentence_dir, file), mode='r', encoding='utf8') as f:
sentence = f.read().replace('\n', '')
data_lists.append([sentence, label_id])
self.__imdb_datas[seg] = data_lists
def __parse_features_and_labels(self, seg):
"""
Parse features and labels
"""
features = []
labels = []
for sentence, label in self.__imdb_datas[seg]:
features.append(sentence)
labels.append(label)
self.__features[seg] = features
self.__labels[seg] = labels
self.__updata_features_to_tokenized(seg)
self.__parse_vacab(seg)
self.__encode_features(seg)
self.__padding_features(seg)
def __updata_features_to_tokenized(self, seg):
"""
Tokenize the raw sentences
"""
tokenized_features = []
for sentence in self.__features[seg]:
tokenized_sentence = [word.lower() for word in sentence.split(" ")]
tokenized_features.append(tokenized_sentence)
self.__features[seg] = tokenized_features
def __parse_vacab(self, seg):
"""
Build the vocabulary
"""
tokenized_features = self.__features[seg]
vocab = set(chain(*tokenized_features))
self.__vacab[seg] = vocab
# word_to_idx: {'hello': 1, 'world':111, ... '<unk>': 0}
word_to_idx = {word: i + 1 for i, word in enumerate(vocab)}
word_to_idx['<unk>'] = 0
self.__word2idx[seg] = word_to_idx
def __encode_features(self, seg):
"""Encode the words"""
word_to_idx = self.__word2idx['train']
encoded_features = []
for tokenized_sentence in self.__features[seg]:
encoded_sentence = []
for word in tokenized_sentence:
encoded_sentence.append(word_to_idx.get(word, 0))
encoded_features.append(encoded_sentence)
self.__features[seg] = encoded_features
def __padding_features(self, seg, maxlen=500, pad=0):
"""
Pad all features to the same length
"""
padded_features = []
for feature in self.__features[seg]:
if len(feature) >= maxlen:
padded_feature = feature[:maxlen]
else:
padded_feature = feature
while len(padded_feature) < maxlen:
padded_feature.append(pad)
padded_features.append(padded_feature)
self.__features[seg] = padded_features
def __gen_weight_np(self, seg):
"""
Get the embedding weights via gensim
"""
weight_np = np.zeros((len(self.__word2idx[seg]), self.__glove_dim), dtype=np.float32)
for word, idx in self.__word2idx[seg].items():
if word not in self.__wvmodel:
continue
word_vector = self.__wvmodel.get_vector(word)
weight_np[idx, :] = word_vector
self.__weight_np[seg] = weight_np
def get_datas(self, seg):
"""
Return features, labels, weight
"""
features = np.array(self.__features[seg]).astype(np.int32)
labels = np.array(self.__labels[seg]).astype(np.int32)
weight = np.array(self.__weight_np[seg])
return features, labels, weight
def _convert_to_mindrecord(data_home, features, labels, weight_np=None, training=True):
"""
Convert the raw dataset to the mindrecord format
"""
if weight_np is not None:
np.savetxt(os.path.join(data_home, 'weight.txt'), weight_np)
# write the mindrecord files
schema_json = {"id": {"type": "int32"},
"label": {"type": "int32"},
"feature": {"type": "int32", "shape": [-1]}}
data_dir = os.path.join(data_home, "aclImdb_train.mindrecord")
if not training:
data_dir = os.path.join(data_home, "aclImdb_test.mindrecord")
def get_imdb_data(features, labels):
data_list = []
for i, (label, feature) in enumerate(zip(labels, features)):
data_json = {"id": i,
"label": int(label),
"feature": feature.reshape(-1)}
data_list.append(data_json)
return data_list
writer = FileWriter(data_dir, shard_num=4)
data = get_imdb_data(features, labels)
writer.add_schema(schema_json, "nlp_schema")
writer.add_index(["id", "label"])
writer.write_raw_data(data)
writer.commit()
def convert_to_mindrecord(embed_size, aclimdb_path, preprocess_path, glove_path):
"""
Convert the raw dataset to the mindrecord format
"""
parser = ImdbParser(aclimdb_path, glove_path, embed_size)
parser.parse()
if not os.path.exists(preprocess_path):
print(f"preprocess path {preprocess_path} does not exist")
os.makedirs(preprocess_path)
train_features, train_labels, train_weight_np = parser.get_datas('train')
_convert_to_mindrecord(preprocess_path, train_features, train_labels, train_weight_np)
test_features, test_labels, _ = parser.get_datas('test')
_convert_to_mindrecord(preprocess_path, test_features, test_labels, training=False)
if args.preprocess == "true":
os.system("rm -f ./preprocess/aclImdb* weight*")
print("============== Starting Data Pre-processing ==============")
convert_to_mindrecord(cfg.embed_size, args.aclimdb_path, args.preprocess_path, args.glove_path)
print("======================= Successful =======================")
```
After a successful conversion, MindRecord files are generated in the `preprocess` directory. Since this step usually does not need to be repeated while the dataset is unchanged, it can be skipped by passing `--preprocess false` when running the script. The `preprocess` directory now looks like this:
```text
preprocess
├── aclImdb_test.mindrecord0
├── aclImdb_test.mindrecord0.db
├── aclImdb_test.mindrecord1
├── aclImdb_test.mindrecord1.db
├── aclImdb_test.mindrecord2
├── aclImdb_test.mindrecord2.db
├── aclImdb_test.mindrecord3
├── aclImdb_test.mindrecord3.db
├── aclImdb_train.mindrecord0
├── aclImdb_train.mindrecord0.db
├── aclImdb_train.mindrecord1
├── aclImdb_train.mindrecord1.db
├── aclImdb_train.mindrecord2
├── aclImdb_train.mindrecord2.db
├── aclImdb_train.mindrecord3
├── aclImdb_train.mindrecord3.db
└── weight.txt
```
The files in the `preprocess` directory are now:
- Files whose names contain `aclImdb_train.mindrecord`: the training dataset converted to the MindRecord format.
- Files whose names contain `aclImdb_test.mindrecord`: the test dataset converted to the MindRecord format.
- `weight.txt`: the weight parameter file generated automatically during preprocessing.
Create the training set:
- Define the dataset creation function `lstm_create_dataset` and create the training set `ds_train`.
- Create a dictionary iterator with the `create_dict_iterator` method and read data from the newly created dataset `ds_train`.
Run the code below to create the dataset and read the list of `label` values of the first `batch` as well as the `feature` data of the first element in that `batch`.
```
import os
import mindspore.dataset as ds
def lstm_create_dataset(data_home, batch_size, repeat_num=1, training=True):
"""Create the dataset"""
ds.config.set_seed(1)
data_dir = os.path.join(data_home, "aclImdb_train.mindrecord0")
if not training:
data_dir = os.path.join(data_home, "aclImdb_test.mindrecord0")
data_set = ds.MindDataset(data_dir, columns_list=["feature", "label"], num_parallel_workers=4)
# apply shuffle, batch and repeat operations to the dataset
data_set = data_set.shuffle(buffer_size=data_set.get_dataset_size())
data_set = data_set.batch(batch_size=batch_size, drop_remainder=True)
data_set = data_set.repeat(count=repeat_num)
return data_set
ds_train = lstm_create_dataset(args.preprocess_path, cfg.batch_size)
iterator = next(ds_train.create_dict_iterator())
first_batch_label = iterator["label"].asnumpy()
first_batch_first_feature = iterator["feature"].asnumpy()[0]
print(f"The first batch contains label below:\n{first_batch_label}\n")
print(f"The feature of the first item in the first batch is below vector:\n{first_batch_first_feature}")
```
## Define the Network
1. Import the modules needed to initialize the network.
2. Define the device types that require stacking single-layer LSTM operators.
3. Define the `lstm_default_state` function to initialize the network parameters and state.
4. Define the `stack_lstm_default_state` function to initialize the parameters and state needed by the stacked operators.
5. For the CPU scenario, stack single-layer LSTM operators to implement the functionality of the multi-layer LSTM operator.
6. Define the network structure (the `SentimentNet` network) using the `Cell` method.
7. Instantiate `SentimentNet` to create the network, and finally print the parameters loaded in the network.
```
import math
import numpy as np
from mindspore import Tensor, nn, context, Parameter, ParameterTuple
from mindspore.common.initializer import initializer
import mindspore.ops as ops
# use a stacked LSTM when the device type is CPU
STACK_LSTM_DEVICE = ["CPU"]
# initialize the short-term memory (h) and long-term memory (c) to zero
def lstm_default_state(batch_size, hidden_size, num_layers, bidirectional):
"""Initialize the inputs of the LSTM network"""
num_directions = 2 if bidirectional else 1
h = Tensor(np.zeros((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32))
c = Tensor(np.zeros((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32))
return h, c
def stack_lstm_default_state(batch_size, hidden_size, num_layers, bidirectional):
"""Initialize the inputs of the stacked LSTM network"""
num_directions = 2 if bidirectional else 1
h_list, c_list = [], []
for _ in range(num_layers):
h_list.append(Tensor(np.zeros((num_directions, batch_size, hidden_size)).astype(np.float32)))
c_list.append(Tensor(np.zeros((num_directions, batch_size, hidden_size)).astype(np.float32)))
h, c = tuple(h_list), tuple(c_list)
return h, c
class StackLSTM(nn.Cell):
"""
Stacked LSTM implementation
"""
def __init__(self,
input_size,
hidden_size,
num_layers=1,
has_bias=True,
batch_first=False,
dropout=0.0,
bidirectional=False):
super(StackLSTM, self).__init__()
self.num_layers = num_layers
self.batch_first = batch_first
self.transpose = ops.Transpose()
num_directions = 2 if bidirectional else 1
input_size_list = [input_size]
for i in range(num_layers - 1):
input_size_list.append(hidden_size * num_directions)
# LSTMCell is a single-layer RNN structure; StackLSTM is built by stacking LSTMCells
layers = []
for i in range(num_layers):
layers.append(nn.LSTMCell(input_size=input_size_list[i],
hidden_size=hidden_size,
has_bias=has_bias,
batch_first=batch_first,
bidirectional=bidirectional,
dropout=dropout))
# weight initialization
weights = []
for i in range(num_layers):
weight_size = (input_size_list[i] + hidden_size) * num_directions * hidden_size * 4
if has_bias:
bias_size = num_directions * hidden_size * 4
weight_size = weight_size + bias_size
stdv = 1 / math.sqrt(hidden_size)
w_np = np.random.uniform(-stdv, stdv, (weight_size, 1, 1)).astype(np.float32)
weights.append(Parameter(initializer(Tensor(w_np), w_np.shape), name="weight" + str(i)))
self.lstms = layers
self.weight = ParameterTuple(tuple(weights))
def construct(self, x, hx):
"""Build the network"""
if self.batch_first:
x = self.transpose(x, (1, 0, 2))
h, c = hx
hn = cn = None
for i in range(self.num_layers):
x, hn, cn, _, _ = self.lstms[i](x, h[i], c[i], self.weight[i])
if self.batch_first:
x = self.transpose(x, (1, 0, 2))
return x, (hn, cn)
class SentimentNet(nn.Cell):
"""Build SentimentNet"""
def __init__(self,
vocab_size,
embed_size,
num_hiddens,
num_layers,
bidirectional,
num_classes,
weight,
batch_size):
super(SentimentNet, self).__init__()
# embed the words of the data into lower-dimensional vectors
self.embedding = nn.Embedding(vocab_size,
embed_size,
embedding_table=weight)
self.embedding.embedding_table.requires_grad = False
self.trans = ops.Transpose()
self.perm = (1, 0, 2)
# decide whether a stacked LSTM is needed
if context.get_context("device_target") in STACK_LSTM_DEVICE:
self.encoder = StackLSTM(input_size=embed_size,
hidden_size=num_hiddens,
num_layers=num_layers,
has_bias=True,
bidirectional=bidirectional,
dropout=0.0)
self.h, self.c = stack_lstm_default_state(batch_size, num_hiddens, num_layers, bidirectional)
else:
self.encoder = nn.LSTM(input_size=embed_size,
hidden_size=num_hiddens,
num_layers=num_layers,
has_bias=True,
bidirectional=bidirectional,
dropout=0.0)
self.h, self.c = lstm_default_state(batch_size, num_hiddens, num_layers, bidirectional)
self.concat = ops.Concat(1)
if bidirectional:
self.decoder = nn.Dense(num_hiddens * 4, num_classes)
else:
self.decoder = nn.Dense(num_hiddens * 2, num_classes)
def construct(self, inputs):
# input:(64,500,300)
embeddings = self.embedding(inputs)
embeddings = self.trans(embeddings, self.perm)
output, _ = self.encoder(embeddings, (self.h, self.c))
# states[i] size(64,200) -> encoding.size(64,400)
encoding = self.concat((output[0], output[499]))
outputs = self.decoder(encoding)
return outputs
embedding_table = np.loadtxt(os.path.join(args.preprocess_path, "weight.txt")).astype(np.float32)
network = SentimentNet(vocab_size=embedding_table.shape[0],
embed_size=cfg.embed_size,
num_hiddens=cfg.num_hiddens,
num_layers=cfg.num_layers,
bidirectional=cfg.bidirectional,
num_classes=cfg.num_classes,
weight=Tensor(embedding_table),
batch_size=cfg.batch_size)
print(network.parameters_dict(recurse=True))
```
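Why does the bidirectional decoder expect `num_hiddens * 4` input features? Each time step of the encoder output has `num_hiddens * num_directions` features, and `construct` concatenates the first and the last time step, doubling that again. A small NumPy shape check (random data standing in for the encoder output; this illustrates shapes only and does not use MindSpore):

```python
import numpy as np

batch_size, num_hiddens, num_directions, seq_len = 64, 100, 2, 500

# encoder output in (seq_len, batch, num_hiddens * num_directions) layout
output = np.random.randn(seq_len, batch_size, num_hiddens * num_directions)

# concatenate the first and the last time step along the feature axis
encoding = np.concatenate((output[0], output[seq_len - 1]), axis=1)
print(encoding.shape)  # (64, 400), i.e. (batch_size, num_hiddens * 4)
```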
## Train and Save the Model
Run the code below to create the optimizer and the loss function, load the training dataset (`ds_train`), configure `CheckPoint` generation, and then train the model through the `model.train` interface. The output shows the loss value decreasing gradually during training, ending at around 0.262.
```
from mindspore import Model
from mindspore.train.callback import CheckpointConfig, ModelCheckpoint, TimeMonitor, LossMonitor
from mindspore.nn import Accuracy
from mindspore import nn
os.system("rm -f {0}/*.ckpt {0}/*.meta".format(args.ckpt_path))
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(network.trainable_params(), cfg.learning_rate, cfg.momentum)
model = Model(network, loss, opt, {'acc': Accuracy()})
loss_cb = LossMonitor(per_print_times=78)
print("============== Starting Training ==============")
config_ck = CheckpointConfig(save_checkpoint_steps=cfg.save_checkpoint_steps,
keep_checkpoint_max=cfg.keep_checkpoint_max)
ckpoint_cb = ModelCheckpoint(prefix="lstm", directory=args.ckpt_path, config=config_ck)
time_cb = TimeMonitor(data_size=ds_train.get_dataset_size())
if args.device_target == "CPU":
model.train(cfg.num_epochs, ds_train, callbacks=[time_cb, ckpoint_cb, loss_cb], dataset_sink_mode=False)
else:
model.train(cfg.num_epochs, ds_train, callbacks=[time_cb, ckpoint_cb, loss_cb])
print("============== Training Success ==============")
```
## Validate the Model
Create and load the validation dataset (`ds_eval`), load the CheckPoint file saved during **training**, and run validation to check the model quality. This step takes about 30 seconds.
```
from mindspore import load_checkpoint, load_param_into_net
args.ckpt_path_saved = f'{args.ckpt_path}/lstm-{cfg.num_epochs}_390.ckpt'
print("============== Starting Testing ==============")
ds_eval = lstm_create_dataset(args.preprocess_path, cfg.batch_size, training=False)
param_dict = load_checkpoint(args.ckpt_path_saved)
load_param_into_net(network, param_dict)
if args.device_target == "CPU":
acc = model.eval(ds_eval, dataset_sink_mode=False)
else:
acc = model.eval(ds_eval)
print("============== {} ==============".format(acc))
```
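The `_390` suffix in the checkpoint filename is the number of training steps per epoch. Assuming the standard 25,000-review IMDb training split, `batch_size=64` together with `drop_remainder=True` yields:

```python
num_train_samples = 25000  # reviews in the IMDb training split (assumed)
batch_size = 64

# drop_remainder=True discards the last partial batch, so integer division applies
steps_per_epoch = num_train_samples // batch_size
print(steps_per_epoch)  # 390
```

This also matches `save_checkpoint_steps=390` in the configuration, i.e. one checkpoint per epoch.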
### Evaluating the training result
The output of the code above shows that after 10 epochs, the sentiment classification accuracy on the validation dataset is around 85%, which is a basically satisfying result.
## Summary
This completes the MindSpore natural language processing walkthrough. We have seen how to use MindSpore to handle a sentiment classification problem, and how to define and initialize the LSTM-based `SentimentNet` network, train a model and validate its accuracy.
# Datasets to download
Here we list a few datasets that might be interesting to explore with vaex.
## New York taxi dataset
The very well known dataset containing trip information from the iconic Yellow Taxi company in NYC.
The raw data is curated by the [Taxi & Limousine Commission (TLC)](
https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page
).
See for instance [Analyzing 1.1 Billion NYC Taxi and Uber Trips, with a Vengeance](http://toddwschneider.com/posts/analyzing-1-1-billion-nyc-taxi-and-uber-trips-with-a-vengeance/) for some ideas.
* [Year: 2015 - 146 million rows - 12GB](
https://vaex.s3.us-east-2.amazonaws.com/taxi/yellow_taxi_2015_f32s.hdf5
)
* [Year 2009-2015 - 1 billion rows - 107GB](
https://vaex.s3.us-east-2.amazonaws.com/taxi/yellow_taxi_2009_2015_f32.hdf5
)
One can also stream the data directly from S3. Only the data that is necessary will be streamed, and it will be cached locally:
```
import vaex
df = vaex.open('s3://vaex/taxi/yellow_taxi_2015_f32s.hdf5?anon=true')
```
```
import vaex
import warnings; warnings.filterwarnings("ignore")
df = vaex.open('/data/yellow_taxi_2009_2015_f32.hdf5')
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
long_min = -74.05
long_max = -73.75
lat_min = 40.58
lat_max = 40.90
df.plot(df.pickup_longitude, df.pickup_latitude, f="log1p", limits=[[-74.05, -73.75], [40.58, 40.90]], show=True);
```
## Gaia - European Space Agency
Gaia is an ambitious mission to chart a three-dimensional map of our Galaxy, the Milky Way, in the process revealing the composition, formation and evolution of the Galaxy.
See the [Gaia Science Homepage for details](http://www.cosmos.esa.int/web/gaia/home), and you may want to try the [Gaia Archive](https://archives.esac.esa.int/gaia) for ADQL (SQL like) queries.
```
df = vaex.open('/data/gaia-dr2-sort-by-source_id.hdf5')
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
df.plot("ra", "dec", f="log", limits=[[360, 0], [-90, 90]], show=True);
```
## U.S. Airline Dataset
This dataset contains information on flights within the United States between 1988 and 2018.
The original data can be downloaded from [United States Department of Transportation](
https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236
).
- [Year 1988-2018 - 180 million rows - 17GB](
https://vaex.s3.us-east-2.amazonaws.com/airline/us_airline_data_1988_2018.hdf5
)
One can also stream it from S3:
```
import vaex
df = vaex.open('s3://vaex/airline/us_airline_data_1988_2018.hdf5?anon=true')
```
```
df = vaex.open('/data/airline/us_airline_data_1988_2018.hd5')
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
df.head(5)
```
## Sloan Digital Sky Survey (SDSS)
The data is public and can be queried from the [SDSS archive](https://www.sdss.org/).
The original query at [SDSS archive](https://www.sdss.org/) was (although split in small parts):
```
SELECT ra, dec, g, r from PhotoObjAll WHERE type = 6 and clean = 1 and r>=10.0 and r<23.5;
```
```
df = vaex.open('/data/sdss/sdss-clean-stars-dered.hdf5')
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
df.healpix_plot(df.healpix9, show=True, f="log1p", healpix_max_level=9, healpix_level=9,
healpix_input='galactic', healpix_output='galactic', rotation=(0,45)
)
```
## Helmi & de Zeeuw 2000
Result of an N-body simulation of the accretion of 33 satellite galaxies into a Milky Way dark matter halo.
* [3 million rows - 252MB](
https://github.com/vaexio/vaex-datasets/releases/download/v1.0/helmi-dezeeuw-2000-FeH-v2.hdf5
)
```
df = vaex.datasets.helmi_de_zeeuw.fetch() # this will download it on the fly
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
df.plot([["x", "y"], ["Lz", "E"]], f="log", figsize=(12,5), show=True, limits='99.99%');
```
```
# Setting up a model and a mesh for the MT forward problem
import SimPEG as simpeg, sys
import numpy as np
from SimPEG import NSEM
import telluricpy
import matplotlib.pyplot as plt
%matplotlib inline
import copy
# Define the area of interest
bw, be = 557100, 557580
bs, bn = 7133340, 7133960
bb, bt = 0,480
```
```
# Build the mesh
# Design the tensors
hSize,vSize = 25., 10
nrCcore = [15, 8, 6, 5, 4, 2, 2, 2, 2]
hPad = simpeg.Utils.meshTensor([(hSize,10,1.5)])
hx = np.concatenate((hPad[::-1],np.ones((int((be-bw)/hSize),))*hSize,hPad))
hy = np.concatenate((hPad[::-1],np.ones((int((bn-bs)/hSize),))*hSize,hPad))
airPad = simpeg.Utils.meshTensor([(vSize,13,1.5)])
vCore = np.concatenate([ np.ones(i)*s for i, s in zip(nrCcore,(simpeg.Utils.meshTensor([(vSize,1),(vSize,8,1.3)])))])[::-1]
botPad = simpeg.Utils.meshTensor([(vCore[0],8,-1.5)])
hz = np.concatenate((botPad,vCore,airPad))
# Calculate the x0 point
x0 = np.array([bw-np.sum(hPad),bs-np.sum(hPad),bt-np.sum(vCore)-np.sum(botPad)])
# Make the mesh
meshFor = simpeg.Mesh.TensorMesh([hx,hy,hz],x0)
```
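The padding tensors above follow `meshTensor` specs of the form `(h, n, f)`: `n` cells whose widths grow geometrically from the base width `h` by the factor `f`. A standalone numpy sketch of that pattern (an illustration under that assumption, not the SimPEG implementation):

```
import numpy as np

def geometric_padding(h, n, f):
    # n padding cell widths, expanding geometrically away from the core
    return h * f ** np.arange(1, n + 1)

pad = geometric_padding(25.0, 10, 1.5)
print(len(pad), pad[0], pad.sum())
```

The total extent `pad.sum()` is what the `x0` calculation above subtracts to anchor the mesh origin.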
```
# Build the Inversion mesh
# Design the tensors
hSizeI,vSizeI = 25., 10.
nrCcoreI = [12, 6, 4, 4, 3, 3, 3, 2, 1]
hPadI = simpeg.Utils.meshTensor([(hSizeI,10,1.75)])
hxI = np.concatenate((hPadI[::-1],np.ones((int((be-bw)/hSizeI),))*hSizeI,hPadI))
hyI = np.concatenate((hPadI[::-1],np.ones((int((bn-bs)/hSizeI),))*hSizeI,hPadI))
airPadI = simpeg.Utils.meshTensor([(vSizeI,12,1.75)])
vCoreI = np.concatenate([ np.ones(i)*s for i, s in zip(nrCcoreI,(simpeg.Utils.meshTensor([(vSizeI,1),(vSizeI,8,1.3)])))])[::-1]
botPadI = simpeg.Utils.meshTensor([(vCoreI[0],8,-1.75)])
hzI = np.concatenate((botPadI,vCoreI,airPadI))
# Calculate the x0 point
x0I = np.array([bw-np.sum(hPadI),bs-np.sum(hPadI),bt-np.sum(vCoreI)-np.sum(botPadI)])
# Make the mesh
meshInv = simpeg.Mesh.TensorMesh([hxI,hyI,hzI],x0I)
meshFor = copy.deepcopy(meshInv)
NSEM.Utils.skindepth(1e2,10)
print(np.sum(vCoreI))
print(np.sum(hPadI))
print(np.sum(airPadI), np.sum(botPadI))
print(meshFor.nC)
print(meshFor)
# Save the mesh
meshFor.writeVTK('nsmesh_GKRcoarse.vtr',{'id':np.arange(meshFor.nC)})
nsvtr = telluricpy.vtkTools.io.readVTRFile('nsmesh_GKRcoarse.vtr')
nsvtr
topoSurf = telluricpy.vtkTools.polydata.normFilter(telluricpy.vtkTools.io.readVTPFile('../../Geological_model/CDED_Lake_Coarse.vtp'))
activeMod = telluricpy.vtkTools.extraction.extractDataSetWithPolygon(nsvtr,topoSurf)
topoSurf
#telluricpy.vtkTools.io.writeVTUFile('activeModel.vtu',activeMod)
# Get active indices
activeInd = telluricpy.vtkTools.dataset.getDataArray(activeMod,'id')
# Make the conductivity dictionary
# Note: using the background value for the till, since the extraction gets the ind's below the till surface
geoStructFileDict = {'Till':1e-4,
'PK1':5e-2,
'HK1':1e-3,
'VK':5e-3}
# Loop through
extP = '../../Geological_model/'
geoStructIndDict = {}
for key, val in geoStructFileDict.items():
geoPoly = telluricpy.vtkTools.polydata.normFilter(telluricpy.vtkTools.io.readVTPFile(extP+key+'.vtp'))
modStruct = telluricpy.vtkTools.extraction.extractDataSetWithPolygon(activeMod,geoPoly,extBoundaryCells=True,extInside=True,extractBounds=True)
geoStructIndDict[key] = telluricpy.vtkTools.dataset.getDataArray(modStruct,'id')
# Make the physical prop
sigma = np.ones(meshFor.nC)*1e-8
sigma[activeInd] = 1e-3 # 1e-4 is the background and 1e-3 is the till value
# Add the structure
for key in ['Till','PK1','HK1','VK']:
sigma[geoStructIndDict[key]] = geoStructFileDict[key]
fig = plt.figure(figsize=(10,10))
ax = plt.subplot(111)
model = sigma.reshape(meshFor.vnC,order='F')
a = ax.pcolormesh(meshFor.gridCC[:,0].reshape(meshFor.vnC,order='F')[:,20,:],meshFor.gridCC[:,2].reshape(meshFor.vnC,order='F')[:,20,:],np.log10(model[:,20,:]),edgecolor='k')
ax.set_xlim([bw,be])
ax.set_ylim([-0,bt])
ax.grid(which="major")
plt.colorbar(a)
ax.set_aspect("equal")
# Save the model
meshFor.writeVTK('nsmesh_GKRCoarseHKPK1.vtr',{'S/m':sigma})
import numpy as np
# Set up the forward modeling
freq = np.logspace(5,0,31)
np.save('MTfrequencies',freq)
freq
# Find the locations on the surface of the model.
# Get the outer shell of the model
import vtk
actModVTP = telluricpy.vtkTools.polydata.normFilter(telluricpy.vtkTools.extraction.geometryFilt(activeMod))
polyBox = vtk.vtkCubeSource()
polyBox.SetBounds(bw-5.,be+5,bs-5.,bn+5.,bb-5.,bt+5)
polyBox.Update()
# Extract the topo of the model
modTopoVTP = telluricpy.vtkTools.extraction.extractDataSetWithPolygon(actModVTP,telluricpy.vtkTools.polydata.normFilter(polyBox.GetOutput()),extractBounds=False)
telluricpy.vtkTools.io.writeVTPFile('topoSurf.vtp',actModVTP)
# Make the rxLocations file
x,y = np.meshgrid(np.arange(bw+12.5,be,25),np.arange(bs+12.5,bn,25))
xy = np.hstack((x.reshape(-1,1),y.reshape(-1,1)))
# Find the location array
locArr = telluricpy.modelTools.surfaceIntersect.findZofXYOnPolydata(xy,actModVTP) #modTopoVTP)
np.save('MTlocations',locArr)
telluricpy.vtkTools.io.writeVTPFile('MTloc.vtp',telluricpy.dataFiles.XYZtools.makeCylinderPtsVTP(locArr,5,10,10))
# Running the forward modelling on the Cluster.
# Define the forward run in findDiam_MTforward.py
%matplotlib qt
#sys.path.append('/home/gudni/Dropbox/code/python/MTview/')
import interactivePlotFunctions as iPf
# Load the data
mtData = np.load('MTdataStArr_nsmesh_HKPK1Coarse_noExtension.npy')
mtData
iPf.MTinteractiveMap([mtData])
# Looking at the data shows that data below 100Hz is affected by the boundary conditions,
# which makes sense for very conductive conditions as we have.
# Invert data in the 1e5-1e2 range.
```
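The `skindepth` calls in the cell above can be sanity-checked against the standard magnetotelluric approximation, delta of roughly 503 times sqrt(rho/f) metres, where rho is resistivity in ohm-m and f is frequency in Hz (assuming this is the formula `NSEM.Utils.skindepth` implements):

```
import numpy as np

def skin_depth(rho, freq):
    # delta ~= 503 * sqrt(rho / f), in metres
    return 503.0 * np.sqrt(rho / freq)

print(skin_depth(1e2, 10))
```

At 100 ohm-m and 10 Hz this gives roughly 1.6 km.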
Run the inversion on the cluster using `inv3d/run1/findDiam_inversion.py`.
```
drecAll = np.load('MTdataStArr_nsmesh_0.npy')
np.unique(drecAll['freq'])[10::]
# Build the Inversion mesh
# Design the tensors
hSizeI,vSizeI = 25., 10.
nrCcoreI = [12, 6, 6, 6, 5, 4, 3, 2, 1]
hPadI = simpeg.Utils.meshTensor([(hSizeI,9,1.5)])
hxI = np.concatenate((hPadI[::-1],np.ones((int((be-bw)/hSizeI),))*hSizeI,hPadI))
hyI = np.concatenate((hPadI[::-1],np.ones((int((bn-bs)/hSizeI),))*hSizeI,hPadI))
airPadI = simpeg.Utils.meshTensor([(vSizeI,12,1.5)])
vCoreI = np.concatenate([ np.ones(i)*s for i, s in zip(nrCcoreI,(simpeg.Utils.meshTensor([(vSizeI,1),(vSizeI,8,1.3)])))])[::-1]
botPadI = simpeg.Utils.meshTensor([(vCoreI[0],7,-1.5)])
hzI = np.concatenate((botPadI,vCoreI,airPadI))
# Calculate the x0 point
x0I = np.array([bw-np.sum(hPadI),bs-np.sum(hPadI),bt-np.sum(vCoreI)-np.sum(botPadI)])
# Make the mesh
meshInv = simpeg.Mesh.TensorMesh([hxI,hyI,hzI],x0I)
meshInv.writeVTK('nsmesh_HPVK1_inv.vtr',{'id':np.arange(meshInv.nC)})
nsInvvtr = telluricpy.vtkTools.io.readVTRFile('nsmesh_HPVK1_inv.vtr')
activeModInv = telluricpy.vtkTools.extraction.extractDataSetWithPolygon(nsInvvtr,topoSurf,extBoundaryCells=True)
sigma = np.ones(meshInv.nC)*1e-8
indAct = telluricpy.vtkTools.dataset.getDataArray(activeModInv,'id')
sigma[indAct] = 1e-4
meshInv.writeVTK('nsmesh_HPVK1_inv.vtr',{'id':np.arange(meshInv.nC),'S/m':sigma})
import pymatsolver
from pymatsolver import MumpsSolver
pymatsolver.AvailableSolvers
NSEM.Utils.skindepth(1000,100000)
np.unique(mtData['freq'])[5::]
```
## List comprehensions
A *list comprehension* is a compact way to construct a new collection by performing operations on some or all of the elements of another collection.
It is a powerful and succinct way to specify a data transformation (from one collection to another).
General form: `[ expression for-clause conditional ]`
Note that the condition is optional!
```
# *expr* *for-clause* *conditional*
new_list = [value for value in range(20) if value % 2 == 0]
print(new_list)
```
The optional condition is like other Python defaults:
```
new_list[::]
```
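Just as `[::]` falls back to Python's default start, stop, and step, a comprehension without the `if` clause keeps every element:

```
values = list(range(20))
assert values[::] == values               # slice defaults: the whole list
no_condition = [v for v in values]        # no condition: every element kept
always_true = [v for v in values if True]
print(no_condition == always_true == values)
```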
The first reference to `value` is the expression that gets added to the list; it can be any expression based on the loop variable.
```
new_list = [ value * 2 for value in range(20) ] # expression evaluated before appending to list
print(new_list)
```
The expression is only evaluated for elements that satisfy the condition:
```
new_list = [ value + 5 for value in range(20) if value > 5 ]
print(new_list)
```
A more realistic example:
```
TICKER_IDX = 0
SAL_IDX = 1
CEO_salaries = [("GE", 1), ("AMT", 3), ("AAPL", 54), ("AMZN", 2),
("FCBK", 23), ("IBM", 7), ("TED", 19), ("XRX", 4),
("ABC", 6), ("DEF", 44)]
high_salaries = [(salary[TICKER_IDX],
salary[SAL_IDX] * 1000000) # expression
for salary in CEO_salaries # for loop
if salary[SAL_IDX] >= 10] # conditional
print(high_salaries)
```
Equivalent code in a loop:
```
high_salaries = []
for salary in CEO_salaries:
if salary[SAL_IDX] >= 10:
high_salaries.append((salary[TICKER_IDX], salary[SAL_IDX] * 1000000))
print(high_salaries)
```
More complexity...
```
list_o_tuples = [(value, value**2, value**3) # expression
for value in range(16) # for loop
if value % 2 == 0 ] # conditional
print(list_o_tuples)
```
Nested loops in comprehensions:
```
GRID_WIDTH = 6
GRID_HEIGHT = 4
coords_list = [(x, y)
for x in range(GRID_WIDTH) if x > 1
for y in range(GRID_HEIGHT) if y < 2]
print(coords_list)
```
Similar loop:
```
for x in range(GRID_WIDTH):
for y in range(GRID_HEIGHT):
print("({}, {}),".format(x, y), end=' ')
```
Comprehension on a string:
```
some_string = 'this is a class about python'
new_list = [char.upper() for char in some_string if char > 'j']
print(new_list)
```
Comprehension on a dictionary:
```
game_constants = {'MAX_KNIVES': 6, 'PIRATE': "Aargh!", 'C': coords_list,
'PI': 3.14, 'SAO_PAOLO': (3, 7), 'HELLO': "Hello world!"}
string_constants = [(key, val)
for (key, val) in game_constants.items()
if isinstance(val, str)]
print(string_constants)
```
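The same comprehension syntax also builds dictionaries directly when written with braces. A small sketch of a *dict comprehension* reusing the idea above:

```
game_constants = {'MAX_KNIVES': 6, 'PI': 3.14, 'HELLO': "Hello world!"}
# {key: value ...} instead of [(key, value) ...]
string_constants = {key: val
                    for key, val in game_constants.items()
                    if isinstance(val, str)}
print(string_constants)
```

The `{key: value ...}` form produces the dictionary directly, skipping the intermediate list of tuples.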
Let's get a list of booleans marking which items in another list meet some condition:
```
target_companies = [sal[SAL_IDX] >= 10 for sal in CEO_salaries]
print(target_companies)
```
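A boolean list like this pairs naturally with `zip` to recover the matching items from the original list (a sketch reusing a shortened copy of the salary list):

```
SAL_IDX = 1
CEO_salaries = [("GE", 1), ("AAPL", 54), ("AMZN", 2), ("IBM", 7), ("TED", 19)]
target_companies = [sal[SAL_IDX] >= 10 for sal in CEO_salaries]
targets = [company for company, flag in zip(CEO_salaries, target_companies) if flag]
print(targets)
```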
# Images to Text: A Gentle Introduction to Optical Character Recognition with PyTesseract
***How to install & run the course notebooks on your own computer***
For this course, we've been working in Jupyter Notebooks hosted on [Binder](https://binder.constellate.org/). If you want to host your own materials in Binder, or host a copy of these course materials for your own research/teaching, [here's a video that explains how to do this using Github](https://www.youtube.com/watch?v=owSGVOov9pQ). You can also follow [this tutorial](https://nkelber.github.io/tapi2021/book/create-your-own.html), which explains how to create a [Jupyter Book](https://jupyterbook.org/intro.html).
If you have unstable internet access or are working with files containing sensitive information, you can follow these instructions to save and work with [the course materials](https://nkelber.github.io/tapi2021/book/courses/ocr.html) on your local computer.
Don't hesitate to reach out if you have any questions.
Here's how to install & run these course materials and your own Python/Tesseract scripts locally:
(Note that, even when working with Jupyter Notebooks offline, parts of these notebooks will require internet access at specific points to download Python libraries, interact with Tesseract, etc.)
1. Jupyter Notebooks can be created and edited on a computer with limited or no internet access through a free software package called [Anaconda Navigator](https://www.anaconda.com/products/individual). **Begin by [downloading](https://www.anaconda.com/products/individual) the version of Anaconda that matches your operating system (Windows, Mac, or Linux).** Make sure you select the version labeled "graphical installer".
2. When the software package finishes downloading, **locate and open it and follow its instructions to complete installation.** *Note that you may need administrator access to your computer to finalize installation. If you are working on a computer provided to you through your institution, check with the lab or computer administrator before attempting the install.*
3. If needed, **see [these full instructions on installing and getting started with Anaconda](https://docs.anaconda.com/anaconda/install/).**
4. Once Anaconda has been installed, **launch the Anaconda Navigator.**
<img src="images/00-intro-03.jpeg" width="70%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of Anaconda Navigator's Environments tab." title="Screen capture of Anaconda Navigator's Environments tab.">
<br/>
5. (1) Click on the "Environments" tab on the left.
6. (2) Then click the "Create" button at the bottom of the Environments window. A popup box will appear.
<img src="images/00-intro-04.jpeg" width="40%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of Anaconda Navigator's create an environment pop up window." title="Screen capture of Anaconda Navigator's create an environment pop up window.">
<br/>
7. We'll create a new environment, basically a space for your Jupyter Notebooks where your Python libraries and other settings will be saved. Give this new environment a name such as "ocr" or your project name. Note that spaces are not allowed in the name. Click "Create".
<img src="images/00-intro-05.jpeg" width="90%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of Anaconda Navigator's Python library list." title="Screen capture of Anaconda Navigator's Python library list.">
<br/>
8. You'll be returned to the Environments window. (1) Make sure that your environment name is selected in the middle panel. (2) Then, from the "Installed" dropdown menu, select "All." This will show a list of all Python libraries and modules available to be installed. (If you're not sure what a Python library is at this point, that's OK--just know that it's a package of code that can be used to extend Python's core functions--like a plug-in for a website or an add-on for software like Microsoft Word.)
9. In the search bar at right, (3) search for each of the following libraries. (4) Make sure that there is a green check mark next to each.
- pillow
- pytesseract
**NOTE: You may also need to [install Tesseract via your Command Line or Terminal using these instructions](https://pytesseract.readthedocs.io/en/latest/install.html).**
10. Back in Anaconda: (1) Return to the "Home" tab at left and (2) click "Launch" under Jupyter Notebooks:
<img src="images/00-intro-06.jpeg" width="70%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of Anaconda Navigator's Python library list." title="Screen capture of Anaconda Navigator's Python library list.">
<br/>
11. Jupyter Notebooks will launch in your default browser. It will look something like this:
<img src="images/00-intro-07.jpeg" width="70%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of Jupyter Notebooks launch in browser." title="Screen capture of Jupyter Notebooks launch in browser.">
<br/>
12. A Terminal or Console window will also launch. Keep this window and Anaconda Navigator open in the background while you work in your browser window!
<img src="images/00-intro-08.jpeg" width="70%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of terminal window opened by Anaconda for running Jupyter Notebooks." title="Screen capture of terminal window opened by Anaconda for running Jupyter Notebooks.">
<br/>
13. In a new browser tab, navigate to the <mark style="background-color:orange">course Github repository.</mark>
14. **If you use Github** and have Github Desktop installed or know how to use the command line (Terminal) to access repositories, use your preferred method to clone the repository to your computer.
15. **If you do not use Github** and/or are not sure how to clone a Github repository, click the green "Code" button and then click "Download Zip." When asked, save the .zip file to an easily accessible location, such as your Desktop.
<img src="images/00-intro-09.jpeg" width="50%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of the download code button on Github." title="Screen capture of the download code button on Github.">
<br/>
16. Navigate to the .zip file's location and (on a Mac) double click on it to extract the contents, or (on Windows) right-click and select "Extract All". When you've successfully unzipped the file, you should see a folder containing the Github repository contents.
<img src="images/00-intro-10.jpeg" width="50%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of the On The Books Github repository unzipped on a local computer." title="Screen capture of the On The Books Github repository unzipped on a local computer.">
<br/>
17. Back in your browser in the Jupyter Notebooks tab, locate the folder where you just downloaded the Github repository to your computer. If you saved it to the Desktop, find the Desktop folder, click to open, then click to open the Github folder.
<img src="images/00-intro-11.jpg" width="70%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of the On The Books Github repository in Jupyter Notebooks on a local computer." title="Screen capture of the On The Books Github repository in Jupyter Notebooks on a local computer.">
<br/>
18. Click on any notebook (labeled with the .ipynb file extension) to open a notebook and get started.
<img src="images/00-intro-12.jpg" width="70%" style="padding-top:20px; box-shadow: 25px 25px 20px -30px rgba(0, 0, 0);" alt="Screen capture of the On The Books Github oer folder in Jupyter Notebooks on a local computer." title="Screen capture of the On The Books Github oer folder in Jupyter Notebooks on a local computer.">
<br/>
Now it's time to get started!
```
import torchaudio as ta
import torch
from torch.utils.data import DataLoader
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd.profiler as profiler
# import pytorch_lightning as pl
import numpy as np
import os
import IPython.display as ipd
import math
import glob
from tqdm.auto import tqdm
from python_files.Noise_Reduction_Datagen import Signal_Synthesis_DataGen
import warnings
warnings.filterwarnings("ignore")
import gc
# from numba import jit
noise_dir = "./dataset/UrbanSound8K-Resampled/audio/"
signal_dir = "./dataset/cv-corpus-5.1-2020-06-22-Resampled/en/clips/"
signal_nums_save = "./dataset_loader_files/signal_paths_nums_save.npy"
num_noise_samples=100
num_signal_samples = 100
noise_save_path = "./dataset_loader_files/noise_paths_resampled_save.npy"
n_fft=512
win_length=n_fft
hop_len=n_fft//4
create_specgram = False
perform_stft = False
default_sr = 16000
sec = 9
augment=True
device_datagen = "cpu"
signal_synthesis_dataset = Signal_Synthesis_DataGen(noise_dir, signal_dir, \
signal_nums_save=signal_nums_save, num_noise_samples=num_noise_samples, \
num_signal_samples=num_signal_samples, noise_path_save=noise_save_path, \
n_fft=n_fft, win_length=win_length, hop_len=hop_len, create_specgram=create_specgram, \
perform_stft=perform_stft, normalize=True, default_sr=default_sr, sec=sec, epsilon=1e-5, augment=False, device=device_datagen)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
BATCH_SIZE = 3
shuffle = True
num_workers = 8
pin_memory = False
# data_loader = DataLoader(signal_synthesis_dataset, batch_size=BATCH_SIZE, shuffle=shuffle, num_workers=num_workers)
data_loader = DataLoader(signal_synthesis_dataset, batch_size=BATCH_SIZE, shuffle=shuffle, num_workers=num_workers, pin_memory=pin_memory)
# data_loader.to(device)
signal_synthesis_dataset.__len__()
# %%time
# # data_loader_iter = iter(data_loader)
# for index, i in enumerate(data_loader):
# # i = next(data_loader)
# if index < 32-1:
# pass
# else:
# break
# print(i[1].shape,i[0].min(), i[0].max(), index)
# %%time
# stft_sig = torch.stft(i[0], n_fft=n_fft, hop_length=hop_len, win_length=win_length)
# i[1].max()
# nan_sig = signal_synthesis_dataset.__getitem__(119)
# nan_sig[0].max(), nan_sig[1].min()
# stft_sig.max()
# def normalize(tensor):
# tensor_minusmean = tensor - tensor.min()
# return tensor_minusmean/tensor_minusmean.abs().max()
# aud = i[0][0]
# aud.dtype
# aud.max(), aud.min()
# x = aud.t().to("cpu").numpy()
# ipd.Audio(x, rate=default_sr)
# sig1 = i[0].unsqueeze(dim=1)
# sig2 = i[0].unsqueeze(dim=1)
# stacked_sig = torch.cat((sig1, sig2), dim=1)
# sig2 = i[0].unsqueeze(dim=1)
# sig2.shape
# torch.sum(stacked_sig, dim=1).shape
# np.floor(((default_sr*sec) - (win_length - 1) - 1)/ hop_len + 5)
# n_fft // 2 + 1
# n_fft // 2 + 1
class InstantLayerNormalization(nn.Module):
def __init__(self, in_shape, out_shape):
self.in_shape = in_shape
self.out_shape = out_shape
self.epsilon = 1e-7
self.gamma = None
self.beta = None
super(InstantLayerNormalization, self).__init__()
self.gamma = torch.ones(out_shape)
self.gamma = nn.Parameter(self.gamma)
self.beta = torch.zeros(out_shape)
self.beta = nn.Parameter(self.beta)
def forward(self, inps):
mean = inps.mean(-1, keepdim=True)
variance = torch.mean(torch.square(inps - mean), dim=-1, keepdim=True)
std = torch.sqrt(variance + self.epsilon)
outs = (inps - mean) / std
print(outs.shape, self.gamma.shape)
outs = outs * self.gamma
outs = outs + self.beta
return outs
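# Quick numpy sanity check of the normalization above (illustration only):
# per feature vector the layer computes (x - mean) / sqrt(var + eps),
# then scales by gamma and shifts by beta.
import numpy as np
_x = np.random.randn(2, 4, 8)
_mean = _x.mean(axis=-1, keepdims=True)
_var = ((_x - _mean) ** 2).mean(axis=-1, keepdims=True)
_normed = (_x - _mean) / np.sqrt(_var + 1e-7)  # gamma = 1, beta = 0
assert np.allclose(_normed.mean(axis=-1), 0.0, atol=1e-6)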
class Multiply(nn.Module):
def __init__(self):
super(Multiply, self).__init__()
def forward(self, ten1, ten2):
mul_out = torch.mul(ten1, ten2)
return mul_out
class NoiseReducer(nn.Module):
def __init__(self, default_sr, n_fft, win_length, hop_len, sec, dropout=0.5, batch_first=True, stride=2, normalized=False, bidir=False):
self.default_sr = default_sr
self.n_fft = n_fft
self.win_length = win_length
self.hop_len = hop_len
self.sec = sec
self.normalized = normalized
self.conv_filters = 256
# Universal LSTM Units
self.batch_first = True
self.dropout = 0.25
self.bidir = bidir
# LSTM 1 UNITS
self.rnn1_dims = n_fft // 2 + 1
self.hidden_size_1 = 128
self.num_layers = 2
# LSTM 2 UNITS
self.rnn2_dims = self.conv_filters
self.hidden_size_2 = self.hidden_size_1
# Conv1d Layer Units
self.conv1_in = 1
self.conv1_out = self.conv_filters
# InstanceNorm Layer Units
self.instance1_in = self.rnn1_dims
self.instance2_in = self.conv1_out
# Dense1 Layer Units
self.dense1_in = self.hidden_size_2
self.dense1_out = self.rnn1_dims #int(np.floor(((default_sr*sec) - (win_length - 1) - 1)/ hop_len + 5))#3))
# Dense2 Layer Units
self.dense2_in = self.hidden_size_2
self.dense2_out = self.conv1_out
# Conv2d Layer Units
self.conv2_in = self.dense2_out
self.conv2_out = self.conv_filters
super(NoiseReducer, self).__init__()
self.lstm1 = nn.LSTM(input_size=self.rnn1_dims, hidden_size=self.hidden_size_1, num_layers=self.num_layers, batch_first=self.batch_first, dropout=self.dropout, bidirectional=self.bidir)
print(self.rnn2_dims)
self.lstm2 = nn.LSTM(input_size=self.rnn2_dims, hidden_size=self.hidden_size_2, num_layers=self.num_layers, batch_first=self.batch_first, dropout=self.dropout, bidirectional=self.bidir)
self.instancenorm1 = nn.InstanceNorm1d(self.rnn1_dims)
self.instancenorm2 = nn.InstanceNorm1d(self.rnn2_dims)
self.dense1 = nn.Linear(self.dense1_in, self.dense1_out)
self.dense2 = nn.Linear(self.dense2_in, self.dense2_out)
self.conv1 = nn.Conv1d(self.conv1_in, self.conv1_out, kernel_size=3, stride=1, padding=1)
self.conv2 = nn.Conv1d(self.conv2_in, self.conv2_out, kernel_size=3, stride=1, padding=1)
@torch.jit.export
def stft_layer(self, sig):
sig_stft = torch.stft(sig, n_fft=self.n_fft, hop_length=self.hop_len, win_length=self.win_length)
sig_cplx = torch.view_as_complex(sig_stft)
mag = sig_cplx.abs().permute(0, 2, 1)
angle = sig_cplx.angle().permute(0, 2, 1)
return [mag, angle]
@torch.jit.export
def istft_layer(self, mag, angle):
mag = mag.permute(0, 2, 1)
angle = angle.permute(0, 2, 1)
mag = torch.unsqueeze(mag, dim=-1)
angle = torch.unsqueeze(angle, dim=-1)
pre_stft = torch.cat((mag, angle), dim=-1)
stft_sig = torch.istft(pre_stft, n_fft=self.n_fft, win_length=self.win_length, hop_length=self.hop_len)
return stft_sig
def forward(self, inp_tensor):
mag, angle = self.stft_layer(inp_tensor)
mag_norm = self.instancenorm1(mag)
x, hidden_states = self.lstm1(mag_norm)
mask = self.dense1(x)
estimated_mag = torch.mul(mag, mask)
signal = self.istft_layer(estimated_mag, angle)
signal = signal.unsqueeze(dim=1)
feature_rep = self.conv1(signal)
feature_norm = self.instancenorm2(feature_rep)
feature_norm = feature_norm.permute(0, 2, 1)
x, hidden_states = self.lstm2(feature_norm)
mask = self.dense2(x)
feature_mask = torch.sigmoid(mask)
feature_mask = feature_mask.permute(0, 2, 1)
estimate_feat = torch.mul(feature_rep, feature_mask)
estimate_frames = self.conv2(estimate_feat)
estimate_sig = torch.sum(estimate_frames, dim=1)
return estimate_sig
scripted_model = torch.jit.script(NoiseReducer(default_sr, n_fft, win_length, hop_len, sec).to(device))
# scripted_model.code
class Negative_SNR_Loss(nn.Module):
def __init__(self):
super(Negative_SNR_Loss, self).__init__()
def forward(self, sig_pred, sig_true):
sig_true_sq = torch.square(sig_true)
sig_pred_sq = torch.square(sig_true - sig_pred)
sig_true_mean = torch.mean(sig_true_sq)
sig_pred_mean = torch.mean(sig_pred_sq)
snr = sig_true_mean / (sig_pred_mean + 1e-7)
loss = -1*torch.log10(snr)
return loss
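# Numpy sketch of the loss above (illustration only): negative log10 of the
# ratio of signal power to error power, with epsilon guarding the denominator.
import numpy as np
_true = np.array([1.0, 2.0, 3.0])
_pred = np.array([1.1, 1.9, 3.2])
_snr = np.mean(_true ** 2) / (np.mean((_true - _pred) ** 2) + 1e-7)
_neg_snr_loss = -np.log10(_snr)  # better predictions -> higher SNR -> lower loss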
use_scripted_model = True
if not use_scripted_model:
print("Using Primary model")
model = NoiseReducer(default_sr=default_sr, n_fft=n_fft, win_length=win_length, hop_len=hop_len, sec=sec).to(device)
model.to(device)
else:
print("Using Scripted Model")
model = scripted_model
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
criterion = Negative_SNR_Loss()
n_epochs=100
model.train()
scaler = torch.cuda.amp.GradScaler()
# outs = model(fake_inputs)
# outs.shape
# del model, fake_inputs
# gc.collect()
# torch.cuda.empty_cache()
# fake_inputs = torch.randn(BATCH_SIZE, int(default_sr*sec)).to(device)
# with profiler.profile(profile_memory=True, record_shapes=False, use_cuda=True) as prof:
# scripted_model(fake_inputs)
# # model(fake_inputs)
# print(prof.key_averages().table(sort_by="cuda_time_total"))
# model.load_state_dict(torch.load("Pytorch_model_save_scripted.pt"))
for epoch in range(1, n_epochs+1):
loop = tqdm(enumerate(data_loader), leave=True, total=len(data_loader))
train_loss = np.zeros((len(data_loader)))
for index, (data, target) in loop:
data = data.type(torch.float32).to(device)
target = target.type(torch.float32).to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
train_loss[index] = loss.item()
if np.isnan(loss.item()) or np.isnan(np.sum(train_loss)/(index+1)):
print(f"Data shape = {data.shape}\nTarget Shape = {target.shape}, \nindex = {index}")
loop.set_description(f"Epoch: [{epoch}/{n_epochs}]\t")
loop.set_postfix(loss = np.sum(train_loss)/(index+1))
torch.save(scripted_model.state_dict(), "./Pytorch_model_save_scripted.pt")
# del model
# gc.collect()
# torch.cuda.empty_cache()
noise_add_sig, main_sig = signal_synthesis_dataset.__getitem__(1000)
noise_add_sig = torch.unsqueeze(noise_add_sig, dim=0).to(device)
main_sig = torch.unsqueeze(main_sig, dim=0).to(device)
%%time
with torch.no_grad():
outs = model(noise_add_sig)
outs.shape
x = outs[0].t().to("cpu").numpy()
ipd.Audio(x, rate=default_sr)
x = noise_add_sig[0].t().to("cpu").numpy()
ipd.Audio(x, rate=default_sr)
x = main_sig[0].t().to("cpu").numpy()
ipd.Audio(x, rate=default_sr)
```
```
from google.colab import drive
drive.mount('/content/drive')
path = '/content/drive/MyDrive/Research/AAAI/dataset1/second_layer_with_entropy/'
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))
color = ['#1F77B4','orange', 'g','brown']
name = [1,2,3,0]
for i in range(10):
if i==3:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[3],label="D_"+str(name[i]))
elif i>=4:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[3])
else:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[i],label="D_"+str(name[i]))
plt.legend()
x[idx[0]][0], x[idx[5]][5]
desired_num = 6000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(a)
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
len(mosaic_list_of_images), mosaic_list_of_images[0]
```
# load mosaic data
```
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
batch = 250
msd1 = MosaicDataset(mosaic_list_of_images[0:3000], mosaic_label[0:3000] , fore_idx[0:3000])
train_loader = DataLoader( msd1 ,batch_size= batch ,shuffle=True)
batch = 250
msd2 = MosaicDataset(mosaic_list_of_images[3000:6000], mosaic_label[3000:6000] , fore_idx[3000:6000])
test_loader = DataLoader( msd2 ,batch_size= batch ,shuffle=True)
```
# models
```
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50, bias=False) #,self.output)
self.linear2 = nn.Linear(50,50 , bias=False)
self.linear3 = nn.Linear(50,self.output, bias=False)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.xavier_normal_(self.linear3.weight)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,50], dtype=torch.float64) # number of features of output
features = torch.zeros([batch,self.K,50],dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
features = features.to("cuda")
for i in range(self.K):
alp,ftrs = self.helper(z[:,i] ) # self.d*i:self.d*i+self.d
x[:,i] = alp[:,0]
features[:,i] = ftrs
log_x = F.log_softmax(x,dim=1)
x = F.softmax(x,dim=1) # alphas
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],features[:,i]) # self.d*i:self.d*i+self.d
return y , x ,log_x
def helper(self,x):
x = self.linear1(x)
x = F.relu(x)
x = self.linear2(x)
        x1 = torch.tanh(x)  # F.tanh is deprecated; torch.tanh is equivalent
x = F.relu(x)
x = self.linear3(x)
#print(x1.shape)
return x,x1
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,50)
#self.linear2 = nn.Linear(6,12)
self.linear2 = nn.Linear(50,self.output)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
# torch.manual_seed(12)
# focus_net = Focus_deep(2,1,9,2).double()
# focus_net = focus_net.to("cuda")
# focus_net.linear2.weight.shape,focus_net.linear3.weight.shape
# focus_net.linear2.weight.data[25:,:] = focus_net.linear2.weight.data[:25,:] #torch.nn.Parameter(torch.tensor([last_layer]) )
# (focus_net.linear2.weight[:25,:]== focus_net.linear2.weight[25:,:] )
# focus_net.linear3.weight.data[:,25:] = -focus_net.linear3.weight.data[:,:25] #torch.nn.Parameter(torch.tensor([last_layer]) )
# focus_net.linear3.weight
# focus_net.helper( torch.randn((5,2,2)).double().to("cuda") )
criterion = nn.CrossEntropyLoss()
def my_cross_entropy(x, y,alpha,log_alpha,k):
# log_prob = -1.0 * F.log_softmax(x, 1)
# loss = log_prob.gather(1, y.unsqueeze(1))
# loss = loss.mean()
loss = criterion(x,y)
#alpha = torch.clamp(alpha,min=1e-10)
b = -1.0* alpha * log_alpha
b = torch.mean(torch.sum(b,dim=1))
closs = loss
entropy = b
loss = (1-k)*loss + ((k)*b)
return loss,closs,entropy
def calculate_attn_loss(dataloader,what,where,criter,k):
what.eval()
where.eval()
r_loss = 0
cc_loss = 0
cc_entropy = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha,log_alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
#ent = np.sum(entropy(alpha.cpu().detach().numpy(), base=2, axis=1))/batch
# mx,_ = torch.max(alpha,1)
# entropy = np.mean(-np.log2(mx.cpu().detach().numpy()))
# print("entropy of batch", entropy)
#loss = (1-k)*criter(outputs, labels) + k*ent
loss,closs,entropy = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
r_loss += loss.item()
cc_loss += closs.item()
cc_entropy += entropy.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
    # average over all i+1 batches (enumerate starts at 0)
    return r_loss/(i+1), cc_loss/(i+1), cc_entropy/(i+1), analysis
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
```
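The combined objective in `my_cross_entropy` above blends the classification loss with the mean entropy of the attention weights, loss = (1-k)·CE + k·H(α), so a small `k` lightly penalizes diffuse attention. A minimal pure-Python sketch of that blend (toy numbers, no torch; the function name is ours, not part of the notebook's API):

```
import math

def entropy_regularized_loss(ce_loss, alphas, k):
    """Blend a cross-entropy value with the mean entropy of attention
    rows: (1 - k) * CE + k * H(alpha), mirroring my_cross_entropy above."""
    # H(alpha) = -sum_i alpha_i * log(alpha_i), averaged over the batch
    entropies = [-sum(a * math.log(a) for a in row if a > 0) for row in alphas]
    h = sum(entropies) / len(entropies)
    return (1 - k) * ce_loss + k * h

uniform = [[0.25, 0.25, 0.25, 0.25]]   # maximally diffuse attention
one_hot = [[1.0, 0.0, 0.0, 0.0]]       # fully focused attention
print(entropy_regularized_loss(1.0, uniform, 0.001))
print(entropy_regularized_loss(1.0, one_hot, 0.001))
```

A one-hot attention row has zero entropy, so only the cross-entropy term survives; a uniform row pays the maximal entropy penalty, which is what pushes the focus network toward confident patch selection.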
# training
```
number_runs = 10
full_analysis =[]
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
k = 0.001
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(2,1,9,2).double()
where.linear2.weight.data[25:,:] = where.linear2.weight.data[:25,:]
where.linear3.weight.data[:,25:] = -where.linear3.weight.data[:,:25]
where = where.double().to("cuda")
ex,_ = where.helper( torch.randn((5,2,2)).double().to("cuda"))
print(ex)
torch.manual_seed(n)
what = Classification_deep(50,3).double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.001)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)
#criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 2000
# calculate zeroth epoch loss and FTPT values
running_loss ,_,_,anlys_data= calculate_attn_loss(train_loader,what,where,criterion,k)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha,log_alpha = where(inputs)
outputs = what(avg)
my_loss,_,_ = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
# print statistics
running_loss += my_loss.item()
my_loss.backward()
optimizer_where.step()
optimizer_what.step()
#break
running_loss,ccloss,ccentropy,anls_data = calculate_attn_loss(train_loader,what,where,criterion,k)
analysis_data.append(anls_data)
if(epoch % 200==0):
print('epoch: [%d] loss: %.3f celoss: %.3f entropy: %.3f' %(epoch + 1,running_loss,ccloss,ccentropy))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
print('Finished Training run ' +str(n))
#break
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
full_analysis.append((epoch, analysis_data))
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha,log_alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 test images: %f %%' % ( 100 * correct / total))
print(np.mean(np.array(FTPT_analysis),axis=0)) #[7.42700000e+01 2.44100000e+01 7.33333333e-02 1.24666667e+00]
FTPT_analysis
cnt=1
for epoch, analysis_data in full_analysis:
analysis_data = np.array(analysis_data)
# print("="*20+"run ",cnt,"="*20)
plt.figure(figsize=(6,5))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0]/30,label="FTPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1]/30,label="FFPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2]/30,label="FTPF")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3]/30,label="FFPF")
plt.title("Training trends for run "+str(cnt))
plt.grid()
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.legend()
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("percentage train data", fontsize=14, fontweight = 'bold')
plt.savefig(path + "run"+str(cnt)+".png",bbox_inches="tight")
plt.savefig(path + "run"+str(cnt)+".pdf",bbox_inches="tight")
cnt+=1
FTPT_analysis.to_csv(path+"synthetic_zeroth.csv",index=False)
```
```
import pandas as pd
import janitor as jn
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from utils import ECDF
import arviz as az
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
# Darwin's Finches
A research group has taken measurements of the descendants of the finches that Charles Darwin observed when he postulated the theory of evolution.
We will be using Bayesian methods to analyze this data, specifically answering the question of how quantitatively different two species of birds' beaks are.
## Data Credits
The Darwin's finches datasets come from the paper, [40 years of evolution. Darwin's finches on Daphne Major Island][data].
One row of data has been added for pedagogical purposes.
[data]: https://datadryad.org/resource/doi:10.5061/dryad.g6g3h
Let's get started and load the data.
```
from data import load_finches_2012
df = load_finches_2012()
```
**Exercise:** View a random sample of the data to get a feel for the structure of the dataset.
```
# Your code below
```
**Note:** I have added one row of data, simulating the discovery of an "unknown" species of finch for which beak measurements have been taken.
For pedagogical brevity, we will analyze only beak depth during the class. However, I would encourage you to perform a similar analysis for beak length as well.
```
# These are filters that we can use later on.
fortis_filter = df['species'] == 'fortis'
scandens_filter = df['species'] == 'scandens'
unknown_filter = df['species'] == 'unknown'
```
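Fancy indexing, which the modeling exercise asks you to practice, can be warmed up on toy arrays (hypothetical values, not the finch measurements):

```
import numpy as np

# Toy stand-ins for the real columns.
species = np.array(['fortis', 'scandens', 'fortis', 'unknown', 'scandens'])
beak_depth = np.array([9.1, 8.7, 9.4, 8.9, 8.5])

# Boolean masks select rows, exactly like the pandas filters above.
fortis_mask = species == 'fortis'
print(beak_depth[fortis_mask])  # depths of fortis birds only

# Integer "fancy" indexing: map each species name to an index 0/1/2,
# then use that index array to pick a per-observation group parameter.
names, group_idx = np.unique(species, return_inverse=True)
group_means = np.array([9.2, 8.6, 8.8])  # one value per group
print(group_means[group_idx])            # broadcast back to observations
```

The last line is the trick used in hierarchical models: a small per-group parameter vector is expanded to one value per observation by indexing with the group index array.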
**Exercise:** Recreate the estimation model for finch beak depths. A few things to note:
- Practice using numpy-like fancy indexing.
- Difference of means & effect size are optional.
- Feel free to play around with other priors.
A visual representation of the model using distribution diagrams is as follows:

```
with pm.Model() as beak_depth_model:
# Your model defined here.
```
**Exercise:** Perform MCMC sampling to estimate the posterior distribution of each parameter.
```
# Your code below.
```
**Exercise:** Diagnose whether the sampling has converged or not using trace plots.
```
# Your code below.
```
**Exercise:** Visualize the posterior distribution over the parameters using the forest plot.
```
# Your code below.
```
**Discuss:**
- Is the posterior distribution of beaks for the unknown species reasonable?
**Exercise:** Perform a posterior predictive check to visually diagnose whether the model describes the data generating process well or not.
```
samples = pm.sample_ppc(trace, model=beak_depth_model, samples=2000)
```
Hint: Each column in the samples (key: "likelihood") corresponds to simulated measurements of each finch in the dataset. We can use fancy indexing along the columns (axis 1) to select out simulated measurements for each category, and then flatten the resultant array to get the full estimated distribution of values for each class.
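The `ECDF` helper imported from `utils` is not shown here; below is a minimal stand-in, assuming it returns the sorted values together with their cumulative fractions (adjust if the course helper differs):

```
import numpy as np

def ecdf(data):
    """Empirical CDF: sorted values x and cumulative fractions y.
    A stand-in for utils.ECDF; the interface is an assumption."""
    x = np.sort(np.asarray(data).ravel())
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# e.g. for the fortis column selection described in the hint:
# x_s, y_s = ecdf(samples['likelihood'][:, fortis_filter.values])
```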
```
fig = plt.figure()
ax_fortis = fig.add_subplot(2, 1, 1)
ax_scandens = fig.add_subplot(2, 1, 2, sharex=ax_fortis)
# Extract just the fortis samples.
# Compute the ECDF for the fortis samples.
ax_fortis.plot(x_s, y_s, label='samples')
# Extract just the fortis measurements.
# Compute the ECDF for the fortis measurements.
ax_fortis.plot(x, y, label='data')
ax_fortis.legend()
ax_fortis.set_title('fortis')
# Extract just the scandens samples.
# Compute the ECDF for the scandens samples
ax_scandens.plot(x_s, y_s, label='samples')
# Extract just the scandens measurements.
# Compute the ECDF for the scandens measurements.
ax_scandens.plot(x, y, label='data')
ax_scandens.legend()
ax_scandens.set_title('scandens')
ax_scandens.set_xlabel('beak depth')
plt.tight_layout()
```
## Summary
1. NumPy-like fancy indexing lets us write models in a concise fashion.
1. Posterior estimates can show up as being "unreasonable", "absurd", or at the minimum, counter-intuitive, if we do not impose the right set of assumptions on the model.
Trees and diversity estimates for molecular markers. Env from 62_phylo_reduced.
After first run and evaluation, manually resolve problems with selected sequences (on plate fasta level):
- VBS00055 (aconitus) its2 trimmed 3' - weak signal, multipeaks
- VBS00021,VBS00022,VBS00023 (barbirostris) its2 re-trimmed to the same length - sequences too long, weak signal after 1000 bases
- VBS00024 (barbirostris) its2 removed - weak signal, multipeaks
- marshalli mis-identifications - remove all samples from analysis
- VBS00059, VBS00061 (minimus) coi trimmed 5' - retained variation is true
- VBS00145 (sundaicus) is rampae according to its2 and ampseq - removed from analysis
- vin.M0004, vin.B0009, vin.M0009 its2 removed - problem with amplification after ca 60 bases
After finalisation of sequences, need to re-run BLAST and 1_blast.ipynb
Higher than expected variation, left without changes:
- carnevalei its2 - legitimately highly variable
- coustani, tenebrosus, ziemannii - highly variable unresolved group with two branches according to all markers
- hyrcanus - two distinct lineages - VBS00082, VBS00083 and VBS00085,VBS00086 according to all markers
- nili - two distinct lineages - nil.237, nil.239 and nil.233, nil.236, nil.238 according to all markers
- paludis: Apal-81 within coustani group, while Apal-257 is an outgroup according to all markers
- VBS00113 (rampae) coi groups with maculatus (closest relative), while its2 and ampseq group it with rampae
- sundaicus coi - legitimately highly variable
- theileri its2 - multipeaks in the middle of all samples result in several mis-called bases; do not change
- vagus coi - legitimately highly variable
Other:
- brohieri, hancocki, demeilloni - single group of closely related individuals according to all markers
```
import os
from Bio import AlignIO, SeqIO
from Bio.Phylo.TreeConstruction import DistanceCalculator
from collections import defaultdict
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import ete3
# in
WD = '../../../data/phylo_ampl_dada2/coi_its2/work/'
PRD = '../../../data/phylo_ampl_dada2/phylo_reduced'
OD = 'data'
META = os.path.join(OD, 'species_predictions.csv')
FA = os.path.join(WD, 'seqman_fa/plate{}.fas')
AMPL_ALN = os.path.join(PRD,'aln_all/{}.fa')
AMPL_HAP = os.path.join(PRD,'0_haplotypes.csv')
TAXONOMY = '../7_species_id/data/0_taxonomy.csv'
# out
ALN = os.path.join(WD, 'phylo/{}.aln.fas')
TREE = os.path.join(OD, '{}.nwk')
TREE_FIG = os.path.join(OD, '{}.pdf')
TREE_FIG_UNFILTERED = os.path.join(WD, 'phylo/{}.all.pdf')
DIVERSITY = os.path.join(OD, 'diversity.csv')
! mkdir -p {os.path.join(WD, 'phylo')}
! mkdir -p {OD}
# mapping of markers used and plates
MARKER_PLATES = {
'coi':[1,2],
'its2':[3,4]
}
# samples with conflicting phylogenetic positions - to be removed from diversity estimates
CONFLICT_SAMPLES = ['Amar-3-1','Amar-42','Amar-5','VBS00145']
# minimum number of samples per species to be used in diversity plotting
MIN_SAMPLES_DIV = 2
# species labels containing multiple - hard filtering
MULTISP_LABELS = ['brohieri', 'hancocki', 'demeilloni',
'coustani','tenebrosus','ziemanni',
'hyrcanus','nili','paludis']
# species that are overly diverged - softer filtering
BAD_SPP = ['hyrcanus','nili','paludis']
MULTISP_LABELS = ['Anopheles_'+l for l in MULTISP_LABELS]
BAD_SPP = ['Anopheles_'+l for l in BAD_SPP]
```
## Metadata
```
meta = pd.read_csv(META, index_col=0)
meta.shape
# metadata table for use in diversity estimates - only include ampseq'd and non-conflicting samples
# remove non-ampseq'd
ampseq_meta = meta[~meta.ampseq_species.isna()]
# remove conflicting samples
# ampseq_meta = ampseq_meta[~ampseq_meta.index.isin(CONFLICT_SAMPLES)]
display(ampseq_meta.shape)
ampseq_meta.head(1)
# basic stats
meta[~meta.ampseq_species.isna()].COI_length.mean(), meta[~meta.ampseq_species.isna()].COI_length.std()
```
## Alignment and phylogeny
```
for marker, plates in MARKER_PLATES.items():
! cat {FA.format(plates[0])} {FA.format(plates[1])} | mafft - > {ALN.format(marker)}
for marker in MARKER_PLATES:
! fasttree -nt {ALN.format(marker)} > {TREE.format(marker)}
# draw trees
for marker in MARKER_PLATES:
t = ete3.Tree(TREE.format(marker))
# set outgroup to implexus
outgroups = []
for leaf in t:
if leaf.name.startswith('imp'):# or leaf.name.startswith('cou'):
outgroups.append(leaf.name)
# print(outgroups)
if len(outgroups) > 1:
t.set_outgroup(t.get_common_ancestor(*outgroups))
elif len(outgroups) == 1:
t.set_outgroup(outgroups[0])
# style
t.ladderize(direction=1)
ns = ete3.NodeStyle(size=0)
for n in t.traverse():
n.set_style(ns)
t.render(TREE_FIG_UNFILTERED.format(marker));
# remove non-ampseq samples from tree
pruned_taxa = [leaf.name for leaf in t if leaf.name in ampseq_meta[marker.upper() + '_seqid'].to_list()]
print(pruned_taxa)
t.prune(pruned_taxa)
t.render(TREE_FIG.format(marker));
'col.554_D08-ITS2A.ab1' in ampseq_meta.COI_seqid.to_list()
```
## Diversity estimates for Sanger sequencing
```
# variable sites
def distances(aln, seqids):
s_aln = [seq for seq in aln if seq.name in seqids]
s_aln = AlignIO.MultipleSeqAlignment(s_aln)
aln_len = len(s_aln[0].seq)
dist_matrix = DistanceCalculator('identity').get_distance(s_aln)
dists = []
for i, d in enumerate(dist_matrix):
dists.extend(d[:i])
dists = [int(d * aln_len) for d in dists]
return dists
def aln_stats(aln, seqids, verbose=False):
var_sites = 0
aln_len = 0
if len(seqids) == 0:
return var_sites, aln_len
# subset alignment to seqids
s_aln = [seq for seq in aln if seq.name in seqids]
s_aln = AlignIO.MultipleSeqAlignment(s_aln)
# iterate over alignment columns
for i in range(len(s_aln[0].seq)):
chars = s_aln[:,i]
charset = set(chars)
# any aligned bases
if charset != set('-'):
aln_len += 1
# any variable bases
if len(charset - set('-')) > 1:
if verbose:
print(i, chars)
var_sites += 1
return var_sites, aln_len
aln_div = defaultdict(dict)
for marker in MARKER_PLATES:
marker_aln = AlignIO.read(ALN.format(marker), format='fasta')
marker_aln = list(marker_aln)
for species, sp_meta in ampseq_meta.groupby('partner_species'):
# remove conflicting samples
sp_meta = sp_meta[~sp_meta.index.isin(CONFLICT_SAMPLES)]
# subset samples
sp_marker_samples = sp_meta[marker.upper()+'_seqid'].dropna().to_list()
# debug
if species == 'Anopheles_marshallii' and marker=='its2':
print(sp_marker_samples)
print(aln_stats(marker_aln, sp_marker_samples, verbose=True))
# skip small datasets
#if len(sp_marker_samples) < MIN_SAMPLES_DIV:
# continue
v, l = aln_stats(marker_aln, sp_marker_samples)
aln_div[marker + '_len'][species] = l
aln_div[marker + '_var'][species] = v
aln_div[marker + '_nseq'][species] = len(sp_marker_samples)
#aln_div[marker + '_samples'][species] = sp_marker_samples
#aln_div[marker + '_dists'][species] = distances(marker_aln, sp_marker_samples)
aln_div = pd.DataFrame(aln_div)
aln_div.head()
```
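The column-wise counting in `aln_stats` above can be sanity-checked on plain strings, without Biopython alignment objects (toy sequences, purely illustrative):

```
# Toy alignment as plain strings (rows = sequences, gaps as '-'); this mirrors
# the column loop in aln_stats without needing MultipleSeqAlignment objects.
toy_aln = [
    "ACGT-",
    "ACTT-",
    "AC-TA",
]

var_sites, aln_len = 0, 0
for i in range(len(toy_aln[0])):
    chars = {row[i] for row in toy_aln}
    if chars != {'-'}:              # column has at least one aligned base
        aln_len += 1
        if len(chars - {'-'}) > 1:  # more than one distinct base -> variable
            var_sites += 1
print(var_sites, aln_len)
```

Here column 3 (G/T/gap) is the only variable site, and every column except none is counted in the alignment length, so the expected result is one variable site over five aligned columns.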
## Diversity estimates for ampseq
Use alignments generated for 62_phylo_reduced
```
ampl_haps = pd.read_csv(AMPL_HAP)
ampl_haps.head(1)
ampseq_var = defaultdict(dict)
ampseq_len = defaultdict(dict)
ampseq_nseq = dict()
for species, sp_meta in ampseq_meta.groupby('partner_species'):
# remove conflicting samples
sp_meta = sp_meta[~sp_meta.index.isin(CONFLICT_SAMPLES)]
# subset samples
sp_marker_samples = sp_meta.index.dropna().to_list()
# skip small datasets
#if len(sp_marker_aln) < MIN_SAMPLES_DIV:
# continue
ampseq_nseq[species] = len(sp_marker_samples)
for target in range(62):
target = str(target)
st_uids = ampl_haps.loc[ampl_haps.s_Sample.isin(sp_marker_samples)
& (ampl_haps.target == target),
'combUID'].to_list()
# no sequences - no divergent sites
if len(st_uids) == 0:
ampseq_var[target][species] = np.nan
# estimate divergent sites
else:
taln = AlignIO.read(AMPL_ALN.format(target), format='fasta')
ampseq_var[target][species], ampseq_len[target][species] = aln_stats(taln, st_uids)
ampseq_var = pd.DataFrame(ampseq_var)
ampseq_len = pd.DataFrame(ampseq_len)
ampseq_len.iloc[:3,:3]
comb_div = aln_div.copy()
comb_div['total_ampseq_len'] = ampseq_len.sum(axis=1).astype(int)
comb_div['total_ampseq_var'] = ampseq_var.sum(axis=1).astype(int)
comb_div['total_ampseq_nsamples'] = pd.Series(ampseq_nseq)
comb_div
comb_div.to_csv(DIVERSITY)
```
## Plot diversity
```
taxonomy = pd.read_csv(TAXONOMY, index_col=0)
taxonomy.head(1)
comb_div['series'] = taxonomy['series']
comb_div['subgenus'] = taxonomy['subgenus']
def region(species):
sp_data = ampseq_meta[ampseq_meta.partner_species == species]
if sp_data.index[0].startswith('VBS'):
return 'SE Asia'
elif sp_data.index[0].startswith('A'):
return 'Africa'
return 'Unknown'
comb_div['region'] = comb_div.index.map(region)
fig, axs = plt.subplots(1,2,figsize=(10,5))
for ax, marker in zip(axs, ('coi','its2')):
# exclude compromised species labels and species with insufficient sample size
d = comb_div[~comb_div.index.isin(BAD_SPP) & # harder filter - MULTISP_LABELS
(comb_div.total_ampseq_nsamples >= MIN_SAMPLES_DIV) &
(comb_div[marker+'_nseq'] >= MIN_SAMPLES_DIV)]
print(d[d[marker+'_var'] > 12].index)
sns.scatterplot(x=marker+'_var',y='total_ampseq_var',
hue='region',
size='total_ampseq_nsamples',
style='subgenus',
data=d,
ax=ax)
ax.legend().set_visible(False)
ax.legend(bbox_to_anchor=(1, 0.7), frameon=False);
sns.scatterplot(data=comb_div[~comb_div.index.isin(MULTISP_LABELS)],
x='coi_var',y='its2_var',hue='region',size='total_ampseq_nsamples');
# stats for good species (not multiple species with same label)
comb_div[~comb_div.index.isin(BAD_SPP) &
(comb_div.total_ampseq_nsamples >= MIN_SAMPLES_DIV)] \
.describe()
```
<!-- dom:TITLE: PHY321: Two-body problems, gravitational forces, scattering and begin Lagrangian formalism -->
# PHY321: Two-body problems, gravitational forces, scattering and begin Lagrangian formalism
<!-- dom:AUTHOR: [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/) at Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA & Department of Physics, University of Oslo, Norway -->
<!-- Author: -->
**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway
Date: **Mar 29, 2021**
Copyright 1999-2021, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license
## Aims and Overarching Motivation
### Monday
1. Physical interpretation of various orbit types and summary gravitational forces, examples on whiteboard and handwritten notes
2. Start discussion two-body scattering
**Reading suggestion**: Taylor chapter 8 and sections 14.1-14.2 and Lecture notes
### Wednesday
1. Two-body scattering
**Reading suggestion**: Taylor and sections 14.3-14.6
### Friday
1. Lagrangian formalism
**Reading suggestion**: Taylor Sections 6.1-6.2
### Code example with gravitional force
The code example here is meant to illustrate how we can make a plot of the final orbit. We solve the equations in polar coordinates (the example here uses the minimum of the potential as initial value) and then we transform back to cartesian coordinates and plot $x$ versus $y$. We see that we get a perfect circle when we place ourselves at the minimum of the potential energy, as expected.
```
%matplotlib inline
# Common imports
import numpy as np
import pandas as pd
from math import *
import matplotlib.pyplot as plt
# Simple Gravitational Force -alpha/r
DeltaT = 0.01
#set up arrays
tfinal = 8.0
n = ceil(tfinal/DeltaT)
# set up arrays for t, v and r
t = np.zeros(n)
v = np.zeros(n)
r = np.zeros(n)
phi = np.zeros(n)
x = np.zeros(n)
y = np.zeros(n)
# Constants of the model, setting all variables to one for simplicity
alpha = 1.0
AngMom = 1.0 # The angular momentum
m = 1.0 # scale mass to one
c1 = AngMom*AngMom/(m*m)
c2 = AngMom*AngMom/m
rmin = (AngMom*AngMom/m/alpha)
# Initial conditions, place yourself at the potential min
r0 = rmin
v0 = 0.0 # starts at rest
r[0] = r0
v[0] = v0
phi[0] = 0.0
# Start integrating using the Velocity-Verlet method
for i in range(n-1):
# Set up acceleration
a = -alpha/(r[i]**2)+c1/(r[i]**3)
# update velocity, time and position using the Velocity-Verlet method
r[i+1] = r[i] + DeltaT*v[i]+0.5*(DeltaT**2)*a
anew = -alpha/(r[i+1]**2)+c1/(r[i+1]**3)
v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
t[i+1] = t[i] + DeltaT
phi[i+1] = t[i+1]*c2/(r0**2)
# Find cartesian coordinates for easy plot
x = r*np.cos(phi)
y = r*np.sin(phi)
fig, ax = plt.subplots(3,1)
ax[0].set_xlabel('time')
ax[0].set_ylabel('radius')
ax[0].plot(t,r)
ax[1].set_xlabel('time')
ax[1].set_ylabel('Angle $\cos{\phi}$')
ax[1].plot(t,np.cos(phi))
ax[2].set_ylabel('y')
ax[2].set_xlabel('x')
ax[2].plot(x,y)
#save_fig("Phasespace")
plt.show()
```
## Changing initial conditions
Try to change the initial value for $r$ and see what kind of orbits you get.
In order to test different energies, it can be useful to look at the plot of the effective potential discussed above.
However, for orbits different from a circle, the above code would need modifications in order to display, say, an ellipse. For the latter it is much easier to run our code in cartesian coordinates, as done here. In this code we also test energy conservation and see that it holds to numerical precision. The code here is a simple extension of the code we developed for homework 4.
```
%matplotlib inline
# Common imports
import numpy as np
import pandas as pd
from math import *
import matplotlib.pyplot as plt
DeltaT = 0.01
#set up arrays
tfinal = 10.0
n = ceil(tfinal/DeltaT)
# set up arrays
t = np.zeros(n)
v = np.zeros((n,2))
r = np.zeros((n,2))
E = np.zeros(n)
# Constants of the model
m = 1.0 # mass, you can change these
alpha = 1.0
# Initial conditions as compact 2-dimensional arrays
x0 = 0.5; y0= 0.
r0 = np.array([x0,y0])
v0 = np.array([0.0,1.0])
r[0] = r0
v[0] = v0
rabs = sqrt(sum(r[0]*r[0]))
E[0] = 0.5*m*(v[0,0]**2+v[0,1]**2)-alpha/rabs
# Start integrating using the Velocity-Verlet method
for i in range(n-1):
# Set up the acceleration
rabs = sqrt(sum(r[i]*r[i]))
a = -alpha*r[i]/(rabs**3)
# update velocity, time and position using the Velocity-Verlet method
r[i+1] = r[i] + DeltaT*v[i]+0.5*(DeltaT**2)*a
rabs = sqrt(sum(r[i+1]*r[i+1]))
anew = -alpha*r[i+1]/(rabs**3)
v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
E[i+1] = 0.5*m*(v[i+1,0]**2+v[i+1,1]**2)-alpha/rabs
t[i+1] = t[i] + DeltaT
# Plot position as function of time
fig, ax = plt.subplots(3,1)
ax[0].set_ylabel('y')
ax[0].set_xlabel('x')
ax[0].plot(r[:,0],r[:,1])
ax[1].set_xlabel('time')
ax[1].set_ylabel('x position')
ax[1].plot(t,r[:,0])
ax[2].set_xlabel('time')
ax[2].set_ylabel('y position')
ax[2].plot(t,r[:,1])
fig.tight_layout()
#save_fig("2DimGravity")
plt.show()
print(E)
```
## Scattering and Cross Sections
Scattering experiments don't measure entire trajectories. For elastic
collisions, they measure the distribution of final scattering angles
at best. Most experiments use targets thin enough so that the number
of scatterings is typically zero or one. The cross section, $\sigma$,
describes the cross-sectional area for particles to scatter with an
individual target atom or nucleus. Cross section measurements form the
basis for MANY fields of physics. The cross section, and the
differential cross section, encapsulate everything measurable for a
collision where all that is measured is the final state, e.g. that the
outgoing particle had momentum $\boldsymbol{p}_f$. By studying cross sections,
one can infer information about the potential interaction between the
two particles. Inferring, or constraining, the potential from the
cross section is a classic *inverse* problem. Collisions are
either elastic or inelastic. Elastic collisions are those for which
the two bodies are in the same internal state before and after the
collision. If the collision excites one of the participants into a
higher state, or transforms the particles into different species, or
creates additional particles, the collision is inelastic. Here, we
consider only elastic collisions.
## Scattering: Coulomb forces
For Coulomb forces, the cross section is infinite because the range of
the Coulomb force is infinite, but for interactions such as the strong
interaction in nuclear or particle physics, there is no long-range
force and cross-sections are finite. Even for Coulomb forces, the part
of the cross section that corresponds to a specific scattering angle,
$d\sigma/d\Omega$, which is a function of the scattering angle
$\theta_s$ is still finite.
If a particle travels through a thin target, the chance the particle
scatters is $P_{\rm scatt}=\sigma dN/dA$, where $dN/dA$ is the number
of scattering centers per area the particle encounters. If the density
of the target is $\rho$ particles per volume, and if the thickness of
the target is $t$, the areal density (number of target scatterers per
area) is $dN/dA=\rho t$. Because one wishes to quantify the collisions
independently of the target, experimentalists measure scattering
probabilities, then divide by the areal density to obtain
cross-sections,
$$
\begin{eqnarray}
\sigma=\frac{P_{\rm scatt}}{dN/dA}.
\end{eqnarray}
$$
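As a quick numerical illustration of this relation (with made-up but plausible numbers for a thin gold foil; none of these values come from an actual experiment):

```
# Illustrative numbers only: a thin gold foil.
rho = 5.9e28      # target atoms per m^3 (gold, approximately)
t = 1.0e-6        # foil thickness in m
p_scatt = 1.0e-3  # measured fraction of beam particles that scatter

areal_density = rho * t          # dN/dA, scatterers per m^2
sigma = p_scatt / areal_density  # cross section in m^2
print(f"sigma = {sigma:.2e} m^2 = {sigma / 1e-28:.1f} barns")
```

Dividing the scattering probability by the areal density removes the dependence on the particular target, which is why cross sections from different experiments can be compared directly (1 barn = $10^{-28}$ m$^2$ is the customary nuclear unit).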
## Scattering, more details
Instead of merely stating that a particle collided, one can measure
the probability the particle scattered by a given angle. The
scattering angle $\theta_s$ is defined so that at zero the particle is
unscattered and at $\theta_s=\pi$ the particle is scattered directly
backward. Scattering angles are often described in the center-of-mass
frame, but that is a detail we will neglect for this first discussion,
where we will consider the scattering of particles moving classically
under the influence of fixed potentials $U(\boldsymbol{r})$. Because the
distribution of scattering angles can be measured, one expresses the
differential cross section,
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
\frac{d^2\sigma}{d\cos\theta_s~d\phi}.
\label{_auto1} \tag{1}
\end{equation}
$$
Usually, the literature expresses differential cross sections as
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
d\sigma/d\Omega=\frac{d\sigma}{d\cos\theta d\phi}=\frac{1}{2\pi}\frac{d\sigma}{d\cos\theta},
\label{_auto2} \tag{2}
\end{equation}
$$
where the last equivalency is true when the scattering does not depend
on the azimuthal angle $\phi$, as is the case for spherically
symmetric potentials.
The differential solid angle $d\Omega$ can be thought of as the area
subtended by a measurement, $dA_d$, divided by $r^2$, where $r$ is the
distance to the detector,
$$
\begin{eqnarray}
dA_d=r^2 d\Omega.
\end{eqnarray}
$$
With this definition $d\sigma/d\Omega$ is independent of the distance
from which one places the detector, or the size of the detector (as
long as it is small).
## Differential scattering cross sections
Differential scattering cross sections are calculated by assuming a
random distribution of impact parameters $b$. These represent the
distance in the $xy$ plane for particles moving in the $z$ direction
relative to the scattering center. An impact parameter $b=0$ refers to
being aimed directly at the target's center. The impact parameter
describes the transverse distance from the $z=0$ axis for the
trajectory when it is still far away from the scattering center and
has not yet passed it. The differential cross section can be expressed
in terms of the impact parameter,
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
d\sigma=2\pi bdb,
\label{_auto3} \tag{3}
\end{equation}
$$
which is the area of a thin ring of radius $b$ and thickness $db$. In
classical physics, one can calculate the trajectory given the incoming
kinetic energy $E$ and the impact parameter if one knows the mass and
potential.
## More on Differential Cross Sections
From the trajectory, one then finds the scattering angle
$\theta_s(b)$. The differential cross section is then
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
\frac{d\sigma}{d\Omega}=\frac{1}{2\pi}\frac{d\sigma}{d\cos\theta_s}=b\frac{db}{d\cos\theta_s}=\frac{b}{(d/db)\cos\theta_s(b)}.
\label{_auto4} \tag{4}
\end{equation}
$$
Typically, one would calculate $\cos\theta_s$ and $(d/db)\cos\theta_s$
as functions of $b$. This is sufficient to plot the differential cross
section as a function of $\theta_s$.
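To make this recipe concrete, here is a small Python sketch of our own (not part of the notes above — the hard-sphere example and all function names are our additions). It evaluates $d\sigma/d\Omega=b/|(d/db)\cos\theta_s(b)|$ by a central finite difference. For a hard sphere of radius $R$, the relation $b=R\cos(\theta_s/2)$ gives $\cos\theta_s(b)=2b^2/R^2-1$, and the known answer is the constant $R^2/4$:

```python
def cos_theta_s(b, R=1.0):
    # hard-sphere relation b = R*cos(theta_s/2)  =>  cos(theta_s) = 2 b^2/R^2 - 1
    return 2.0 * b * b / (R * R) - 1.0

def dsigma_domega(b, R=1.0, h=1e-6):
    # d sigma/d Omega = b / |d(cos theta_s)/db|, derivative by central difference
    deriv = (cos_theta_s(b + h, R) - cos_theta_s(b - h, R)) / (2.0 * h)
    return b / abs(deriv)

# for a hard sphere the differential cross section is isotropic: R^2/4
for b in (0.1, 0.5, 0.9):
    print(b, dsigma_domega(b))   # each close to 0.25
```

The same two functions work for any spherically symmetric potential once $\cos\theta_s(b)$ is known, either in closed form or from a numerically computed trajectory.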
The total cross section is
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
\sigma_{\rm tot}=\int d\Omega\frac{d\sigma}{d\Omega}=2\pi\int d\cos\theta_s~\frac{d\sigma}{d\Omega}.
\label{_auto5} \tag{5}
\end{equation}
$$
Even if the total cross section is infinite, e.g. Coulomb forces, one
can still have a finite differential cross section as we will see
later on.
## Rutherford Scattering
This refers to the calculation of $d\sigma/d\Omega$ due to an inverse
square force, $F_{12}=\pm\alpha/r^2$ for repulsive/attractive
interaction. Rutherford compared the scattering of $\alpha$ particles
($^4$He nuclei) off of a nucleus and found the scattering angle at
which the formula began to fail. This corresponded to the impact
parameter for which the trajectories would strike the nucleus. This
provided the first measure of the size of the atomic nucleus. At the
time, the distribution of the positive charge (the protons) was
considered to be just as spread out amongst the atomic volume as the
electrons. After Rutherford's experiment, it was clear that the radius
of the nucleus tended to be roughly 4 orders of magnitude smaller than
that of the atom, which is less than the size of a football relative
to Spartan Stadium.
## Rutherford Scattering, more details
In order to calculate the differential cross section, we must find how the
impact parameter is related to the scattering angle. This requires an
analysis of the trajectory. We consider our previous expression for
the trajectory, where we derived its elliptic form. For that case we considered an attractive
force with the particle's energy being negative, i.e. it was
bound. However, the same form will work for positive energy, and
repulsive forces can be considered by simply flipping the sign of
$\alpha$. For positive energies, the trajectories will be hyperbolas,
rather than ellipses, with the asymptotes of the trajectories
representing the directions of the incoming and outgoing
tracks.
## Rutherford Scattering, final trajectories
We have
<!-- Equation labels as ordinary links -->
<div id="eq:ruthtraj"></div>
$$
\begin{equation}\label{eq:ruthtraj} \tag{6}
r=\frac{1}{\frac{m\alpha}{L^2}+A\cos\theta}.
\end{equation}
$$
Once $A$ is large enough, which will happen when the energy is
positive, the denominator will become negative for a range of
$\theta$. Since $r$ cannot be negative, the scattered particle never
reaches those angles. The asymptotic angles $\theta'$ are those for which
the denominator goes to zero,
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
\cos\theta'=-\frac{m\alpha}{AL^2}.
\label{_auto6} \tag{7}
\end{equation}
$$
## Rutherford Scattering, Closest Approach
The trajectory's point of closest approach is at $\theta=0$ and the
two angles $\theta'$, which have this value of $\cos\theta'$, are the
angles of the incoming and outgoing particles. From
Fig (**to come**), one can see that the scattering angle
$\theta_s$ is given by,
<!-- Equation labels as ordinary links -->
<div id="eq:sthetover2"></div>
$$
\begin{eqnarray}
\label{eq:sthetover2} \tag{8}
2\theta'-\pi&=&\theta_s,~~~\theta'=\frac{\pi}{2}+\frac{\theta_s}{2},\\
\nonumber
\sin(\theta_s/2)&=&-\cos\theta'\\
\nonumber
&=&\frac{m\alpha}{AL^2}.
\end{eqnarray}
$$
Now that we have $\theta_s$ in terms of $m,\alpha,L$ and $A$, we wish
to re-express $L$ and $A$ in terms of the impact parameter $b$ and the
energy $E$. This will set us up to calculate the differential cross
section, which requires knowing $db/d\theta_s$. It is easy to write
the angular momentum as
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
L^2=p_0^2b^2=2mEb^2.
\label{_auto7} \tag{9}
\end{equation}
$$
## Rutherford Scattering, getting there
Finding $A$ is more complicated. To accomplish this we realize that
the point of closest approach occurs at $\theta=0$, so from
Eq. ([6](#eq:ruthtraj))
<!-- Equation labels as ordinary links -->
<div id="eq:rminofA"></div>
$$
\begin{eqnarray}
\label{eq:rminofA} \tag{10}
\frac{1}{r_{\rm min}}&=&\frac{m\alpha}{L^2}+A,\\
\nonumber
A&=&\frac{1}{r_{\rm min}}-\frac{m\alpha}{L^2}.
\end{eqnarray}
$$
Next, $r_{\rm min}$ can be found in terms of the energy because at the
point of closest approach the kinetic energy is due purely to the
motion perpendicular to $\hat{r}$ and
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
E=-\frac{\alpha}{r_{\rm min}}+\frac{L^2}{2mr_{\rm min}^2}.
\label{_auto8} \tag{11}
\end{equation}
$$
## Rutherford Scattering, More Manipulations
One can solve the quadratic equation for $1/r_{\rm min}$,
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
\frac{1}{r_{\rm min}}=\frac{m\alpha}{L^2}+\sqrt{(m\alpha/L^2)^2+2mE/L^2}.
\label{_auto9} \tag{12}
\end{equation}
$$
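As a quick sanity check of this closed form (a sketch of our own, with arbitrary illustrative values for $m$, $\alpha$, $L$ and $E$), one can plug $1/r_{\rm min}$ back into the energy relation $E=-\alpha/r_{\rm min}+L^2/(2mr_{\rm min}^2)$ and verify that $E$ is recovered:

```python
import math

def u_min(m, alpha, L, E):
    # u = 1/r_min from the quadratic solution above
    return m * alpha / L**2 + math.sqrt((m * alpha / L**2) ** 2 + 2 * m * E / L**2)

# hypothetical values in arbitrary units
m, alpha, L, E = 1.0, 2.0, 1.5, 0.7
u = u_min(m, alpha, L, E)

# plug back into E = -alpha/r_min + L^2/(2 m r_min^2)
E_check = -alpha * u + L**2 * u**2 / (2.0 * m)
print(abs(E_check - E))  # tiny: the energy is recovered
```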
We can plug the expression for $r_{\rm min}$ into the expression for $A$, Eq. ([10](#eq:rminofA)),
<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>
$$
\begin{equation}
A=\sqrt{(m\alpha/L^2)^2+2mE/L^2}=\sqrt{\alpha^2/(4E^2b^4)+1/b^2}
\label{_auto10} \tag{13}
\end{equation}
$$
## Rutherford Scattering, final expression
Finally, we insert the expression for $A$ into that for the scattering angle, Eq. ([8](#eq:sthetover2)),
<!-- Equation labels as ordinary links -->
<div id="eq:scattangle"></div>
$$
\begin{eqnarray}
\label{eq:scattangle} \tag{14}
\sin(\theta_s/2)&=&\frac{m\alpha}{AL^2}\\
\nonumber
&=&\frac{a}{\sqrt{a^2+b^2}}, ~~a\equiv \frac{\alpha}{2E}
\end{eqnarray}
$$
## Rutherford Scattering, the Differential Cross Section
The differential cross section can now be found by differentiating the
expression for $\theta_s$ with respect to $b$,
<!-- Equation labels as ordinary links -->
<div id="eq:rutherford"></div>
$$
\begin{eqnarray}
\label{eq:rutherford} \tag{15}
\frac{1}{2}\cos(\theta_s/2)d\theta_s&=&\frac{ab~db}{(a^2+b^2)^{3/2}}=\frac{bdb}{a^2}\sin^3(\theta_s/2),\\
\nonumber
d\sigma&=&2\pi bdb=\frac{\pi a^2}{\sin^3(\theta_s/2)}\cos(\theta_s/2)d\theta_s\\
\nonumber
&=&\frac{\pi a^2}{2\sin^4(\theta_s/2)}\sin\theta_s d\theta_s\\
\nonumber
\frac{d\sigma}{d\cos\theta_s}&=&\frac{\pi a^2}{2\sin^4(\theta_s/2)},\\
\nonumber
\frac{d\sigma}{d\Omega}&=&\frac{a^2}{4\sin^4(\theta_s/2)}.
\end{eqnarray}
$$
where $a= \alpha/2E$. This is the Rutherford formula for the differential
cross section. It diverges as $\theta_s\rightarrow 0$ because
scatterings with arbitrarily large impact parameters still scatter to
arbitrarily small scattering angles. The expression for
$d\sigma/d\Omega$ is the same whether the interaction is repulsive or
attractive, since it depends only on $a^2$.
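The closed form can be checked numerically against the generic recipe $d\sigma/d\Omega=b/|(d/db)\cos\theta_s(b)|$ of Eq. (4). The sketch below (function names and the sample values are our own illustration) shows the two agree:

```python
import math

def theta_s(b, a):
    # scattering angle from Eq. (14): sin(theta_s/2) = a / sqrt(a^2 + b^2)
    return 2.0 * math.asin(a / math.sqrt(a * a + b * b))

def dsigma_numeric(b, a, h=1e-6):
    # generic recipe: d sigma/d Omega = b / |d(cos theta_s)/db|
    dcos = (math.cos(theta_s(b + h, a)) - math.cos(theta_s(b - h, a))) / (2.0 * h)
    return b / abs(dcos)

def dsigma_rutherford(b, a):
    # closed form: a^2 / (4 sin^4(theta_s/2))
    return a * a / (4.0 * math.sin(theta_s(b, a) / 2.0) ** 4)

a, b = 0.5, 1.3   # illustrative values in arbitrary units
print(dsigma_numeric(b, a), dsigma_rutherford(b, a))  # the two agree
```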
## Rutherford Scattering, Example
Consider a particle of mass $m$ and charge $z$ with kinetic energy $E$
(Let it be the center-of-mass energy) incident on a heavy nucleus of
mass $M$ and charge $Z$ and radius $R$. We want to find the angle at which the
Rutherford scattering formula breaks down.
Let $\alpha=Zze^2/(4\pi\epsilon_0)$. The scattering angle in Eq. ([14](#eq:scattangle)) is
$$
\sin(\theta_s/2)=\frac{a}{\sqrt{a^2+b^2}}, ~~a\equiv \frac{\alpha}{2E}.
$$
The impact parameter $b$ for which the point of closest approach
equals $R$ can be found by using angular momentum conservation,
$$
\begin{eqnarray*}
p_0b&=&b\sqrt{2mE}=Rp_f=R\sqrt{2m(E-\alpha/R)},\\
b&=&R\frac{\sqrt{2m(E-\alpha/R)}}{\sqrt{2mE}}\\
&=&R\sqrt{1-\frac{\alpha}{ER}}.
\end{eqnarray*}
$$
## Rutherford Scattering, Example, wrapping up
Putting these together
$$
\theta_s=2\sin^{-1}\left\{
\frac{a}{\sqrt{a^2+R^2(1-\alpha/(RE))}}
\right\},~~~a=\frac{\alpha}{2E}.
$$
It was this departure of the experimentally measured
$d\sigma/d\Omega$ from the Rutherford formula that allowed Rutherford
to infer the radius of the gold nucleus, $R$.
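A short numeric sketch of this estimate (all input values below are illustrative assumptions of ours, not from the text: we take $e^2/(4\pi\epsilon_0)\approx 1.44$ MeV$\cdot$fm, a nuclear radius $R\approx 7$ fm, and a beam energy chosen large enough that $E>\alpha/R$ so the head-on trajectory actually reaches the nucleus):

```python
import math

# Hypothetical numbers: alpha particle (z=2) on gold (Z=79)
alpha = 2 * 79 * 1.44      # Zze^2/(4 pi eps0) in MeV fm
E = 50.0                   # beam energy in MeV (chosen so E > alpha/R)
R = 7.0                    # nuclear radius in fm

a = alpha / (2.0 * E)
theta_breakdown = 2.0 * math.asin(
    a / math.sqrt(a * a + R * R * (1.0 - alpha / (R * E))))
print(math.degrees(theta_breakdown))  # roughly 58 degrees for these inputs
```

Beyond this scattering angle the formula fails, because the corresponding impact parameters lead to trajectories that strike the nucleus.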
## Variational Calculus
The calculus of variations involves
problems where the quantity to be minimized or maximized is an integral.
The usual minimization problem one faces involves taking a function
${\cal L}(x)$, then finding the single value $x$ for which ${\cal L}$
is either a maximum or minimum. In multivariate calculus one also
learns to solve problems where you minimize over multiple variables,
${\cal L}(x_1,x_2,\cdots x_n)$, finding the points $(x_1,\cdots,x_n)$
in an $n$-dimensional space that maximize or minimize the
function. Here, we consider what seems to be a much more ambitious
problem. Imagine you have a function ${\cal L}(x(t),\dot{x}(t),t)$,
and you wish to find the extrema for an infinite number of values of
$x$, i.e. $x$ at each point $t$. The function ${\cal L}$ will not only
depend on $x$ at each point $t$, but also on the slope at each point,
plus an additional dependence on $t$. Note we are NOT finding an
optimum value of $t$, we are finding the set of optimum values of $x$
at each point $t$, or equivalently, finding the function $x(t)$.
## Variational Calculus, introducing the action
One treats the function $x(t)$ as being unknown while minimizing the action
$$
S=\int_{t_1}^{t_2}dt~{\cal L}(x(t),\dot{x}(t),t).
$$
Thus, we are minimizing $S$ with respect to an infinite number of
values of $x(t_i)$ at points $t_i$. As an additional criterion, we will
assume that $x(t_1)$ and $x(t_2)$ are fixed, and that we will
only consider variations of $x$ between the boundaries. The dependence
on the derivative, $\dot{x}=dx/dt$, is crucial because otherwise the
solution would involve simply finding the one value of $x$ that
minimized ${\cal L}$, and $x(t)$ would equal a constant if there were no
explicit $t$ dependence. Furthermore, $x$ wouldn't need to be
continuous at the boundary.
## Variational Calculus, general Action
In the general case we have an integral of the type
$$
S[q]= \int_{t_1}^{t_2} {\cal L}(q(t),\dot{q}(t),t)dt,
$$
where $S$ is the quantity to be minimized or maximized. The
problem is that although ${\cal L}$ is a function of the general variables
$q(t),\dot{q}(t),t$ (note our change of variables), the exact dependence of $q$ on $t$ is not known.
This means again that even though the integral has fixed limits $t_1$
and $t_2$, the path of integration is not known. In our case the unknown
quantities are the positions and general velocities of a given number
of objects and we wish to choose an integration path which makes the
functional $S[q]$ stationary. This means that we want to find minima,
maxima, or saddle points. In physics we normally search for minima.
Our task is therefore to find the minimum of $S[q]$ so that its
variation $\delta S$ is zero subject to specific constraints. The
constraints can be treated via the technique of Lagrangian multipliers
as we will see below.
## Variational Calculus, Optimal Path
We assume the existence of an optimum path, that is a path for which
$S[q]$ is stationary. There are infinitely many such paths. The
difference between two paths $\delta q$ is called the variation of
$q$.
We call the variation $\eta(t)$ and it is scaled by a factor $\alpha$.
The function $\eta(t)$ is arbitrary except for
$$
\eta(t_1)=\eta(t_2)=0,
$$
and we assume that we can model the change in $q$ as
$$
q(t,\alpha) = q(t)+\alpha\eta(t),
$$
and
$$
\delta q = q(t,\alpha) -q(t,0)=\alpha\eta(t).
$$
## Variational Calculus, Condition for an Extreme Value
We choose $q(t,\alpha=0)$ as the unknown path that will minimize $S$. The value
$q(t,\alpha\ne 0)$ describes a neighbouring path.
We have
$$
S[q(\alpha)]= \int_{t_1}^{t_2} {\cal L}(q(t,\alpha),\dot{q}(t,\alpha),t)dt.
$$
The condition for an extreme of
$$
S[q(\alpha)]= \int_{t_1}^{t_2} {\cal L}(q(t,\alpha),\dot{q}(t,\alpha),t)dt,
$$
is
$$
\left[\frac{\partial S[q(\alpha)]}{\partial \alpha}\right]_{\alpha=0} =0.
$$
## Variational Calculus, $\alpha$ Dependence
The $\alpha$ dependence is contained in $q(t,\alpha)$ and $\dot{q}(t,\alpha)$ meaning that
$$
\left[\frac{\partial S[q(\alpha)]}{\partial \alpha}\right]=\int_{t_1}^{t_2} \left( \frac{\partial {\cal L}}{\partial q}\frac{\partial q}{\partial \alpha}+\frac{\partial {\cal L}}{\partial \dot{q}}\frac{\partial \dot{q}}{\partial \alpha}\right)dt.
$$
We have defined
$$
\frac{\partial q(t,\alpha)}{\partial \alpha}=\eta(t)
$$
and thereby
$$
\frac{\partial \dot{q}(t,\alpha)}{\partial \alpha}=\frac{d(\eta(t))}{dt}.
$$
## Integrating by Parts
Using
$$
\frac{\partial q(t,\alpha)}{\partial \alpha}=\eta(t),
$$
and
$$
\frac{\partial \dot{q}(t,\alpha)}{\partial \alpha}=\frac{d(\eta(t))}{dt},
$$
in the integral gives
$$
\left[\frac{\partial S[q(\alpha)]}{\partial \alpha}\right]=\int_{t_1}^{t_2} \left( \frac{\partial {\cal L}}{\partial q}\eta(t)+\frac{\partial {\cal L}}{\partial \dot{q}}\frac{d(\eta(t))}{dt}\right)dt.
$$
Integrating the second term by parts
$$
\int_{t_1}^{t_2} \frac{\partial {\cal L}}{\partial \dot{q}}\frac{d(\eta(t))}{dt}dt =\eta(t)\frac{\partial {\cal L}}{\partial \dot{q}}\Big|_{t_1}^{t_2}-
\int_{t_1}^{t_2} \eta(t)\frac{d}{dt}\frac{\partial {\cal L}}{\partial \dot{q}}dt,
$$
and since the first term disappears due to $\eta(t_1)=\eta(t_2)=0$, we obtain
$$
\left[\frac{\partial S[q(\alpha)]}{\partial \alpha}\right]=\int_{t_1}^{t_2} \left( \frac{\partial {\cal L}}{\partial q}-\frac{d}{dt}\frac{\partial {\cal L}}{\partial \dot{q}}
\right)\eta(t)dt=0.
$$
## Euler-Lagrange Equations
The latter can be written as
$$
\left[\frac{\partial S[q(\alpha)]}{\partial \alpha}\right]_{\alpha=0}=\int_{t_1}^{t_2} \left( \frac{\partial {\cal L}}{\partial q}-\frac{d}{dt}\frac{\partial {\cal L}}{\partial \dot{q}}\right)\delta q(t)dt=\delta S = 0.
$$
The condition for a stationary value is thus the differential equation
$$
\frac{\partial {\cal L}}{\partial q}-\frac{d}{dt}\frac{\partial {\cal L}}{\partial \dot{q}}=0,
$$
known as the **Euler-Lagrange** equation.
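As a minimal check of this machinery (an example of our own, not from the notes above): take ${\cal L}(x,\dot{x},t)=\sqrt{1+\dot{x}^2}$, whose action is the length of the curve $x(t)$ between the two fixed endpoints. Since $\partial{\cal L}/\partial x=0$, the Euler-Lagrange equation reduces to
$$
\frac{d}{dt}\frac{\partial {\cal L}}{\partial \dot{x}}=\frac{d}{dt}\frac{\dot{x}}{\sqrt{1+\dot{x}^2}}=0,
$$
so $\dot{x}/\sqrt{1+\dot{x}^2}$ is constant, hence $\dot{x}$ is constant and the stationary path is a straight line, as expected for the shortest path between two points.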
ENS'IA - Session 4: Convolutional neural networks
-----
Today, we move on to **Convolutional neural networks (CNN)**!
These are neural networks specialized in image processing.
You will implement a basic CNN architecture and learn some techniques to boost your scores!
Let's load the libraries we will use along with the CIFAR-10 data
```
# We import some useful things for later
import tensorflow as tf
from tensorflow import keras
from keras.datasets import cifar10
import matplotlib.pyplot as plt
import numpy as np
print(tf.__version__)
(x_train, y_train), (x_test, y_test) = cifar10.load_data() # We load the dataset
# Let's visualize an example and its class
plt.imshow(x_train[0])
plt.show()
print(y_train[0])
# Dataset visualization
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
class_count = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * class_count + y + 1
        plt.subplot(samples_per_class, class_count, plt_idx)
        plt.imshow(x_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()
```
Data preprocessing... Do you remember which ones? :)
```
# TODO... Many things possible!
# For example, you can transform your y using one hot encoding...
```
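If you need a nudge: here is a minimal sketch of the one-hot idea in plain Python (the helper name is our own; in a real notebook `keras.utils.to_categorical(y_train, 10)` does this in one line):

```python
def one_hot(labels, num_classes):
    # Turn integer class labels into one-hot rows, e.g. 2 -> [0, 0, 1, ...]
    encoded = []
    for y in labels:
        row = [0.0] * num_classes
        row[int(y)] = 1.0
        encoded.append(row)
    return encoded

print(one_hot([6, 0], 10))
# Pixel values are also commonly rescaled from [0, 255] to [0, 1],
# e.g. x_train = x_train / 255.0
```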
We now load the required libraries for the CNN
```
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D , Flatten, Dropout
```
We then build our CNN. Here is the order we recommend:
- Convolution, 32 filters, 3x3
- Convolution, 32 filters, 3x3
- MaxPool
- Dropout
<br>
- Convolution, 64 filters, 3x3
- Convolution, 64 filters, 3x3
- MaxPool
- Dropout
<br>
- Convolution, 128 filters, 3x3
- Convolution, 128 filters, 3x3
- MaxPool
- Dropout
<br>
- Flatten
- Dense
- Dropout
- Dense
```
model = Sequential([
# TODO... looks pretty empty to me!
])
model.summary() # To check our model!
```
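If you get stuck, here is one possible (hypothetical) way to fill in the `Sequential` above — the dropout rates and the dense width are our own guesses, so try your own version first:

```python
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten, Dropout

# One possible filling of the recommended layer order -- not the only answer!
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)),
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    MaxPool2D(),
    Dropout(0.25),

    Conv2D(64, (3, 3), activation='relu', padding='same'),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    MaxPool2D(),
    Dropout(0.25),

    Conv2D(128, (3, 3), activation='relu', padding='same'),
    Conv2D(128, (3, 3), activation='relu', padding='same'),
    MaxPool2D(),
    Dropout(0.25),

    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax'),
])
```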
We have at our disposal a training dataset of 50 000 images, which is... quite limited. Actually, we would like to have an infinity of images for our training and, to achieve this goal, we are going to do some **Data augmentation**.
In other words, we are going to create new images from the ones we have.
For that, Keras has a pretty handy "ImageDataGenerator" (look for its doc online!) which is going to do random modifications on the images we feed the neural network with.
Which modifications could be useful?
```
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
width_shift_range=0.1,
height_shift_range=0.1,
fill_mode='nearest',
horizontal_flip=True,
)
```
In order to improve our score as much as possible, we will use a *callback*,
which is going to decrease the learning rate over the course of the training.
More precisely, if after X epochs the metric we chose (*accuracy*) has not
improved, then we decrease the learning rate.
```
from keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(
monitor = "val_accuracy",
factor=np.sqrt(0.1),
patience=3,
min_lr=0.5e-6)
```
Another callback will allow us to save the best version of our neural network
during the training. After each epoch, if the network improves its score on the validation set, we save it.
```
from keras.callbacks import ModelCheckpoint
checkpointer = ModelCheckpoint(filepath='model.hdf5', verbose=1, save_best_only=True)
```
Let's train the model! For that, we will use the functions we already saw together: *Adam* for the optimization and the loss function for the *cross entropy*.
```
model.compile(
# TODO :)
)
```
Which batch size? How many epochs? It's up to you! :P
```
history = model.fit(
# TODO
callbacks=[reduce_lr,checkpointer]
)
```
Let's now see in detail how our neural network training process went:
```
def plot_history(history):
    """
    Plot the loss & accuracy
    """
    plt.plot(history.history['accuracy'])
    plt.plot(history.history['val_accuracy'])
    plt.title('model accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'val'], loc='upper left')
    plt.show()

    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train', 'val'], loc='upper left')
    plt.show()

plot_history(history)
```
Now that the training is done, we will load the saved model!
```
model.load_weights('model.hdf5')
```
You can now evaluate your model on the test set
```
model.evaluate(x_test, y_test)
```
To go further...
-----
Finished way too early?
Take this opportunity to discover https://keras.io/applications/!
You will find pre-trained and pre-built models that are very powerful and applicable to different tasks. Have fun!
To go even further...
-----
By now, you should have understood how convolution works and should have learned a few techniques to boost your score!
You should now be able to start working on bigger and more complex data sets!
We invite you to go to kaggle (www.kaggle.com) where you can find many datasets on which it is possible to work and to observe the work of other people on these data. It's really good for learning and it's perfect if you want to deepen your knowledge!
<a href="https://colab.research.google.com/github/arthurflor23/handwritten-text-recognition/blob/master/src/tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img src="https://github.com/arthurflor23/handwritten-text-recognition/blob/master/doc/image/header.png?raw=true" />
# Handwritten Text Recognition using TensorFlow 2.0
This tutorial shows how you can use the project [Handwritten Text Recognition](https://github.com/arthurflor23/handwritten-text-recognition) in your Google Colab.
## 1 Localhost Environment
We'll make sure you have the project in your Google Drive with the datasets in HDF5. If you already have structured files in the cloud, skip this step.
### 1.1 Datasets
The datasets that you can use:
a. [Bentham](http://transcriptorium.eu/datasets/bentham-collection/)
b. [IAM](http://www.fki.inf.unibe.ch/databases/iam-handwriting-database)
c. [Rimes](http://www.a2ialab.com/doku.php?id=rimes_database:start)
d. [Saint Gall](http://www.fki.inf.unibe.ch/databases/iam-historical-document-database/saint-gall-database)
e. [Washington](http://www.fki.inf.unibe.ch/databases/iam-historical-document-database/washington-database)
### 1.2 Raw folder
On localhost, download the code project from GitHub and extract the chosen dataset (or all if you prefer) in the **raw** folder. Don't change anything of the structure of the dataset, since the scripts were made from the **original structure** of them. Your project directory will be like this:
```
.
├── raw
│ ├── bentham
│ │ ├── BenthamDatasetR0-GT
│ │ └── BenthamDatasetR0-Images
│ ├── iam
│ │ ├── ascii
│ │ ├── forms
│ │ ├── largeWriterIndependentTextLineRecognitionTask
│ │ ├── lines
│ │ └── xml
│ ├── rimes
│ │ ├── eval_2011
│ │ ├── eval_2011_annotated.xml
│ │ ├── training_2011
│ │ └── training_2011.xml
│ ├── saintgall
│ │ ├── data
│ │ ├── ground_truth
│ │ ├── README.txt
│ │ └── sets
│ └── washington
│ ├── data
│ ├── ground_truth
│ ├── README.txt
│ └── sets
└── src
├── data
│ ├── evaluation.py
│ ├── generator.py
│ ├── preproc.py
│ ├── reader.py
│ ├── similar_error_analysis.py
├── main.py
├── network
│ ├── architecture.py
│ ├── layers.py
│ ├── model.py
└── tutorial.ipynb
```
After that, create a virtual environment and install the dependencies with python 3 and pip:
> ```python -m venv .venv && source .venv/bin/activate```
> ```pip install -r requirements.txt```
### 1.3 HDF5 files
Now, you'll run the *transform* function from **main.py**. To do this, execute in the **src** folder:
> ```python main.py --source=<DATASET_NAME> --transform```
Your data will be preprocessed and encoded, then saved in the **data** folder. Now your project directory will be like this:
```
.
├── data
│ ├── bentham.hdf5
│ ├── iam.hdf5
│ ├── rimes.hdf5
│ ├── saintgall.hdf5
│ └── washington.hdf5
├── raw
│ ├── bentham
│ │ ├── BenthamDatasetR0-GT
│ │ └── BenthamDatasetR0-Images
│ ├── iam
│ │ ├── ascii
│ │ ├── forms
│ │ ├── largeWriterIndependentTextLineRecognitionTask
│ │ ├── lines
│ │ └── xml
│ ├── rimes
│ │ ├── eval_2011
│ │ ├── eval_2011_annotated.xml
│ │ ├── training_2011
│ │ └── training_2011.xml
│ ├── saintgall
│ │ ├── data
│ │ ├── ground_truth
│ │ ├── README.txt
│ │ └── sets
│ └── washington
│ ├── data
│ ├── ground_truth
│ ├── README.txt
│ └── sets
└── src
├── data
│ ├── evaluation.py
│ ├── generator.py
│ ├── preproc.py
│ ├── reader.py
│ ├── similar_error_analysis.py
├── main.py
├── network
│ ├── architecture.py
│ ├── layers.py
│ ├── model.py
└── tutorial.ipynb
```
Then upload the **data** and **src** folders in the same directory in your Google Drive.
## 2 Google Drive Environment
### 2.1 TensorFlow 2.0
Make sure the jupyter notebook is using GPU mode.
```
!nvidia-smi
```
Now, we'll install TensorFlow 2.0 with GPU support.
```
!pip install -q tensorflow-gpu==2.1.0-rc2
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != "/device:GPU:0":
    raise SystemError("GPU device not found")
print(f"Found GPU at: {device_name}")
```
### 2.2 Google Drive
Mount your Google Drive partition.
**Note:** *\"Colab Notebooks/handwritten-text-recognition/src/\"* was the directory where you put the project folders, specifically the **src** folder.
```
from google.colab import drive
drive.mount("./gdrive", force_remount=True)
%cd "./gdrive/My Drive/Colab Notebooks/handwritten-text-recognition/src/"
!ls -l
```
After mounting, you can see the list of files in the project folder.
## 3 Set Python Classes
### 3.1 Environment
First, let's define our environment variables.
Set the main configuration parameters, like input size, batch size, number of epochs and list of characters. This makes the notebook compatible with **main.py**:
* **dataset**: "bentham", "iam", "rimes", "saintgall", "washington"
* **arch**: network to run: "bluche", "puigcerver", "flor"
* **epochs**: number of epochs
* **batch_size**: size of the batch
```
import os
import datetime
import string
# define parameters
source = "iam_words"
arch = "simpleHTR"
epochs = 1000
batch_size = 16
# define paths
source_path = os.path.join("..", "data", f"{source}.hdf5")
output_path = os.path.join("..", "output", source, arch)
target_path = os.path.join(output_path, "checkpoint_weights.hdf5")
os.makedirs(output_path, exist_ok=True)
# define input size, number max of chars per line and list of valid chars
# input_size = (1024, 128, 1)
# max_text_length = 128
input_size = (128, 32, 1)
max_text_length = 32
charset_base = string.printable[:80]
print("source:", source_path)
print("output", output_path)
print("target", target_path)
print("charset:", charset_base)
```
### 3.2 DataGenerator Class
The second class is **DataGenerator()**, responsible for:
* Load the dataset partitions (train, valid, test);
* Manager batchs for train/validation/test process.
```
from data.generator import DataGenerator
dtgen = DataGenerator(source=source_path,
batch_size=batch_size,
charset=charset_base,
max_text_length=max_text_length)
print(f"Train images: {dtgen.size['train']}")
print(f"Validation images: {dtgen.size['valid']}")
print(f"Test images: {dtgen.size['test']}")
```
### 3.3 HTRModel Class
The third class is **HTRModel()**, developed to be easy to use and to abstract the complicated flow of an HTR system. It's responsible for:
* Create the model with the Handwritten Text Recognition flow, in which the loss function is calculated by CTC and the output is decoded to calculate the HTR metrics (CER, WER and SER);
* Save and load model;
* Load weights in the models (train/infer);
* Make Train/Predict process using *generator*.
To make a dynamic HTRModel, its parameters are the *architecture*, *input_size* and *vocab_size*.
```
from network.model import HTRModel
# create and compile HTRModel
# note: `learning_rate=None` will get architecture default value
model = HTRModel(architecture=arch, input_size=input_size, vocab_size=dtgen.tokenizer.vocab_size)
model.compile(learning_rate=0.001)
# save network summary
model.summary(output_path, "summary.txt")
# get default callbacks and load checkpoint weights file (HDF5) if exists
model.load_checkpoint(target=target_path)
callbacks = model.get_callbacks(logdir=output_path, checkpoint=target_path, verbose=1)
```
## 4 Tensorboard
To facilitate the visualization of the model's training, you can instantiate the Tensorboard.
**Note**: All data is saved in the output folder
```
%load_ext tensorboard
%tensorboard --reload_interval=300 --logdir={output_path}
```
## 5 Training
The training process is similar to Keras's *fit()*. After training, the information (epochs and minimum loss) is saved.
```
# to calculate total and average time per epoch
start_time = datetime.datetime.now()
h = model.fit(x=dtgen.next_train_batch(),
epochs=epochs,
steps_per_epoch=dtgen.steps['train'],
validation_data=dtgen.next_valid_batch(),
validation_steps=dtgen.steps['valid'],
callbacks=callbacks,
shuffle=True,
verbose=1)
total_time = datetime.datetime.now() - start_time
loss = h.history['loss']
val_loss = h.history['val_loss']
min_val_loss = min(val_loss)
min_val_loss_i = val_loss.index(min_val_loss)
time_epoch = (total_time / len(loss))
total_item = (dtgen.size['train'] + dtgen.size['valid'])
t_corpus = "\n".join([
f"Total train images: {dtgen.size['train']}",
f"Total validation images: {dtgen.size['valid']}",
f"Batch: {dtgen.batch_size}\n",
f"Total time: {total_time}",
f"Time per epoch: {time_epoch}",
f"Time per item: {time_epoch / total_item}\n",
f"Total epochs: {len(loss)}",
f"Best epoch {min_val_loss_i + 1}\n",
f"Training loss: {loss[min_val_loss_i]:.8f}",
f"Validation loss: {min_val_loss:.8f}"
])
with open(os.path.join(output_path, "train.txt"), "w") as lg:
    lg.write(t_corpus)

print(t_corpus)
```
## 6 Predict
The predict process is similar to Keras's *predict()*:
```
from data import preproc as pp
from google.colab.patches import cv2_imshow
start_time = datetime.datetime.now()
# predict() function will return the predicts with the probabilities
predicts, _ = model.predict(x=dtgen.next_test_batch(),
steps=dtgen.steps['test'],
ctc_decode=True,
verbose=1)
# decode to string
predicts = [dtgen.tokenizer.decode(x[0]) for x in predicts]
total_time = datetime.datetime.now() - start_time
# mount predict corpus file
with open(os.path.join(output_path, "predict.txt"), "w") as lg:
    for pd, gt in zip(predicts, dtgen.dataset['test']['gt']):
        lg.write(f"TE_L {gt}\nTE_P {pd}\n")

for i, item in enumerate(dtgen.dataset['test']['dt'][:10]):
    print("=" * 1024, "\n")
    cv2_imshow(pp.adjust_to_see(item))
    print(dtgen.dataset['test']['gt'][i])
    print(predicts[i], "\n")
```
## 7 Evaluate
The evaluation process is more manual. Here we use `ocr_metrics`, but feel free to implement other metrics instead. The function takes four parameters:
* predicts
* ground_truth
* norm_accentuation (calculation with/without accentuation)
* norm_punctuation (calculation with/without punctuation marks)
```
from data import evaluation
evaluate = evaluation.ocr_metrics(predicts=predicts,
ground_truth=dtgen.dataset['test']['gt'],
norm_accentuation=False,
norm_punctuation=False)
e_corpus = "\n".join([
f"Total test images: {dtgen.size['test']}",
f"Total time: {total_time}",
f"Time per item: {total_time / dtgen.size['test']}\n",
f"Metrics:",
f"Character Error Rate: {evaluate[0]:.8f}",
f"Word Error Rate: {evaluate[1]:.8f}",
f"Sequence Error Rate: {evaluate[2]:.8f}"
])
with open(os.path.join(output_path, "evaluate.txt"), "w") as lg:
    lg.write(e_corpus)

print(e_corpus)
```
# Kili Tutorial: How to leverage Counterfactually augmented data to have a more robust model
This recipe is inspired by the paper *Learning the Difference that Makes a Difference with Counterfactually-Augmented Data*, that you can find here on [arXiv](https://arxiv.org/abs/1909.12434)
In this study, the authors point out the difficulty for Machine Learning models to generalize the classification rules learned, because their decision rules, described as 'spurious patterns', often miss the key elements that most affect the class of a text. They thus decided to remove what can be considered a confounding factor, by changing the label of an asset while changing the minimum amount of words, so those **key-words** would be much easier for the model to spot.
We'll see in this tutorial :
1. How to create a project in Kili, both for [IMDB](##Data-Augmentation-on-IMDB-dataset) and [SNLI](##Data-Augmentation-on-SNLI-dataset) datasets, to reproduce such a data-augmentation task, in order to improve our model, and decrease its variance when used in production with unseen data.
2. We'll also try to [reproduce the results of the paper](##Reproducing-the-results), using similar models, to show how such a technique can be of key interest while working on a text-classification task.
We'll use the data of the study, both IMDB and Stanford NLI, publicly available [here](https://github.com/acmi-lab/counterfactually-augmented-data).
Additionally, for an overview of Kili, visit the [website](https://kili-technology.com), you can also check out the Kili [documentation](https://cloud.kili-technology.com/docs), or some other recipes.

```
# Authentication
import os
# !pip install kili # uncomment if you don't have kili installed already
from kili.client import Kili
api_key = os.getenv('KILI_USER_API_KEY')
api_endpoint = os.getenv('KILI_API_ENDPOINT')
# If you use Kili SaaS, use the url 'https://cloud.kili-technology.com/api/label/v2/graphql'
kili = Kili(api_key=api_key, api_endpoint=api_endpoint)
user_id = kili.auth.user_id
```
## Data Augmentation on IMDB dataset
The data consists in reviews of films, that are classified as positives or negatives. State-of-the-art models performance is often measured against this dataset, making it a reference.
This is how our task would look like on Kili, into 2 different projects for each task, from Positive to Negative or Negative to Positive.
### Creating the projects
```
taskname = "NEW_REVIEW"
project_imdb_negative_to_positive = {
'title': 'Counterfactual data-augmentation - Negative to Positive',
'description': 'IMDB Sentiment Analysis',
'instructions': 'https://docs.google.com/document/d/1zhNaQrncBKc3aPKcnNa_mNpXlria28Ij7bfgUvJbyfw/edit?usp=sharing',
'input_type': 'TEXT',
'json_interface':{
"filetype": "TEXT",
"jobRendererWidth": 0.5,
"jobs": {
taskname : {
"mlTask": "TRANSCRIPTION",
"content": {
"input": None
},
"required": 1,
"isChild": False,
"instruction": "Write here the new review modified to be POSITIVE. Please refer to the instructions above before starting"
}
}
}
}
project_imdb_positive_to_negative = {
'title': 'Counterfactual data-augmentation - Positive to Negative',
'description': 'IMDB Sentiment Analysis',
'instructions': 'https://docs.google.com/document/d/1zhNaQrncBKc3aPKcnNa_mNpXlria28Ij7bfgUvJbyfw/edit?usp=sharing',
'input_type': 'TEXT',
'json_interface':{
"jobRendererWidth": 0.5,
"jobs": {
taskname : {
"mlTask": "TRANSCRIPTION",
"content": {
"input": None
},
"required": 1,
"isChild": False,
"instruction": "Write here the new review modified to be NEGATIVE. Please refer to the instructions above before starting"
}
}
}
}
for project_imdb in [project_imdb_positive_to_negative,project_imdb_negative_to_positive] :
project_imdb['id'] = kili.create_project(title=project_imdb['title'],
instructions=project_imdb['instructions'],
description=project_imdb['description'],
input_type=project_imdb['input_type'],
json_interface=project_imdb['json_interface'])['id']
```
We'll define a few helper functions for readability:
```
def create_assets(dataframe, intro, objective, instructions, truth_label, target_label) :
return((intro + dataframe[truth_label] + objective + dataframe[target_label] + instructions + dataframe['Text']).tolist())
def create_json_responses(taskname,df,field="Text") :
return( [{taskname: { "text": df[field].iloc[k] }
} for k in range(df.shape[0]) ])
```
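To sanity-check these helpers, here is a quick run on a hypothetical one-row DataFrame (the `intro`/`objective`/`instructions` strings below are invented for illustration; the helper definitions are repeated so the snippet runs standalone):

```python
import pandas as pd

# Helper definitions repeated from above so this snippet is self-contained
def create_assets(dataframe, intro, objective, instructions, truth_label, target_label):
    return (intro + dataframe[truth_label] + objective + dataframe[target_label]
            + instructions + dataframe['Text']).tolist()

def create_json_responses(taskname, df, field="Text"):
    return [{taskname: {"text": df[field].iloc[k]}} for k in range(df.shape[0])]

toy = pd.DataFrame({'Text': ['Great movie!'],
                    'truth': ['Positive'], 'target': ['Negative']})
assets = create_assets(toy, 'This review is ', ', rewrite it as ', ':\n', 'truth', 'target')
responses = create_json_responses('NEW_REVIEW', toy)
print(assets[0])     # 'This review is Positive, rewrite it as Negative:\nGreat movie!'
print(responses[0])  # {'NEW_REVIEW': {'text': 'Great movie!'}}
```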
### Importing the data into Kili
```
import pandas as pd
datasets = ['dev','train','test']
for dataset in datasets :
url = f'https://raw.githubusercontent.com/acmi-lab/counterfactually-augmented-data/master/sentiment/combined/paired/{dataset}_paired.tsv'
df = pd.read_csv(url, error_bad_lines=False, sep='\t')
df = df[df.index%2 == 0] # keep only the original reviews as assets
for review_type,project_imdb in zip(['Positive','Negative'],[project_imdb_positive_to_negative,project_imdb_negative_to_positive]) :
dataframe = df[df['Sentiment']==review_type]
reviews_to_import = dataframe['Text'].tolist()
external_id_array = ('IMDB ' + review_type +' review ' + dataset + dataframe['batch_id'].astype('str')).tolist()
kili.append_many_to_dataset(
project_id=project_imdb['id'],
content_array=reviews_to_import,
external_id_array=external_id_array)
```
### Importing the labels into Kili
We will fill in the project with the results of the study, as if they were predictions. In a real annotation project, we could also pre-fill the field with the original sentence so the labeler only has to write the changes.
```
model_name = 'results-arxiv:1909.12434'
for dataset in datasets :
url = f'https://raw.githubusercontent.com/acmi-lab/counterfactually-augmented-data/master/sentiment/combined/paired/{dataset}_paired.tsv'
df = pd.read_csv(url, error_bad_lines=False, sep='\t')
df = df[df.index%2 == 1] # keep only the modified reviews as predictions
for review_type,project_imdb in zip(['Positive','Negative'],[project_imdb_positive_to_negative,project_imdb_negative_to_positive]) :
dataframe = df[df['Sentiment']!=review_type]
external_id_array = ('IMDB ' + review_type +' review ' + dataset + dataframe['batch_id'].astype('str')).tolist()
json_response_array = create_json_responses(taskname,dataframe)
kili.create_predictions(project_id=project_imdb['id'],
external_id_array=external_id_array,
model_name_array=[model_name]*len(external_id_array),
json_response_array=json_response_array)
```
This is how our interface looks in the end, letting labelers perform the task at hand quickly:

## Data Augmentation on SNLI dataset
The data form a 3-class dataset: given two sentences, a premise and a hypothesis, the machine-learning task is to find the correct relation between them, which can be either entailment, contradiction, or neutral.
Here is an example of a premise, and three sentences that could serve as the hypothesis for each of the three categories:

This is how our task would look on Kili, this time kept as a single project. To make that work, we prominently restate the instructions for each labeler.
### Creating the project
```
taskname = "SENTENCE_MODIFIED"
project_snli={
'title': 'Counterfactual data-augmentation NLI',
'description': 'Stanford Natural language Inference',
'instructions': '',
'input_type': 'TEXT',
'json_interface':{
"jobRendererWidth": 0.5,
"jobs": {
taskname: {
"mlTask": "TRANSCRIPTION",
"content": {
"input": None
},
"required": 1,
"isChild": False,
"instruction": "Write here the modified sentence. Please refer to the instructions above before starting"
}
}
}
}
project_snli['id'] = kili.create_project(title=project_snli['title'],
instructions=project_snli['instructions'],
description=project_snli['description'],
input_type=project_snli['input_type'],
json_interface=project_snli['json_interface'])['id']
print(f'Created project {project_snli["id"]}')
```
Again, we'll factor our code a little, to merge the datasets and properly distinguish all the sentence cases:
```
def merge_datasets(dataset, sentence_modified) :
url_original = f'https://raw.githubusercontent.com/acmi-lab/counterfactually-augmented-data/master/NLI/original/{dataset}.tsv'
url_revised = f'https://raw.githubusercontent.com/acmi-lab/counterfactually-augmented-data/master/NLI/revised_{sentence_modified}/{dataset}.tsv'
df_original = pd.read_csv(url_original, error_bad_lines=False, sep='\t')
df_original = df_original[df_original.duplicated(keep='first')== False]
df_original['id'] = df_original.index.astype(str)
df_revised = pd.read_csv(url_revised, error_bad_lines=False, sep='\t')
axis_merge = 'sentence2' if sentence_modified=='premise' else 'sentence1'
# keep only one label per set of sentences
df_revised = df_revised[df_revised[[axis_merge,'gold_label']].duplicated(keep='first')== False]
df_merged = df_original.merge(df_revised, how='inner', left_on=axis_merge, right_on=axis_merge)
if sentence_modified == 'premise' :
df_merged['Text'] = df_merged['sentence1_x'] + '\nSENTENCE 2 :\n' + df_merged['sentence2']
instructions = " relation, by making a small number of changes in the FIRST SENTENCE\
such that the document remains coherent and the new label accurately describes the revised passage :\n\n\n\
SENTENCE 1 :\n"
else :
df_merged['Text'] = df_merged['sentence1'] + '\nSENTENCE 2 :\n' + df_merged['sentence2_x']
instructions = " relation, by making a small number of changes in the SECOND SENTENCE\
such that the document remains coherent and the new label accurately describes the revised passage :\n\n\n\
SENTENCE 1 : \n"
return(df_merged, instructions)
def create_external_ids(dataset,dataframe, sentence_modified):
return(('NLI ' + dataset + ' ' + dataframe['gold_label_x'] + ' to ' + dataframe['gold_label_y'] + ' ' + sentence_modified + ' modified ' + dataframe['id']).tolist())
```
### Importing the data into Kili
We'll prepend to each pair of sentences a short reminder of the task for the labeler:
```
datasets = ['dev','train','test']
sentences_modified = ['premise', 'hypothesis']
intro = "Those two sentences' relation is classified as "
objective = " to convert to a "
for dataset in datasets :
for sentence_modified in sentences_modified :
df,instructions = merge_datasets(dataset, sentence_modified)
sentences_to_import = create_assets(df, intro, objective, instructions, 'gold_label_x', 'gold_label_y')
external_id_array = create_external_ids(dataset, df, sentence_modified)
kili.append_many_to_dataset(project_id=project_snli['id'],
content_array=sentences_to_import,
external_id_array=external_id_array)
```
### Importing the labels into Kili
We will fill in the project with the results of the study, as if they were predictions.
```
model_name = 'results-arxiv:1909.12434'
for dataset in datasets :
for sentence_modified in sentences_modified :
axis_changed = 'sentence1_y' if sentence_modified=='premise' else 'sentence2_y'
df,instructions = merge_datasets(dataset, sentence_modified)
external_id_array = create_external_ids(dataset, df, sentence_modified)
json_response_array = create_json_responses(taskname,df,axis_changed)
kili.create_predictions(project_id=project_snli['id'],
external_id_array=external_id_array,
model_name_array=[model_name]*len(external_id_array),
json_response_array=json_response_array)
```


## Conclusion
In this tutorial, we saw how Kili can help with a data-augmentation task: it lets you set up a simple, easy-to-use interface with proper instructions for the task.
For the study, labeling quality was a key requirement in this complicated task, and Kili supports it very simply. To monitor the quality of the results, we could set up consensus on part or all of the annotations, or even keep part of the dataset as ground truth to measure each labeler's performance.
For an overview of Kili, visit [kili-technology.com](https://kili-technology.com). You can also check out [Kili documentation](https://cloud.kili-technology.com/docs).
## by Jan Willem de Gee (jwdegee@gmail.com)
```
import sys, os
import numpy as np
import pandas as pd
import matplotlib as mpl
mpl.rcParams['pdf.fonttype'] = 42
import matplotlib.pyplot as plt
import seaborn as sns
import hddm
from joblib import Parallel, delayed
from IPython import embed as shell
```
Let's start by defining some helper functionality:
```
def get_choice(row):
if row.condition == 'present':
if row.response == 1:
return 1
else:
return 0
elif row.condition == 'absent':
if row.response == 0:
return 1
else:
return 0
def simulate_data(a, v, t, z, dc, sv=0, sz=0, st=0, condition=0, nr_trials1=1000, nr_trials2=1000):
"""
Simulates stim-coded data.
"""
parameters1 = {'a':a, 'v':v+dc, 't':t, 'z':z, 'sv':sv, 'sz': sz, 'st': st}
parameters2 = {'a':a, 'v':v-dc, 't':t, 'z':1-z, 'sv':sv, 'sz': sz, 'st': st}
df_sim1, params_sim1 = hddm.generate.gen_rand_data(params=parameters1, size=nr_trials1, subjs=1, subj_noise=0)
df_sim1['condition'] = 'present'
df_sim2, params_sim2 = hddm.generate.gen_rand_data(params=parameters2, size=nr_trials2, subjs=1, subj_noise=0)
df_sim2['condition'] = 'absent'
df_sim = pd.concat((df_sim1, df_sim2))
df_sim['bias_response'] = df_sim.apply(get_choice, 1)
df_sim['correct'] = df_sim['response'].astype(int)
df_sim['response'] = df_sim['bias_response'].astype(int)
df_sim['stimulus'] = np.array((np.array(df_sim['response']==1) & np.array(df_sim['correct']==1)) + (np.array(df_sim['response']==0) & np.array(df_sim['correct']==0)), dtype=int)
df_sim['condition'] = condition
df_sim = df_sim.drop(columns=['bias_response'])
return df_sim
def conditional_response_plot(df, quantiles, xlim=None):
fig = plt.figure(figsize=(2,2))
ax = fig.add_subplot(1,1,1)
df.loc[:,'rt_bin'] = pd.qcut(df['rt'], quantiles, labels=False)
d = df.groupby(['subj_idx', 'rt_bin']).mean().reset_index()
for s, c in zip(np.unique(d["subj_idx"]), ['lightgrey', 'grey', 'black']):
ax.errorbar(d.loc[d["subj_idx"]==s, "rt"], d.loc[d["subj_idx"]==s, "response"], fmt='-o', color=c, markersize=5)
plt.axhline(0.5)
if xlim:
ax.set_xlim(xlim)
ax.set_ylim(0.2,1)
ax.set_title('P(correct) = {}\nP(bias) = {}'.format(
round(df['correct'].mean(), 2),
round(df['response'].mean(), 2),
))
ax.set_xlabel('RT (s)')
ax.set_ylabel('P(bias)')
sns.despine(offset=10, trim=True)
plt.tight_layout()
return fig
```
Let's simulate our own data, so we know what the fitting procedure should converge on:
```
# settings
trials_per_level = 50000
z = 1.8
absolute_z = True
# parameters:
if absolute_z:
params0 = {'cond':0, 'v':1, 'a':2.0, 't':0.1, 'z':z/2.0, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
params1 = {'cond':1, 'v':1, 'a':2.2, 't':0.1, 'z':z/2.2, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
params2 = {'cond':2, 'v':1, 'a':2.4, 't':0.1, 'z':z/2.4, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
params3 = {'cond':3, 'v':1, 'a':2.6, 't':0.1, 'z':z/2.6, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
params4 = {'cond':4, 'v':1, 'a':2.8, 't':0.1, 'z':z/2.8, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
else:
params0 = {'cond':0, 'v':1, 'a':1.0, 't':0.1, 'z':z, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
params1 = {'cond':1, 'v':1, 'a':1.5, 't':0.1, 'z':z, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
params2 = {'cond':2, 'v':1, 'a':2.0, 't':0.1, 'z':z, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
params3 = {'cond':3, 'v':1, 'a':2.5, 't':0.1, 'z':z, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
params4 = {'cond':4, 'v':1, 'a':3.0, 't':0.1, 'z':z, 'dc':0, 'sz':0, 'st':0, 'sv':0.5}
# simulate:
dfs = []
for i, params in enumerate([params0, params1, params2, params3, params4]):
df = simulate_data(z=params['z'], a=params['a'], v=params['v'], dc=params['dc'],
t=params['t'], sv=params['sv'], st=params['st'], sz=params['sz'],
condition=params['cond'], nr_trials1=trials_per_level, nr_trials2=trials_per_level)
df['subj_idx'] = 0
fig = conditional_response_plot(df, quantiles=[0, 0.1, 0.3, 0.5, 0.7, 0.9,], xlim=(0,3))
fig.savefig('crf{}.pdf'.format(i))
dfs.append(df)
# combine in one dataframe:
df_emp = pd.concat(dfs)
```
Fit using the G-square (chi-square) method.
```
# fit chi-square:
quantiles = [.1, .3, .5, .7, .9]
m = hddm.HDDMStimCoding(df_emp, stim_col='stimulus', split_param='v', drift_criterion=True, bias=True,
include=('sv'), depends_on={'a':'condition', 'z':'condition', 'dc':'condition', }, p_outlier=0,)
m.optimize('gsquare', quantiles=quantiles, n_runs=5)
params_fitted = pd.concat((pd.DataFrame([m.values], index=[0]), pd.DataFrame([m.bic_info], index=[0])), axis=1)
params_fitted.drop(['bic', 'likelihood', 'penalty', 'z_trans(0)', 'z_trans(1)', 'z_trans(2)', 'z_trans(3)', 'z_trans(4)'], axis=1, inplace=True)
print(params_fitted.head())
# plot true vs recovered parameters:
x = np.arange(18)
y0 = np.array([params0['a'], params1['a'], params2['a'], params3['a'], params4['a'], params0['v'], params0['t'], params0['sv'], params0['z'], params1['z'], params2['z'], params3['z'], params4['z'], params0['dc'], params1['dc'], params2['dc'], params3['dc'], params4['dc']])
print(len(y0))
# y1 = np.array([params1['a'], params1['v'], params1['t'], params1['z'], params1['dc']])
fig = plt.figure(figsize=(6,2))
ax = fig.add_subplot(111)
ax.scatter(x, y0, marker="o", s=100, color='orange', label='True value')
# ax.scatter(x+1, y1, marker="o", s=100, color='orange',)
sns.stripplot(data=params_fitted, jitter=False, size=2, edgecolor='black', linewidth=0.25, alpha=1, palette=['black', 'black'], ax=ax)
plt.ylabel('Param value')
plt.legend()
sns.despine(offset=5, trim=True,)
plt.tight_layout()
fig.savefig('param_recovery.pdf')
fig = plt.figure(figsize=(2,2))
absolute_z = True
if absolute_z:
plt.scatter(df_emp.groupby('condition').mean()['rt'], [params0['z']*params0['a'], params1['z']*params1['a'], params2['z']*params2['a'], params3['z']*params3['a'], params4['z']*params4['a']])
plt.scatter(df_emp.groupby('condition').mean()['rt'], [params0['dc'], params1['dc'], params2['dc'], params3['dc'], params4['dc']])
else:
plt.scatter(df_emp.groupby('condition').mean()['rt'], [params0['z'], params1['z'], params2['z'], params3['z'], params4['z']])
plt.scatter(df_emp.groupby('condition').mean()['rt'], [params0['dc'], params1['dc'], params2['dc'], params3['dc'], params4['dc']])
plt.ylabel('Param value')
plt.xlabel('RT (s)')
plt.xlim(0.25,1.5)
plt.ylim(-0.1,2.0)
sns.despine(offset=5, trim=True,)
plt.tight_layout()
fig.savefig('param_recovery2.pdf')
```
| github_jupyter |
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
#defining the function
def function_for_roots(x): #defining the function where we are finding the roots
a = 1.01 #using variables for the constants
b = -3.04
c = 2.07
return a*x**2 + b*x + c # finding the roots of ax^2 + bx + c
#checking if our initial values are valid
def check_initial_values(f, x_min, x_max, tol):
#check the initial guesses
y_min = f(x_min)
y_max = f(x_max)
#check that the x_min and x_max contain a zero crossing
if(y_min*y_max>= 0.0):
print("No zero crossing found in the range = ",x_min,x_max)
s = "f(%f) = %f, f(%f) = %f" % (x_min,y_min,x_max,y_max)
print (s)
return 0
#if x_min is a root, then return flag ==1
if(np.fabs(y_min)<tol):
return 1
#if x_max is a root, then return flag ==2
if(np.fabs(y_max)<tol):
return 2
#if we reach this point, the bracket is valid and we will return 3
return 3
#now we will define the main work to perform the iterative search
def bisection_root_finding(f, x_min_start, x_max_start, tol):
#this function will use bisection search to find the root
x_min = x_min_start #starting bracket minimum
x_max = x_max_start #starting bracket maximum
x_mid = 0.0 #mid point
y_min = f(x_min) #function value at x_min
y_max = f(x_max) #function value at x_max
y_mid = 0.0 #function value at midpoint
imax = 10000 #set a max number of iterations
i = 0 #iteration counter
#check the initial values
flag = check_initial_values(f,x_min,x_max,tol)
if(flag==0):
print("Error in bisection_root_finding().")
raise ValueError ('Initial values invalid', x_min,x_max)
elif(flag==1):
#lucky guess
return x_min
elif(flag==2):
#another lucky guess
return x_max
#if we reach here, then we need to conduct the search
#set a flag
flag=1
#enter a while loop
while(flag):
x_mid = 0.5*(x_min+x_max) #mid point
y_mid = f(x_mid) #function value at x_mid
#check if x_mid is a root
if(np.fabs(y_mid)<tol):
flag = 0
else:
#x_mid is not a root
#if the product of the function at the midpoint
#and at one of the end points is greater than zero,
#replace this end point
if(f(x_min)*f(x_mid)>0):
#replace x_min with x_mid
x_min = x_mid
else:
#replace x_max with x_mid
x_max = x_mid
#print the iteration
print(x_min,f(x_min),x_max,f(x_max))
#count the iteration
i+= 1
#if we have exceeded the max number of iterations, exit
if(i>imax):
print("Exceeded max number of iterations = ",i)
s = "Min bracket f(%f) = %f" % (x_min,f(x_min))
print(s)
s = "Max bracket f(%f) = %f" % (x_max,f(x_max))
print(s)
s = "Mid bracket f(%f) = %f" % (x_mid,f(x_mid))
print(s)
raise StopIteration( 'Stopping iterations after', i)
#fin
return x_mid
x_min = 0.0
x_max = 1.5
tolerance = 1.0e-6
#print the initial guess
print(x_min,function_for_roots(x_min))
print(x_max,function_for_roots(x_max))
x_root = bisection_root_finding(function_for_roots,x_min,x_max,tolerance)
y_root = function_for_roots(x_root)
s = "Root found with y(%f) = %f" % (x_root,y_root)
print(s)
#plotting the function
x = np.arange(0,2.5,0.00001)
y = 1.01*(x**2) - 3.04*x + 2.07
z = 0*x
plt.plot(x_root,y_root,'ro')
plt.plot(x,y,'tab:brown')
plt.plot(x,z,'k')
plt.plot(z,x,'k')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
#found that there are 19 iterations
```
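As a cross-check of the bisection result, we can compare against NumPy's polynomial root finder, using the same coefficients as `function_for_roots` above:

```python
import numpy as np

# Coefficients a, b, c of the quadratic defined in function_for_roots above
coeffs = [1.01, -3.04, 2.07]
roots = np.sort(np.roots(coeffs))  # analytic roots, roughly [1.0409, 1.9689]
print(roots)

# The bracket [0, 1.5] used above contains only the smaller root
assert 0.0 < roots[0] < 1.5 < roots[1]
```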
# Tutorial: FICO Explainable Machine Learning Challenge
In this tutorial, we use the dataset from the FICO Explainable Machine Learning Challenge: https://community.fico.com/s/explainable-machine-learning-challenge. The goal is to create a pipeline combining a binning process and logistic regression to obtain an explainable model, and to compare it against a black-box model using a Gradient Boosting Tree (GBT) estimator.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from optbinning import BinningProcess
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.metrics import auc, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
```
Download the dataset from the link above and load it.
```
df = pd.read_csv("data/FICO_challenge/heloc_dataset_v1.csv", sep=",")
variable_names = list(df.columns[1:])
X = df[variable_names].values
```
Transform the dichotomous categorical target variable into a numerical one.
```
y = df.RiskPerformance.values
mask = y == "Bad"
y[mask] = 1
y[~mask] = 0
y = y.astype(int)
```
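The mask-based encoding above can also be written as a one-liner; here is a quick check on a toy array (not the real dataset):

```python
import numpy as np

# Toy stand-in for df.RiskPerformance.values; "Bad" -> 1, everything else -> 0
y = np.array(["Bad", "Good", "Bad"])
y_num = (y == "Bad").astype(int)
print(y_num.tolist())  # [1, 0, 1]
```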
#### Modeling
The data dictionary of this challenge includes three special values/codes:
* -9 No Bureau Record or No Investigation
* -8 No Usable/Valid Trades or Inquiries
* -7 Condition not Met (e.g. No Inquiries, No Delinquencies)
```
special_codes = [-9, -8, -7]
```
This challenge imposes monotonicity constraints with respect to the probability of a bad target for many of the variables. We apply these rules by passing the following dictionary of parameters for these variables involved.
```
binning_fit_params = {
"ExternalRiskEstimate": {"monotonic_trend": "descending"},
"MSinceOldestTradeOpen": {"monotonic_trend": "descending"},
"MSinceMostRecentTradeOpen": {"monotonic_trend": "descending"},
"AverageMInFile": {"monotonic_trend": "descending"},
"NumSatisfactoryTrades": {"monotonic_trend": "descending"},
"NumTrades60Ever2DerogPubRec": {"monotonic_trend": "ascending"},
"NumTrades90Ever2DerogPubRec": {"monotonic_trend": "ascending"},
"PercentTradesNeverDelq": {"monotonic_trend": "descending"},
"MSinceMostRecentDelq": {"monotonic_trend": "descending"},
"NumTradesOpeninLast12M": {"monotonic_trend": "ascending"},
"MSinceMostRecentInqexcl7days": {"monotonic_trend": "descending"},
"NumInqLast6M": {"monotonic_trend": "ascending"},
"NumInqLast6Mexcl7days": {"monotonic_trend": "ascending"},
"NetFractionRevolvingBurden": {"monotonic_trend": "ascending"},
"NetFractionInstallBurden": {"monotonic_trend": "ascending"},
"NumBank2NatlTradesWHighUtilization": {"monotonic_trend": "ascending"}
}
```
Instantiate a ``BinningProcess`` object with the variable names, special codes, and dictionary of binning parameters. Then create an explainable model pipeline, a plain logistic regression baseline, and a black-box GBT classifier.
```
binning_process = BinningProcess(variable_names, special_codes=special_codes,
binning_fit_params=binning_fit_params)
clf1 = Pipeline(steps=[('binning_process', binning_process),
('classifier', LogisticRegression(solver="lbfgs"))])
clf2 = LogisticRegression(solver="lbfgs")
clf3 = GradientBoostingClassifier()
```
Split dataset into train and test. Fit pipelines with training data, then generate classification reports to show the main classification metrics.
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf1.fit(X_train, y_train)
clf2.fit(X_train, y_train)
clf3.fit(X_train, y_train)
y_pred = clf1.predict(X_test)
print(classification_report(y_test, y_pred))
y_pred = clf2.predict(X_test)
print(classification_report(y_test, y_pred))
y_pred = clf3.predict(X_test)
print(classification_report(y_test, y_pred))
```
Plot the Receiver Operating Characteristic (ROC) metric to evaluate and compare the classifiers' prediction.
```
probs = clf1.predict_proba(X_test)
preds = probs[:,1]
fpr1, tpr1, threshold = roc_curve(y_test, preds)
roc_auc1 = auc(fpr1, tpr1)
probs = clf2.predict_proba(X_test)
preds = probs[:,1]
fpr2, tpr2, threshold = roc_curve(y_test, preds)
roc_auc2 = auc(fpr2, tpr2)
probs = clf3.predict_proba(X_test)
preds = probs[:,1]
fpr3, tpr3, threshold = roc_curve(y_test, preds)
roc_auc3 = auc(fpr3, tpr3)
plt.title('Receiver Operating Characteristic')
plt.plot(fpr1, tpr1, 'b', label='Binning+LR: AUC = {0:.2f}'.format(roc_auc1))
plt.plot(fpr2, tpr2, 'g', label='LR: AUC = {0:.2f}'.format(roc_auc2))
plt.plot(fpr3, tpr3, 'r', label='GBT: AUC = {0:.2f}'.format(roc_auc3))
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1],'k--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```
The plot above shows the increment in terms of model performance after binning when the logistic estimator is chosen. Furthermore, a previous binning process might reduce numerical instability issues, as confirmed when fitting the classifier ``clf2``.
#### Binning process statistics
The binning process of the pipeline can be retrieved to show information about the problem and timing statistics.
```
binning_process.information(print_level=2)
```
The ``summary`` method returns basic statistics for each binned variable.
```
binning_process.summary()
```
The ``get_binned_variable`` method serves to retrieve an optimal binning object, which can be analyzed in detail afterward.
```
optb = binning_process.get_binned_variable("NumBank2NatlTradesWHighUtilization")
optb.binning_table.build()
optb.binning_table.plot(metric="event_rate")
optb.binning_table.analysis()
```
# Sales Analysis
Source: [https://github.com/d-insight/code-bank.git](https://github.com/d-insight/code-bank.git)
License: [MIT License](https://opensource.org/licenses/MIT). See open source [license](LICENSE) in the Code Bank repository.
-------------
#### Import libraries
```
import os
import pandas as pd
```
#### Merge data from each month into one CSV
```
path = "data"
files = [file for file in os.listdir(path) if not file.startswith('.')] # Ignore hidden files
all_months_data = pd.DataFrame()
for file in files:
current_data = pd.read_csv(path+"/"+file)
all_months_data = pd.concat([all_months_data, current_data])
all_months_data.to_csv(path + "/" + "all_data.csv", index=False)
```
#### Read in updated dataframe
```
all_data = pd.read_csv(path+"/"+"all_data.csv")
all_data.head()
all_data.shape
all_data.dtypes
# If you need to Fix Types (e.g. dates/times)
#time_cols=['Order Date']
#all_data = pd.read_csv(path+"/"+"all_data.csv", parse_dates=time_cols)
```
### Clean up the data!
The first step is figuring out what needs cleaning. In practice, you find things to clean as you perform operations and hit errors; based on each error, you decide how to go about cleaning the data.
#### Drop missing values
```
# Find NAN
nan_df = all_months_data[all_months_data.isna().any(axis=1)] # Filter and get all rows with NaN
display(nan_df.head())
all_data = all_months_data.dropna(how='all')
all_data.head()
```
#### Get rid of unwanted text in order date column
```
all_data = all_data[all_data['Order Date'].str[0:2]!='Or']
```
#### Make columns correct type
```
all_data['Quantity Ordered'] = pd.to_numeric(all_data['Quantity Ordered'])
all_data['Price Each'] = pd.to_numeric(all_data['Price Each'])
```
### Augment data with additional columns
#### Add month column
```
all_data['Month'] = all_data['Order Date'].str[0:2] # Grab the first two chars of a string
all_data['Month'] = all_data['Month'].astype('int32') # Month should not be string
all_data.head()
```
#### Add month column (alternative method)
```
all_data['Month 2'] = pd.to_datetime(all_data['Order Date']).dt.month
all_data.head()
```
#### Add city column
```
def get_city(address):
return address.split(",")[1].strip(" ")
def get_state(address):
return address.split(",")[2].split(" ")[1]
all_data['City'] = all_data['Purchase Address'].apply(lambda x: f"{get_city(x)} ({get_state(x)})")
all_data.head()
```
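A vectorized alternative to the row-wise `apply()` above, sketched on a hypothetical two-row frame (the addresses are invented but follow the same "street, city, ST zip" shape as the dataset), uses pandas' `str.split` with `expand=True`:

```python
import pandas as pd

# Hypothetical addresses for illustration only
df = pd.DataFrame({'Purchase Address': ['917 1st St, Dallas, TX 75001',
                                        '682 Chestnut St, Boston, MA 02215']})
parts = df['Purchase Address'].str.split(', ', expand=True)  # street / city / "ST zip"
df['City'] = parts[1] + ' (' + parts[2].str.split(' ').str[0] + ')'
print(df['City'].tolist())  # ['Dallas (TX)', 'Boston (MA)']
```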
## Data Exploration!
#### Question 1: What was the best month for sales? How much was earned that month?
```
all_data['Sales'] = all_data['Quantity Ordered'].astype('int') * all_data['Price Each'].astype('float')
all_data.groupby(['Month']).sum()
import matplotlib.pyplot as plt
months = range(1,13)
print(months)
plt.bar(months,all_data.groupby(['Month']).sum()['Sales'])
plt.xticks(months)
plt.ylabel('Sales in USD ($)')
plt.xlabel('Month')
plt.show()
```
#### Question 2: What city sold the most product?
```
all_data.groupby(['City']).sum()
all_data.groupby(['City']).sum()['Sales'].idxmax()
import matplotlib.pyplot as plt
keys = [city for city, df in all_data.groupby(['City'])]
plt.bar(keys,all_data.groupby(['City']).sum()['Sales'])
plt.ylabel('Sales in USD ($)')
plt.xlabel('City')
plt.xticks(keys, rotation='vertical', size=8)
plt.show()
```
#### Question 3: What time should we display advertisements to maximize likelihood of customer's buying product?
```
# Add hour column
all_data['Hour'] = pd.to_datetime(all_data['Order Date']).dt.hour
all_data['Minute'] = pd.to_datetime(all_data['Order Date']).dt.minute
all_data['Count'] = 1
all_data.head()
keys = [pair for pair, df in all_data.groupby(['Hour'])]
plt.plot(keys, all_data.groupby(['Hour']).count()['Count'])
plt.xticks(keys)
plt.grid()
plt.show()
# My recommendation is slightly before 11am or 7pm
```
#### Question 4: What products are most often sold together?
```
# Keep rows with duplicated Order ID (i.e. items bought together)
df = all_data[all_data['Order ID'].duplicated(keep=False)]
df.head()
# Create new column "Grouped" with items sold together joined and comma separated
df['Grouped'] = df.groupby('Order ID')['Product'].transform(lambda x: ','.join(x))
df2 = df[['Order ID', 'Grouped']].drop_duplicates()
# Referenced: https://stackoverflow.com/questions/52195887/counting-unique-pairs-of-numbers-into-a-python-dictionary
from itertools import combinations
from collections import Counter
count = Counter()
for row in df2['Grouped']:
row_list = row.split(',')
count.update(Counter(combinations(row_list, 2))) # Can edit 2 (couples) to 3, 4, ..etc.
for key,value in count.most_common(10):
print(key, value)
```
These frequently paired products could help us get smarter with promotions!
#### Question 5: What product sold the most? Why?
```
product_group = all_data.groupby('Product')
quantity_ordered = product_group.sum()['Quantity Ordered']
print(quantity_ordered)
keys = [pair for pair, df in product_group]
plt.bar(keys, quantity_ordered)
plt.xticks(keys, rotation='vertical', size=8)
plt.show()
prices = all_data.groupby('Product').mean()['Price Each']
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.bar(keys, quantity_ordered, color='g')
ax2.plot(keys, prices, color='b')
ax1.set_xlabel('Product Name')
ax1.set_ylabel('Quantity Ordered', color='g')
ax2.set_ylabel('Price ($)', color='b')
ax1.set_xticklabels(keys, rotation='vertical', size=8)
plt.show()
```
### **Import Google Drive**
```
from google.colab import drive
drive.mount('/content/drive')
```
### **Import Library**
```
import glob
import numpy as np
import os
import shutil
np.random.seed(42)
from sklearn.preprocessing import LabelEncoder
import cv2
import tensorflow as tf
import keras
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
```
### **Load Data**
```
os.chdir('/content/drive/My Drive/Colab Notebooks/DATA RD/')
Train = glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Train/*')
Val=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Validation/*')
Test=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Test/*')
import matplotlib.image as mpimg
for ima in Train[600:601]:
img=mpimg.imread(ima)
imgplot = plt.imshow(img)
plt.show()
```
### **Data Preparation**
```
nrows = 224
ncolumns = 224
channels = 3
def read_and_process_image(list_of_images):
X = [] # images
y = [] # labels
for image in list_of_images:
X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows,ncolumns), interpolation=cv2.INTER_CUBIC)) #Read the image
#get the labels
if 'Normal' in image:
y.append(0)
elif 'Mild' in image:
y.append(1)
elif 'Moderate' in image:
y.append(2)
elif 'Severe' in image:
y.append(3)
return X, y
X_train, y_train = read_and_process_image(Train)
X_val, y_val = read_and_process_image(Val)
X_test, y_test = read_and_process_image(Test)
import seaborn as sns
import gc
gc.collect()
#Convert list to numpy array
X_train = np.array(X_train)
y_train= np.array(y_train)
X_val = np.array(X_val)
y_val= np.array(y_val)
X_test = np.array(X_test)
y_test= np.array(y_test)
print('Train:',X_train.shape,y_train.shape)
print('Val:',X_val.shape,y_val.shape)
print('Test',X_test.shape,y_test.shape)
sns.countplot(y_train)
plt.title('Total Training Data')
plt.show()
sns.countplot(y_val)
plt.title('Total Validation Data')
plt.show()
sns.countplot(y_test)
plt.title('Total Test Data')
plt.show()
y_train_ohe = pd.get_dummies(y_train)
y_val_ohe=pd.get_dummies(y_val)
y_test_ohe=pd.get_dummies(y_test)
y_train_ohe.shape,y_val_ohe.shape,y_test_ohe.shape
```
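For reference, `pd.get_dummies` one-hot encodes the integer class labels into the `(n_samples, 4)` target matrices used by the network. A minimal NumPy sketch of that encoding (assuming labels 0–3 as produced by `read_and_process_image`):

```python
import numpy as np

def one_hot(labels, n_classes):
    """Return a (len(labels), n_classes) matrix with a 1 in each row's label column."""
    labels = np.asarray(labels)
    encoded = np.zeros((labels.size, n_classes), dtype=np.float32)
    encoded[np.arange(labels.size), labels] = 1.0
    return encoded

print(one_hot([0, 2, 1, 3, 2], 4).shape)  # (5, 4)
```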
### **Model Parameters**
```
batch_size = 16
EPOCHS = 100
WARMUP_EPOCHS = 2
LEARNING_RATE = 0.001
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 224
WIDTH = 224
CANAL = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
```
### **Data Generator**
```
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_datagen.flow(X_train, y_train_ohe, batch_size=batch_size)
val_generator = test_datagen.flow(X_val, y_val_ohe, batch_size=batch_size)
test_generator = test_datagen.flow(X_test, y_test_ohe, batch_size=batch_size)
```
### **Define Model**
```
IMG_SHAPE = (224, 224, 3)
base_model = tf.keras.applications.DenseNet201(weights='imagenet',
include_top=False,
input_shape=IMG_SHAPE)
x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x = tf.keras.layers.Dropout(0.5)(x)
x = tf.keras.layers.Dense(128, activation='relu')(x)
x = tf.keras.layers.Dropout(0.5)(x)
final_output = tf.keras.layers.Dense(N_CLASSES, activation='softmax', name='final_output')(x)
model = tf.keras.models.Model(inputs=base_model.inputs, outputs=final_output)
```
### **Train Top Layers**
```
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer = tf.keras.optimizers.Adam(learning_rate=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
import time
start = time.time()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = val_generator.n//val_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
end = time.time()
print('Training time:', end - start)
```
### **Train Fine Tuning**
```
for layer in model.layers:
layer.trainable = True
es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es, rlrop]  # include rlrop so the learning-rate schedule takes effect
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=1).history
```
### **Model Graph**
```
history = {'loss': history_warmup['loss'] + history_finetunning['loss'],
'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'],
'acc': history_warmup['accuracy'] + history_finetunning['accuracy'],
'val_acc': history_warmup['val_accuracy'] + history_finetunning['val_accuracy']}
sns.set_style("whitegrid")
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
```
### **Evaluate Model**
```
loss_Val, acc_Val = model.evaluate(X_val, y_val_ohe,batch_size=1, verbose=1)
print("Validation: accuracy = %f ; loss_v = %f" % (acc_Val, loss_Val))
lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(val_generator)
scores = model.predict(im, batch_size=val_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
complete_labels = [np.argmax(label) for label in lastFullComLabels]
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe']
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation')
plt.show()
```
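The heatmaps above show row-normalised confusion matrices: each row is divided by its true-class count, so the diagonal reads as per-class recall. A small self-contained check of that normalisation with made-up counts (the numbers are illustrative, not results from this model):

```python
import numpy as np

# Hypothetical 4-class confusion matrix: rows = true class, columns = predicted class
cm = np.array([[50,  5,  3,  2],
               [ 4, 40,  5,  1],
               [ 2,  6, 30,  2],
               [ 1,  2,  4, 23]])

# Same normalisation as above: divide each row by its total
cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print(cm_norm.diagonal())  # per-class recall
```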
# Generative Adversarial Networks
This code is based on the paper [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661) by Ian J. Goodfellow, Jean Pouget-Abadie, et al.

```
from google.colab import drive
drive.mount('/content/drive')
# import All prerequisites
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
from torchvision.utils import save_image
import numpy as np
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
ROOT = "/content/drive/My Drive/Colab Notebooks/DSC_UI_GAN/Batch1/W1/"
```
## Dataset
```
batch_size = 100
# MNIST Dataset
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5])])
train_dataset = datasets.MNIST(root='./mnist_data/', train=True, transform=transform, download=True)
# Data Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
examples = enumerate(train_loader)
batch_idx, (example_data, example_targets) = next(examples)
## Print example
import matplotlib.pyplot as plt
fig = plt.figure()
for i in range(6):
plt.subplot(2,3,i+1)
plt.tight_layout()
plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
plt.title("Ground Truth: {}".format(example_targets[i]))
plt.xticks([])
plt.yticks([])
fig
```
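`transforms.Normalize([0.5], [0.5])` rescales each pixel x to (x − 0.5) / 0.5, mapping the [0, 1] output of `ToTensor()` onto [−1, 1]; this matches the `Tanh` output range of the generator defined below. A quick NumPy check of the mapping:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)       # pixel values after ToTensor()
x_norm = (x - 0.5) / 0.5           # what Normalize([0.5], [0.5]) computes per pixel
print(x_norm.min(), x_norm.max())  # -1.0 1.0
```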
## Build Network

Resource: https://medium.com/@jonathan_hui/gan-whats-generative-adversarial-networks-and-its-application-f39ed278ef09
```
class Discriminator(nn.Module):
def __init__(self, d_input_dim):
super(Discriminator, self).__init__()
self.model = nn.Sequential(
nn.Linear(d_input_dim, 512),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(512, 256),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(256, 1),
nn.Sigmoid(),
)
def forward(self, image):
img_flat = image.view(image.size(0), -1)
validity = self.model(img_flat)
return validity
class Generator(nn.Module):
def __init__(self, g_input_dim, g_output_dim):
super(Generator, self).__init__()
def block(in_feat, out_feat, normalize=True):
layers = [nn.Linear(in_feat, out_feat)]
if normalize:
layers.append(nn.BatchNorm1d(out_feat, 0.8))
layers.append(nn.LeakyReLU(0.2, inplace=True))
return layers
self.model = nn.Sequential(
*block(g_input_dim, 128, normalize=False),
*block(128, 256),
*block(256, 512),
*block(512, 1024),
nn.Linear(1024, g_output_dim),
nn.Tanh())
def forward(self, z):
image = self.model(z)
image = image.view(image.size(0), -1)
return image
# build network
z_dim = 100
mnist_dim = train_dataset.data.size(1) * train_dataset.data.size(2)
G = Generator(g_input_dim = z_dim, g_output_dim = mnist_dim).to(device)
D = Discriminator(mnist_dim).to(device)
print(G, D)
```
# Train Process
```
# loss
criterion = nn.BCELoss()
# optimizer
lr = 0.0002
b1 = 0.5
b2 = 0.999
G_optimizer = torch.optim.Adam(G.parameters(), lr=lr, betas=(b1, b2))
D_optimizer = torch.optim.Adam(D.parameters(), lr=lr, betas=(b1, b2))
```
### Discriminator Update

### Generator Update
### Before : <br>
 <br>
### Because the generator's gradient diminishes: <br>
In practice, equation 1 may not provide sufficient gradient for G to learn well. Early in learning, when G is poor, D can reject samples with high confidence because they are clearly different from the training data. In this case, log(1−D(G(z))) saturates. Rather than training G to minimize log(1−D(G(z))), we can train G to maximize log D(G(z)). This objective function results in the same fixed point of the dynamics of G and D but provides much stronger gradients early in learning. (GAN paper)<br>
 <br>

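This gradient argument is easy to check numerically. With respect to D(G(z)) = d, the saturating objective log(1 − d) has derivative −1/(1 − d), magnitude roughly 1 when d is small, while the non-saturating objective log d has derivative 1/d, which is enormous when the discriminator confidently rejects fakes (d near 0). A plain-Python sketch, independent of the training code below:

```python
d = 1e-3  # discriminator output on a fake sample early in training (near 0)

# |d/dd log(1 - d)|: gradient magnitude of the saturating generator objective
grad_saturating = abs(-1.0 / (1.0 - d))

# |d/dd log(d)|: gradient magnitude of the non-saturating objective
grad_non_saturating = abs(1.0 / d)

print(grad_saturating, grad_non_saturating)  # ~1 vs ~1000
```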
```
Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
epochs = 200
for epoch in range(epochs):
for i, (imgs, _) in enumerate(train_loader):
# Adversarial ground truths
valid = Variable(Tensor(imgs.size(0), 1).fill_(1.0), requires_grad=False)
fake = Variable(Tensor(imgs.size(0), 1).fill_(0.0), requires_grad=False)
# Configure input
real_imgs = Variable(imgs.type(Tensor))
# -----------------
# Train Generator
# -----------------
G_optimizer.zero_grad()
# Sample noise as generator input
z = Variable(Tensor(np.random.normal(0, 1, (imgs.shape[0], z_dim))))
# Generate a batch of images
gen_imgs = G(z)
# Loss measures generator's ability to fool the discriminator
# g_loss = criterion(D(gen_imgs), fake) # Normal MinMax
g_loss = criterion(D(gen_imgs), valid) # Non Saturated
g_loss.backward()
G_optimizer.step()
# ---------------------
# Train Discriminator
# ---------------------
D_optimizer.zero_grad()
# Measure discriminator's ability to classify real from generated samples
real_loss = criterion(D(real_imgs), valid)
fake_loss = criterion(D(gen_imgs.detach()), fake)
d_loss = (real_loss + fake_loss) / 2
d_loss.backward()
D_optimizer.step()
if i % 300 == 0:
print(
"[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]"
% (epoch, epochs, i, len(train_loader), d_loss.item(), g_loss.item()))
if epoch % 5 == 0:
save_image(gen_imgs.view(gen_imgs.size(0), 1, 28, 28), ROOT + "sample/%d.png" % epoch, nrow=5, normalize=True)
torch.save(G, ROOT + 'G.pt')
torch.save(D, ROOT + 'D.pt')
```
```
# Imports
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.rcParams['figure.figsize'] = (15.0, 8.0) # set default size of plots
plt.rcParams['figure.facecolor'] = 'white'
pd.set_option('display.max_rows', None)
matplotlib.rcParams.update({'font.size': 15})
test_column_names = ["speedtest100", "speedtest110", "speedtest120", "speedtest130", "speedtest140",
"speedtest142", "speedtest145", "speedtest160", "speedtest161", "speedtest170", "speedtest180",
"speedtest190", "speedtest210", "speedtest230", "speedtest240", "speedtest250", "speedtest260",
"speedtest270", "speedtest280", "speedtest290", "speedtest300", "speedtest320", "speedtest400",
"speedtest410", "speedtest500", "speedtest510", "speedtest520", "speedtest980", "speedtest990"]
column_names = ["database_type"] + test_column_names
def read_benchmark_data(filename):
return pd.read_csv(filename, names = column_names)
def filter_by_database_type(data, database_type):
filtered_by_database_type = data.loc[(data["database_type"] == database_type)]
#grouped_by_database_type = filtered_by_database_type.groupby('database_type', as_index=False).median()
return filtered_by_database_type.drop('database_type', axis=1).reset_index().drop('index', axis=1)
def columns_without_database_type(data):
return data.values.tolist()[0]
native_raw_data = read_benchmark_data('benchmark-native.csv')
native_memory = filter_by_database_type(native_raw_data, 0)
native_file = filter_by_database_type(native_raw_data, 1)
sgx_raw_data = read_benchmark_data('benchmark-sgx.csv')
sgx_memory = filter_by_database_type(sgx_raw_data, 0)
sgx_file = filter_by_database_type(sgx_raw_data, 1)
wasm_raw_data = read_benchmark_data('benchmark-wasm.csv')
wasm_memory = filter_by_database_type(wasm_raw_data, 0)
wasm_file = filter_by_database_type(wasm_raw_data, 1)
wasm_sgx_raw_data = read_benchmark_data('benchmark-wasm-sgx.csv')
wasm_sgx_memory = filter_by_database_type(wasm_sgx_raw_data, 0)
wasm_sgx_file = filter_by_database_type(wasm_sgx_raw_data, 1)
colors = ["#cccccc", "#FB9A99", "#B0DC89", "#33A02C", "#A6CEE3", "#1F78B4", "#FDBF6F", "#FF7F00"]
# Normalize the results based on native memory implementation
for col in native_memory:
native_file[col] = native_file[col] / native_memory[col]
sgx_memory[col] = sgx_memory[col] / native_memory[col]
sgx_file[col] = sgx_file[col] / native_memory[col]
wasm_memory[col] = wasm_memory[col] / native_memory[col]
wasm_file[col] = wasm_file[col] / native_memory[col]
wasm_sgx_memory[col] = wasm_sgx_memory[col] / native_memory[col]
wasm_sgx_file[col] = wasm_sgx_file[col] / native_memory[col]
native_memory[col] = 1
labels = ['100', '110', '120', '130', '140', '142', '145', '160', '161', '170', '180', '190', '210', '230',
'240', '250', '260', '270', '280', '290', '300', '320', '400', '410', '500', '510', '520', '980', '990']
x = np.arange(len(labels)) # the label locations
width = 0.7/8.0 # the width of the bars
fig, ax = plt.subplots()
native_memory_bar = ax.bar(x - 7*width/2, native_memory.median().values, width, label='Native (in-memory)')
native_file_bar = ax.bar(x - 5*width/2, native_file.median().values, width, label='Native (in-file)')
sgx_memory_bar = ax.bar(x - 3*width/2, sgx_memory.median().values, width, label='SGX (in-memory)')
sgx_file_bar = ax.bar(x - 1*width/2, sgx_file.median().values, width, label='SGX (in-file)')
wasm_memory_bar = ax.bar(x + 1*width/2, wasm_memory.median().values, width, label='WebAssembly AoT (in-memory)')
wasm_file_bar = ax.bar(x + 3*width/2, wasm_file.median().values, width, label='WebAssembly AoT (in-file)')
wasm_sgx_memory_bar = ax.bar(x + 5*width/2, wasm_sgx_memory.median().values, width, label='WebAssembly AoT in SGX (in-memory)')
wasm_sgx_file_bar = ax.bar(x + 7*width/2, wasm_sgx_file.median().values, width, label='WebAssembly AoT in SGX (in-file)')
# one colour per bar group (enumerate advances the colour per group, not per bar)
for i, bars in enumerate([native_memory_bar, native_file_bar, wasm_memory_bar, wasm_file_bar, wasm_sgx_memory_bar, wasm_sgx_file_bar, sgx_memory_bar, sgx_file_bar]):
for subbar in bars:
subbar.set_color(colors[i % len(colors)])
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_xlabel('Name of experiment')
ax.set_ylabel('Normalised runtime')
ax.set_ylim([0, 15])
ax.set_title('Speedtest1 benchmark')
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()
handles, labels = ax.get_legend_handles_labels()
ax.legend(reversed(handles), reversed(labels), loc='upper left') # reverse to keep order consistent
fig.tight_layout()
plt.show()
# Export script
#
# Files generated:
# - speedtest1_native_file_formatted.csv
# - speedtest1_wasm_memory_formatted.csv
# - speedtest1_wasm_file_formatted.csv
# - speedtest1_wasm_sgx_memory_formatted.csv
# - speedtest1_wasm_sgx_file_formatted.csv
#
# File format: experiment_name, mean, stddev
def column_name_to_label(column_name):
return column_name[-3:]
def export_to_file(dataset, filename):
file = pd.DataFrame(columns = ["experiment_name", "mean", "stddev"])
#dataset = dataset.loc[(dataset["database_type"] == database_type)]
i = 0
for test_column_name in test_column_names:
file.loc[i] = [column_name_to_label(test_column_name),
dataset[test_column_name].median(),
dataset[test_column_name].std()]
i += 1
display(file)
file.to_csv(filename, index=False)
export_to_file(native_file, 'speedtest1_native_file_formatted.csv')
export_to_file(sgx_memory, 'speedtest1_sgx_memory_formatted.csv')
export_to_file(sgx_file, 'speedtest1_sgx_file_formatted.csv')
export_to_file(wasm_memory, 'speedtest1_wasm_memory_formatted.csv')
export_to_file(wasm_file, 'speedtest1_wasm_file_formatted.csv')
export_to_file(wasm_sgx_memory, 'speedtest1_wasm_sgx_memory_formatted.csv')
export_to_file(wasm_sgx_file, 'speedtest1_wasm_sgx_file_formatted.csv')
wasm_memory
##
# Stats for the paper.
##
def r_ratio(value):
return f"{round(value, 1)}"
native_mem_compared_to_wasm_mem = r_ratio((wasm_memory / native_memory).median().median())
print("The slowdown of WAMR memory relative to native is " + native_mem_compared_to_wasm_mem + "x on average")
native_mem_compared_to_wasm_file = r_ratio((wasm_file / native_file).median().median())
print("The slowdown of WAMR file relative to native is " + native_mem_compared_to_wasm_file + "x on average")
print()
wasm_mem_compared_to_sgx_wasm_mem = r_ratio((wasm_sgx_memory / wasm_memory).median().median())
print("The slowdown of Twine memory relative to WAMR is " + wasm_mem_compared_to_sgx_wasm_mem + "x on average")
wasm_mem_compared_to_sgx_wasm_file = r_ratio((wasm_sgx_file / wasm_file).median().median())
print("The slowdown of Twine file relative to WAMR is " + wasm_mem_compared_to_sgx_wasm_file + "x on average")
print()
twine_mem_vs_file_ratio = r_ratio((wasm_sgx_file["speedtest410"] / wasm_sgx_memory["speedtest410"]).median())
print("The slowdown of exp. 410 with Twine file vs memory: " + twine_mem_vs_file_ratio + "x")
sgx_mem_vs_file_ratio = r_ratio((sgx_file["speedtest410"] / sgx_memory["speedtest410"]).median())
print("The slowdown of exp. 410 with SGX-LKL file vs memory: " + sgx_mem_vs_file_ratio + "x")
#
## Export to LaTeX
#
f = open("speedtest1-export.tex", "w")
f.write(f"\\def\\speedtestWamrMemToNativeRatio{{{native_mem_compared_to_wasm_mem}}}\n")
f.write(f"\\def\\speedtestWamrFileToNativeRatio{{{native_mem_compared_to_wasm_file}}}\n")
f.write(f"\\def\\speedtestTwineMemToWamrRatio{{{wasm_mem_compared_to_sgx_wasm_mem}}}\n")
f.write(f"\\def\\speedtestTwineFileToWamrRatio{{{wasm_mem_compared_to_sgx_wasm_file}}}\n")
f.write(f"\\def\\speedtestExpFourOneZeroTwineMemVsFileRatio{{{twine_mem_vs_file_ratio}}}\n")
f.write(f"\\def\\speedtestExpFourOneZeroSgxLklMemVsFileRatio{{{sgx_mem_vs_file_ratio}}}\n")
f.close()
```
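The key transformation above is the baseline normalisation: every implementation's timings are divided element-wise by the native in-memory timings, so the baseline collapses to 1 and every other bar reads directly as a slowdown factor. A toy illustration of the same idea:

```python
import numpy as np

baseline = np.array([2.0, 4.0, 8.0])   # native in-memory runtimes, one per experiment
variant = np.array([3.0, 6.0, 20.0])   # runtimes of some other implementation

slowdown = variant / baseline          # normalised runtime (1.0 == baseline speed)
print(slowdown)  # [1.5 1.5 2.5]
```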
# Using Web Processing Service (WPS) with the Defra Earth Observation Data Service API
This notebook introduces the concept of the Web Processing Service (WPS) which enables users to submit instructions to the EO Data Service to return outputs. The instructions are contained in the XML files provided. Please ensure these are saved in the XML folder within the Scripts folder.
## Instructions:
1. Select one of the eight WPS processes to run by typing its name in the relevant cell.
2. Select 'Restart and Clear Output' from the Kernel menu at the top of this page if necessary.
3. Select 'Run All' from the Cell menu at the top of this page.
4. Look at the outputs beneath the final cell. You should see the following, preceded by a date-time stamp:
   - WPS request submitted
   - WPS request response code (200) and the full WPS response, including the executionId number
   - executionId extracted from the WPS request response
   - The status check URL including the executionId. This will run automatically to check the status of your request.
   - Status check results every 15 seconds: either the process is still running or the result is ready for download.
   - Download URL
5. Click on the download URL or copy and paste it into your browser.
6. The downloaded file should be saved to your default download location.
7. Check the downloaded file, e.g. open spatial datasets in a GIS application.
8. Now choose one of the other WPS processes (step 1) and repeat the above steps until you have tested them all.
```
import requests
import os
from IPython.display import Image
import time
from datetime import datetime
import importlib
import urllib3
import config
importlib.reload(config)
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
output_fmt='%Y%m%dT%H%M%S'
pretty_fmt='%Y-%m-%d %H:%M:%S'
wps_server = config.URL + config.WPS_SUF + '?access_token=' + config.ACCESS_TOKEN
headers = {'Content-type': 'application/xml','User-Agent': 'curl/7.65.1'}
wps_test_config = {
'bandselect':['bandselect_EODS-sp14.xml','image/tiff','result.tiff'],
'rasterclip':['rasterclip_EODS-sp14.xml','image/tiff','result.tiff'],
'zonalstats':['zonalstats_EODS.xml','text/csv','statistics.csv'],
'rastertopoints':['raster-to-points_EODS-sp14.xml','application/zip','result.zip'],
'coverageclassstats':['coverageclassstats_EODS-sp14.xml','text/xml','result.xml'],
'reproject':['reproject_EODS-sp14.xml','application/zip','result.zip'],
'gsdownload-small':['gsdownload-EODS-small-s2.xml','application/zip','result.zip'],
'gsdownload-large':['gsdownload-EODS-large-s1.xml','application/zip','result.zip'],
}
```
# Select the WPS "Tool" to Run
### The options are:
* 'bandselect' (WPS tool name: ras:BandSelect) - selects a single band from a raster
* 'rasterclip' (WPS tool name: ras:CropCoverage) - clips an area of a raster defined by geometry provided in well-known text (WKT) format.
* 'zonalstats' (WPS tool name: ras:RasterZonalStatistics) - generates zonal statistics from a raster using geometry supplied as a shapefile.
* 'rastertopoints' (WPS tool name: gs:RasterAsPointCollection) - generates a point for each valid pixel in a raster dataset. Band values are stored as attributes of the points.
* 'coverageclassstats' (WPS tool name: ras:CoverageClassStats) - NOT CURRENTLY WORKING but should calculate statistics from raster values classified into bins/classes (i.e. a histogram).
* 'reproject' (WPS tool name: gs:reproject) - reprojects a vector dataset into a supplied coordinate reference system.
* 'gsdownload-small' (WPS tool name: gs:download) - downloads a single layer, in this case a small S2 granule.
* 'gsdownload-large' (WPS tool name: gs:download) - downloads a single layer, in this case a large S1 scene.
```
wps_tool = 'bandselect'
# STEP 1 of 3. Submit WPS "Process submission" request
# get the configuration for the wps tool to run based in user input
xml_payload_file = open(os.path.join(os.path.join(os.getcwd(),'xml'),wps_test_config.get(wps_tool)[0]),'r')
mime_type = wps_test_config.get(wps_tool)[1]
output_id = wps_test_config.get(wps_tool)[2]
# set file extension based on mime type
print('\n' + datetime.utcnow().strftime(pretty_fmt) + ' :: the wps request was submitted to the address :: ' + wps_server)
wps_submit_response = requests.post(wps_server, data=xml_payload_file.read(), headers=headers, verify=True)
status_code = wps_submit_response.status_code
print('\n' + datetime.utcnow().strftime(pretty_fmt)
+ ' :: the wps request response code is: "' + str(status_code) + '" and the wps request response is \n'
+ wps_submit_response.text)
# if connection to the WPS server was successfully, check progress and download result
if status_code == 200 and not wps_submit_response.text.find('ExceptionReport') > 0:
execution_id = wps_submit_response.text.split('executionId=')[1].split('&')[0]
print('\n' + datetime.utcnow().strftime(pretty_fmt) + ' :: the WPS execution_id is ' + str(execution_id))
for ii in range(99):
# STEP 2 of 3. Submit WPS "STATUS CHECK" request, depending on the size of the wps job, this could take time
status_check_url = wps_server + '&request=GetExecutionStatus&executionid=' + execution_id
if ii == 0:
print('\n' + datetime.utcnow().strftime(pretty_fmt) + ' :: status check url is ' + status_check_url)
status_response = requests.get(status_check_url,headers=headers)
print('\n' + datetime.utcnow().strftime(pretty_fmt)
+ ' :: Status Check/Download Attempt No: ' + str(ii)
+ ' :: process currently running on wps server, please wait')
if status_response.text.find('wps:ProcessFailed') != -1:
print('\n' + datetime.utcnow().strftime(pretty_fmt)
+ ' :: WPS Processed Failed, check the status URL for the specific error = ' + status_check_url)
break
# STEP 3 of 3. if the download is ready with 'Process succeeded' message, then print the download URL to the output
if status_response.text.find('wps:ProcessSucceeded') != -1:
result_url = wps_server + '&request=GetExecutionResult&executionid=' + execution_id + '&outputId=' + output_id +'&mimetype=' + mime_type
print('\n' + datetime.utcnow().strftime(pretty_fmt)
+ ' :: Result is ready for download on URL = ' + result_url)
break
# wait 15 seconds between checks
time.sleep(15)
else:
# if WPS request was not successful, quit workflow
print('\n' + datetime.utcnow().strftime(pretty_fmt) + ' :: the WPS request submission was not successful, quitting workflow')
```
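The submit-then-poll pattern in the cell above (check GetExecutionStatus every 15 seconds until the process succeeds or fails) can be factored into a reusable helper. A minimal sketch with an injectable status callable standing in for the live WPS server (the helper name and status strings are assumptions for illustration, not part of the EO Data Service API):

```python
import time

def poll_until_done(check_status, interval=0.0, max_attempts=99):
    """Call check_status() until it reports success or failure, or give up.

    check_status is any callable returning 'running', 'succeeded' or 'failed';
    in the notebook above its role is played by parsing the GetExecutionStatus
    response text for the wps:ProcessSucceeded / wps:ProcessFailed markers.
    """
    for attempt in range(max_attempts):
        status = check_status()
        if status in ('succeeded', 'failed'):
            return status, attempt
        time.sleep(interval)
    return 'timed out', max_attempts

# Fake server: still running for two checks, then done
responses = iter(['running', 'running', 'succeeded'])
print(poll_until_done(lambda: next(responses)))  # ('succeeded', 2)
```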
## Thank you for your help with testing the Earth Observation Data Service API and WPS.
```
import warnings
warnings.filterwarnings('ignore')
import re
import time
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import pandas as pd
pd.options.display.max_columns = None
plt.style.use('default')  # pd.options.display.mpl_style was removed in newer pandas versions
from nltk.tokenize import word_tokenize
from util3 import *
```
## Load Files
```
df_train = pd.read_csv('./data/train.csv', encoding='ISO-8859-1')
df_test = pd.read_csv('./data/test.csv', encoding='ISO-8859-1')
df_desp = pd.read_csv('./data/product_descriptions.csv', encoding='ISO-8859-1')
df_attr = pd.read_csv('./data/attributes.csv', encoding='ISO-8859-1')
num_train = df_train.shape[0]
```
## Attributes
```
df_attr['name'].value_counts()[:30]
df_attr.dropna(inplace=True)
```
Among the top 30 attributes, some do not seem very useful. They are:
- Certifications and Listings
- Package Quantity
- Hardware Included
```
def filter_str(df, s, col='search_term'):
return df[df[col].str.lower().str.contains(s)]
filter_str(df_train, 'hardware')  # note: df is only built later, in the Join section
filter_str(df_attr, 'energy star certified', 'name')['value'].value_counts()
```
### Brands
```
df_brand = df_attr[df_attr.name == 'MFG Brand Name'][['product_uid', 'value']].rename(columns={'value': 'brand'})
```
### Bullets
```
bullet = dict()
bullet_count = dict()
df_attr['about_bullet'] = df_attr['name'].str.lower().str.contains('bullet')
for idx, row in df_attr[df_attr['about_bullet']].iterrows():
pid = row['product_uid']
value = row['value']
bullet.setdefault(pid, '')
bullet_count.setdefault(pid, 0)
bullet[pid] = bullet[pid] + ' ' + str(value)
bullet_count[pid] = bullet_count[pid] + 1
df_bullet = pd.DataFrame.from_dict(bullet, orient='index').reset_index()
df_bullet_count = pd.DataFrame.from_dict(bullet_count, orient='index').reset_index().astype(float)
df_bullet.columns = ['product_uid', 'bullet']
df_bullet_count.columns = ['product_uid', 'bullet_count']
```
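The `setdefault` loops used in this section (and again for colour and material below) all accumulate attribute strings per `product_uid`; the same pattern reads a little more idiomatically with `collections.defaultdict`. A sketch on toy rows:

```python
from collections import defaultdict

rows = [(100001, 'easy to install'), (100001, 'rust resistant'), (100002, 'indoor use')]

bullet = defaultdict(str)
bullet_count = defaultdict(int)
for pid, value in rows:
    bullet[pid] += ' ' + str(value)  # concatenate every bullet text for the product
    bullet_count[pid] += 1

print(dict(bullet_count))  # {100001: 2, 100002: 1}
```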
### Color
```
color = dict()
df_attr['about_color'] = df_attr['name'].str.lower().str.contains('color')
for idx, row in df_attr[df_attr['about_color']].iterrows():
pid = row['product_uid']
value = row['value']
color.setdefault(pid, '')
color[pid] = color[pid] + ' ' + str(value)
df_color = pd.DataFrame.from_dict(color, orient='index').reset_index()
df_color.columns = ['product_uid', 'color']
```
### Material
```
material = dict()
df_attr['about_material'] = df_attr['name'].str.lower().str.contains('material')
for idx, row in df_attr[df_attr['about_material']].iterrows():
pid = row['product_uid']
value = row['value']
material.setdefault(pid, '')
material[pid] = material[pid] + ' ' + str(value)
df_material = pd.DataFrame.from_dict(material, orient='index').reset_index()
df_material.columns = ['product_uid', 'material']
```
### Commercial / Residential Flag
```
comres_index = df_attr['name'].str.lower().str.contains('commercial / residential')
df_attr[comres_index]['value'].value_counts()
flag_comres = dict()
df_attr['about_comres'] = df_attr['name'].str.lower().str.contains('commercial / residential')
for idx, row in df_attr[df_attr['about_comres']].iterrows():
pid = row['product_uid']
value = row['value']
flag_comres.setdefault(pid, [0, 0])
if 'Commercial' in str(value):
flag_comres[pid][0] = 1
if 'Residential' in str(value):
flag_comres[pid][1] = 1
df_comres = pd.DataFrame.from_dict(flag_comres, orient='index').reset_index().astype(float)
df_comres.columns = ['product_uid', 'flag_commercial', 'flag_residential']
```
### Indoor/Outdoor Flag
```
filter_str(df_attr, 'indoor/outdoor', 'name')['value'].value_counts()
flag_inoutdoor = dict()
df_attr['about_intoutdoor'] = df_attr['name'].str.lower().str.contains('indoor/outdoor')
for idx, row in df_attr[df_attr['about_intoutdoor']].iterrows():
pid = row['product_uid']
value = row['value']
flag_inoutdoor.setdefault(pid, [0, 0])
if 'Indoor' in str(value):
flag_inoutdoor[pid][0] = 1
if 'Outdoor' in str(value):
flag_inoutdoor[pid][1] = 1
df_inoutdoor = pd.DataFrame.from_dict(flag_inoutdoor, orient='index').reset_index().astype(float)
df_inoutdoor.columns = ['product_uid', 'flag_indoor', 'flag_outdoor']
df_inoutdoor['flag_indoor'].value_counts()
```
### Energy Star
```
filter_str(df_attr, 'energy star certified', 'name')['value'].value_counts()
flag_estar = dict()
df_attr['about_estar'] = df_attr['name'].str.lower().str.contains('energy star certified')
for idx, row in df_attr[df_attr['about_estar']].iterrows():
pid = row['product_uid']
value = row['value']
flag_estar.setdefault(pid, 0)
if 'Yes' in str(value):
flag_estar[pid] = 1
df_estar = pd.DataFrame.from_dict(flag_estar, orient='index').reset_index().astype(float)
df_estar.columns = ['product_uid', 'flag_estar']
df_estar['flag_estar'].value_counts()
```
## Join (this rebuilds df, be sure to rerun subsequent operations.)
```
df = pd.concat((df_train, df_test), axis=0, ignore_index=True)
df = pd.merge(df, df_desp, how='left', on='product_uid')
df = pd.merge(df, df_brand, how='left', on='product_uid')
df = pd.merge(df, df_bullet, how='left', on='product_uid')
df = pd.merge(df, df_bullet_count, how='left', on='product_uid')
df = pd.merge(df, df_color, how='left', on='product_uid')
df = pd.merge(df, df_material, how='left', on='product_uid')
df = pd.merge(df, df_comres, how='left', on='product_uid')
df = pd.merge(df, df_inoutdoor, how='left', on='product_uid')
df = pd.merge(df, df_estar, how='left', on='product_uid')
```
### Fill NAs
```
df['brand'].fillna('nobrand', inplace=True)
df['bullet'].fillna('', inplace=True)
df['bullet_count'].fillna(0, inplace=True)
df['color'].fillna('', inplace=True)
df['material'].fillna('', inplace=True)
df['flag_commercial'].fillna(-1, inplace=True)
df['flag_residential'].fillna(-1, inplace=True)
df['flag_indoor'].fillna(-1, inplace=True)
df['flag_outdoor'].fillna(-1, inplace=True)
df['flag_estar'].fillna(-1, inplace=True)
```
### Relevance Distribution
```
sns.countplot(x='relevance', data=df)
df['majority_relevance'] = df['relevance'].map(lambda x: x in [1.0, 1.33, 1.67, 2.0, 2.33, 2.67, 3.0])
def majoritize(df):
return df[df['majority_relevance'] == 1]
```
## External Data Utilization
### Fix Typos
```
df['search_term'] = df['search_term'].map(correct_typo)
```
## Pre-Stemming Attributes Features
```
df['match_commercial'] = (df['search_term'].str.lower().str.contains('commercial') & df['flag_commercial']).astype(np.float)
sum(df['match_commercial'])
df['match_residential'] = (df['search_term'].str.lower().str.contains('residential') & df['flag_residential']).astype(np.float)
sum(df['match_residential'])
def filter_estar(df):
return df['search_term'].str.lower().str.contains('energy star') |\
df['search_term'].str.lower().str.contains('energy efficient')
df['match_estar'] = (filter_estar(df) & df['flag_estar']).astype(np.float)
sum(df['match_estar'])
df['match_indoor'] = (df['search_term'].str.lower().str.contains('indoor') & df['flag_indoor']).astype(np.float)
sum(df['match_indoor'])
df['match_outdoor'] = (df['search_term'].str.lower().str.contains('outdoor') & df['flag_outdoor']).astype(np.float)
sum(df['match_outdoor'])
df['match_outdoor'].describe()
```
## Stemming & Tokenizing
```
df['search_term'] = df['search_term'].map(lambda x: str_stem(x))
df['product_title'] = df['product_title'].map(lambda x: str_stem(x))
df['product_description'] = df['product_description'].map(lambda x: str_stem(x))
df['brand'] = df['brand'].map(lambda x: str_stem(x))
df['bullet'] = df['bullet'].map(lambda x: str_stem(x))
df['color'] = df['color'].map(lambda x: str_stem(x))
df['material'] = df['material'].map(lambda x: str_stem(x))
df['tokens_search_term'] = df['search_term'].map(lambda x: x.split())
df['tokens_product_title'] = df['product_title'].map(lambda x: x.split())
df['tokens_product_description'] = df['product_description'].map(lambda x: x.split())
df['tokens_brand'] = df['brand'].map(lambda x: x.split())
df['tokens_bullet'] = df['bullet'].map(lambda x: x.split())
# slow, no obvious improvement
# df['tokens_search_term'] = df['search_term'].map(lambda x: word_tokenize(x))
# df['tokens_product_title'] = df['product_title'].map(lambda x: word_tokenize(x))
# df['tokens_product_description'] = df['product_description'].map(lambda x: word_tokenize(x))
# df['tokens_brand'] = df['brand'].map(lambda x: word_tokenize(x))
```
## Meta-Features
### Length
```
df['len_search_term'] = df['tokens_search_term'].map(lambda x: len(x))
df['len_product_title'] = df['tokens_product_title'].map(lambda x: len(x))
df['len_product_description'] = df['tokens_product_description'].map(lambda x: len(x))
df['len_brand'] = df['tokens_brand'].map(lambda x: len(x))
df['len_bullet'] = df['tokens_bullet'].map(lambda x: len(x))
```
### Post-Stemming Attributes Features
```
def match_color(st, colors):
for w in st:
if w in colors:
return True
return False
df['match_color'] = df.apply(lambda x: match_color(x['tokens_search_term'], x['color']), axis=1).astype(np.float)
sum(df['match_color'])
def match_material(st, materials):
for w in st:
if w in materials:
return True
return False
df['match_material'] = df.apply(lambda x: match_material(x['tokens_search_term'], x['material']), axis=1).astype(np.float)
sum(df['match_material'])
```
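Note that `x['color']` is a plain string here, so `w in colors` is a substring test rather than token membership. A small illustration with made-up values:

```
def match_color(st, colors):
    # st: list of search-term tokens; colors: the product's color string.
    for w in st:
        if w in colors:   # substring test, since colors is a string
            return True
    return False

print(match_color(['red', 'chair'], 'dark red'))   # True
print(match_color(['blue'], 'dark red'))           # False
print(match_color(['red'], 'redwood'))             # True: substring hit
```

The substring behavior means near-matches like 'red' in 'redwood' also count, which may or may not be desirable.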
### Flag & Count & Ratio
```
df['flag_st_in_pt'] = df.apply(lambda x: int(x['search_term'] in x['product_title']), axis=1)
df['flag_st_in_pd'] = df.apply(lambda x: int(x['search_term'] in x['product_description']), axis=1)
df['flag_st_in_br'] = df.apply(lambda x: int(x['search_term'] in x['brand']), axis=1)
df['flag_st_in_bl'] = df.apply(lambda x: int(x['search_term'] in x['bullet']), axis=1)
df['num_st_in_pt'] = \
df.apply(lambda x: len(set(x['tokens_search_term']).intersection(set(x['tokens_product_title']))), axis=1)
df['num_st_in_pd'] = \
df.apply(lambda x: len(set(x['tokens_search_term']).intersection(set(x['tokens_product_description']))), axis=1)
df['num_st_in_br'] = \
df.apply(lambda x: len(set(x['tokens_search_term']).intersection(set(x['tokens_brand']))), axis=1)
df['num_st_in_bl'] = \
df.apply(lambda x: len(set(x['tokens_search_term']).intersection(set(x['tokens_bullet']))), axis=1)
df['ratio_st_in_pt'] = \
df.apply(lambda x: x['num_st_in_pt'] / float(x['len_search_term']), axis=1)
df['ratio_st_in_pd'] = \
df.apply(lambda x: x['num_st_in_pd'] / float(x['len_search_term']), axis=1)
df['ratio_st_in_br'] = \
df.apply(lambda x: x['num_st_in_br'] / float(x['len_search_term']), axis=1)
df['ratio_st_in_bl'] = \
df.apply(lambda x: x['num_st_in_bl'] / float(x['len_search_term']), axis=1)
sns.set_palette("husl")
sns.boxplot(x='relevance', y='ratio_st_in_pt', data=majoritize(df))
sns.boxplot(x='relevance', y='ratio_st_in_pd', data=majoritize(df))
# not very useful
sns.boxplot(x='relevance', y='ratio_st_in_br', data=majoritize(df))
# not very useful
sns.boxplot(x='relevance', y='ratio_st_in_bl', data=majoritize(df))
```
### Positioned Word Matching
```
df['len_search_term'].max()
def match_pos(row, col, pos):
if pos >= row['len_search_term'] or pos >= row['len_'+col]:
return 0
else:
return int(row['tokens_search_term'][pos] in row[col])
for i in range(10):
df[str(i)+'th_word_in_pt'] = df.apply(lambda x: match_pos(x, 'product_title', i), axis=1)
for i in range(10):
df[str(i)+'th_word_in_pd'] = df.apply(lambda x: match_pos(x, 'product_description', i), axis=1)
for i in range(10):
df[str(i)+'th_word_in_bl'] = df.apply(lambda x: match_pos(x, 'bullet', i), axis=1)
```
### Encode Brand Feature
```
brands = pd.unique(df.brand.ravel())
brand_encoder = {}
index = 1000
for brand in brands:
brand_encoder[brand] = index
index += 10
brand_encoder['nobrand'] = 500
df['brand_encoded'] = df['brand'].map(lambda x: brand_encoder.get(x, 500))
pid_with_attr_material = pd.unique(df_material.product_uid.ravel())
material_encoder = {}
for pid in pid_with_attr_material:
material_encoder[pid] = 1
df['flag_attr_has_material'] = df['product_uid'].map(lambda x: material_encoder.get(x, 0)).astype(np.float)
pid_with_attr_color = pd.unique(df_color.product_uid.ravel())
color_encoder = {}
for pid in pid_with_attr_color:
color_encoder[pid] = 1
df['flag_attr_has_color'] = df['product_uid'].map(lambda x: color_encoder.get(x, 0)).astype(np.float)
```
### Encode Attributes Feature
```
pids_with_attr = pd.unique(df_attr.product_uid.ravel())
attr_encoder = {}
for pid in pids_with_attr:
attr_encoder[pid] = 1
df['flag_has_attr'] = df['product_uid'].map(lambda x: attr_encoder.get(x, 0)).astype(np.float)
sns.boxplot(x='flag_has_attr', y='relevance', data=majoritize(df))
```
## Distance Metrics
### BOW
```
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(stop_words='english', max_features=1000)
cv.fit(df['search_term'] + ' ' + df['product_title'] + ' ' + df['product_description'] + ' ' + df['bullet'])
cv_of_st = cv.transform(df['search_term'])
cv_of_pt = cv.transform(df['product_title'])
cv_of_pd = cv.transform(df['product_description'])
cv_of_bl = cv.transform(df['bullet'])
```
### BOW based Cosine Similarity
```
from sklearn.metrics.pairwise import cosine_similarity
cv_cos_sim_st_pt = [cosine_similarity(cv_of_st[i], cv_of_pt[i])[0][0] for i in range(cv_of_st.shape[0])]
cv_cos_sim_st_pd = [cosine_similarity(cv_of_st[i], cv_of_pd[i])[0][0] for i in range(cv_of_st.shape[0])]
cv_cos_sim_st_bl = [cosine_similarity(cv_of_st[i], cv_of_bl[i])[0][0] for i in range(cv_of_st.shape[0])]
df['cv_cos_sim_st_pt'] = cv_cos_sim_st_pt
df['cv_cos_sim_st_pd'] = cv_cos_sim_st_pd
df['cv_cos_sim_st_bl'] = cv_cos_sim_st_bl
sns.boxplot(x='relevance', y='cv_cos_sim_st_pt', data=majoritize(df))
sns.boxplot(x='relevance', y='cv_cos_sim_st_pd', data=majoritize(df))
sns.boxplot(x='relevance', y='cv_cos_sim_st_bl', data=majoritize(df))
```
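The per-row `cosine_similarity` calls above are slow for large frames. Row-wise cosine can be computed in one vectorized pass; the sketch below uses dense NumPy arrays for clarity (the notebook's matrices are sparse, so you would convert with `.toarray()` or adapt to sparse operations):

```
import numpy as np

def rowwise_cosine(A, B):
    # Cosine similarity between corresponding rows of two dense matrices,
    # equivalent to calling cosine_similarity(A[i], B[i]) row by row.
    num = (A * B).sum(axis=1)
    denom = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    safe = np.where(denom == 0, 1.0, denom)   # avoid division by zero
    return np.where(denom == 0, 0.0, num / safe)

A = np.array([[1.0, 0.0], [1.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
print(rowwise_cosine(A, B))  # first row identical (1.0), second at 45 degrees (~0.707)
```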
### TF-IDF
```
from sklearn.feature_extraction.text import TfidfVectorizer
tiv = TfidfVectorizer(ngram_range=(1, 3), stop_words='english', max_features=1000)
tiv.fit(df['search_term'] + ' ' + df['product_title'] + ' ' + df['product_description'] + ' ' + df['bullet'])
tiv_of_st = tiv.transform(df['search_term'])
tiv_of_pt = tiv.transform(df['product_title'])
tiv_of_pd = tiv.transform(df['product_description'])
tiv_of_bl = tiv.transform(df['bullet'])
```
### TF-IDF based Cosine Similarity
```
tiv_cos_sim_st_pt = [cosine_similarity(tiv_of_st[i], tiv_of_pt[i])[0][0] for i in range(tiv_of_st.shape[0])]
tiv_cos_sim_st_pd = [cosine_similarity(tiv_of_st[i], tiv_of_pd[i])[0][0] for i in range(tiv_of_st.shape[0])]
tiv_cos_sim_st_bl = [cosine_similarity(tiv_of_st[i], tiv_of_bl[i])[0][0] for i in range(tiv_of_st.shape[0])]
df['tiv_cos_sim_st_pt'] = tiv_cos_sim_st_pt
df['tiv_cos_sim_st_pd'] = tiv_cos_sim_st_pd
df['tiv_cos_sim_st_bl'] = tiv_cos_sim_st_bl
sns.boxplot(x='relevance', y='tiv_cos_sim_st_pt', data=majoritize(df))
sns.boxplot(x='relevance', y='tiv_cos_sim_st_pd', data=majoritize(df))
sns.boxplot(x='relevance', y='tiv_cos_sim_st_bl', data=majoritize(df))
```
### Jaccard Similarity
```
def jaccard(A, B):
C = A.intersection(B)
return float(len(C)) / (len(A) + len(B) - len(C))
df['jaccard_st_pt'] = df.apply(lambda x: jaccard(set(x['tokens_search_term']), set(x['tokens_product_title'])), axis=1)
df['jaccard_st_pd'] = df.apply(lambda x: jaccard(set(x['tokens_search_term']), set(x['tokens_product_description'])), axis=1)
df['jaccard_st_br'] = df.apply(lambda x: jaccard(set(x['tokens_search_term']), set(x['tokens_brand'])), axis=1)
df['jaccard_st_bl'] = df.apply(lambda x: jaccard(set(x['tokens_search_term']), set(x['tokens_bullet'])), axis=1)
sns.boxplot(x='relevance', y='jaccard_st_pt', data=majoritize(df))
sns.boxplot(x='relevance', y='jaccard_st_pd', data=majoritize(df))
sns.boxplot(x='relevance', y='jaccard_st_br', data=majoritize(df))
sns.boxplot(x='relevance', y='jaccard_st_bl', data=majoritize(df))
```
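To make the metric concrete, here is the same `jaccard` function applied to a made-up query/title token pair:

```
def jaccard(A, B):
    # |A ∩ B| / |A ∪ B|, written via inclusion-exclusion as in the cell above.
    C = A.intersection(B)
    return float(len(C)) / (len(A) + len(B) - len(C))

st = set('angle bracket'.split())
pt = set('simpson strong tie angle'.split())
print(jaccard(st, pt))  # 1 shared token out of 5 distinct -> 0.2
```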
### Edit Distance
```
from nltk.metrics import edit_distance
def calc_edit_distance(row, col):
dists = [min([edit_distance(w, x) for x in row['tokens_'+col]]) for w in row['tokens_search_term']]
    return (min(dists), sum(dists)) if dists else (0, 0)
df['edit_dist_st_pt_raw'] = df.apply(lambda x: calc_edit_distance(x, 'product_title'), axis=1)
df['edit_dist_st_pt_min'] = df['edit_dist_st_pt_raw'].map(lambda x: x[0])
df['edit_dist_st_pt_avg'] = df['edit_dist_st_pt_raw'].map(lambda x: x[1]) / df['len_search_term']
sns.boxplot(x='relevance', y='edit_dist_st_pt_avg', data=majoritize(df))
df['edit_dist_st_pd_raw'] = df.apply(lambda x: calc_edit_distance(x, 'product_description'), axis=1)
df['edit_dist_st_pd_min'] = df['edit_dist_st_pd_raw'].map(lambda x: x[0])
df['edit_dist_st_pd_avg'] = df['edit_dist_st_pd_raw'].map(lambda x: x[1]) / df['len_search_term']
sns.boxplot(x='relevance', y='edit_dist_st_pd_avg', data=majoritize(df))
df.drop(['edit_dist_st_pt_raw', 'edit_dist_st_pd_raw'], axis=1, inplace=True)
# df['edit_dist_st_bl_raw'] = df.apply(lambda x: calc_edit_distance(x, 'bullet'), axis=1)
# df['edit_dist_st_br_raw'] = df.apply(lambda x: calc_edit_distance(x, 'brand'), axis=1)
# df['edit_dist_st_bl_min'] = df['edit_dist_st_bl_raw'].map(lambda x: x[0])
# df['edit_dist_st_bl_avg'] = df['edit_dist_st_bl_raw'].map(lambda x: x[1]) / df['len_search_term']
# df['edit_dist_st_br_min'] = df['edit_dist_st_br_raw'].map(lambda x: x[0])
# df['edit_dist_st_br_avg'] = df['edit_dist_st_br_raw'].map(lambda x: x[1]) / df['len_search_term']
```
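The helper above leans on nltk's `edit_distance`. A self-contained sketch using a plain dynamic-programming Levenshtein distance (a stand-in for nltk here, with an explicit fallback for empty token lists) behaves like this:

```
def levenshtein(a, b):
    # Standard DP edit distance; a stand-in for nltk's edit_distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def calc_edit_distance(tokens_st, tokens_col):
    # Closest-word distance for each query token, as in the cell above.
    if not tokens_st or not tokens_col:
        return (0, 0)
    dists = [min(levenshtein(w, x) for x in tokens_col) for w in tokens_st]
    return (min(dists), sum(dists))

print(calc_edit_distance(['anle', 'bracket'], ['angle', 'bracket']))  # (0, 1)
```

The typo 'anle' is distance 1 from 'angle', while 'bracket' matches exactly, so the minimum is 0 and the sum is 1.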
## Latent Semantic Space
By SVD-decomposing the BOW / TF-IDF matrices, we obtain features that capture different query/product groups.
```
from sklearn.decomposition import TruncatedSVD
tsvd = TruncatedSVD(n_components=10, random_state=2016)
```
### tSVD for BOW
```
st_bow_tsvd = tsvd.fit_transform(cv_of_st)
for i in range(st_bow_tsvd.shape[1]):
df['st_bow_tsvd'+str(i)] = st_bow_tsvd[:,i]
pt_bow_tsvd = tsvd.fit_transform(cv_of_pt)
for i in range(pt_bow_tsvd.shape[1]):
df['pt_bow_tsvd'+str(i)] = pt_bow_tsvd[:,i]
pd_bow_tsvd = tsvd.fit_transform(cv_of_pd)
for i in range(pd_bow_tsvd.shape[1]):
df['pd_bow_tsvd'+str(i)] = pd_bow_tsvd[:,i]
bl_bow_tsvd = tsvd.fit_transform(cv_of_bl)
for i in range(bl_bow_tsvd.shape[1]):
df['bl_bow_tsvd'+str(i)] = bl_bow_tsvd[:,i]
```
### tSVD for TF-IDF
```
st_tfidf_tsvd = tsvd.fit_transform(tiv_of_st)
for i in range(st_tfidf_tsvd.shape[1]):
df['st_tfidf_tsvd_'+str(i)] = st_tfidf_tsvd[:,i]
pt_tfidf_tsvd = tsvd.fit_transform(tiv_of_pt)
for i in range(pt_tfidf_tsvd.shape[1]):
df['pt_tfidf_tsvd_'+str(i)] = pt_tfidf_tsvd[:,i]
pd_tfidf_tsvd = tsvd.fit_transform(tiv_of_pd)
for i in range(pd_tfidf_tsvd.shape[1]):
df['pd_tfidf_tsvd_'+str(i)] = pd_tfidf_tsvd[:,i]
bl_tfidf_tsvd = tsvd.fit_transform(tiv_of_bl)
for i in range(bl_tfidf_tsvd.shape[1]):
df['bl_tfidf_tsvd_'+str(i)] = bl_tfidf_tsvd[:,i]
```
## Append
```
append = pd.read_csv('df_lev_dist_more_jaccard.csv', encoding='ISO-8859-1')
cols_to_append = [
'query_in_title',
'query_in_description',
'query_last_word_in_title',
'query_last_word_in_description',
'word_in_title',
'word_in_description',
'word_in_brand',
'ratio_title',
'ratio_description',
'ratio_brand',
'lev_dist_to_product_title_min',
'lev_dist_to_product_title_max',
'lev_dist_to_product_title_sum',
'lev_dist_to_product_description_min',
'lev_dist_to_product_description_max',
'lev_dist_to_product_description_sum'
]
for x in cols_to_append:
df['old_'+x] = append[x]
```
## Export
```
cols_to_drop = [
#'product_uid',
'search_term',
'product_title',
'product_description',
'brand',
'bullet',
'color',
'material',
'tokens_search_term',
'tokens_product_title',
'tokens_product_description',
'tokens_brand',
'tokens_bullet',
'majority_relevance'
]
export_df = df.drop(cols_to_drop, axis=1)
print('Number of Features: ', len(export_df.columns.tolist()) - 2)
export_df.head(3)
export_df.to_csv('./df_new_423.csv')
df.to_csv('./df_full_423.csv')
```
## Normal Distribution
The normal distribution describes how values in a population are spread around the mean. Together with the standard deviation, it lets us compute the Z-score of a particular value.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/49) video, 01:57*
<!-- TEASER_END -->
This plot shows the height distributions for both men and women, as reported in a U.S. survey and on OkCupid. The OkCupid distribution is shifted slightly to the right, suggesting that people tend to exaggerate their height, or round it up.
Much real data is not shaped exactly like a bell curve; it is natural for it to be nearly, but not exactly, normal. When a distribution looks roughly bell-shaped, we can treat it as a normal distribution: unimodal, with the data spread around its center, the mean.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/49) video, 05:29*
The fact that the question states the data are normally distributed is the critical piece of information. In a normal distribution, 99.7% of the data fall within 3 standard deviations of the mean. Given the three summary statistics, we can work out which standard deviation best matches the stated minimum and maximum; answer (b) fits the criteria.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/49) video, 06:29*
This is a classic example of standardization. Instead of comparing raw scores, which would make Pam the clear winner, we compute Z-scores, which express each score as a number of standard deviations away from its own distribution's mean.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/49) video, 07:20*
Plotting both values on the standardized distribution, we see clearly that Pam scores higher than Jim.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/49) video, 09:03*
The Z in Z-score comes from the letter z in 'standardizing' (s was already taken for 'standard deviation'). The Z-score of the mean is simply 0, because standardizing subtracts the mean from itself.
When a distribution is converted to standardized Z-scores, the mean becomes 0, and for a normal distribution the median is roughly 0 as well. Standardizing shifts and rescales the original distribution but does not change its shape, so skew carries over: a positively skewed distribution centered at 0 has a negative median, and a negatively skewed one a positive median.
Z-scores can also be used to calculate percentiles: the percentage of the population falling below a given value. Percentiles can be computed for other unimodal distributions too, but that generally requires extra calculus. Note that **regardless of the shape or skew of the distribution, Z can always be defined, provided we have the mean and standard deviation. But that does not mean we can compute percentiles: the percentile calculation requires normality**.
In R, we can compute this directly by specifying the value we want to observe, the mean, and the standard deviation:
```
%load_ext rmagic
%R pnorm(-1, mean=0, sd=1)
```
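The same quantity is available in plain Python without the R bridge, since the normal CDF can be written with the standard library's `erf` — a minimal sketch:

```
from math import erf, sqrt

def pnorm(q, mean=0.0, sd=1.0):
    # Lower-tail normal CDF, matching R's pnorm(q, mean, sd).
    return 0.5 * (1.0 + erf((q - mean) / (sd * sqrt(2.0))))

print(round(pnorm(-1), 4))  # 0.1587, same as pnorm(-1, mean=0, sd=1) in R
```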

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/49) video, 14:37*
This question can be solved in R or with the empirical-rule percentages. (We could also count percentiles manually from a normal table, but that is not recommended.) From the 68%-95%-99.7% rule, the area below one standard deviation above the mean is roughly 84%. Using R:
```
%R pnorm(1800, mean=1500, sd=300)
```
This tells us that Pam's score is better than 84% of her class. Percentiles are always calculated **below** the value; to find the fraction who scored better than her, we take the complement, which gives 0.1587.
```
%R pnorm(24,mean=21,sd=5)
0.9*2400
```

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/49) video, 17:17*
Here we go the other way: given the quantile/percentile cutoff, we find the X value that sits there. In R this uses a different function, qnorm. Note that because she is in the top 10%, we cut 10% from the top (i.e. use the 90th percentile).
```
%R qnorm(0.90,1500,300)
```
In summary:
* We can use the Z calculation to standardize and compare two or more normal distributions.
* The percentile below a cutoff value can be calculated with *pnorm*, and the value at a given percentile with *qnorm*; both can also be read from a normal table.
* Regardless of the shape of the distribution, an unusual observation is one more than 2 standard deviations from the mean.
* If left skewed, the mean is smaller than the median; so if the mean is centered at zero, the median is positive.
* If right skewed, the mean is larger than the median; so if the mean is centered at zero, the median is negative.
### Evaluating Normal Distribution

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/51) video, 00:15*
These are the usual histogram and probability plots used to check whether the data are indeed normally distributed.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/51) video, 01:23*

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/51) video, 02:44*
We can also check the spread of the distribution against the normal benchmark: in a normal distribution, about 68% of the data fall within 1 standard deviation of the mean, so a noticeably different proportion suggests the data are more (or less) variable than a normal distribution.
### Normal Distribution Example

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/53) video, 00:41*
```
%load_ext rpy2.ipython
%R pnorm(50,mean=45,sd=3.2,lower.tail=F)
```
5.9% of passengers are expected to exceed the baggage limit.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/53) video, 04:06*
```
%R qnorm(0.2, 77, 5)
```

*Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud827/l-1478678538/m-102088771) video, 00:36*
Recall that the total area under a normal distribution, being built from relative frequencies, adds up to 1. The curve is called the Probability Density Function (PDF).
Because all the relative frequencies add up to 1, we can treat area under the curve as probability.

*Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud827/l-1478678538/e-103339228/m-102088774) video, 00:12*
In a normal distribution, the two tails never touch the x-axis (the x-axis is a horizontal asymptote), because we can never be 100% sure. If the curve touched the axis, the cumulative probability would equal exactly 1; instead it only approaches 1, even 5 standard deviations out.
The green area is the probability of randomly selecting a value less than the threshold, which is also the proportion of the sample/population with a score below the threshold. So the two tails actually extend to infinity.
```
0.95-0.135
```

*Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud827/l-1478678538/e-102833897/m-102088794) video, 00:12*
```
%R pnorm(240,mean=190,sd=36)
```

*Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud827/l-1478678538/e-102833890/m-102088819) video, 02:09*
```
%R pnorm(16,mean=13,sd=4.8) - pnorm(10,mean=13,sd=4.8,lower.tail=T)
```

*Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud827/l-1478678538/m-102088823) video, 02:09*
```
%R qnorm(0.05,13,4.8,lower.tail=F)
```
## Sampling Distributions
We know how to compare two normal distributions. But what about samples? How can we compare one sample with other samples from the same population? We can:
* find the mean of the sample
* find other means of other samples
* And we can compare all the means from samples.
If we use a tetrahedral (4-sided) die and roll it 2 times, we are drawing a sample of size 2 from our population, giving 4^2 = 16 possible ordered samples. We can take the mean of every possible sample; if we then average all of those sample means, we get exactly the mean of the population.
Mean of each sample:
1, 1.5, 2, 2.5, 1.5, 2, 2.5, 3, 2, 2.5, 3, 3.5, 2.5, 3, 3.5, 4
**When we average all 16 sample means, we get 2.5, which equals the mean of the population.** So the expected value of the sample mean is the population mean. This is the advantage of sampling: the population is often too large to compute over directly.
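The enumeration above is short enough to verify directly in Python:

```
from itertools import product

# All 4^2 = 16 ordered samples of size 2 from a tetrahedral die.
faces = [1, 2, 3, 4]
sample_means = [(a + b) / 2.0 for a, b in product(faces, repeat=2)]

pop_mean = sum(faces) / 4.0
mean_of_means = sum(sample_means) / len(sample_means)
print(len(sample_means), pop_mean, mean_of_means)  # 16 2.5 2.5
```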
If we paste these values into http://www.wolframalpha.com/:
we get a histogram of the data. These sample means form a distribution of their own, often called the **sampling distribution**.

*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/61) 02:57*
Let's take another look. Earlier we enumerated every possible sample, but that is the extreme case. Each time we take a sample from the population, that sample has its own distribution, and from it we compute a sample statistic, here the mean. Plotting the means across samples gives the **sampling distribution**. So a sample's distribution and the sampling distribution are different things.

*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/61) 04:07*
But we are missing an important point: to compare the mean of a single sample against other samples' means, we need to know where it falls in the sampling distribution. For this, we need the **standard deviation**.
Note that even though we are using sample means as our data, we do not need Bessel's correction (n-1) here: we enumerated all possible outcomes, so we have the entire population of sample means. What we are computing is not an ordinary standard deviation, either; the standard deviation of all possible sample means is called the **standard error**.
In the case above we take a sample of 1,000 women from each state, to study the heights of all women in the US. Suppose we know the sample size, the mean, and the heights of the entire population; we then build the sampling distribution by collecting the mean of each state's sample. The mean of the sampling distribution should approximately equal the population mean, and its variability should be less than the population's, since each sample averages away the population's outliers.

*Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud827/l-1455178654/e-117708709/m-117640039) video, 00:23*
There is a relationship between the population standard deviation and the standard error,
$$ SE = \frac{\sigma}{\sqrt{n}} $$
* $\sigma$ = population standard deviation
* SE = standard error, the standard deviation of the distribution of sample means
* n = sample size
This solves the original problem of locating a sample on the distribution of sample means. The **Central Limit Theorem** says that if you keep drawing all possible samples of a fixed size n, the distribution of their means is approximately normal, with standard deviation equal to the **standard error**. The standard error is hard to obtain directly, since that would require the entire population's data; hence the formula above.

*Screenshot taken from [Udacity](https://www.udacity.com/course/viewer#!/c-ud827/l-1455178654/e-117708715/m-117640058) video, 00:23*
Note that as the sample size grows, the sampling distribution gets skinnier. The formula shows why: a larger sample size makes a larger denominator, hence a smaller standard error and a narrower sampling distribution.
Intuitively, with a bigger sample, the uncertainty about the population mean shrinks and the prediction improves. To halve the standard error of the sampling distribution, we must quadruple the sample size (4n), as substituting into the formula shows. To cut the error to 1/3 of its value we need:
$$ \frac{1}{3} . \frac{\sigma}{\sqrt{n}} = \frac{\sigma}{\sqrt{9n}}$$
The key takeaway: whatever the underlying distribution, if you sample from it and plot all the possible sample means, you get approximately a normal distribution, and as the sample size grows the error shrinks and the sample mean converges to the population mean. In the extreme case, when the sample size equals the population size, you get the population mean exactly. And keep in mind that averaging over all possible sample means, whatever the sample size, recovers approximately the mean of the population.
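The scaling argument is easy to check numerically, with arbitrary illustrative values for σ and n:

```
import math

def standard_error(sigma, n):
    # SE = sigma / sqrt(n)
    return sigma / math.sqrt(n)

sigma, n = 16.0, 35          # arbitrary illustrative values
se = standard_error(sigma, n)
print(round(se / standard_error(sigma, 4 * n), 1))  # quadrupling n halves SE -> 2.0
print(round(se / standard_error(sigma, 9 * n), 1))  # 9x n cuts SE to a third -> 3.0
```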

*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/65) 12:55*
This is the more formal definition of the CLT. Here the standard deviation is corrected: as noted earlier, the population standard deviation is rarely known, so we use the sample standard deviation instead, with Bessel's correction. The observations must be independent: a random sample if observational, or random assignment in an experiment. For better intuition about the CLT, there is an applet at http://bit.ly/clt_mean
The sample needs to be large enough, yet less than 10 percent of the population if sampling without replacement. We also often don't know the shape of the population distribution; if the sample turns out to look normal, it is more likely the population it was drawn from is normal too. To be safe we set a minimum sample size of 30, and a heavily skewed population requires an even larger sample for the sampling distribution to look normal and the CLT to work.
Why below 10% when sampling without replacement? Because too large a sample raises the probability of drawing observations that are dependent on one another. In the case above, taking 50% of the women in the population would likely include siblings or relatives, whose heights are correlated through genetics, breaking the independence our randomized sample needs. So: not too large (below 10% of the population), but not too small either (at least 30).

*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/65) 17:28*
There is a further reason the sample size must be large enough. Consider the right-skewed population distribution above: with small samples, the sampling distribution of the mean is also right-skewed like its parent, but with a large enough sample it approximates a normal distribution. Why the obsession with the normal shape? Because the Central Limit Theorem requires it, and the CLT underpins much of statistical inference: null hypotheses and hypothesis testing, which assume a normal probability distribution, as well as confidence intervals.
Using the Z-scores discussed earlier, we can calculate probabilities in the sampling distribution. A low probability means the observed sample mean is unlikely under pure sampling variation, and thus *not likely to have happened by chance.*
Let's take an example. There is an app, Bieber Twitter, that may affect its users' Klout Scores. We have the population mean and standard deviation of Klout Scores, the number of app users, and their average score. In CLT terms:
* sample: Bieber Twitter users
* population: Klout Score users
* population mean: 37.72
* population sd: 16.04
* average Klout Score of Bieber Twitter users: 40
Suppose there are 35 Bieber Twitter users. What is the probability that a random sample of 35 users would have an average score at least this high?
```
import numpy as np
mu = 37.72
sigma = 16.04
sample_size = 35
SE = sigma/np.sqrt(sample_size)
SE
```
This is our standard error. The average Klout score in the population is 37.72, while the average among Bieber Twitter users is 40. Given this, what is the probability of observing such a sample mean by chance?
```
%R mu = 37.72
%R sigma = 2.711
%R pnorm(40, mean=mu,sd=sigma,lower.tail=F)
```
The pnorm call tells us there is a 0.2 probability that a random sample of 35 users averages at least 40. Many explanations are possible, but the key point is that our sample is too small to conclude much.
Suppose we increase the sample size to 250; the standard error becomes
```
sample_size =250
SE = sigma/np.sqrt(sample_size)
SE
```
We now have a much smaller standard error than before. Would this change the probability? What is the Z-score of 40 (the sample mean, now with sample size 250)? Remember that the mean of the sampling distribution equals the mean of the entire population.
```
(40 - mu)/SE
```
That value is more than 2 standard errors away, which corresponds to a very small probability. What is the probability of **randomly selecting a sample** of size 250 with **a mean greater than 40**?
```
%R mu = 37.72
%R sigma = 1.01
%R pnorm(40, mean=mu,sd=sigma,lower.tail=F)
```
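Both upper-tail probabilities can be reproduced without R, using the standard library's `erf` for the normal CDF — a sketch with the numbers from this example:

```
from math import erf, sqrt

def upper_tail(x, mu, sigma):
    # P(X > x) for a normal distribution, via the standard normal CDF.
    return 1 - 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sd_pop = 37.72, 16.04
for n in (35, 250):
    se = sd_pop / sqrt(n)                    # standard error for sample size n
    # upper tail shrinks from ~0.2 (n=35) to ~0.01 (n=250)
    print(n, round(upper_tail(40, mu, se), 3))
```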
Now keep in mind that we are plotting the mean of this particular sample (of size 250) on the distribution of sample means. The probability of a random sample averaging at least 40 (at least, because lower values are more common) is very rare: about 0.01 in the sampling distribution. It is therefore unlikely to be **random**: a result can plausibly be due to chance when the probability is around 0.2, as before, but not at 0.01. This suggests a real association between the app and the Klout Score. What we have actually done, perhaps without noticing, is **use the CLT for hypothesis testing**. The null hypothesis is the skeptical one: the Bieber Twitter app does not contribute to the Klout Score. The alternative hypothesis is that it does. Having observed that the result would be rare by chance, we reject the null hypothesis in favor of the alternative.
So increasing the sample size makes the standard error smaller and the estimate less noisy. The mean taken from the larger sample has such a low probability under the null that it is judged not due to chance. The closer a sample's Z-score is to 0 in the sampling distribution, the more plausible it is that the sample's deviation is due to chance.
Sampling distributions are also the gateway to *statistical inference*.
In summary, the CLT tells us:
* the predicted shape of the distribution (a normal distribution)
* the expected value of the mean of the distribution (the mean of the population)
* the standard deviation/standard error (computed from the population standard deviation and the sample size)
* The bigger the sample size, the smaller the standard error, resulting in a bigger z-score for that sample mean.
* The bigger the z-score, the skinnier the distribution, and the smaller the proportion of sample means greater than that sample mean.
* To keep the error within a desired bound, we can calculate exactly how large a sample size we need.
* The more skewed the shape of the population, the larger the sample size we need.
* The standard deviation of the population is almost always unknown, so we use the sample standard deviation with Bessel's correction instead.
* Point estimates like the sample mean are expected to equal the mean of the population.
* The standard error measures the variability of sample means in the sampling distribution. It lets us assess the probability that a sample result arose by chance: if that probability is low, the result likely did not happen by chance. This is different from the standard deviation, which measures the variability of the data themselves.
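The bullet points above are easy to verify empirically. The sketch below (standard library only; the exponential population and the sample sizes are arbitrary choices for illustration) draws many samples from a skewed population and shows that the standard deviation of the sample means, i.e. the standard error, shrinks like sigma/sqrt(n):

```python
import random
import statistics

random.seed(0)

def standard_error_of_mean(population, n, num_samples=2000):
    """Estimate the standard error empirically: draw many samples of size n
    and take the standard deviation of their means."""
    means = [statistics.mean(random.choices(population, k=n))
             for _ in range(num_samples)]
    return statistics.stdev(means)

# A right-skewed (exponential) population, deliberately far from normal.
population = [random.expovariate(1.0) for _ in range(100_000)]
sigma = statistics.stdev(population)

for n in (10, 50, 250):
    empirical = standard_error_of_mean(population, n)
    theoretical = sigma / n ** 0.5
    print(f"n={n:4d}  empirical SE={empirical:.3f}  sigma/sqrt(n)={theoretical:.3f}")
```

Despite the skewed population, the empirical and theoretical standard errors agree closely, and both shrink as n grows.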
### CLT example

*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/67) 03:31*
This is one of the examples we will later calculate with the CLT. Eyeballing the problem, we can estimate the frequency from the heights of the histogram bars; a simple probability calculation gives approximately 0.17. Using R:
```
%load_ext rmagic
%R pnorm(5,mean=3.45,sd=1.63,lower.tail=F)
```

*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/67) 07:41*
Now, this question asks for the probability that the average of 100 songs on the iPod lasts **at least** 3.6 minutes. This is not the same as the probability that each of them lasts at least 3.6 minutes; in total, their combined length would be more than 360 minutes.
So we are going to calculate a probability for this skewed distribution. How? By transforming it into a normal distribution using the CLT. Given the population mean and standard deviation from earlier, we can build the sampling distribution and calculate the Z-score. Using the upper tail, we get a probability of 0.179. That is the probability that, if you randomly select 100 songs, they will last at least 360 minutes in total (an average of at least 3.6 minutes per song).
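The same upper-tail probability can be checked in Python using only the standard library (mean 3.45, standard deviation 1.63, and n = 100 are the numbers from the example):

```python
import math

def normal_upper_tail(x, mean, sd):
    """P(X > x) for a normal distribution, via the complementary error function."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

mu, sigma, n = 3.45, 1.63, 100
se = sigma / math.sqrt(n)                 # standard error of the sample mean
p = normal_upper_tail(3.6, mu, se)
print(f"z = {(3.6 - mu) / se:.2f}, P(mean >= 3.6) = {p:.3f}")   # ~0.179
```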

*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/67) 10:51*
Next we match the plots to the probabilities. The distribution closest to normal is the distribution of sample means with the larger sample size: the rightmost plot, (4). The distribution of sample observations (a sample distribution, not a sampling distribution) is in the middle, (3); it resembles the right skew and the variability of the original population. The remaining plot on the left, (2), is the sampling distribution with the smaller sample size.
## Binomial Distribution

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/55) video, 01:15*
To explain more about the binomial distribution, we will use the Milgram experiment. In this experiment the teacher is the subject, and the question is whether the teacher will do something against his conscience. The experimenter orders the teacher to shock the learner, who is actually just a sound recording played each time the teacher sends a shock. The teacher is 'blinded' about the learner, and studies consistently found over time that the probability of a teacher giving the shock is 0.65.
Each subject of Milgram's experiment is considered a **trial**, labeled a **success** if he/she refuses to administer a shock and a **failure** otherwise. In this case the **probability of success** is p = 0.35.
When a trial has only two possible outcomes, we call it a **Bernoulli random variable**.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/55) video, 05:29*
When we run the experiment on 4 people, there are 2^4 possible outcomes. But we only care about the scenarios with exactly 1 success, and there turn out to be 4 such scenarios, one per person.
With all of the scenarios written out, we see that every scenario has exactly the same probability. Since the people are **independent**, we can multiply the individual probabilities directly; and since the 4 scenarios are **disjoint events**, we can quickly compute:
res = P(scenario) * n_scenario
This makes a perfect setting for the **Binomial Distribution**.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/55) video, 07:12*
Doing the earlier steps by hand would be quite tedious, as we would have to write down all possible scenarios. Instead, we can use the binomial distribution for our problem.
This is the main formula for the binomial distribution; as long as we have the three required quantities (n, k, and p), we can do the calculation.
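The formula translates directly into Python with `math.comb`. Here it is applied to the scenario above, k = 1 success among n = 4 trials with p = 0.35, which reproduces the "probability of one scenario times the number of scenarios" shortcut:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k successes in n independent trials with success probability p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# 1 success in 4 trials: 4 equally likely disjoint scenarios
print(binom_pmf(1, 4, 0.35))   # equals 4 * 0.35 * 0.65**3
```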

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/55) video, 09:23*
The first one, '4 choose 1', can simply be calculated with the formula above. For the second one, as the first few enumerated scenarios show, things get complicated quickly, so we compute '9 choose 2'.
This can easily be computed in R:
```
%R choose(9,2)
```

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/55) video, 07:12. Only the last one is false.*
```
%R choose(5,4)
```
So to put it concretely,

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/55) video, 09:53*
The conditions for the binomial are:
1. The trials must be independent.
2. The number of trials, n, must be fixed.
3. Each trial outcome must be classified as a success or a failure.
4. The probability of success, p, must be the same for each trial (which is to be expected, and agrees with condition 1).

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/55) video, 13:43*
Note that **random sample** means the employees, i.e. the trials, are **independent**. The reason we get a very small probability is the magnitude of the exponents: with a low success probability, finding 8 successes among 10 people is far less likely than finding only a few.
Another way to think about it: since the probability of success is really low (0.13), finding 8 successes in 10 people is extraordinarily unlikely. Things would look different if we only looked for 2 successes in 10 people, where the probability rises to around 0.2-0.3.
To do this in R,
```
%load_ext rpy2.ipython
#k,n,p
%R dbinom(8,size=10,p=0.13)
```

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/55) video, 17:13*
With some simple probability reasoning, given n trials and a success probability, we can estimate the count we expect to see; that estimate is the **expected value** of the binomial. The standard deviation can be computed with the formula below, which in this case gives 3.36. So, give or take, about 68% of samples of 100 employees fall within plus or minus 3.36 of the expected value. Why "binomial"? Because each trial has only two possible outcomes.
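The two formulas (mean = n·p, sd = sqrt(n·p·(1-p))) are easy to check in Python; n = 100 and p = 0.13 are the values that reproduce the 3.36 quoted above:

```python
import math

def binom_mean_sd(n, p):
    """Expected value and standard deviation of a binomial distribution."""
    return n * p, math.sqrt(n * p * (1 - p))

mean, sd = binom_mean_sd(100, 0.13)
print(f"expected successes = {mean:.0f}, sd = {sd:.2f}")
print(f"~68% of samples fall in [{mean - sd:.2f}, {mean + sd:.2f}]")
```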

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/57) video, 02:14*
The binomial distribution looks closer to the normal distribution as the number of trials increases. Here the probability of success is p = 0.25. In the top-left distribution, the complement probability 0.75 is raised to the power of 10 (the zero-success case). As the number of trials grows, this right-skewed distribution becomes more normal.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/57) video, 03:51*
The study found that on Facebook we actually get more than we give. Power users are the users who give more than they get. So we are calculating the probability of success, i.e. of finding a power user: what is P(K >= 70)?

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/57) video, 07:40*
As n gets larger, the shape gets closer to a normal distribution, and we can take advantage of the normal distribution's properties. With the binomial formula alone we can only compute the probability of an exact value, so "at least 70 successes" would have to be computed value by value, from 70 all the way up to the number of trials n, as shown in the example.
Instead, we can calculate this using the normal distribution. As in the formulas given earlier, the mean is the product of the number of trials and the probability of success, and the standard deviation is the square root of the number of trials times the probability of success times the probability of failure. From these we get the Z-value and use it to find the probability. Remember that we want the probability of at least 70, so we take the complementary (upper-tail) probability, since the Z table only gives the probability below Z.
In R, this can be computed with:
```
%R pnorm(70,mean=61.25,sd=6.78,lower.tail =F)
```
This can also be done with another R command:

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/57) video, 09:53*
```
%R sum(dbinom(70:245, size = 245, p=0.25))
```
It turns out to be quite different from the earlier calculation. This happens for two reasons. First, the binomial does not exactly match the normal density function, as you can see from the bar chart. Second, the value 70 itself is effectively cut off from the distribution. We can reduce k by 0.5 (a continuity correction) and recalculate in R:
```
%R pnorm(69.5,mean=61.25,sd=6.78,lower.tail =F)
```
This can also be done with the applet at http://bit.ly/dist_calc
So we know that, under certain conditions, the binomial is shaped like the normal distribution, and we can take advantage of that. But what if we cannot see the distribution? What are the requirements?

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/57) video, 12:07*
We again use the 0.5-point adjustment for k. What matters here is the sample size: it must be big enough for the binomial to take on a normal shape. Later, in statistical inference, a categorical variable with only two outcomes is binomially shaped, and we can make inferences about it as long as the binomial is approximately normal.
From the formula above, we need at least 10 successes and 10 failures for the binomial to be shaped like a normal distribution.
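Both ideas, the success-failure condition and the continuity correction, can be bundled into a small Python helper. Applied to the Facebook example (n = 245, p = 0.25), the condition holds and the corrected normal approximation lands close to the exact binomial sum:

```python
import math

def normal_approx_ok(n, p):
    """Success-failure condition: at least 10 expected successes and failures."""
    return n * p >= 10 and n * (1 - p) >= 10

def upper_tail(n, p, k):
    """P(K >= k): exact binomial sum and the continuity-corrected normal approximation."""
    exact = sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
                for i in range(k, n + 1))
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    z = (k - 0.5 - mu) / sigma            # continuity correction: start at k - 0.5
    approx = 0.5 * math.erfc(z / math.sqrt(2))
    return exact, approx

print(normal_approx_ok(245, 0.25))        # True: 61.25 successes, 183.75 failures expected
exact, approx = upper_tail(245, 0.25, 70)
print(f"exact = {exact:.4f}, normal approximation = {approx:.4f}")
```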

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/57) video, 14:04*
### Binomial Examples

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/59) video, 02:58*
This can be calculated using R
```
%R dbinom(6,size=10,p=0.56)
```
This gives a probability that closely approaches what the normal approximation would give. What we expect (the expected number of successes, k) is:
Expected Value = mean = n × p = 5.6
We expect 5.6 from the calculation, and the histogram indeed peaks around 5-6, so 5.6 is not far off.
```
%R dbinom(2,size=10,p=0.56)
```
Using n = 1000 and k = 600, the Law of Large Numbers makes the probability of this exact value much smaller:
```
%R dbinom(600,size=1000,p=0.56)
```

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/59) video, 06:29*
To use the normal approximation, we need to be sure the binomial meets the minimum requirements. Once the shape is normal, we also need the mean and the standard deviation in order to plot it.

*Screenshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/59) video, 09:59*
This can be calculated with the applet or with R's dbinom command. Using the z-score approach, the k value once again has to be decreased by 0.5, the continuity correction, because otherwise the exact value 60 is not included.
```
%R pnorm (60-0.5,mean=56,sd=4.96,lower.tail=F)
%R sum(dbinom(60:100,size=100,p=0.56))
```
> **REFERENCES:**
> * https://class.coursera.org/statistics-003/lecture
> * https://www.udacity.com/course/viewer#!/c-ud827
# Test CKA
```
import numpy as np
import pickle
import gzip
import cca_core
from CKA import linear_CKA, kernel_CKA
X = np.random.randn(100, 64)
Y = np.random.randn(100, 64)
print('Linear CKA, between X and Y: {}'.format(linear_CKA(X, Y)))
print('Linear CKA, between X and X: {}'.format(linear_CKA(X, X)))
print('RBF Kernel CKA, between X and Y: {}'.format(kernel_CKA(X, Y)))
print('RBF Kernel CKA, between X and X: {}'.format(kernel_CKA(X, X)))
```
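For reference, linear CKA has a compact closed form: with column-centered feature matrices X and Y it equals ||Y.T @ X||_F^2 / (||X.T @ X||_F * ||Y.T @ Y||_F). The NumPy sketch below is not the `CKA` module's own implementation, but it should agree numerically with `linear_CKA` on the same inputs:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices of shape (n_examples, n_features)."""
    X = X - X.mean(axis=0)                # center each feature column
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2
    norm_x = np.linalg.norm(X.T @ X, 'fro')
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')
    return hsic / (norm_x * norm_y)

X = np.random.randn(100, 64)
print(linear_cka(X, X))                   # identical representations give 1.0
```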
# MNIST Example of CKA
The MNIST network layers are: 784 (input) -- 500 -- 500 -- 10 (output).
```
# Load up second hidden layer of MNIST networks and compare
with open("model_activations/MNIST/model_0_lay01.p", "rb") as f:
    acts1 = pickle.load(f)
with open("model_activations/MNIST/model_1_lay01.p", "rb") as f:
    acts2 = pickle.load(f)
print("activation shapes", acts1.shape, acts2.shape)
#results = cca_core.get_cca_similarity(acts1, acts2, epsilon=1e-10, verbose=False)
# The problem of CKA: time-consuming with large data points
print('Linear CKA: {}'.format(linear_CKA(acts1.T, acts2.T)))
print('RBF Kernel: {}'.format(kernel_CKA(acts1.T, acts2.T)))
```
For comparison, the CCA similarity for the same activations:
```
# similarity index by CCA
results = cca_core.get_cca_similarity(acts1, acts2, epsilon=1e-10, verbose=False)
print("Mean CCA similarity", np.mean(results["cca_coef1"]))
```
# CKA for Conv Nets with SVHN
SVHN consists of images that are 32 x 32 (height 32, width 32). Our architecture looks like:
**conv1(3x3,32 channels)-->maxpool(2x2)-->conv2(3x3,64 channels)-->maxpool(2x2)-->batchnorm-->fc(200)-->fc(10)**
```
# Load up conv 2 activations from SVHN
with gzip.open("model_activations/SVHN/model_0_lay03.p", "rb") as f:
    acts1 = pickle.load(f)
with gzip.open("model_activations/SVHN/model_1_lay03.p", "rb") as f:
    acts2 = pickle.load(f)
print(acts1.shape, acts2.shape)
```
#### Average Pool for the features
```
avg_acts1 = np.mean(acts1, axis=(1,2))
avg_acts2 = np.mean(acts2, axis=(1,2))
print(avg_acts1.shape, avg_acts2.shape)
# CKA
print('Linear CKA: {}'.format(linear_CKA(avg_acts1, avg_acts2)))
print('RBF Kernel CKA: {}'.format(kernel_CKA(avg_acts1, avg_acts2)))
# CCA
a_results = cca_core.get_cca_similarity(avg_acts1.T, avg_acts2.T, epsilon=1e-10, verbose=False)
print("Mean CCA similarity", np.mean(a_results["cca_coef1"]))
```
#### Interpolate for the features
```
with gzip.open("./model_activations/SVHN/model_1_lay04.p", "rb") as f:
    pool2 = pickle.load(f)
print("shape of first conv", acts1.shape, "shape of second conv", pool2.shape)
from scipy import interpolate
num_d, h, w, _ = acts1.shape
num_c = pool2.shape[-1]
pool2_interp = np.zeros((num_d, h, w, num_c))
for d in range(num_d):
    for c in range(num_c):
        # form interpolation function
        idxs1 = np.linspace(0, pool2.shape[1],
                            pool2.shape[1],
                            endpoint=False)
        idxs2 = np.linspace(0, pool2.shape[2],
                            pool2.shape[2],
                            endpoint=False)
        arr = pool2[d, :, :, c]
        f_interp = interpolate.interp2d(idxs1, idxs2, arr)
        # create larger array
        large_idxs1 = np.linspace(0, pool2.shape[1],
                                  acts1.shape[1],
                                  endpoint=False)
        large_idxs2 = np.linspace(0, pool2.shape[2],
                                  acts1.shape[2],
                                  endpoint=False)
        pool2_interp[d, :, :, c] = f_interp(large_idxs1, large_idxs2)
print("new shape", pool2_interp.shape)
num_datapoints, h, w, channels = acts1.shape
f_acts1 = acts1.reshape((num_datapoints*h*w, channels))
num_datapoints, h, w, channels = pool2_interp.shape
f_pool2 = pool2_interp.reshape((num_datapoints*h*w, channels))
# CCA
f_results = cca_core.get_cca_similarity(f_acts1.T[:,::5], f_pool2.T[:,::5], epsilon=1e-10, verbose=False)
print("Mean CCA similarity", np.mean(f_results["cca_coef1"]))
# CKA
#print('Linear CKA: {}'.format(linear_CKA(f_acts1, f_pool2))) # the shape is too large for CKA
#print('RBF Kernel CKA: {}'.format(kernel_CKA(f_acts1, f_pool2))) # the shape is too large for CKA
f_acts1.shape
```
```
%matplotlib inline
"""This simpy model mimics the arrival and treatment of patients in an
emergency department with a limited number of doctors. Patients are generated,
wait for a doc (if none available), take up the doc resources for the time of
consulation, and exit the ED straight after seeing doc. Patients are of three
priorities (1=high to 3=low) and docs will always see higher priority patients
first (though they do not interrupt lower priority patients already being seen).
The model has four classes:
Global_vars: holds global variables
Model: holds the model environment and the main model run methods
Patient: each patient is an object instance of this class. This class also holds
a static variable (dictionary) containing all patient objects currently in
the simulation model.
Resources: defines the doc resources required to see/treat patient.
"""
import simpy
import random
import pandas as pd
import matplotlib.pyplot as plt
class Global_vars:
"""Storage of global variables. No object instance created. All times are
in minutes"""
# Simulation run time and warm-up (warm-up is time before audit results are
# collected)
sim_duration = 5000
warm_up = 1000
# Average time between patients arriving
inter_arrival_time = 10
# Number of doctors in ED
number_of_docs = 2
# Time between audits
audit_interval = 100
# Average and standard deviation of time patients spend being treated in ED
# (does not include any queuing time; this is the time a doc is occupied
# with a patient)
appointment_time_mean = 18
appointment_time_sd = 7
# Lists used to store audit results
audit_time = []
audit_patients_in_ED = []
audit_patients_waiting = []
audit_patients_waiting_p1 = []
audit_patients_waiting_p2 = []
audit_patients_waiting_p3 = []
audit_reources_used = []
# Set up dataframes to store results (will be transferred from lists)
patient_queuing_results = pd.DataFrame(
columns=['priority', 'q_time', 'consult_time'])
results = pd.DataFrame()
# Set up counter for number of patients entering simulation
patient_count = 0
# Set up running counts of patients waiting (total and by priority)
patients_waiting = 0
patients_waiting_by_priority = [0, 0, 0]
class Model:
"""
Model class contains the following methods:
__init__: constructor for initiating simpy simulation environment.
build_audit_results: At end of model run, transfers results held in lists
into a pandas DataFrame.
chart: At end of model run, plots model results using MatPlotLib.
perform_audit: Called at each audit interval. Records simulation time, total
patients waiting, patients waiting by priority, and number of docs
occupied. Will then schedule next audit.
run: Called immediately after initialising simulation object. This method:
1) Calls method to set up doc resources.
2) Initialises the two starting processes: patient admissions and audit.
3) Starts model environment.
4) Save individual patient level results to csv
5) Calls the build_audit_results method and saves to csv
6) Calls the chart method to plot results
see_doc: After a patient arrives (generated in the trigger_admissions
method of this class), this see_doc process method is called (with
patient object passed to process method). This process requires a free
doc resource (resource objects held in this model class). The request is
prioritised by patient priority (lower priority numbers grab resources
first). The number of patients waiting is incremented, and doc resources
are requested. Once doc resources become available queuing times are
recorded (these are saved to global results if warm up period has been
completed). The patient is held for the required time with doc (held in
patient object) and then time with doc recorded. The patient is then
removed from the Patient class dictionary (which triggers Python to
remove the patient object).
trigger_admissions: Generates new patient admissions. Each patient is an
instance of the Patient object class. This method allocates each
patient an ID, adds the patient to the dictionary of patients held by
the Patient class (static class variable), initiates a simpy process
(in this model class) to see a doc, and schedules the next admission.
"""
def __init__(self):
"""constructor for initiating simpy simulation environment"""
self.env = simpy.Environment()
def build_audit_results(self):
"""At end of model run, transfers results held in lists into a pandas
DataFrame."""
Global_vars.results['time'] = Global_vars.audit_time
Global_vars.results['patients in ED'] = Global_vars.audit_patients_in_ED
Global_vars.results['all patients waiting'] = \
Global_vars.audit_patients_waiting
Global_vars.results['priority 1 patients waiting'] = \
Global_vars.audit_patients_waiting_p1
Global_vars.results['priority 2 patients waiting'] = \
Global_vars.audit_patients_waiting_p2
Global_vars.results['priority 3 patients waiting'] = \
Global_vars.audit_patients_waiting_p3
Global_vars.results['resources occupied'] = \
Global_vars.audit_reources_used
def chart(self):
"""At end of model run, plots model results using MatPlotLib."""
# Define figure size and resolution
fig = plt.figure(figsize=(12, 4.5), dpi=75)
# Create two charts side by side
# Figure 1: patient perspective results
ax1 = fig.add_subplot(131) # 1 row, 3 cols, chart position 1
x = Global_vars.patient_queuing_results.index
# Chart loops through 3 priorities
markers = ['o', 'x', '^']
for priority in range(1, 4):
x = (Global_vars.patient_queuing_results
[Global_vars.patient_queuing_results['priority'] ==
priority].index)
y = (Global_vars.patient_queuing_results
[Global_vars.patient_queuing_results['priority'] ==
priority]['q_time'])
ax1.scatter(x, y,
marker=markers[priority - 1],
label='Priority ' + str(priority))
ax1.set_xlabel('Patient')
ax1.set_ylabel('Queuing time')
ax1.legend()
ax1.grid(True, which='both', lw=1, ls='--', c='.75')
# Figure 2: ED level queuing results
ax2 = fig.add_subplot(132) # 1 row, 3 cols, chart position 2
x = Global_vars.results['time']
y1 = Global_vars.results['priority 1 patients waiting']
y2 = Global_vars.results['priority 2 patients waiting']
y3 = Global_vars.results['priority 3 patients waiting']
y4 = Global_vars.results['all patients waiting']
ax2.plot(x, y1, marker='o', label='Priority 1')
ax2.plot(x, y2, marker='x', label='Priority 2')
ax2.plot(x, y3, marker='^', label='Priority 3')
ax2.plot(x, y4, marker='s', label='All')
ax2.set_xlabel('Time')
ax2.set_ylabel('Patients waiting')
ax2.legend()
ax2.grid(True, which='both', lw=1, ls='--', c='.75')
# Figure 3: ED staff usage
ax3 = fig.add_subplot(133) # 1 row, 3 cols, chart position 3
x = Global_vars.results['time']
y = Global_vars.results['resources occupied']
ax3.plot(x, y, label='Docs occupied')
ax3.set_xlabel('Time')
ax3.set_ylabel('Doctors occupied')
ax3.legend()
ax3.grid(True, which='both', lw=1, ls='--', c='.75')
# Create plot
plt.tight_layout(pad=3)
plt.show()
def perform_audit(self):
"""Called at each audit interval. Records simulation time, total
patients waiting, patients waiting by priority, and number of docs
occupied. Will then schedule next audit."""
# Delay before first audit for the length of the warm-up period
yield self.env.timeout(Global_vars.warm_up)
# Then trigger repeated audits
while True:
# Record time
Global_vars.audit_time.append(self.env.now)
# Record patients waiting by referencing global variables
Global_vars.audit_patients_waiting.append(
Global_vars.patients_waiting)
Global_vars.audit_patients_waiting_p1.append(
Global_vars.patients_waiting_by_priority[0])
Global_vars.audit_patients_waiting_p2.append(
Global_vars.patients_waiting_by_priority[1])
Global_vars.audit_patients_waiting_p3.append(
Global_vars.patients_waiting_by_priority[2])
# Record patients waiting by asking length of dictionary of all
# patients (another way of doing things)
Global_vars.audit_patients_in_ED.append(len(Patient.all_patients))
# Record resources occupied
Global_vars.audit_reources_used.append(
self.doc_resources.docs.count)
# Trigger next audit after interval
yield self.env.timeout(Global_vars.audit_interval)
def run(self):
"""Called immediately after initialising simulation object. This method:
1) Calls method to set up doc resources.
2) Initialises the two starting processes: patient admissions and audit.
3) Starts model environment.
4) Save individual patient level results to csv
5) Calls the build_audit_results method and saves to csv
6) Calls the chart method to plot results
"""
# Set up resources using Resources class
self.doc_resources = Resources(self.env, Global_vars.number_of_docs)
# Initialise processes that will run on model run
self.env.process(self.trigger_admissions())
self.env.process(self.perform_audit())
# Run
self.env.run(until=Global_vars.sim_duration)
# End of simulation run. Build and save results
Global_vars.patient_queuing_results.to_csv('patient results.csv')
self.build_audit_results()
Global_vars.results.to_csv('operational results.csv')
# plot results
self.chart()
def see_doc(self, p):
"""After a patient arrives (generated in the trigger_admissions
method of this class), this see_doc process method is called (with
patient object passed to process method). This process requires a free
doc resource (resource objects held in this model class). The request is
prioritised by patient priority (lower priority numbers grab resources
first). The number of patients waiting is incremented, and doc resources
are requested. Once doc resources become available queuing times are
recorded (these are saved to global results if warm up period has been
completed). The patient is held for the required time with doc (held in
patient object) and then time with doc recorded. The patient is then
removed from the Patient class dictionary (which triggers Python to
remove the patient object).
"""
# See doctor requires doc_resources
with self.doc_resources.docs.request(priority=p.priority) as req:
# Increment count of number of patients waiting. 1 is subtracted
# from priority to align priority (1-3) with zero indexed list.
Global_vars.patients_waiting += 1
Global_vars.patients_waiting_by_priority[p.priority - 1] += 1
# Wait for resources to become available
yield req
# Resources now available. Record time patient starts to see doc
p.time_see_doc = self.env.now
# Record patient queuing time in patient object
p.queuing_time = self.env.now - p.time_in
# Reduce count of number of patients (waiting)
Global_vars.patients_waiting_by_priority[p.priority - 1] -= 1
Global_vars.patients_waiting -= 1
# Create a temporary results list with patient priority and queuing
# time
_results = [p.priority, p.queuing_time]
# Hold patient (with doc) for consulation time required
yield self.env.timeout(p.consulation_time)
# At end of consultation add time spent with doc to temp results
_results.append(self.env.now - p.time_see_doc)
# Record results in global results data if warm-up complete
if self.env.now >= Global_vars.warm_up:
Global_vars.patient_queuing_results.loc[p.id] = _results
# Delete patient (removal from patient dictionary removes only
# reference to patient and Python then automatically cleans up)
del Patient.all_patients[p.id]
def trigger_admissions(self):
"""Generates new patient admissions. Each patient is an instance of the
Patient object class. This method allocates each patient an ID, adds the
patient to the dictionary of patients held by the Patient class (static
class variable), initiates a simpy process (in this model class) to see
a doc, and then schedules the next admission"""
# While loop continues generating new patients throughout model run
while True:
# Initialise new patient (pass environment to be used to record
# current simulation time)
p = Patient(self.env)
# Add patient to dictionary of patients
Patient.all_patients[p.id] = p
# Pass patient to see_doc method
self.env.process(self.see_doc(p))
# Sample time for next admissions
next_admission = random.expovariate(
1 / Global_vars.inter_arrival_time)
# Schedule next admission
yield self.env.timeout(next_admission)
class Patient:
"""The Patient class is for patient objects. Each patient is an instance of
this class. This class also holds a static dictionary which holds all
patient objects (a patient is removed after exiting ED).
Methods are:
__init__: constructor for new patient
"""
# The following static class dictionary stores all patient objects
# This is not actually used further but shows how patients may be tracked
all_patients = {}
def __init__(self, env):
"""Constructor for new patient object.
"""
# Increment global counts of patients
Global_vars.patient_count += 1
# Set patient id and priority (random between 1 and 3)
self.id = Global_vars.patient_count
self.priority = random.randint(1, 3)
# Set consultation time (time spent with doc) by random normal
# distribution. If value <0 then set to 0
self.consulation_time = random.normalvariate(
Global_vars.appointment_time_mean, Global_vars.appointment_time_sd)
self.consulation_time = 0 if self.consulation_time < 0 \
else self.consulation_time
# Set initial queuing time as zero (this will be adjusted in model if
# patient has to wait for doc)
self.queuing_time = 0
# record simulation time patient enters simulation
self.time_in = env.now
# Set up variables to record simulation time that patient see doc and
# exit simulation
self.time_see_doc = 0
self.time_out = 0
class Resources:
"""Resources class for simpy. Only resource used is docs"""
def __init__(self, env, number_of_docs):
self.docs = simpy.PriorityResource(env, capacity=number_of_docs)
# Run model
if __name__ == '__main__':
# Initialise model environment
model = Model()
# Run model
model.run()
```
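The arrival process above hinges on one line: `random.expovariate(1 / inter_arrival_time)` draws exponential gaps whose long-run average equals the configured inter-arrival time (10 minutes in `Global_vars`). A quick stand-alone check of that assumption:

```python
import random
import statistics

random.seed(42)
inter_arrival_time = 10   # minutes, as in Global_vars

# Draw many inter-arrival gaps and confirm their mean matches the setting
gaps = [random.expovariate(1 / inter_arrival_time) for _ in range(100_000)]
print(f"mean gap = {statistics.mean(gaps):.2f} minutes")   # close to 10
```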
# Seismic Cubeset tutorial
Welcome! This notebook shows how to use the `SeismicCubeset` class to hold information about .sgy/.hdf5 cubes. The utility class `SeismicGeometry` is also demonstrated.
```
# Necessary modules
import os
import sys
from glob import glob
sys.path.append('..')
from seismiqb.batchflow import FilesIndex
from seismiqb import SeismicGeometry, SeismicCubeset
```
## First of all, let's create an instance of `SeismicCubeset` from a `FilesIndex`:
```
path_data_0 = '/notebooks/SEISMIC_DATA/CUBE_1/E_anon.sgy'
path_data_1 = '/notebooks/SEISMIC_DATA/CUBE_3/P_cube.sgy'
dsi = FilesIndex(path=[path_data_0, path_data_1], no_ext=True)
ds = SeismicCubeset(dsi)
print(ds.indices)
```
#### `SeismicGeometry` class is used as a container for lots of information: cube shapes, exact coordinates, minimum/maximum value in cube and so on. To get it, use method `load_geometries`:
```
ds = ds.load_geometries()
# ~ 11 minutes
```
Wow, that takes a lot of time! Even considering the total size of the cubes, which is almost 80GB, that is a bit too much...
**Note:** `load_geometries` makes one full pass through every cube in index. That is why it takes a lot of time, but, at the same time, it does not load more than one trace at a time in memory.
Don't worry though: by the end of this tutorial you will learn how to do it 1000 times faster!
#### Now, let's use not only cubes, but labeled horizons.
The next two methods, `load_point_clouds` and `load_labels`, result in creating a `numba.Dict` mapping from (iline, xline) to the heights of horizons. You need to pass `load_point_clouds` a mapping from the cube names in the index to the locations of the .txt files.
**Note:** due to the large number of .txt label files for every cube, it is crucial to have a well-thought-out file structure.
```
paths_txt = {ds.indices[0]: glob('/notebooks/SEISMIC_DATA/CUBE_1/BEST_HORIZONS/*'),
             ds.indices[1]: glob('/notebooks/SEISMIC_DATA/CUBE_3/BEST_HORIZONS/*')}
ds = (ds.load_point_clouds(paths=paths_txt)
        .create_labels())
```
#### As we know, many cubes are not fully labeled: some ilines/xlines are missing. It might be a good idea to check where labels are present and where they are not:
**Note:** the argument to `show_labels` is a cube identifier from the index of the `Cubeset`, not the path to the file itself.
```
ds.show_labels(idx=0)
```
Looks like a lot of labels are absent! If we were to cut crops from places without labels and train a neural network on them, it would be detrimental to its quality.
#### That is why we provide the `load_samplers` method: it creates an instance of `Sampler` that generates points from labeled places (or very close to them).
**Note:** the default behaviour is to sample points with equal probability from every cube, according to that cube's label distribution. You can also change the proportions between cubes or use a different sampling strategy. To learn more about this, check the [documentation.](https://github.com/analysiscenter/seismiqb/blob/master/seismiqb/src/cubeset.py#L158)
```
ds = ds.create_sampler()
print('Example of sampled point:', ds.sampler.sample(1))
ds.show_sampler(idx=0)
```
As you can see, each sampled point contains a reference to the cube it was cut from; that is necessary so that subsequent methods know where to load data from. The next three values are the iline, xline, and height, all scaled to the [0, 1] range. This is done to abstract away the different shapes of the cubes and use one common syntax for all of them.
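For instance, converting a scaled point back to absolute cube coordinates is just a multiplication by the cube shape — a sketch with hypothetical shapes, not seismiqb's actual API:

```python
# Hypothetical sketch: rescale a sampled point from [0, 1] back to
# absolute (iline, xline, height) indices using the cube's shape.
def unscale_point(point, cube_shape):
    """point: (iline, xline, height) in [0, 1]; cube_shape: cube dimensions."""
    return tuple(int(p * s) for p, s in zip(point, cube_shape))

# e.g. a cube with 800 ilines, 600 xlines and 1500 depth samples (invented)
print(unscale_point((0.5, 0.25, 0.1), (800, 600, 1500)))  # → (400, 150, 150)
```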
The `show_sampler` method lets you look at the locations of the generated points from above, as well as a histogram of the heights.
#### There are a lot of attributes in our Cubeset: `geometries`, `point_clouds`, `labels`, etc. Each of these entities is a mapping from cube identifiers. We provide methods to store them: for example, `save_point_clouds` does exactly what you would expect:
```
path_pc_saved = '/notebooks/SEISMIC_DATA/SAVED/DEMO/point_clouds.dill'
ds = ds.save_point_clouds(save_to=path_pc_saved)
```
#### To load an entity from disk, pass the `path` argument to the `load_point_clouds` method.
```
%%time
ds = ds.load_point_clouds(path=path_pc_saved)
```
**Note:** similar functionality is present for `load_geometries`, `load_labels` and `load_samplers`.
# HDF5
Storing the created geometries, labels and samplers is definitely faster than inferring them every time. But that does not mean there is no room for improvement!
#### First of all, let's convert our .sgy cubes to .hdf5 format
**Note:** by default, cubes in the new format are saved on disk right next to their .sgy counterparts.
```
# ds = ds.convert_to_h5py()
```
It takes some time, but this method needs to be called only once.
#### Let's check how `load_geometries` fares:
```
%%time
path_data_0 = '/notebooks/SEISMIC_DATA/CUBE_1/E_anon.hdf5'
path_data_1 = '/notebooks/SEISMIC_DATA/CUBE_3/P_cube.hdf5'
dsi = FilesIndex(path=[path_data_0, path_data_1], no_ext=True)
ds = SeismicCubeset(dsi)
ds = ds.load_geometries()
```
Almost instant!
# A study of bias in data on Wikipedia
The purpose of this study is to explore bias in data on Wikipedia by analyzing Wikipedia articles on politicians from various countries with respect to their populations. A further metric used for comparison is the quality of articles on politicians across different countries.
### Import libraries
```
# For getting data from API
import requests
import json
# For data analysis
import pandas as pd
import numpy as np
```
### Load datasets
**Data Sources:**
We will combine the below two datasets for our analysis of bias in data on Wikipedia:
1) **Wikipedia articles** : This dataset contains information on Wikipedia articles for politicians by country. Details include the article name, revision id (last edit id) and country. This dataset can be downloaded from [figshare](https://figshare.com/articles/Untitled_Item/5513449). A downloaded version "page_data.csv" (downloaded on 28th Oct 2018) is also uploaded to the [git](https://github.com/priyankam22/DATA-512-Human-Centered-Data-Science/tree/master/data-512-a2) repository.
2) **Country Population**: This dataset contains a list of countries and their mid-2018 populations in millions. It is sourced from the [Population Reference Bureau](https://www.prb.org/data/). As the dataset is copyrighted, it is not available in this repository, and the data may have changed by the time you extract it from the website. For reproducibility, I have included the intermediate merged file used for the final analysis.
```
# Load the Wikipedia articles
wiki_articles = pd.read_csv('page_data.csv')
wiki_articles.head()
# Load the country population
country_pop = pd.read_csv('WPDS_2018_data.csv')
country_pop.head()
print("Number of records in Wikipedia articles dataset: ", wiki_articles.shape[0])
print("Number of records in country population dataset: ", country_pop.shape[0])
```
### Get the quality of Wikipedia articles
To get the quality score of Wikipedia articles, we will use the machine learning system called [ORES](https://www.mediawiki.org/wiki/ORES) ("Objective Revision Evaluation Service"). ORES estimates the quality of a given Wikipedia article by assigning a series of probabilities that the article belongs to one of the six quality categories and returns the most probable category as the prediction. The quality of an article (from best to worst) can be categorized into six categories as below.
1. FA - Featured article
2. GA - Good article
3. B - B-class article
4. C - C-class article
5. Start - Start-class article
6. Stub - Stub-class article
More details about these categories can be found at [Wikipedia: Content Assessment](https://en.wikipedia.org/wiki/Wikipedia:Content_assessment#Grades)
We will use a Wikimedia RESTful API endpoint for ORES to get the predictions for each of the Wikipedia articles. Documentation for the API can be found [here](https://ores.wikimedia.org/v3/#!/scoring/get_v3_scores_context_revid_model).
```
# Set the headers with your github ID and email address. This will be used for identification while making calls to the API
headers = {'User-Agent' : 'https://github.com/priyankam22', 'From' : 'mhatrep@uw.edu'}
# Function to get the predictions for Wikipedia articles using API calls
def get_ores_predictions(rev_ids, headers):
'''
Takes a list of revision ids of Wikipedia articles and returns the quality of each article.
Input:
rev_ids: A list of revision ids of Wikipedia articles
headers: a dictionary with identifying information to be passed to the API call
Output: a dictionary of dictionaries storing a final predicted label and probabilities for each of the categories
for every revision id passed.
'''
# Define the endpoint
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
# Specify the parameters for the endpoint
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in rev_ids) # A single string with all revision ids separated by '|'
}
# make the API call
api_call = requests.get(endpoint.format(**params))
# Get the response in json format
response = api_call.json()
return response
```
Let's look at the output of the API call by running the function on a sample list of revision ids.
```
get_ores_predictions(list(wiki_articles['rev_id'])[0:5], headers)
```
We need to extract the prediction for each of the revision ids from the response. Note that the prediction is a key in one of the nested dictionaries.
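To make that nesting concrete, here is a mocked-up response with the same structure (the revision ids and values are invented; the shape mirrors what the extraction loop later in this notebook walks through):

```python
# Mock ORES-style response fragment; values are invented.
# The prediction sits at scores[rev_id]['wp10']['score']['prediction'],
# and the 'score' key is absent when ORES could not score the revision.
mock_scores = {
    "807544851": {"wp10": {"score": {"prediction": "Stub",
                                     "probability": {"Stub": 0.9}}}},
    "807367030": {"wp10": {"error": {"message": "revision not found"}}},
}

def extract_prediction(entry):
    """Return the predicted label, or None if ORES returned no score."""
    score = entry["wp10"].get("score")
    return score["prediction"] if score else None

preds = [extract_prediction(v) for v in mock_scores.values()]
print(preds)  # → ['Stub', None]
```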
We will call the API for all the Wikipedia articles in batches of 100 so that we do not overload the server with our requests. A batch size of 100 was chosen after trial and error; a larger batch size can cause the API to return an error.
```
# Make calls to the API in batches and append the scores portion of the dictionary response to the scores list.
scores = []
batch_size = 100
for begin_ind in range(0,len(wiki_articles),batch_size):
# set the end index by adding the batchsize except for the last batch.
end_ind = begin_ind+batch_size if begin_ind+batch_size <= len(wiki_articles) else len(wiki_articles)
# make the API call
output = get_ores_predictions(list(wiki_articles['rev_id'])[begin_ind:end_ind], headers)
# Append the scores extracted from the dictionary to the scores list
scores.append(output['enwiki']['scores'])
```
Let us now extract the predicted labels for each revision_id from the list of scores.
```
# A list to store all the predicted labels
prediction = []
# Loop through all the scores dictionaries from the scores list.
for i in range(len(scores)):
# Get the predicted label from the value of all the keys(revision_ids)
for val in scores[i].values():
# Use the get function to get the value of 'score' key. If the score is not found (in case of no matches), none is returned.
prediction.append(val['wp10'].get('score')['prediction'] if val['wp10'].get('score') else None)
print("Number of predictions extracted : " , len(prediction))
```
This matches the number of revision ids we passed earlier.
```
print("Unique predictions extracted : " , set(prediction))
# Merging the predictions with the Wikipedia articles
wiki_articles['quality'] = prediction
wiki_articles.head()
```
### Merging the Wikipedia Quality data with the country population
```
# Create separate columns with lowercase country name so that we can join without mismatch
wiki_articles['country_lower'] = wiki_articles['country'].apply(lambda x: x.lower())
country_pop['Geography_lower'] = country_pop['Geography'].apply(lambda x: x.lower())
# Merge the two datasets on lowercase country name. Inner join will remove any countries that do not have matching rows
dataset = wiki_articles.merge(country_pop, how='inner', left_on='country_lower', right_on='Geography_lower')
dataset.head()
```
### Data cleaning
```
# Drop the extra country columns.
dataset.drop(['country_lower','Geography','Geography_lower'], axis=1, inplace=True)
# Rename the remaining columns
dataset.columns = ['article_name','country','revision_id','article_quality','population']
# Remove columns where quality is None (not found from ORES)
quality_none_idx = dataset[dataset['article_quality'].isnull()].index
print("%d rows removed as ORES could not return the quality of the article" % len(quality_none_idx))
dataset.drop(quality_none_idx, inplace=True)
# Check the datatypes of the columns
dataset.info()
# Population is stored as text. Let us remove the commas used for separation and convert it to float
dataset['population'] = dataset['population'].apply(lambda x: float(x.replace(',','')))
dataset.shape
# Save the final dataset as a csv file for future reproducibility
dataset.to_csv('wiki_articles_country_pop.csv')
```
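The comma-stripping conversion used for the population column can be checked on a standalone value:

```python
# Same conversion as applied to the population column: strip the
# thousands separators, then parse as float.
def parse_population(text):
    return float(text.replace(",", ""))

print(parse_population("1,371.3"))  # → 1371.3
```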
### Data Analysis
We will now perform some analysis on the number of articles on politicians with respect to a country's population and what proportion of these articles are good quality articles. By comparing the highest and lowest ranking countries in the list, we can get a fair idea of bias in the data on Wikipedia. Ideally we would expect to see similar proportions in all countries.
```
# If you are skipping all the above steps then the prepared dataset can be loaded.
dataset = pd.read_csv('wiki_articles_country_pop.csv')
# Add a new binary column to classify the articles as good quality or not where good quality is defined as either FA or GA.
dataset['is_good_quality'] = dataset['article_quality'].apply(lambda x: 1 if x == 'FA' or x == 'GA' else 0)
```
To get an idea of the overall political coverage in Wikipedia by country, let us aggregate the data by country. We are interested in the total number of articles per country, the population of each country and the number of good articles per country.
```
output = dataset[['country','population','is_good_quality']].groupby(['country'], as_index=False).agg(['count','max','sum']).reset_index()
output.head()
# Drop the columns we don't need for the analysis.
output.drop(('population','count'), axis=1, inplace=True)
output.drop(('population','sum'), axis=1, inplace=True)
output.drop(('is_good_quality','max'), axis=1, inplace=True)
# Rename the useful columns
output.columns = ['country','population','total_articles','quality_articles']
output.head()
```
To be able to compare different countries, let us calculate the proportion of articles by unit population and the proportion of good quality articles.
```
# Create a new column with the proportion of articles per 100 ppl.
output['article_prop'] = np.round(output['total_articles']/(output['population']*10**4)*100,6)
# Create a new column for proportion of good quality articles
output['quality_prop'] = output['quality_articles']/output['total_articles']*100
```
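As a quick sanity check of these two ratios, the Tuvalu figures quoted in the results below can be plugged in by hand (the count of good quality articles is invented for illustration):

```python
# Reproduce the notebook's article_prop scaling on the Tuvalu figures
# quoted in the results: 55 articles, population of 0.01 million.
population_millions = 0.01
total_articles = 55

article_prop = round(total_articles / (population_millions * 10**4) * 100, 6)
print(article_prop)  # → 55.0

quality_articles = 2  # invented, for illustration only
quality_pct = quality_articles / total_articles * 100
print(round(quality_pct, 2))  # → 3.64
```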
### Results
### 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
```
# Sort by article_prop and extract top 10 countries
high_art_prop = output.sort_values(by='article_prop',ascending=False)[0:10].drop(['quality_articles','quality_prop'], axis=1).reset_index(drop=True)
# Rename the columns
high_art_prop.columns = ['Country', 'Population till mid-2018 (in millions)', 'Total Articles', 'Articles Per 100 Persons']
high_art_prop
```
### 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
```
# Sort by article_prop and extract lowest 10 countries
low_art_prop = output.sort_values(by='article_prop',ascending=True)[0:10:].drop(['quality_articles','quality_prop'], axis=1).reset_index(drop=True)
# Rename the columns
low_art_prop.columns = ['Country', 'Population till mid-2018 (in millions)', 'Total Articles', 'Articles Per 100 Persons']
low_art_prop
```
As seen in the tables above, there is a huge variation in the proportion of Wikipedia articles on politicians relative to country population. The highest-ranking country is Tuvalu, with a population of 0.01 million and 55 Wikipedia articles on politicians (55 articles per 100 persons), whereas the lowest-ranking country is India, with a population of 1371.3 million and only 986 articles (0.007 articles per 100 persons). One important trend to note is that all the highest-ranking countries (except Iceland) have extremely low populations (less than 100K), while all the lowest-ranking countries have very high populations. Most of the low-ranking countries are developing countries, which may partly explain the bias seen in the data.
### 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
```
# Sort by quality_prop and extract highest 10 countries
high_qual_art = output.sort_values(by='quality_prop',ascending=False)[0:10].drop(['population','article_prop'], axis=1).reset_index(drop=True)
# Rename the columns
high_qual_art.columns = ['Country', 'Total Articles', 'Good Quality Articles', 'Proportion Of Good Quality Articles (%)']
high_qual_art
```
### 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
```
# Sort by quality_prop and extract highest 10 countries
low_qual_art = output.sort_values(by='quality_prop',ascending=True)[0:10].drop(['population','article_prop'], axis=1).reset_index(drop=True)
# Rename the columns
low_qual_art.columns = ['Country', 'Total Articles', 'Good Quality Articles', 'Proportion Of Good Quality Articles (%)']
low_qual_art
```
As seen in the two tables above, the proportion of good quality articles is highest for North Korea at 17.94% and is 0% for many countries such as Sao Tome and Principe, Mozambique, and Cameroon. It seems there are many countries with no good quality articles at all. Let's find all such countries.
```
no_good_quality_articles = list(output[output['quality_articles'] == 0]['country'])
len(no_good_quality_articles)
```
There are 37 countries with no good quality articles. All the countries are listed below.
```
no_good_quality_articles
```
# Ensemble sorting of a Neuropixels recording
This notebook reproduces figures 1 and 4 from the paper [**SpikeInterface, a unified framework for spike sorting**](https://www.biorxiv.org/content/10.1101/796599v2).
The data set for this notebook is available on the Dandi Archive: [https://gui.dandiarchive.org/#/dandiset/000034](https://gui.dandiarchive.org/#/dandiset/000034)
The entire data archive can be downloaded with the command `dandi download https://gui.dandiarchive.org/#/dandiset/000034/draft` (about 75GB).
Files required to run the code are:
- the raw data: [sub-mouse412804_ecephys.nwb](https://girder.dandiarchive.org/api/v1/item/5f2b250fee8baa608594a166/download)
- two manually curated sortings:
- [sub-mouse412804_ses-20200824T155542.nwb](https://girder.dandiarchive.org/api/v1/item/5f43c74cbf3ae27e069e0aee/download)
- [sub-mouse412804_ses-20200824T155543.nwb](https://girder.dandiarchive.org/api/v1/item/5f43c74bbf3ae27e069e0aed/download)
These files should be in the same directory where the notebook is located (otherwise adjust paths below).
Author: [Matthias Hennig](http://homepages.inf.ed.ac.uk/mhennig/), University of Edinburgh, 24 Aug 2020
### Requirements
For this notebook you will need the following Python packages:
- numpy
- pandas
- matplotlib
- seaborn
- spikeinterface
- dandi
- matplotlib-venn
To run the MATLAB-based sorters, you would also need a MATLAB license.
For other sorters, please refer to the documentation on [how to install sorters](https://spikeinterface.readthedocs.io/en/latest/sortersinfo.html).
```
import os
# Matlab sorter paths:
# change these to match your environment
os.environ["IRONCLUST_PATH"] = "./ironclust"
os.environ["KILOSORT2_PATH"] = "./Kilosort2"
os.environ["HDSORT_PATH"] = "./HDsort"
import matplotlib.pyplot as plt
import numpy as np
from pathlib import Path
import pandas as pd
import seaborn as sns
from collections import defaultdict
from matplotlib_venn import venn3
import spikeinterface as si
import spikeextractors as se
import spiketoolkit as st
import spikesorters as ss
import spikecomparison as sc
import spikewidgets as sw
from spikecomparison import GroundTruthStudy, MultiSortingComparison
%matplotlib inline
def clear_axes(ax):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
# print version information
si.print_spikeinterface_version()
ss.print_sorter_versions()
# where to find the data set
data_file = Path('./') / 'mouse412804_probeC_15min.nwb'
# results are stored here
study_path = Path('./')
# this folder will contain all results
study_folder = study_path / 'study_15min/'
# this folder will be used as temporary space, and holds the sortings etc.
working_folder = study_path / 'working_15min'
# sorters to use
sorter_list = ['herdingspikes', 'kilosort2', 'ironclust', 'tridesclous', 'spykingcircus', 'hdsort']
# pass the following parameters to the sorters
sorter_params = {
# 'kilosort2': {'keep_good_only': True}, # uncomment this to test the native filter for false positives
'spyking_circus': {'adjacency_radius': 50},
'herdingspikes': {'filter': True, }}
sorter_names = ['HerdingSpikes', 'Kilosort2', 'Ironclust','Tridesclous', 'SpykingCircus', 'HDSort']
sorter_names_short = ['HS', 'KS', 'IC', 'TDC', 'SC', 'HDS']
# create an extractor object for the raw data
recording = se.NwbRecordingExtractor(str(data_file))
print("Number of frames: {}\nSampling rate: {}Hz\nNumber of channels: {}".format(
recording.get_num_frames(), recording.get_sampling_frequency(),
recording.get_num_channels()))
```
# Run spike sorters and perform comparison between all outputs
```
# set up the study environment and run all sorters
# sorters are not re-run if outputs are found in working_folder
if not study_folder.is_dir():
print('Setting up study folder:', study_folder)
os.mkdir(study_folder)
# run all sorters
result_dict = ss.run_sorters(sorter_list=sorter_list, recording_dict_or_list={'rec': recording}, with_output=True,
sorter_params=sorter_params, working_folder=working_folder, engine='loop',
mode='keep', verbose=True)
# store sortings in a list for quick access
sortings = []
for s in sorter_list:
sortings.append(result_dict['rec',s])
# perform a multi-comparison, all to all sortings
# result is stored, and loaded from disk if the file is found
if not os.path.isfile(study_folder / 'multicomparison.gpickle'):
mcmp = sc.compare_multiple_sorters(sorting_list=sortings, name_list=sorter_names_short,
verbose=True)
print('saving multicomparison')
mcmp.dump(study_folder)
else:
print('loading multicomparison')
mcmp = sc.MultiSortingComparison.load_multicomparison(study_folder)
```
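The agreement scores behind the multi-comparison come from matching spikes between two sorters' trains within a small time tolerance. A deliberately simplified, pure-Python sketch of the idea (not SpikeInterface's actual implementation) looks like this:

```python
# Simplified illustration of spike-train agreement: count spikes of
# train_a that have a spike of train_b within +/- tolerance samples,
# then normalise by the union of events.
# Not SpikeInterface's exact metric — an assumption-laden sketch.
def agreement(train_a, train_b, tolerance=3):
    matched = sum(
        any(abs(a - b) <= tolerance for b in train_b) for a in train_a
    )
    return matched / (len(train_a) + len(train_b) - matched)

a = [100, 205, 310, 498]  # spike times in samples (invented)
b = [101, 207, 309, 620]
print(round(agreement(a, b), 2))  # → 0.6
```

Units found by several sorters with high pairwise agreement are the "consensus" units analysed below; units with no match in any other sorter are the "orphan" units.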
# Figure 1 - comparison of sorter outputs
```
# activity levels on the probe
plt.figure(figsize=(16,2))
ax = plt.subplot(111)
w = sw.plot_activity_map(recording, trange=(0,20), transpose=True, ax=ax, background='w', frame=True)
ax.plot((50,150),(-30,-30),'k-')
ax.annotate('100$\\mu m$',(100,-90), ha='center');
# example data traces
plt.figure(figsize=(16,6))
ax = plt.subplot(111)
w = sw.plot_timeseries(recording, channel_ids=range(20,28), color='k', ax=ax, trange=(1,2))
ax.axis('off')
p = ax.get_position()
p.y0 = 0.55
ax.set_position(p)
ax.set_xticks(())
ax.plot((1.01,1.11),(-1790,-1790),'k-')
ax.annotate('100ms',(1.051,-2900), ha='center');
ax.set_ylim((-2900,ax.set_ylim()[1]))
ax = plt.subplot(111)
ax.bar(range(len(sortings)), [len(s.get_unit_ids()) for s in sortings], color='tab:blue')
ax.set_xticks(range(len(sorter_names)))
ax.set_xticklabels(sorter_names_short, rotation=60, ha='center')
ax.set_ylabel('Units detected')
clear_axes(ax)
w = sw.plot_multicomp_agreement(mcmp, plot_type='pie')
w = sw.plot_multicomp_agreement_by_sorter(mcmp, show_legend=True)
# numbers for figure above
print('number of units detected:')
for i,s in enumerate(sortings):
print("{}: {}".format(sorter_names[i],len(s.get_unit_ids())))
sg_names, sg_units = mcmp.compute_subgraphs()
v, c = np.unique([len(np.unique(s)) for s in sg_names], return_counts=True)
df = pd.DataFrame(np.vstack((v,c,np.round(100*c/np.sum(c),2))).T,
columns=('in # sorters','# units','percentage'))
print('\nall sorters, all units:')
print(df)
df = pd.DataFrame()
for i, name in enumerate(sorter_names_short):
v, c = np.unique([len(np.unique(sn)) for sn in sg_names if name in sn], return_counts=True)
df.insert(2*i,name,c)
df.insert(2*i+1,name+'%',np.round(100*c/np.sum(c),1))
print('\nper sorter:')
print(df)
```
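The subgraph tally above boils down to `np.unique` with `return_counts=True` over the subgraph sizes; on toy data:

```python
import numpy as np

# Toy version of the subgraph tally: each inner list names the sorters
# that detected one unit; count units by how many sorters agreed.
sg_names = [['HS'], ['KS', 'IC'], ['KS', 'IC', 'HS'], ['SC'], ['KS', 'IC']]
sizes = [len(np.unique(s)) for s in sg_names]
v, c = np.unique(sizes, return_counts=True)
print(dict(zip(v.tolist(), c.tolist())))  # → {1: 2, 2: 2, 3: 1}
```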
# Supplemental Figure - example unit templates
```
# show unit templates and spike trains for two units/all sorters
sorting = mcmp.get_agreement_sorting(minimum_agreement_count=6)
get_sorting = lambda u: [mcmp.sorting_list[i] for i,n in enumerate(mcmp.name_list) if n==u[0]][0]
get_spikes = lambda u: [mcmp.sorting_list[i].get_unit_spike_train(u[1]) for i,n in enumerate(mcmp.name_list) if n==u[0]][0]
# one well matched and one not so well matched unit, all sorters
show_units = [2,17]
for i,unit in enumerate(show_units):
fig = plt.figure(figsize=(16, 2))
ax = plt.subplot(111)
ax.set_title('Average agreement: {:.2f}'.format(sorting.get_unit_property(sorting.get_unit_ids()[unit],'avg_agreement')))
units = sorting.get_unit_property(sorting.get_unit_ids()[unit], 'sorter_unit_ids')
cols = plt.cm.Accent(np.arange(len(units))/len(units))
for j,u in enumerate(dict(sorted(units.items())).items()):
s = get_sorting(u).get_units_spike_train((u[1],))[0]
s = s[s<20*get_sorting(u).get_sampling_frequency()]
ax.plot(s/get_sorting(u).get_sampling_frequency(), np.ones(len(s))*j, '|', color=cols[j], label=u[0])
ax.set_frame_on(False)
ax.set_xticks(())
ax.set_yticks(())
ax.plot((0,1),(-1,-1),'k')
ax.annotate('1s',(0.5,-1.75), ha='center')
ax.set_ylim((-2,len(units)+1))
fig = plt.figure(figsize=(16, 2))
units = sorting.get_unit_property(sorting.get_unit_ids()[unit], 'sorter_unit_ids')
print(units)
print('Agreement: {}'.format(sorting.get_unit_property(sorting.get_unit_ids()[unit],'avg_agreement')))
cols = plt.cm.Accent(np.arange(len(units))/len(units))
for j,u in enumerate(dict(sorted(units.items())).items()):
ax = plt.subplot(1, len(sorter_list), j+1)
w = sw.plot_unit_templates(recording, get_sorting(u), unit_ids=(u[1],), max_spikes_per_unit=10,
channel_locs=True, radius=75, show_all_channels=False, color=[cols[j]],
lw=1.5, ax=ax, plot_channels=False, set_title=False, axis_equal=True)
# was 100 spikes in original plot
ax.set_title(u[0])
```
# Figure 4 - comparison between ensemble sortings and curated data
```
# perform a comparison with curated sortings (KS2)
curated1 = se.NwbSortingExtractor('sub-mouse412804_ses-20200824T155542.nwb', sampling_frequency=30000)
curated2 = se.NwbSortingExtractor('sub-mouse412804_ses-20200824T155543.nwb', sampling_frequency=30000)
comparison_curated = sc.compare_two_sorters(curated1, curated2)
comparison_curated_ks = sc.compare_multiple_sorters((curated1, curated2, sortings[sorter_list.index('kilosort2')]))
# consensus sortings (units where at least 2 sorters agree)
sorting = mcmp.get_agreement_sorting(minimum_agreement_count=2)
consensus_sortings = []
units_dict = defaultdict(list)
units = [sorting.get_unit_property(u,'sorter_unit_ids') for u in sorting.get_unit_ids()]
for au in units:
for u in au.items():
units_dict[u[0]].append(u[1])
for i,s in enumerate(sorter_names_short):
consensus_sortings.append(se.SubSortingExtractor(sortings[i], unit_ids=units_dict[s]))
# orphan units (units found by only one sorter)
sorting = mcmp.get_agreement_sorting(minimum_agreement_count=1, minimum_agreement_count_only=True)
unmatched_sortings = []
units_dict = defaultdict(list)
units = [sorting.get_unit_property(u,'sorter_unit_ids') for u in sorting.get_unit_ids()]
for au in units:
for u in au.items():
units_dict[u[0]].append(u[1])
for i,s in enumerate(sorter_names_short):
unmatched_sortings.append(se.SubSortingExtractor(sortings[i], unit_ids=units_dict[s]))
consensus_curated_comparisons = []
for s in consensus_sortings:
consensus_curated_comparisons.append(sc.compare_two_sorters(s, curated1))
consensus_curated_comparisons.append(sc.compare_two_sorters(s, curated2))
unmatched_curated_comparisons = []
for s in unmatched_sortings:
unmatched_curated_comparisons.append(sc.compare_two_sorters(s, curated1))
unmatched_curated_comparisons.append(sc.compare_two_sorters(s, curated2))
all_curated_comparisons = []
for s in sortings:
all_curated_comparisons.append(sc.compare_two_sorters(s, curated1))
all_curated_comparisons.append(sc.compare_two_sorters(s, curated2))
# count various types of units
count_mapped = lambda x : np.sum([u!=-1 for u in x.get_mapped_unit_ids()])
count_not_mapped = lambda x : np.sum([u==-1 for u in x.get_mapped_unit_ids()])
count_units = lambda x : len(x.get_unit_ids())
n_consensus_curated_mapped = np.array([count_mapped(c.get_mapped_sorting1()) for c in consensus_curated_comparisons]).reshape((len(sorter_list),2))
n_consensus_curated_unmapped = np.array([count_not_mapped(c.get_mapped_sorting1()) for c in consensus_curated_comparisons]).reshape((len(sorter_list),2))
n_unmatched_curated_mapped = np.array([count_mapped(c.get_mapped_sorting1()) for c in unmatched_curated_comparisons]).reshape((len(sorter_list),2))
n_all_curated_mapped = np.array([count_mapped(c.get_mapped_sorting1()) for c in all_curated_comparisons]).reshape((len(sorter_list),2))
n_all_curated_unmapped = np.array([count_not_mapped(c.get_mapped_sorting1()) for c in all_curated_comparisons]).reshape((len(sorter_list),2))
n_curated_all_unmapped = np.array([count_not_mapped(c.get_mapped_sorting2()) for c in all_curated_comparisons]).reshape((len(sorter_list),2))
n_all = np.array([count_units(s) for s in sortings])
n_consensus = np.array([count_units(s) for s in consensus_sortings])
n_unmatched = np.array([count_units(s) for s in unmatched_sortings])
n_curated1 = len(curated1.get_unit_ids())
n_curated2 = len(curated2.get_unit_ids())
# overlap between two manually curated data and the Kilosort2 sorting they were derived from
i = {}
for k in ['{0:03b}'.format(v) for v in range(1,2**3)]:
i[k] = 0
i['111'] = len(comparison_curated_ks.get_agreement_sorting(minimum_agreement_count=3).get_unit_ids())
s = comparison_curated_ks.get_agreement_sorting(minimum_agreement_count=2, minimum_agreement_count_only=True)
units = [s.get_unit_property(u,'sorter_unit_ids').keys() for u in s.get_unit_ids()]
for u in units:
if 'sorting1' in u and 'sorting2' in u:
i['110'] += 1
if 'sorting1' in u and 'sorting3' in u:
i['101'] += 1
if 'sorting2' in u and 'sorting3' in u:
i['011'] += 1
s = comparison_curated_ks.get_agreement_sorting(minimum_agreement_count=1, minimum_agreement_count_only=True)
units = [s.get_unit_property(u,'sorter_unit_ids').keys() for u in s.get_unit_ids()]
for u in units:
if 'sorting1' in u:
i['100'] += 1
if 'sorting2' in u:
i['010'] += 1
if 'sorting3' in u:
i['001'] += 1
colors = plt.cm.RdYlBu(np.linspace(0,1,3))
venn3(subsets = i,set_labels=('Curated 1', 'Curated 2', 'Kilosort2'),
set_colors=colors, alpha=0.6, normalize_to=100)
# overlaps between ensemble sortings (per sorter) and manually curated sortings
def plot_mcmp_results(data, labels, ax, ylim=None, yticks=None, legend=False):
angles = (np.linspace(0, 2*np.pi, len(sorter_list), endpoint=False)).tolist()
angles += angles[:1]
for i,v in enumerate(data):
v = v.tolist() + v[:1].tolist()
ax.bar(np.array(angles)+i*2*np.pi/len(sorter_list)/len(data)/2-2*np.pi/len(sorter_list)/len(data)/4,
v, label=labels[i],
alpha=0.8, width=np.pi/len(sorter_list)/2)
ax.set_thetagrids(np.degrees(angles), sorter_names_short)
if legend:
ax.legend(bbox_to_anchor=(1.0, 1), loc=2, borderaxespad=0., frameon=False, fontsize=8, markerscale=0.25)
ax.set_theta_offset(np.pi / 2)
ax.set_theta_direction(-1)
if ylim is not None:
ax.set_ylim(ylim)
if yticks is not None:
ax.set_yticks(yticks)
plt.figure(figsize=(14,3))
sns.set_palette(sns.color_palette("Set1"))
ax = plt.subplot(131, projection='polar')
plot_mcmp_results((n_all_curated_mapped[:,0]/n_all*100,
n_all_curated_mapped[:,1]/n_all*100),
('Curated 1','Curated 2'), ax, yticks=np.arange(20,101,20))
ax.set_title('Percent all units\nwith match in curated sets',pad=20);
plt.ylim((0,100))
ax = plt.subplot(132, projection='polar')
plot_mcmp_results((n_consensus_curated_mapped[:,0]/n_consensus*100,
n_consensus_curated_mapped[:,1]/n_consensus*100),
('Curated 1','Curated 2'), ax, yticks=np.arange(20,101,20))
ax.set_title('Percent consensus units\nwith match in curated sets',pad=20);
plt.ylim((0,100))
ax = plt.subplot(133, projection='polar')
plot_mcmp_results((n_unmatched_curated_mapped[:,0]/n_unmatched*100,
n_unmatched_curated_mapped[:,1]/n_unmatched*100),
('Curated 1','Curated 2'), ax, ylim=(0,30), yticks=np.arange(10,21,10), legend=True)
ax.set_title('Percent non-consensus units\nwith match in curated sets',pad=20);
# numbers for figure above
df = pd.DataFrame(np.vstack((n_all_curated_mapped[:,0]/n_all*100, n_all_curated_mapped[:,1]/n_all*100,
n_all_curated_mapped[:,0], n_all_curated_mapped[:,1])).T,
columns = ('C1 %', 'C2 %', 'C1', 'C2'), index=sorter_names_short)
print('Percent all units with match in curated sets')
print(df)
df = pd.DataFrame(np.vstack((n_consensus_curated_mapped[:,0]/n_consensus*100, n_consensus_curated_mapped[:,1]/n_consensus*100,
n_consensus_curated_mapped[:,0],n_consensus_curated_mapped[:,1])).T,
columns = ('C1 %', 'C2 %', 'C1', 'C2'), index=sorter_names_short)
print('\nPercent consensus units with match in curated sets')
print(df)
df = pd.DataFrame(np.vstack((n_unmatched_curated_mapped[:,0]/n_unmatched*100,
n_unmatched_curated_mapped[:,1]/n_unmatched*100,
n_unmatched_curated_mapped[:,0],n_unmatched_curated_mapped[:,1])).T,
columns = ('C1 %', 'C2 %', 'C1', 'C2'), index=sorter_names_short)
print('\nPercent non-consensus units with match in curated sets')
print(df)
```
# Linear regression
This notebook develops a linear regression for every combination of the variables found in the provided data, grouped by department. The data can be found [here](https://docs.google.com/spreadsheets/u/1/d/12h1Pk1ZO-BDcGldzKW-IA9VMkU9RlUOPopFoOK6stdU/pubhtml).
### Imports
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
# Python 2 only: reload() and sys.setdefaultencoding() do not exist in Python 3
reload(sys)
sys.setdefaultencoding('utf8')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import statsmodels.formula.api as smf
```
### Read file
```
municipios = pd.read_csv("/Users/Meili/Dropbox/Uniandes/Noveno/Visual/BonoParcial/Plebiscito-Colombia-2016/docs/Plebiscito.csv")
municipios.head()
# Create variables
variables=np.array(municipios.keys())
delete=np.array(['Municipio','Departamento','GanadorPrimeraVuelta','Ganador','AfectadoPorElConflictoPares','ZonasDeConcentracion','CultivosIlicitos','VotosPorElNo','PorcentajeNo','VotosPorElSi','PorcentajeSi','VotosValidos','VotosTotales','CuantosSalieronAVotar','Abstencion'])
variables=np.setdiff1d(variables,delete)
comparacion=np.array(['PorcentajeNo','PorcentajeSi','Abstencion'])
departamentos=municipios.Departamento.drop_duplicates()
# Convert columns to numeric; assign the result back, otherwise the conversion is discarded
for i in range(7, len(variables)):
    municipios[variables[i]] = pd.to_numeric(municipios[variables[i]])
for i in range(len(comparacion)):
    municipios[comparacion[i]] = pd.to_numeric(municipios[comparacion[i]])
```
### Regression and Correlation
```
results = {}
for departamento in departamentos:
correlaciones = municipios[(municipios.Departamento == departamento)].corr()
for j in range(len(variables)):
for k in range(len(comparacion)):
variable = str(variables[j])
comparador = str(comparacion[k])
if (departamento is not None) and (variable is not None) and (comparador is not None) :
# create a fitted model in one line
lm = smf.ols(formula=comparador + " ~ " + variable, data=municipios[(municipios.Departamento == departamento)]).fit()
results[departamento + " - " + comparador + " - " + variable] = [lm.params[1], lm.params[0],correlaciones[comparador][variable]]
```
### Write results
```
import csv
with open('/Users/Meili/Dropbox/Uniandes/Noveno/Visual/BonoParcial/Plebiscito-Colombia-2016/docs/departamentos_results.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=',', quoting=csv.QUOTE_MINIMAL)
    spamwriter.writerow(['Departamento', 'Variable1', 'Variable2', 'pendiente', 'intercepto', 'pearson'])
    for item in results:
        # One row per (department, comparison variable, explanatory variable)
        line = [part.strip() for part in item.split(' - ')]
        line.extend(results[item])
        spamwriter.writerow(line)
```
# Webinar n°1: Pyleecan Basics
This notebook is the support of the first out of three webinars organized by the association [Green Forge Coop](https://www.linkedin.com/company/greenforgecoop/about/) and the UNICAS University.
The webinars schedule is:
- Friday 16th October 15h-17h (GMT+2): How to use pyleecan (basics)? Pyleecan basics, call of FEMM, use of the GUI
- Friday 30th October 15h-17h (GMT+1): How to use pyleecan (advanced)? Optimization tools, meshing, plot commands
- Friday 6th November 15h-17h (GMT+1): How to contribute to pyleecan? Github projects, Object Oriented Programming
Speakers: Pierre Bonneel, Hélène Toubin, Raphaël Pile from EOMYS.
This webinar will be recorded and the video will be shared on [pyleecan.org](https://www.pyleecan.org)
To use this notebook please:
- Install Anaconda
- In Anaconda Prompt run the command "pip install pyleecan"
- Install the latest version of [femm](http://www.femm.info/wiki/Download) (windows only)
- In Anaconda Navigator, launch Jupyter Notebook
- Jupyter Notebook should open a tab in your web browser; select this notebook to open it
To check that everything is correctly set, please run the following cell (the webinar uses pyleecan 1.0.1.post1):
```
%matplotlib notebook
# Print version of all packages
import pyleecan
print(pyleecan.__version__)
# Load the machine
from os.path import join
from pyleecan.Functions.load import load
from pyleecan.definitions import DATA_DIR
IPMSM_A = load(join(DATA_DIR, "Machine", "IPMSM_A.json"))
IPMSM_A.plot()
# Check FEMM installation
from pyleecan.Classes._FEMMHandler import FEMMHandler
femm = FEMMHandler()
femm.openfemm(0)
femm.closefemm()
```
# 1) How to define a machine
The first step to use pyleecan is to define a machine. This webinar presents the definition of the **Toyota Prius 2004** interior permanent magnet machine with distributed winding \[1\].
## Type of machines Pyleecan can model
Pyleecan handles the geometrical modelling of main 2D radial flux machines such as:
- surface or interior permanent magnet machines (SPMSM, IPMSM)
- synchronous reluctance machines (SynRM)
- squirrel-cage induction machines and doubly-fed induction machines (SCIM, DFIM)
- wound rotor synchronous machines and salient pole synchronous machines (WRSM)
- switched reluctance machines (SRM)
The architecture of Pyleecan also enables to define other kinds of machines (with more than two laminations for instance). More information is available in our ICEM 2020 publication \[2\].
Every machine can be defined by using the **Graphical User Interface** or directly in **Python script**.
## Defining machine with Pyleecan GUI
The GUI is the easiest way to define a machine in Pyleecan. Its purpose is to create or load a machine and save it in JSON format to be loaded in a Python script. The interface enables to define, step by step and in a user-friendly way, every characteristic of the machine such as:
- topology
- dimensions
- materials
- winding
Each parameter is explained by a tooltip and the machine can be previewed at each stage of the design.
## Start the GUI
The GUI can be started by running the following command in the notebook:
To use it with Anaconda you may need to create the system variable:
QT_QPA_PLATFORM_PLUGIN_PATH : path\to\anaconda3\Lib\site-packages\PySide2\plugins\platforms
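If creating a system-wide variable is inconvenient, the same variable can also be set from Python before starting the GUI — a minimal sketch, where the path below is a placeholder to adapt to your own Anaconda install:

```python
import os

# Placeholder path — replace with your actual Anaconda install location.
# Must be set before PySide2 is imported / the GUI is launched.
plugin_path = r"path\to\anaconda3\Lib\site-packages\PySide2\plugins\platforms"
os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = plugin_path
```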
```
# Start Pyleecan GUI from the Jupyter Notebook
%run -m pyleecan
```
The GUI can also be launched by calling the following command in a terminal:
```
Path/to/python.exe -m pyleecan
```
## Load a machine
Once the machine is defined in the GUI, it can be loaded with the following commands:
```
%matplotlib notebook
# Load the machine
from os.path import join
from pyleecan.Functions.load import load
from pyleecan.definitions import DATA_DIR
IPMSM_A = load(join(DATA_DIR, "Machine", "IPMSM_A.json"))
IPMSM_A.plot()
```
## Defining Machine in scripting mode
Pyleecan also enables to define the machine in scripting mode, using different classes. Each class is defined from a csv file in the folder _pyleecan/Generator/ClassesRef_ and the documentation of every class is available on the dedicated [webpage](https://www.pyleecan.org/pyleecan.Classes.html).
The following image shows the machine classes organization:

Every rotor and stator can be created with the **Lamination** class or one of its daughters.

The scripting mode enables to define complex and exotic machines that can't be defined in the GUI, such as this one:
```
from pyleecan.Classes.MachineUD import MachineUD
from pyleecan.Classes.LamSlotWind import LamSlotWind
from pyleecan.Classes.LamSlot import LamSlot
from pyleecan.Classes.WindingCW2LT import WindingCW2LT
from pyleecan.Classes.SlotW10 import SlotW10
from pyleecan.Classes.SlotW22 import SlotW22
from numpy import pi
machine = MachineUD()
# Main geometry parameter
Rext = 170e-3 # Exterior radius of outer lamination
W1 = 30e-3 # Width of first lamination
A1 = 2.5e-3 # Width of the first airgap
W2 = 20e-3
A2 = 10e-3
W3 = 20e-3
A3 = 2.5e-3
W4 = 60e-3
# Outer stator
lam1 = LamSlotWind(Rext=Rext, Rint=Rext - W1, is_internal=False, is_stator=True)
lam1.slot = SlotW22(
Zs=12, W0=2 * pi / 12 * 0.75, W2=2 * pi / 12 * 0.75, H0=0, H2=W1 * 0.65
)
lam1.winding = WindingCW2LT(qs=3, p=3)
# Outer rotor
lam2 = LamSlot(
Rext=lam1.Rint - A1, Rint=lam1.Rint - A1 - W2, is_internal=True, is_stator=False
)
lam2.slot = SlotW10(Zs=22, W0=25e-3, W1=25e-3, W2=15e-3, H0=0, H1=0, H2=W2 * 0.75)
# Inner rotor
lam3 = LamSlot(
Rext=lam2.Rint - A2,
Rint=lam2.Rint - A2 - W3,
is_internal=False,
is_stator=False,
)
lam3.slot = SlotW10(
Zs=22, W0=17.5e-3, W1=17.5e-3, W2=12.5e-3, H0=0, H1=0, H2=W3 * 0.75
)
# Inner stator
lam4 = LamSlotWind(
Rext=lam3.Rint - A3, Rint=lam3.Rint - A3 - W4, is_internal=True, is_stator=True
)
lam4.slot = SlotW10(Zs=12, W0=25e-3, W1=25e-3, W2=1e-3, H0=0, H1=0, H2=W4 * 0.75)
lam4.winding = WindingCW2LT(qs=3, p=3)
# Machine definition
machine.lam_list = [lam1, lam2, lam3, lam4]
# Plot, check and save
machine.plot()
```
## Stator definition
To define the stator, we initialize a [**LamSlotWind**](http://pyleecan.org/pyleecan.Classes.LamSlotWind.html) object with the different parameters. In pyleecan, all the parameters must be set in SI units.
```
from pyleecan.Classes.LamSlotWind import LamSlotWind
mm = 1e-3 # Millimeter
# Lamination setup
stator = LamSlotWind(
Rint=80.95 * mm, # internal radius [m]
Rext=134.62 * mm, # external radius [m]
L1=83.82 * mm, # Lamination stack active length [m] without radial ventilation airducts
# but including insulation layers between lamination sheets
Nrvd=0, # Number of radial air ventilation duct
Kf1=0.95, # Lamination stacking / packing factor
is_internal=False,
is_stator=True,
)
```
Then we add 48 slots using [**SlotW11**](http://pyleecan.org/pyleecan.Classes.SlotW11.html) which is one of the 25 Slot classes:
```
from pyleecan.Classes.SlotW11 import SlotW11
# Slot setup
stator.slot = SlotW11(
Zs=48, # Slot number
H0=1.0 * mm, # Slot isthmus height
H1=0, # Height
H2=33.3 * mm, # Slot height below wedge
W0=1.93 * mm, # Slot isthmus width
W1=5 * mm, # Slot top width
W2=8 * mm, # Slot bottom width
R1=4 * mm # Slot bottom radius
)
stator.plot()
```
As for the slot, we can define the winding and its conductor with [**WindingDW1L**](http://pyleecan.org/pyleecan.Classes.WindingDW1L.html) and [**CondType11**](http://pyleecan.org/pyleecan.Classes.CondType11.html). The conventions for winding are further explained on [pyleecan website](https://pyleecan.org/winding.convention.html)
```
from pyleecan.Classes.WindingDW1L import WindingDW1L
from pyleecan.Classes.CondType11 import CondType11
# Winding setup
stator.winding = WindingDW1L(
qs=3, # number of phases
Lewout=0, # straight length of conductor outside lamination before EW-bend
p=4, # number of pole pairs
Ntcoil=9, # number of turns per coil
Npcpp=1, # number of parallel circuits per phase
Nslot_shift_wind=0, # 0 not to change the stator winding connection matrix built by pyleecan number
# of slots to shift the coils obtained with pyleecan winding algorithm
# (a, b, c becomes b, c, a with Nslot_shift_wind1=1)
is_reverse_wind=False # True to reverse the default winding algorithm along the airgap
# (c, b, a instead of a, b, c along the trigonometric direction)
)
# Conductor setup
stator.winding.conductor = CondType11(
Nwppc_tan=1, # stator winding number of preformed wires (strands)
# in parallel per coil along tangential (horizontal) direction
Nwppc_rad=1, # stator winding number of preformed wires (strands)
# in parallel per coil along radial (vertical) direction
Wwire=0.000912, # single wire width without insulation [m]
Hwire=2e-3, # single wire height without insulation [m]
Wins_wire=1e-6, # winding strand insulation thickness [m]
type_winding_shape=0, # type of winding shape for end winding length calculation
# 0 for hairpin windings
# 1 for normal windings
)
```
## Rotor definition
For this example, we use the [**LamHole**](http://www.pyleecan.org/pyleecan.Classes.LamHole.html) class to define the rotor as a lamination with holes to contain magnets.
In the same way as for the stator, we start by defining the lamination:
```
from pyleecan.Classes.LamHole import LamHole
# Rotor setup
rotor = LamHole(
Rint=55.32 * mm, # Internal radius
Rext=80.2 * mm, # external radius
is_internal=True,
is_stator=False,
L1=stator.L1 # Lamination stack active length [m]
# without radial ventilation airducts but including insulation layers between lamination sheets
)
```
After that, we can add holes with magnets to the rotor using the class [**HoleM50**](http://www.pyleecan.org/pyleecan.Classes.HoleM50.html):
```
from pyleecan.Classes.HoleM50 import HoleM50
rotor.hole = list()
rotor.hole.append(
HoleM50(
Zh=8, # Number of Hole around the circumference
W0=42.0 * mm, # Slot opening
W1=0, # Tooth width (at V bottom)
W2=0, # Distance Magnet to bottom of the V
W3=14.0 * mm, # Tooth width (at V top)
W4=18.9 * mm, # Magnet Width
H0=10.96 * mm, # Slot Depth
H1=1.5 * mm, # Distance from the lamination Bore
H2=1 * mm, # Additional depth for the magnet
H3=6.5 * mm, # Magnet Height
H4=0, # Slot top height
)
)
```
The holes are defined as a list to enable creating several layers of holes and/or combining different kinds of holes.
## Create a shaft and a frame
The classes [**Shaft**](http://www.pyleecan.org/pyleecan.Classes.Shaft.html) and [**Frame**](http://www.pyleecan.org/pyleecan.Classes.Frame.html) enable to add a shaft and a frame to the machine. For this example there is a shaft but no frame:
```
from pyleecan.Classes.Shaft import Shaft
from pyleecan.Classes.Frame import Frame
# Set shaft
shaft = Shaft(Drsh=rotor.Rint * 2, # Diameter of the rotor shaft [m]
# used to estimate bearing diameter for friction losses
Lshaft=1.2 # length of the rotor shaft [m]
)
```
## Set materials and magnets
Every Pyleecan object can be saved in JSON using the method `save` and can be loaded with the `load` function.
In this example, the materials *M400_50A* and *Copper1* are loaded while the material *Magnet_prius* is created with the classes [**Material**](http://www.pyleecan.org/pyleecan.Classes.Material.html) and [**MatMagnetics**](http://www.pyleecan.org/pyleecan.Classes.MatMagnetics.html).
```
from pyleecan.Classes.Material import Material
from pyleecan.Classes.MatMagnetics import MatMagnetics
# Loading Materials
M400_50A = load(join(DATA_DIR, "Material", "M400-50A.json"))
Copper1 = load(join(DATA_DIR, "Material", "Copper1.json"))
M400_50A.mag.plot_BH()
# M400_50A.mag.BH_curve # nonlinear B(H) curve (two columns matrix, H and B(H))
# Defining magnets
Magnet_prius = Material(name="Magnet_prius")
# Definition of the magnetic properties of the material
Magnet_prius.mag = MatMagnetics(
mur_lin = 1.05, # Relative magnetic permeability
Hc = 902181.163126629, # Coercivity field [A/m]
alpha_Br = -0.001, # temperature coefficient for remanent flux density /°C compared to 20°C
Brm20 = 1.24, # magnet remanence induction at 20°C [T]
Wlam = 0, # lamination sheet width without insulation [m] (0 == not laminated)
)
# Definition of the electric properties of the material
Magnet_prius.elec.rho = 1.6e-06 # Resistivity at 20°C
# Definition of the structural properties of the material
Magnet_prius.struct.rho = 7500.0 # mass per unit volume [kg/m3]
# Set Materials
stator.mat_type = M400_50A
rotor.mat_type = M400_50A
stator.winding.conductor.cond_mat = Copper1
# Set magnets in the rotor hole
rotor.hole[0].magnet_0.mat_type = Magnet_prius
rotor.hole[0].magnet_1.mat_type = Magnet_prius
rotor.hole[0].magnet_0.type_magnetization = 1
rotor.hole[0].magnet_1.type_magnetization = 1
```
## Create, save and plot the machine
Finally, the Machine object can be created with [**MachineIPMSM**](http://www.pyleecan.org/pyleecan.Classes.MachineIPMSM.html) and saved using the `save` method.
```
from pyleecan.Classes.MachineIPMSM import MachineIPMSM
%matplotlib notebook
IPMSM_Prius_2004 = MachineIPMSM(
name="Toyota Prius 2004",
stator=stator,
rotor=rotor,
shaft=shaft,
frame=None
)
IPMSM_Prius_2004.save('IPMSM_Toyota_Prius_2004.json')
im=IPMSM_Prius_2004.plot()
```
Note that Pyleecan also handles ventilation ducts thanks to the classes:
- [**VentilationCirc**](http://www.pyleecan.org/pyleecan.Classes.VentilationCirc.html)
- [**VentilationPolar**](http://www.pyleecan.org/pyleecan.Classes.VentilationPolar.html)
- [**VentilationTrap**](http://www.pyleecan.org/pyleecan.Classes.VentilationTrap.html)
# 2) Magnetic model with FEMM
This part of the webinar shows the different steps to **compute magnetic flux and electromagnetic torque** with pyleecan **automated coupling with FEMM**. Every electrical machine defined in Pyleecan can be automatically drawn in [FEMM](http://www.femm.info/wiki/HomePage) to compute torque, airgap flux and electromotive force.
## Simulation definition
### Inputs
The simulation is defined with a [**Simu1**](http://www.pyleecan.org/pyleecan.Classes.Simu1.html) object. This object corresponds to a simulation with 5 sequential physics (or modules):
- electrical
- magnetic
- force
- structural
- acoustic
For now pyleecan includes:
- an Electrical module for PMSM machine with FEMM
- a Magnetic module with FEMM for all machines
- a Force module (Maxwell Tensor)
[**Simu1**](http://www.pyleecan.org/pyleecan.Classes.Simu1.html) object enforces a weak coupling between each physics: the input of each physic is the output of the previous one.
In this part of the webinar, the Magnetic physics is defined with the object [**MagFEMM**](https://www.pyleecan.org/pyleecan.Classes.MagFEMM.html) and the other physics are deactivated (set to None).
We define the starting point of the simulation with an [**InputCurrent**](http://www.pyleecan.org/pyleecan.Classes.InputCurrent.html) object to enforce the electrical module output with:
- angular and the time discretization
- rotor speed
- stator currents
```
from os.path import join
from numpy import ones, pi, array, linspace, cos, sqrt
from pyleecan.Classes.Simu1 import Simu1
from pyleecan.Classes.InputCurrent import InputCurrent
from pyleecan.Classes.MagFEMM import MagFEMM
from pyleecan.Functions.load import load
from pyleecan.definitions import DATA_DIR
# Reload machine IPMSM_A
IPMSM_A = load(join(DATA_DIR, "Machine", "IPMSM_A.json"))
# Create the Simulation
simu_femm = Simu1(name="Webinar_1_MagFemm", machine=IPMSM_A)
p = simu_femm.machine.stator.winding.p
qs = simu_femm.machine.stator.winding.qs
# Defining Simulation Input
simu_femm.input = InputCurrent()
# Rotor speed [rpm]
simu_femm.input.N0 = 2000
# time discretization [s]
time = linspace(start=0, stop=60/simu_femm.input.N0, num=32*p, endpoint=False) # 32*p = 128 time steps over one mechanical revolution
simu_femm.input.time = time
# Angular discretization along the airgap circumference for flux density calculation
simu_femm.input.angle = linspace(start = 0, stop = 2*pi, num=2048, endpoint=False) # 2048 steps
# Stator currents as a function of time, each column corresponds to one phase [A]
I0_rms = 250/sqrt(2)
felec = p * simu_femm.input.N0 /60 # [Hz]
rot_dir = simu_femm.machine.stator.comp_rot_dir()
Phi0 = 140*pi/180 # Maximum Torque Per Amp
Ia = (
I0_rms
* sqrt(2)
* cos(2 * pi * felec * time + 0 * rot_dir * 2 * pi / qs + Phi0)
)
Ib = (
I0_rms
* sqrt(2)
* cos(2 * pi * felec * time + 1 * rot_dir * 2 * pi / qs + Phi0)
)
Ic = (
I0_rms
* sqrt(2)
* cos(2 * pi * felec * time + 2 * rot_dir * 2 * pi / qs + Phi0)
)
simu_femm.input.Is = array([Ia, Ib, Ic]).transpose()
```
In this first example stator currents are enforced as a function of time for each phase.
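A handy property of such a balanced three-phase system is that the instantaneous currents sum to zero at every time step. A small NumPy check with the same construction (the numeric values here are illustrative, not tied to the simulation above):

```python
import numpy as np

# Balanced three-phase currents: qs cosines shifted by 2*pi/qs
I0_rms, felec, qs = 250 / np.sqrt(2), 133.3, 3
time = np.linspace(0, 1 / felec, 64, endpoint=False)
phases = [
    I0_rms * np.sqrt(2) * np.cos(2 * np.pi * felec * time - q * 2 * np.pi / qs)
    for q in range(qs)
]
# The three phases cancel at every time step
assert np.allclose(np.sum(phases, axis=0), 0)
```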
### MagFEMM configuration
For the configuration of the Magnetic module, we use the object [**MagFEMM**](https://www.pyleecan.org/pyleecan.Classes.MagFEMM.html) that computes the airgap flux density by calling FEMM. The model parameters are set through the properties of the [**MagFEMM**](https://www.pyleecan.org/pyleecan.Classes.MagFEMM.html) object. In this tutorial we present the main ones; the complete list is available in the [**Magnetics**](http://www.pyleecan.org/pyleecan.Classes.Magnetics.html) and [**MagFEMM**](http://www.pyleecan.org/pyleecan.Classes.MagFEMM.html) classes documentation.
*type_BH_stator* and *type_BH_rotor* enable to select how to model the B(H) curve of the laminations in FEMM. The material parameters, and in particular the B(H) curve, are set up directly [in the machine](https://www.pyleecan.org/tuto_Machine.html).
```
from pyleecan.Classes.MagFEMM import MagFEMM
simu_femm.mag = MagFEMM(
type_BH_stator=0, # 0 to use the material B(H) curve,
# 1 to use linear B(H) curve according to mur_lin,
# 2 to enforce infinite permeability (mur_lin =100000)
type_BH_rotor=0, # 0 to use the material B(H) curve,
# 1 to use linear B(H) curve according to mur_lin,
# 2 to enforce infinite permeability (mur_lin =100000)
file_name = "", # Name of the file to save the FEMM model
)
# We only use the magnetic part
simu_femm.force = None
simu_femm.struct = None
```
Pyleecan's coupling with FEMM enables to define the machine with symmetry and with a sliding band to optimize the computation time. The angular periodicity of the machine will be computed and (in this particular case) only 1/8 of the machine will be drawn (4th-order symmetry + antiperiodicity):
```
simu_femm.mag.is_periodicity_a=True
```
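The 1/8 figure can be sanity-checked from the slot and pole-pair counts. A rough sketch, under the assumption that the angular antiperiodicity order is gcd(Zs, p) (for this machine Zs=48 and p=4):

```python
from math import gcd

Zs, p = 48, 4                 # Prius stator slot count and pole pairs
per_a = gcd(Zs, p)            # antiperiodicity order (assumed gcd rule)
fraction = 1 / (2 * per_a)    # antiperiodic model: half of one periodic sector
print(fraction)               # 0.125, i.e. 1/8 of the machine
```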
The same is done for time periodicity: only half of one electrical period is calculated (i.e. 1/8 of the mechanical period):
```
simu_femm.mag.is_periodicity_t=True
```
At the end of the simulation, the mesh and the solution can be saved in the Output object with:
```
simu_femm.mag.is_get_mesh = True # To get FEA mesh for later post-processing
simu_femm.mag.is_save_FEA = False # To save FEA results in a dat file
```
## Run simulation
```
out_femm = simu_femm.run()
```
When running the simulation, a FEMM window runs in the background. You can open it to see pyleecan drawing the machine and defining the surfaces.

The simulation will compute 16 different timesteps by updating the current and the sliding band boundary condition.
Once the simulation is finished, an Output object is returned. The results are stored in the magnetic part of the output (i.e. _out_femm.mag_) and different plots can be called. This _out_femm.mag_ contains:
- *time*: magnetic time vector without symmetry
- *angle*: magnetic position vector without symmetry
- *B*: airgap flux density (contains radial and tangential components)
- *Tem*: electromagnetic torque
- *Tem_av*: average electromagnetic torque
- *Tem_rip_pp* : Peak to Peak Torque ripple
- *Tem_rip_norm*: Peak to Peak Torque ripple normalized according to average torque
- *Phi_wind_stator*: stator winding flux
- *emf*: electromotive force
Some of these properties are "Data objects" from the [SciDataTool](https://github.com/Eomys/SciDataTool) project. These objects enable to handle unit conversion, interpolation, FFT, periodicity... They will be introduced in more detail in the second webinar.
## Plot results
The **Output** object embeds different plots to visualize results easily. A dedicated tutorial is available [here](https://www.pyleecan.org/tuto_Plots.html).
For instance, the radial and tangential magnetic flux in the airgap at a specific timestep can be plotted with:
```
# Radial magnetic flux over space
out_femm.plot_2D_Data("mag.B","angle","time[0]", component_list=["radial"])
# Radial magnetic flux FFT over space
out_femm.plot_2D_Data("mag.B","wavenumber=[0,76]","time[0]", component_list=["radial"])
# Tangential magnetic flux over space
out_femm.plot_2D_Data("mag.B","angle","time[0]", component_list=["tangential"])
# Tangential magnetic flux FFT over space
out_femm.plot_2D_Data("mag.B","wavenumber=[0,76]","time[0]", component_list=["tangential"])
# Torque, one can see that the first torque ripple harmonic is at N0/60*p*6=800 Hz
out_femm.plot_2D_Data("mag.Tem","time")
out_femm.plot_2D_Data("mag.Tem","freqs=[0,2000]")
```
If the mesh was saved in the output object (simu_femm.mag.is_get_mesh = True), it can be plotted with:
```
out_femm.mag.meshsolution.plot_contour(label="B", group_names="stator")
out_femm.plot_2D_Data("mag.Phi_wind_stator","time","phase")
```
# 3) How to set the Operating Point
This part of the webinar explains how to use the InputCurrent and VarLoadCurrent objects to run a magnetic simulation on several operating points by setting Id/Iq or I0/Phi0.
The reference used to validate this part of the webinar is \[1\]: Z. Yang, M. Krishnamurthy and I. P. Brown, "Electromagnetic and vibrational characteristic of IPM over full torque-speed range".
```
from pyleecan.Classes.Simu1 import Simu1
from pyleecan.Classes.MagFEMM import MagFEMM
# Initialization of the Simulation
simu_op = Simu1(name="Webinar_1_Id_Iq", machine=IPMSM_A)
# Definition of the magnetic simulation (FEMM with symmetry and sliding band)
simu_op.mag = MagFEMM(
type_BH_stator=0,
type_BH_rotor=0,
is_periodicity_a=True,
is_periodicity_t=True,
Kgeo_fineness=1,
)
# Run only Magnetic module
simu_op.elec = None
simu_op.force = None
simu_op.struct = None
```
## Defining an Operating point with Id/Iq
The InputCurrent object enables to create an "OutElec" object that corresponds to the output of the Electrical module and the input of the Magnetic module. In this example, InputCurrent is used to define the starting point with a sinusoidal current defined by Id_ref and Iq_ref:
```
from pyleecan.Classes.InputCurrent import InputCurrent
from numpy import exp
# Definition of a sinusoidal current
simu_op.input = InputCurrent()
Id_ref = (I0_rms*exp(1j*Phi0)).real
Iq_ref = (I0_rms*exp(1j*Phi0)).imag
simu_op.input.Id_ref = Id_ref # [A] (RMS)
simu_op.input.Iq_ref = Iq_ref # [A] (RMS)
(Id_ref,Iq_ref)
```
The time and space discretization for the magnetic computation can also be set with the following parameters:
```
simu_op.input.Nt_tot = 5*8 # Number of time steps
simu_op.input.Na_tot = 2048 # Spatial discretization
simu_op.input.N0 = 2000 # Rotor speed [rpm]
```
When Nt_tot is defined, the time vector is automatically set to `linspace(0, 60 / N0 * Nrev, Nt_tot)`, with Nrev the number of revolutions of the rotor (1 by default).
When Na_tot is defined, the angle vector is automatically set to `linspace(0, 2*pi, Na_tot)`.
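These two vectors can be reproduced with plain NumPy — a sketch assuming the endpoint is excluded, as is usual for a periodic discretization:

```python
import numpy as np

N0 = 2000      # rotor speed [rpm]
Nrev = 1       # number of rotor revolutions (pyleecan default)
Nt_tot = 40    # number of time steps
Na_tot = 2048  # number of angular steps

# Time over Nrev mechanical revolutions, angle over the full airgap
time = np.linspace(0, 60 / N0 * Nrev, Nt_tot, endpoint=False)  # [s]
angle = np.linspace(0, 2 * np.pi, Na_tot, endpoint=False)      # [rad]
```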
The input is now fully defined, the simulation can now be run:
```
out_op = simu_op.run()
# Plot the flux
out_op.plot_2D_Data("mag.B", "angle")
# Plot the current
out_op.plot_2D_Data("elec.Is", "time", "phase")
```
The operating point can also be defined with I0/Phi0 (the object properties are still Id_ref and Iq_ref):
```
from numpy import pi
simu_op.input.set_Id_Iq(I0=I0_rms, Phi0=Phi0)
print("Id: "+str(simu_op.input.Id_ref))
print("Iq: "+str(simu_op.input.Iq_ref))
```
## Iterating on several Operating Point
Reference torque and current angle vector are:
```
from numpy import linspace, array, pi
Tem_av_ref = array([79, 125, 160, 192, 237, 281, 319, 343, 353, 332, 266, 164, 22]) # Yang et al, 2013
Phi0_ref = linspace(60 * pi / 180, 180 * pi / 180, Tem_av_ref.size)
```
Each pyleecan simulation is assumed to be quasi-static and runs on a single operating point (fixed speed). To run a simulation on several operating points, two steps are needed: first define a simulation that runs correctly on a single operating point (like the one defined above), then define a VarLoadCurrent object.
The VarLoadCurrent object is defined with a matrix where each line corresponds to an operating point and the columns are:
- (N0, I0, Phi0) if type_OP_matrix==0
- (N0, Id, Iq) if type_OP_matrix==1
The following VarLoadCurrent object will run the previous simulation N_simu times by changing the value of Phi0.
A fourth column can be added by setting is_torque=True. It enables to define the reference torque for the operating point. The reference is stored in output.elec.Tem_av_ref; the actually computed torque is stored in output.mag.Tem_av.
```
from pyleecan.Classes.VarLoadCurrent import VarLoadCurrent
from numpy import zeros, ones, linspace, array, sqrt, arange
varload = VarLoadCurrent(is_torque=True, ref_simu_index=0)
varload.type_OP_matrix = 0 # Matrix N0, I0, Phi0
# Choose which operating points to run
step = 2 # step=1 to do all OP
# step=2 to do 1 OP out of 2 (fastest)
I_simu = arange(0,Tem_av_ref.size, step)
N_simu = I_simu.size
# Creating the Operating point matrix
OP_matrix = zeros((N_simu,4))
# Set N0 = 2000 [rpm] for all simulation
OP_matrix[:,0] = 2000 * ones((N_simu))
# Set I0 = 250 / sqrt(2) [A] (RMS) for all simulation
OP_matrix[:,1] = I0_rms * ones((N_simu))
# Set Phi0 from 60° to 180°
OP_matrix[:,2] = Phi0_ref[I_simu]
# Set reference torque from Yang et al, 2013
OP_matrix[:,3] = Tem_av_ref[I_simu]
varload.OP_matrix = OP_matrix
print(OP_matrix)
```
The original simulation will be duplicated N_simu times with the value of InputCurrent updated according to the matrix.
```
simu_vop = simu_op.copy()
simu_vop.var_simu = varload.copy()
simu_vop.input.Nt_tot = 5*8
Xout_vop = simu_vop.run()
```
VarLoadCurrent defines a "multi-simulation" that returns an "XOutput" rather than an "Output" object. XOutput is an Output object (all the usual properties are set with the reference simulation results) extended with properties corresponding to the multi-simulation.
Pyleecan will automatically extract some values from each simulation. These values are all gathered in the xoutput_dict:
```
print("Values available in XOutput:")
print(Xout_vop.xoutput_dict.keys())
print("\nI0 for each simulation:")
print(Xout_vop["I0"].result)
print("\nPhi0 for each simulation:")
print(Xout_vop["Phi0"].result)
```
Any parameter in the XOutput can be plotted as a function of any other:
```
fig = Xout_vop.plot_multi("Phi0", "Tem_av")
fig = Xout_vop.plot_multi("Id", "Iq")
```
It is possible to select what pyleecan stores in the XOutput with "DataKeeper" objects that will be introduced in the second webinar.
Finally, the computed average torque can be compared to the one in the publication from Yang et al (the data has been extracted from their graph using [Engauge Digitizer](http://markummitchell.github.io/engauge-digitizer/)). Note that the generic plot function `plot_2D` has been used here.
```
from pyleecan.Functions.Plot.plot_2D import plot_2D
from pyleecan.definitions import config_dict
from numpy import array
curve_colors = config_dict["PLOT"]["COLOR_DICT"]["CURVE_COLORS"]
plot_2D(
array([x*180/pi for x in Xout_vop.xoutput_dict["Phi0"].result]),
[Xout_vop.xoutput_dict["Tem_av"].result, Xout_vop.xoutput_dict["Tem_av_ref"].result],
color_list=curve_colors,
legend_list=["Pyleecan", "Yang et al, 2013"],
xlabel="Current angle [°]",
ylabel="Electrical torque [N.m]",
title="Electrical torque vs current angle"
)
```
# 4) How to compute currents, voltage and torque using the Electrical Module
This part of the webinar explains how to use the Electrical Module to compute currents, voltage and torque, using a simple **electrical equivalent circuit**. The idea is to provide insight on how to implement other methods.
The reference used to validate this part is \[1\]: Z. Yang, M. Krishnamurthy and I. P. Brown, "Electromagnetic and vibrational characteristic of IPM over full torque-speed range".
## Electrical Equivalent Circuit (EEC)
The electrical module is defined with the EEC_PMSM object, which corresponds to the **electrical equivalent circuit** from "Advanced Electrical Drives: Analysis, Modeling, Control" (Rik De Doncker, Duco W. J. Pulle, André Veltman, Springer). It is used for the computation of Ud/Uq or Id/Iq (see schematic hereafter).
The parameters of the EEC are first computed with the `FluxLinkFEMM` and `IndMagFEMM` objects. They enable to compute the flux linkage and the magnetic inductances using FEMM simulations (with symmetries and a reduced number of time steps). For the flux linkage computation, the currents are set to 0 A.
Once the parameters of the EEC are known, the voltage can be computed. The electrical torque is then computed according to the formula $T_{em}=\frac{P-RI^2}{\Omega}$, where $P$ is the magnetic power $P=q_s\Re(VI^*)$.
```
<--- --->
-----R-----wsLqIq---- -----R-----wsLdId----
| | | |
| | | BEMF
| | | |
---------Id---------- ---------Iq----------
---> --->
Ud Uq
```
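As a numerical illustration of the torque formula above (the phasor values are made up for the example, not taken from the Prius machine, and $RI^2$ is read here as the copper losses summed over the $q_s$ phases):

```python
import math

qs = 3              # number of phases
V = 100 + 50j       # phase voltage phasor [V] (RMS, illustrative value)
I = 80 - 10j        # phase current phasor [A] (RMS, illustrative value)
R = 0.05            # phase resistance [ohm] (illustrative value)
N0 = 2000           # rotor speed [rpm]

P = qs * (V * I.conjugate()).real   # magnetic power P = qs * Re(V I*)
P_joule = qs * R * abs(I) ** 2      # copper losses over the qs phases
Omega = N0 * 2 * math.pi / 60       # mechanical speed [rad/s]
Tem = (P - P_joule) / Omega         # electrical torque [N.m]
```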
```
from pyleecan.Classes.Simu1 import Simu1
from pyleecan.Classes.Electrical import Electrical
from pyleecan.Classes.EEC_PMSM import EEC_PMSM
from pyleecan.Classes.FluxLinkFEMM import FluxLinkFEMM
from pyleecan.Classes.IndMagFEMM import IndMagFEMM
# Initialization of the Simulation
simu_elec = Simu1(name="Webinar_1_elec", machine=IPMSM_A)
# Definition of the magnetic simulation (FEMM with symmetry and sliding band)
simu_elec.elec = Electrical(
eec=EEC_PMSM(
indmag=IndMagFEMM(is_periodicity_a=True, Nt_tot=10),
fluxlink=FluxLinkFEMM(is_periodicity_a=True, Nt_tot=10),
)
)
# Run only Electrical module
simu_elec.mag = None
simu_elec.force = None
simu_elec.struct = None
```
## Defining starting point with InputElec or InputCurrent
The starting point of the simulation is defined with InputElec or InputCurrent. These objects create an "OutElec" object and initialize it with the provided values: Id/Iq with InputCurrent, and/or Ud/Uq with InputElec.
Note that Id/Iq are required to accurately compute the magnetic inductances, so if only Ud/Uq are provided, a current Id=1A and Iq=1A will be used for the computation of Ld and Lq. A more satisfactory method would be to compute a map of Ld and Lq over Id/Iq; future developments are welcome.
```
from pyleecan.Classes.InputCurrent import InputCurrent
# Definition of a sinusoidal current
simu_elec.input = InputCurrent()
simu_elec.input.Id_ref = Id_ref # [A] (RMS)
simu_elec.input.Iq_ref = Iq_ref # [A] (RMS)
simu_elec.input.Nt_tot = 10 # number of time steps to calculate flux linkage and inductance
simu_elec.input.Na_tot = 2048 # Spatial discretization
simu_elec.input.N0 = 2000 # Rotor speed [rpm]
```
## Running the simulation and postprocessings
```
out_elec = simu_elec.run()
# Print voltage and torque
print("Ud: "+str(out_elec.elec.Ud_ref))
print("Uq: "+str(out_elec.elec.Uq_ref))
print("Tem: "+str(out_elec.elec.Tem_av_ref))
# Plot the currents
out_elec.plot_2D_Data("elec.Is", "time", "phase")
# Plot the voltages
out_elec.plot_2D_Data("elec.Us", "time", "phase")
```
## Iterating on several Operating Points
As in the previous part, VarLoadCurrent can be used to compute the electrical torque (instead of the magnetic torque) on several Operating points.
```
# Run multisimulation
simu_Velec = simu_elec.copy() # copy simulation at fixed speed
simu_Velec.var_simu = varload.copy() # copy varload calculated in previous cell
simu_Velec.input.Nt_tot = 10 # number of time steps to calculate inductance
Xout_Velec = simu_Velec.run()
```
Once the simulation is done, the torque as a function of Phi0 can be plotted with:
```
from pyleecan.Functions.Plot.plot_2D import plot_2D
from pyleecan.definitions import config_dict
from numpy import array, pi

# Plot torque as a function of Phi0
curve_colors = config_dict["PLOT"]["COLOR_DICT"]["CURVE_COLORS"]
try:
    # If the previous magnetic model has been launched, overlay its results
    Ydatas = [Xout_vop.xoutput_dict["Tem_av"].result, Xout_Velec.xoutput_dict["Tem_av_ref"].result, Tem_av_ref[I_simu]]
    legend_list = ["Pyleecan (FEMM)", "Pyleecan (EEC)", "Yang et al, 2013"]
    linestyle_list = ["-", "dotted", "-"]
except NameError:
    # Otherwise, compare EEC torque results with the reference torque only
    Ydatas = [Xout_Velec.xoutput_dict["Tem_av_ref"].result, Tem_av_ref[I_simu]]
    legend_list = ["Pyleecan", "Yang et al, 2013"]
    linestyle_list = ["-", "-"]
ax = plot_2D(
array([x*180/pi for x in Xout_Velec.xoutput_dict["Phi0"].result]),
Ydatas,
color_list=curve_colors,
legend_list=legend_list,
linestyle_list = linestyle_list,
xlabel="Current angle [°]",
ylabel="Electrical torque [N.m]",
title="Electrical torque vs current angle"
)
```
[1] Z. Yang, M. Krishnamurthy and I. P. Brown, "Electromagnetic and vibrational characteristic of IPM over full torque-speed range", *2013 International Electric Machines & Drives Conference*, Chicago, IL, 2013, pp. 295-302.
[2] P. Bonneel, J. Le Besnerais, E. Devillers, C. Marinel, and R. Pile, “Design Optimization of Innovative Electrical Machines Topologies Based on Pyleecan Opensource Object-Oriented Software,” in 24th International Conference on Electrical Machines (ICEM), 2020.
[3] Z. Yang, M. Krishnamurthy and I. P. Brown, "Electromagnetic and vibrational characteristic of IPM over full torque-speed range," 2013 International Electric Machines & Drives Conference, Chicago, IL, 2013, pp. 295-302, doi: 10.1109/IEMDC.2013.6556267.
| github_jupyter |
```
import re
```
# Intro to Class: (if we really want to use a class for Book)
Applying this class to real texts would require a more sophisticated discussion.
My suggestion is to write some methods that automatically convert the Book class's instance attributes to tags (TEI, XML, or MARKUS format).
```
text = '''
<P>異物志一卷(注:後漢議郎楊孚撰。)
<P>南州異物志一卷(注:吳丹陽太守萬震撰。)
<P>蜀志一卷(注:東京武平太守常寬撰。)
<P>發蒙記一卷(注:束皙撰。載物產之異。)
<P>地理書一百四十九卷錄一卷。陸澄合山海經已來一百六十家,以為此書。澄本之外,其舊事並多零失。見存別部自行者,唯四十二家,今列之於上。
<P>三輔故事二卷(注:晉世撰)
<P>湘州記二卷(注:庾仲雍撰。)
'''
```
`([^,。<>〔〕a-zA-Z0-9]+?)([一二三四五六七八九十百千]+?)卷(?:(?:[^,。<>〔〕a-zA-Z0-9]+?)?錄([一二三四五六七八九十]+?)卷)?`
|group 1| group 2| group 3|
| --- | --- | ---- |
| book title | number of 卷 | number of 錄 |
There are still some problems with this regex,
so do not take it too seriously.
```
pattern = '([^,。<>〔〕a-zA-Z0-9]+?)([一二三四五六七八九十百千]+?)卷(?:(?:[^,。<>〔〕a-zA-Z0-9]+?)?錄([一二三四五六七八九十]+?)卷)?'
pattern_object = re.compile(pattern)
pattern_object.findall(text)
class Book:
def __init__(self, name, juan, lu):
self.name = name
self.juan = juan # instance attributes
self.lu = lu
def __repr__(self):
'''The string representation for Book class.
Just a prettier way to print the content of
the class.'''
if self.lu is not None:
return '《{}》{}卷~錄{}卷'.format(
self.name, self.juan, self.lu)
else:
return '《{}》{}卷'.format(
self.name, self.juan)
book = Book('異物志', '一', None)
book.name, book.juan, book.lu
Book_collect = []
for match in pattern_object.finditer(text):
Book_collect.append(Book(
match.group(1),
match.group(2),
match.group(3)
))
Book_collect
```
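As a sketch of the tag-conversion idea suggested above, a `to_xml` method could be added to the class. The method name and the flat tag format below are illustrative assumptions, not TEI- or MARKUS-conformant output:

```python
class Book:
    def __init__(self, name, juan, lu):
        self.name = name
        self.juan = juan
        self.lu = lu

    def to_xml(self):
        # hypothetical flat tag format; adapt to TEI or MARKUS as needed
        attrs = 'juan="{}"'.format(self.juan)
        if self.lu is not None:
            attrs += ' lu="{}"'.format(self.lu)
        return '<book {}>{}</book>'.format(attrs, self.name)

print(Book('異物志', '一', None).to_xml())
print(Book('地理書', '一百四十九', '一').to_xml())
```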
Actually, if we only want to save the attributes, we could just use the `dict`. Maybe next time we could figure out what kind of methods (functions) we could put in the Book class.
```
# do similar things in the dict
Book_collect = []
for match in pattern_object.finditer(text):
Book_collect.append({
'name' : match.group(1),
'juan' : match.group(2),
'lu' : match.group(3)
})
Book_collect
```
# Easier case
From python official tutorial https://docs.python.org/3/tutorial/classes.html
```
class Dog:
# class attribute
kind = 'not the cat'
def __init__(self, name, owner):
# instance attribute
self.name = name
self.owner = owner
# creates a new empty list for each dog
self.tricks = []
def add_trick(self, trick):
self.tricks.append(trick)
# initialize an instance
d = Dog('Fido', 'Bob')
e = Dog('Buddy', 'Alice')
d.add_trick('roll over')
e.add_trick('play dead')
d.tricks
e.tricks
d.add_trick('jump')
d.tricks
```
# Case study: hand-crafted Set class
```
# from Book: Data science from scratch, by Joel Grus
class Set:
# self is a convention
# to refer the particular Set object being used
    def __init__(self, values=None):  # initialization operation
'''This is the constructor.
It gets called when you create a new Set'''
self.dict = {}
if values is not None:
for value in values:
self.add(value)
def __repr__(self):
'''string representation of a Set object'''
return "Set: " + str(self.dict.keys())
# we'll represent membership by being a key in self.dict with value True
def add(self, value):
self.dict[value] = True
# value is in the Set if it's a key in the dictionary
def contains(self, value):
return value in self.dict
def remove(self, value):
del self.dict[value]
s = Set([1, 2, 3])
s.add(4)
s
s.contains(4)
s.remove(3)
s
```
# Case Study: Complex Number
```
# what a complex number should look like
complex_number = complex(1, 2)
complex_number
```
Now we can make a hand-crafted Complex number class.
```
class Complex:
def __init__(self, real, imag):
self.real = real
self.imag = imag
def __repr__(self):
return '({} {} {}j)'.format(self.real,
['-', '+'][self.imag >= 0],
abs(self.imag))
def __add__(self, that):
return Complex(self.real + that.real,
self.imag + that.imag)
def __sub__(self, that):
return Complex(self.real - that.real,
self.imag - that.imag)
def __abs__(self):
return (self.real**2 + self.imag**2)**0.5
def conjugate(self):
return Complex(self.real, - self.imag)
# repr
hand_made_complex_number = Complex(3, 4)
hand_made_complex_number
# add
hand_made_complex_number_2 = Complex(2, 1)
hand_made_complex_number + hand_made_complex_number_2
# sub
hand_made_complex_number - hand_made_complex_number_2
# absolute
abs(hand_made_complex_number)
# class method
hand_made_complex_number.conjugate()
```
# Combinations: recursive
```
def all_possible_combinations(combinations, sub_set_len, index_list):
# if the length of the subset (list) is equal to the sub_set_len,
# and if the subset is not stored in the combinations list,
# append to the combinations list
if (len(index_list) == sub_set_len
) and (index_list not in combinations):
combinations.append(index_list)
# looping over each element in the list
# and remove the element,
# and then send back to the all_possible_combinations itself
# to recursively get all the subsets
for i, index in enumerate(index_list):
new_index_list = index_list.copy()
new_index_list.pop(i)
combinations = all_possible_combinations(
combinations, sub_set_len, new_index_list)
return combinations
```
$$C^3_2 = \frac{3 \times 2}{2 \times 1} = 3$$
```
all_possible_combinations([], 2, list(range(3)))
len(all_possible_combinations([], 2, list(range(3))))
```
$$
C^5_3 = \frac{5 \times 4 \times 3}{3 \times 2 \times 1} = 10
$$
```
all_possible_combinations([], 3, list(range(5)))
len(all_possible_combinations([], 3, list(range(5))))
```
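As a sanity check on the counts above, the standard library's `itertools.combinations` (an aside, not used by the recursive version) produces the same numbers:

```python
from itertools import combinations

# C(3, 2) = 3 and C(5, 3) = 10, matching the recursive results above
print(len(list(combinations(range(3), 2))))  # 3
print(len(list(combinations(range(5), 3))))  # 10
```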
# Another recursive example
```
# other recursive example
def factorial(n):
if n == 1:
return 1
else:
return n * factorial(n - 1)
factorial(3) # 3! = 6
```
# All combinations, regardless of length of subset
```
def all_possible_combinations_regardless_len(combinations, index_list):
# save all non-repeated subset with length larger than 0
if (index_list not in combinations
) and (index_list != []):
combinations.append(index_list)
for i, index in enumerate(index_list):
new_index_list = index_list.copy()
new_index_list.pop(i)
combinations = all_possible_combinations_regardless_len(
combinations, new_index_list)
return combinations
all_possible_combinations_regardless_len([], list(range(4)))
```
Time-slot index to hours of day:
0: 0~6
1: 6~12
2: 12~18
3: 18~24

Sample rows (time slot, minutes, feeding amount):
0, 100, 50
1, 200, 30
2, 100, 30
```
param_window_size = 6
param_seq_length = 287
param_num_epoch = 10
param_lstm_units = 64
param_lstm_stack = 2
num_sample = param_seq_length - param_window_size
import numpy as np
import os
import pandas
import theano
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
import matplotlib.animation as animation
%matplotlib inline
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back):
dataX.append(dataset[i:(i+look_back)])
dataY.append(dataset[i + look_back])
return np.array(dataX), np.array(dataY)
def diff_dataset(dataset):
diff = np.zeros((dataset.shape[0]-1, 1))
for i in range(dataset.shape[0]-1):
diff[i] = dataset[i,0]-dataset[i+1,0]
return diff
def reverse_diff_dataset(diff, init_value):
reverse = np.zeros((diff.shape[0]+1, 1))
reverse[0] = init_value
for i in range(diff.shape[0]):
reverse[i+1] = reverse[i] - diff[i]
return reverse
dataset_file_path = './warehouse/feeding.csv'
df = pandas.read_csv(dataset_file_path, header=None)
ds_org = df.values
ds_org = ds_org.astype('float32')
dataset = ds_org
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
trainX, trainY = create_dataset(train, param_window_size)
testX, testY = create_dataset(test, param_window_size)
trainX = np.reshape(trainX, (trainX.shape[0], trainX.shape[1], 2))
testX = np.reshape(testX, (testX.shape[0], testX.shape[1], 2))
test_size
%%time
theano.config.compute_test_value = "ignore"
model = Sequential()
for i in range(param_lstm_stack):
model.add(LSTM(param_lstm_units, batch_input_shape=(1, param_window_size, 2), stateful=True, return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(param_lstm_units, batch_input_shape=(1, param_window_size, 2), stateful=True))
model.add(Dropout(0.3))
model.add(Dense(2))
model.compile(loss='mean_squared_error', optimizer='adam')
for epoch_idx in range(param_num_epoch):
print ('epochs : ' + str(epoch_idx) )
model.fit(trainX, trainY, epochs=1, batch_size=1, verbose=2, shuffle=False)
model.reset_states()
testScore = model.evaluate(testX, testY, batch_size=1, verbose=0)
print('Test Score: ', testScore)
model.reset_states()
look_ahead = test_size
predictions = np.zeros((look_ahead, 2))
predictions
trainPredict = [np.vstack([trainX[-1][1:], trainY[-1]])]
predictions = np.zeros((look_ahead,2))
for i in range(look_ahead):
prediction = model.predict(np.array([trainPredict[-1]]), batch_size=1)
predictions[i] = prediction
trainPredict.append(np.vstack([trainPredict[-1][1:],prediction]))
predictions
plt.figure(figsize=(12,5))
# plt.plot(np.arange(len(trainX)),np.squeeze(trainX))
# plt.plot(np.arange(200),scaler.inverse_transform(np.squeeze(trainPredict)[:,None][1:]))
# plt.plot(np.arange(200),scaler.inverse_transform(np.squeeze(testY)[:,None][:200]),'r')
plt.plot(np.arange(look_ahead),predictions[:,0],'r',label="prediction")
plt.plot(np.arange(look_ahead),dataset[train_size:(train_size+look_ahead),0],label="test function")
plt.legend()
plt.show()
plt.figure(figsize=(12,5))
# plt.plot(np.arange(len(trainX)),np.squeeze(trainX))
# plt.plot(np.arange(200),scaler.inverse_transform(np.squeeze(trainPredict)[:,None][1:]))
# plt.plot(np.arange(200),scaler.inverse_transform(np.squeeze(testY)[:,None][:200]),'r')
plt.plot(np.arange(look_ahead),predictions[:,1],'r',label="prediction")
plt.plot(np.arange(look_ahead),dataset[train_size:(train_size+look_ahead),1],label="test function")
plt.legend()
plt.show()
```
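The sliding-window shape produced by `create_dataset` is easiest to see on a toy sequence; the function is repeated here so this sketch runs on its own:

```python
import numpy as np

def create_dataset(dataset, look_back=1):
    # build (window, next-value) pairs from a sequence
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back):
        dataX.append(dataset[i:(i + look_back)])
        dataY.append(dataset[i + look_back])
    return np.array(dataX), np.array(dataY)

X, y = create_dataset(np.arange(5), look_back=2)
print(X.tolist())  # [[0, 1], [1, 2], [2, 3]]
print(y.tolist())  # [2, 3, 4]
```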
```
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import style
train_data = pd.read_csv('train.csv')
train_data.info()
# Data Cleaning
#
#from the above output we can see that we are missing 177 values in the age column,
# 687 from the Cabin column, and only 2 from the Embarked column. So long as the
# number of missing values in the Cabin column is the same as the number of
# passengers in third (3rd) class, that is fine
#
first_class = train_data[train_data['Pclass']==1]
second_class = train_data[train_data['Pclass']==2]
third_class = train_data[train_data['Pclass']==3]
print('train - number of first class passengers:',len(first_class))
print('train - number of second class passengers:',len(second_class))
print('train - number of third class passengers:',len(third_class))
steerage_passage = train_data[train_data['Cabin'].isnull()]
print('train - number of steerage passengers:',len(steerage_passage))
cabin_passage = train_data[train_data['Cabin'].notnull()]
print('train - number of cabin passengers:',len(cabin_passage))
#
#Since the number of steerage passengers is not equal to the number of third class
# passengers, there are data inconsistencies. Therefore, we will ignore the values
# in the cabin column. Instead, use 'Pclass'
#
survival_by_class = train_data[["Pclass", "Survived"]].groupby(['Pclass'], as_index=False).agg(['count','sum','mean','var']).sort_values(by='Pclass', ascending=True)
survival_by_sex = train_data[["Sex", "Survived"]].groupby(['Sex'], as_index=False).agg(['count','sum','mean','var']).sort_values(by='Sex', ascending=True)
#remember there are only 714 values in the age column
train_data['AgeBin'] = pd.cut(train_data['Age'],7)
survival_by_age_bracket = train_data[['AgeBin','Survived']].groupby(['AgeBin'], as_index=False).agg(['count','sum','mean','var']).sort_values(by='AgeBin', ascending=True)
# NOTE: the 'count' result is the total number of records in that bin, 'sum' represents total number of survivors
survival_by_age_bracket #for viewing
survival_by_numSiblingsOrSpouses = train_data[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).agg(['count','sum','mean','var']).sort_values(by='SibSp', ascending=False)
survival_by_numSiblingsOrSpouses
survival_by_numParentsOrChildren = train_data[["Parch", "Survived"]].groupby(['Parch'], as_index=False).agg(['count','sum','mean','var']).sort_values(by='Parch', ascending=False)
survival_by_numParentsOrChildren
females = train_data[train_data['Sex']=='female']
mothers_or_daughters = females[females['Parch']>0]
mothers_or_daughters[['AgeBin','Survived']].groupby(['AgeBin'], as_index=False).agg(['count','sum','mean']).sort_values(by='AgeBin',ascending=True)
g = sns.FacetGrid(train_data, col='Survived')
g.map(plt.hist, 'Age', bins=20)
# handy ML packages
from sklearn import linear_model
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.metrics import mean_squared_error
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
#Fixing up data
train = train.drop('PassengerId',axis=1)
train = train.drop('Cabin',axis=1)
train = train.drop('Ticket',axis=1)
train = train.drop('Fare',axis=1)
train = train.drop('Name',axis=1)  # drop the non-numeric Name column so the regression can fit
test = test.drop('PassengerId', axis=1)
test = test.drop('Cabin',axis=1)
test = test.drop('Ticket',axis=1)
test = test.drop('Fare',axis=1)
test = test.drop('Name',axis=1)
gender = {"male":0, "female":1}
embark = {"S":0,"C":1,"Q":2}
all_data = [train, test]
for each in all_data:
    each['Sex'] = each['Sex'].map(gender)
    each['Embarked'] = each['Embarked'].fillna('S')
    each['Embarked'] = each['Embarked'].map(embark)
for record in all_data:
    randAge = np.random.randint(train['Age'].mean() - train['Age'].std(), train['Age'].mean() + train['Age'].std(), size=record['Age'].isnull().sum())
    age_slice = record['Age'].copy()
    age_slice[np.isnan(record['Age'])] = randAge
    record['Age'] = age_slice
    record['Age'] = record['Age'].astype(int)
Y_train = train['Survived']
X_train = train.drop("Survived",axis=1)
X_test = test.copy()
X_test.info()
#X_train.shape, Y_train.shape, X_test.shape
#set up ML model
## Linear Regression
lr = linear_model.LinearRegression()
lr.fit(X_train, Y_train)
y_pred = lr.predict(X_test)
print('Coefficients:',lr.coef_)
plt.scatter(X_train['Age'],Y_train,color='black')
plt.scatter(X_train['Sex'],Y_train,color='blue')
plt.scatter(X_test['Age'],y_pred,color='red')
plt.show()
#Look at that mess of an LR graph... should have done binary...
```
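The random-age imputation used above can be exercised on a toy Series. This is a standalone sketch, not the Titanic data; the bounds are cast to int explicitly rather than relying on implicit conversion:

```python
import numpy as np
import pandas as pd

np.random.seed(0)
age = pd.Series([22.0, np.nan, 38.0, np.nan, 26.0])
mean, std = age.mean(), age.std()
# draw one random integer age per missing entry, within one std of the mean
rand_ages = np.random.randint(int(mean - std), int(mean + std), size=age.isnull().sum())
filled = age.copy()
filled[age.isnull()] = rand_ages
print(filled.astype(int).tolist())  # no NaNs remain
```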
# Initial Operations:
```
import pandas as pd
import os
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import RidgeClassifierCV
import matplotlib.pyplot as plt
import seaborn as sns
```
### Load dataset
```
path = '/Users/andrew/school_assignments/LMPM/full_combined_datasets/'
filename = 'combined_yeast_UniRep_dataset.pkl'
data = pd.read_pickle(path+filename)
```
### Define dependent (localization labels) and independent variables (UniReps), then do a randomized 80:20 split of inputs into training and testing datasets
```
y = data.location.to_numpy()
X = np.stack(data["UniRep"].to_numpy())
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=42)
```
# Supervised-learning method comparison
### Ridge-regression classifier
```
clf = RidgeClassifierCV(alphas=[1e-3, 1e-2, 1e-1, 1],normalize=True)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
print(score)
```
### KNN classifier
```
K_neighbors = int(np.sqrt(len(y_train))) # set number of neighbors to square root of number of inputs
clf = KNeighborsClassifier(K_neighbors, weights='uniform')
clf.fit(X_train, y_train)
mean_accuracy = clf.score(X_test, y_test)
print(mean_accuracy)
```
### Conclusions from classifier model comparison:
* Overall, we see that the ridge-regression outperforms K-nearest-neighbors for classification.
* I tested 3 other linear classifier models from sklearn, but none performed very well, so they are not worth including here.
* The dataset is imbalanced, largely due to the crazy amount of membrane proteins in eukaryotes. Separate tests where we only classify if a protein is secreted vs cytoplasm-localized perform amazingly (~ 90% accuracy!!!), so the membrane proteins are really the ones causing trouble.
* I think our dataset is large enough for deep-learning to start achieving performance improvements over linear models
* Does anybody want to try whipping up a quick perceptron-type neural network (a generic MNIST-classifier tutorial type of model would probably work here) and see if we can beat the ridge-regression score?
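The "membrane proteins cause trouble" observation can be illustrated on synthetic data. This is a toy stand-in with made-up Gaussian clusters, not the real yeast dataset: two well-separated classes play the role of secreted/cytoplasmic proteins, and a broad overlapping class plays the membrane proteins.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# two well-separated classes plus one broad, overlapping "membrane" class
secreted = rng.normal(0.0, 1.0, (200, 10))
cytoplasm = rng.normal(4.0, 1.0, (200, 10))
membrane = rng.normal(2.0, 3.0, (200, 10))
X = np.vstack([secreted, cytoplasm, membrane])
y = np.array(['secreted'] * 200 + ['cytoplasm'] * 200 + ['membrane'] * 200)

def score(X, y):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=42)
    clf = RidgeClassifierCV(alphas=[1e-3, 1e-2, 1e-1, 1])
    return clf.fit(Xtr, ytr).score(Xte, yte)

three_way = score(X, y)
mask = y != 'membrane'
binary = score(X[mask], y[mask])
print(three_way, binary)  # the binary task scores noticeably higher
```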
# Unsupervised-learning: the search for implicit meaning within arbitrary sequence features
### Principal Component Analysis:
#### Minimum number of components needed to retain >= 90% cumulative explained variance
```
pca = PCA().fit(X)
explained_variance = np.cumsum(pca.explained_variance_ratio_)
plt.plot(explained_variance)
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
N_COMPS = np.where(explained_variance>=0.9)[0][0]
print('Number of components needed: ',N_COMPS)
```
#### Making a plot of the first 3 principal components following dimension-reduction, colored by localization label
```
label_df = data['location']
pca_labels = ['PC'+str(i+1) for i in range(N_COMPS)]
pca = PCA(n_components=N_COMPS)
pca_result = pca.fit_transform(X)
pca_df = pd.concat([pd.DataFrame(data.location.values, columns=['location']),pd.DataFrame(pca_result, columns=pca_labels)],axis=1)
colors = {'secreted':'springgreen', 'cytoplasm':'deepskyblue', 'membrane':'violet'}
plt.clf()
fig = plt.figure(figsize=(18,6))
plt.style.use('seaborn-white')
ax1 = fig.add_subplot(1,3,1, title="comps 1 and 2")
sns.scatterplot(x="PC1", y="PC2",hue="location",palette=sns.color_palette("husl", 3),data=pca_df,legend="brief",ax=ax1)
ax2 = fig.add_subplot(1,3,2, title="comps 1 and 3")
sns.scatterplot(x="PC1", y="PC3",hue="location",palette=sns.color_palette("husl", 3),data=pca_df,legend="brief",ax=ax2)
ax3 = fig.add_subplot(1,3,3, title="comps 2 and 3")
sns.scatterplot(x="PC2", y="PC3",hue="location",palette=sns.color_palette("husl", 3),data=pca_df,legend="brief",ax=ax3)
plt.tight_layout()
plt.show()
```
### t-SNE: because we can take this even further
#### Performing that t-distributed stochastic neighbor embedding computation on our dimension-reduced PCA data
```
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_results = tsne.fit_transform(pca_result)
tsne_df = pd.DataFrame(data.location.values, columns=['location'])
tsne_df['tsne-2d-1'] = tsne_results[:,0]
tsne_df['tsne-2d-2'] = tsne_results[:,1]
```
#### Plotting some dank feature separations from the t-SNE
```
plt.clf()
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(1,1,1, title="UniRep features: \n t-distributed stochastic neighbor embedding")
sns.scatterplot(
x="tsne-2d-1", y="tsne-2d-2",
hue="location",
palette=sns.color_palette("husl", 3),
data=tsne_df,
legend="brief",
s=10,
ax=ax1
)
```
### Conclusions from the unsupervised analysis:
* There are implicit feature differences between cytoplasmic and secreted proteins.
* It's just those darn membrane proteins causing problems again.
* can we minimize this somehow?
```
import os
import torch
import torch.nn as nn
import torch.optim as optim
import torch.backends.cudnn as cudnn
import torch.nn.init as init
import argparse
from torch.autograd import Variable
import torch.utils.data as data
#CHANGE
from data import v2, v1, AnnotationTransform, VOCDetection, detection_collate, VOC_CLASSES
from utils.augmentations import SSDAugmentation
from layers.modules import MultiBoxLoss
from ssd import build_ssd
import numpy as np
import time
from commonData import commonDataset
from logger import Logger
def str2bool(v):
return v.lower() in ("yes", "true", "t", "1")
#CHANGE
cocoimgPath = "/new_data/gpu/utkrsh/coco/images/train2014/"
annFilePath = "/new_data/gpu/utkrsh/coco/annotations/instances_train2014.json"
# RESUME = "./weights/ssd300_0712_COCO14_2000_run2_BCELoss.pth" # change to saved model file path
RESUME = None
START_ITER = 1
CUDA = True
VOCroot = "/users/gpu/utkrsh/data/VOCdevkit/"
logFolder = "./.logs/run1_singleGPU_BCELoss/"
logger = Logger(logFolder)
logTensorboard = True
version ='v2'
basenet ='vgg16_reducedfc.pth'
jaccard_threshold = 0.5
batch_size = 16
resume = RESUME
num_workers = 4
iterations = 120000
start_iter = START_ITER
cuda = CUDA
lr = 1e-3
momentum = 0.9
weight_decay = 5e-4
gamma = 0.1
log_iters = False
visdom = False
send_images_to_visdom = False
save_folder = 'weights/'
cocoimg = cocoimgPath
annFile = annFilePath
voc_root = VOCroot
if cuda and torch.cuda.is_available():
torch.set_default_tensor_type('torch.cuda.FloatTensor')
else:
torch.set_default_tensor_type('torch.FloatTensor')
cfg = (v1, v2)[version == 'v2']
if not os.path.exists(save_folder):
os.mkdir(save_folder)
train_sets = [('2007', 'trainval')]
# train_sets = 'train'
ssd_dim = 300 # only support 300 now
means = (104, 117, 123) # only support voc now
num_classes = len(VOC_CLASSES) + 1
batch_size = batch_size
accum_batch_size = 32
iter_size = accum_batch_size / batch_size
max_iter = 30000
weight_decay = 0.0005
stepvalues = (80000, 100000, 120000)
gamma = 0.1
momentum = 0.9
if visdom:
import visdom
viz = visdom.Visdom()
ssd_net = build_ssd('train', 300, num_classes)
net = ssd_net
if False:
net = torch.nn.DataParallel(ssd_net)
cudnn.benchmark = True
if resume:
print('Resuming training, loading {}...'.format(resume))
ssd_net.load_weights(resume)
else:
vgg_weights = torch.load(save_folder + basenet)
print('Loading base network...')
ssd_net.vgg.load_state_dict(vgg_weights)
if cuda:
net = net.cuda()
def xavier(param):
init.xavier_uniform(param)
def weights_init(m):
if isinstance(m, nn.Conv2d):
xavier(m.weight.data)
m.bias.data.zero_()
if not resume:
print('Initializing weights...')
# initialize newly added layers' weights with xavier method
ssd_net.extras.apply(weights_init)
ssd_net.loc.apply(weights_init)
ssd_net.conf.apply(weights_init)
#CHANGE
ssd_net.dmn.apply(weights_init)
optimizer = optim.SGD(net.parameters(), lr=lr,
momentum=momentum, weight_decay=weight_decay)
criterion = MultiBoxLoss(num_classes, 0.5, True, 0, True, 3, 0.5, False, cuda)
```
```
dataset = commonDataset(voc_root, train_sets, ssd_dim, means,
                        cocoimg, annFile)
data_loader = data.DataLoader(dataset, batch_size, num_workers=num_workers,
                              shuffle=True, collate_fn=detection_collate, pin_memory=True)
batch_iter = iter(data_loader)
img, targets = next(batch_iter)
net
%env CUDA_LAUNCH_BLOCKING=1
```
```
def train():
net.train()
# loss counters
loc_loss = 0 # epoch
conf_loss = 0
epoch = 0
print('Loading Dataset...')
#CHANGE
dataset = commonDataset(voc_root, train_sets, ssd_dim, means,
cocoimg, annFile)
#dataset = VOCDetection(voc_root, train_sets, SSDAugmentation(
# ssd_dim, means), AnnotationTransform())
epoch_size = len(dataset) // batch_size
print('Training SSD on', dataset.name)
step_index = 0
if visdom:
# initialize visdom loss plot
lot = viz.line(
X=torch.zeros((1,)).cpu(),
Y=torch.zeros((1, 3)).cpu(),
opts=dict(
xlabel='Iteration',
ylabel='Loss',
title='Current SSD Training Loss',
legend=['Loc Loss', 'Conf Loss', 'Loss']
)
)
epoch_lot = viz.line(
X=torch.zeros((1,)).cpu(),
Y=torch.zeros((1, 3)).cpu(),
opts=dict(
xlabel='Epoch',
ylabel='Loss',
title='Epoch SSD Training Loss',
legend=['Loc Loss', 'Conf Loss', 'Loss']
)
)
batch_iterator = None
data_loader = data.DataLoader(dataset, batch_size, num_workers=num_workers,
shuffle=True, collate_fn=detection_collate, pin_memory=True)
for iteration in range(start_iter, max_iter):
if (not batch_iterator) or (iteration % epoch_size == 0):
# create batch iterator
batch_iterator = iter(data_loader)
if iteration in stepvalues:
step_index += 1
adjust_learning_rate(optimizer, gamma, step_index)
if visdom:
viz.line(
X=torch.ones((1, 3)).cpu() * epoch,
Y=torch.Tensor([loc_loss, conf_loss,
loc_loss + conf_loss]).unsqueeze(0).cpu() / epoch_size,
win=epoch_lot,
update='append'
)
# reset epoch loss counters
loc_loss = 0
conf_loss = 0
epoch += 1
# load train data
images, targets = next(batch_iterator)
if cuda:
images = Variable(images.cuda())
targets = [Variable(anno.cuda(), volatile=True) for anno in targets]
else:
images = Variable(images)
targets = [Variable(anno, volatile=True) for anno in targets]
# forward
t0 = time.time()
out = net(images)
# backprop
optimizer.zero_grad()
#CHANGE
loss_l, loss_c, loss_d = criterion(out, targets)
loss = loss_l + loss_c + loss_d
loss.backward()
optimizer.step()
t1 = time.time()
loc_loss += loss_l.data[0]
conf_loss += loss_c.data[0]
if iteration % 10 == 0:
print('Timer: %.4f sec.' % (t1 - t0))
print("iter : "+repr(iteration)+" || loc: %.4f || conf: %.4f || dom: %.4f || loss: %.4f ||\n" %
(loss_l.data[0], loss_c.data[0], loss_d.data[0], loss.data[0]) )
if visdom and send_images_to_visdom:
random_batch_index = np.random.randint(images.size(0))
viz.image(images.data[random_batch_index].cpu().numpy())
if logTensorboard:
info = {
'loc_loss' : loss_l.data[0],
'conf_loss' : loss_c.data[0],
'domain_loss' : loss_d.data[0],
'loss' : loss.data[0]
}
for tag, value in info.items():
logger.scalar_summary(tag, value, iteration)
def to_np(x):
return x.data.cpu().numpy()
for tag, value in net.named_parameters():
tag = tag.replace('.','/')
logger.histo_summary(tag, to_np(value), iteration)
logger.histo_summary(tag+'/grad', to_np(value.grad), iteration)
if iteration % 2000 == 0:
print('Saving state, iter:', iteration)
torch.save(ssd_net.state_dict(), 'weights/ssd300_0712_COCO14_' +
repr(iteration) + '.pth')
torch.save(ssd_net.state_dict(), save_folder + '' + version + '.pth')
def adjust_learning_rate(optimizer, gamma, step):
    """Sets the learning rate to the initial LR decayed by gamma at every specified step
    # Adapted from PyTorch Imagenet example:
    # https://github.com/pytorch/examples/blob/master/imagenet/main.py
    """
    # assign to a new name: rebinding `lr` here would make it a local variable
    # and raise UnboundLocalError before the module-level `lr` could be read
    new_lr = lr * (gamma ** step)
    for param_group in optimizer.param_groups:
        param_group['lr'] = new_lr
if __name__ == '__main__':
train()
```
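The step-decay schedule that `adjust_learning_rate` implements can be sanity-checked in isolation. The sketch below uses the notebook's `stepvalues` and base learning rate, mirroring how `step_index` is incremented each time an iteration hits a milestone:

```python
base_lr, gamma = 1e-3, 0.1
stepvalues = (80000, 100000, 120000)

def lr_at(iteration):
    # count how many milestones have been reached, decay by gamma each time
    step = sum(iteration >= s for s in stepvalues)
    return base_lr * gamma ** step

# base_lr until 80k, then 1e-4, then 1e-5 after 100k, then 1e-6 after 120k
print(lr_at(0), lr_at(80000), lr_at(100000), lr_at(120000))
```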
# Operations on Word Vectors
Welcome to your first assignment of Week 2, Course 5 of the Deep Learning Specialization!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings. In this notebook you'll try your hand at loading, measuring similarity between, and modifying pre-trained embeddings.
**After this assignment you'll be able to**:
* Explain how word embeddings capture relationships between words
* Load pre-trained word vectors
* Measure similarity between word vectors using cosine similarity
* Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
At the end of this notebook you'll have a chance to try an optional exercise, where you'll modify word embeddings to reduce their gender bias. Reducing bias is an important consideration in ML, so you're encouraged to take this challenge!
## Table of Contents
- [Packages](#0)
- [1 - Load the Word Vectors](#1)
- [2 - Embedding Vectors Versus One-Hot Vectors](#2)
- [3 - Cosine Similarity](#3)
- [Exercise 1 - cosine_similarity](#ex-1)
- [4 - Word Analogy Task](#4)
- [Exercise 2 - complete_analogy](#ex-2)
- [5 - Debiasing Word Vectors (OPTIONAL/UNGRADED)](#5)
- [5.1 - Neutralize Bias for Non-Gender Specific Words](#5-1)
- [Exercise 3 - neutralize](#ex-3)
- [5.2 - Equalization Algorithm for Gender-Specific Words](#5-2)
- [Exercise 4 - equalize](#ex-4)
- [6 - References](#6)
<a name='0'></a>
## Packages
Let's get started! Run the following cell to load the packages you'll need.
```
import numpy as np
from w2v_utils import *
```
<a name='1'></a>
## 1 - Load the Word Vectors
For this assignment, you'll use 50-dimensional GloVe vectors to represent words.
Run the following cell to load the `word_to_vec_map`.
```
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
```
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
<a name='2'></a>
## 2 - Embedding Vectors Versus One-Hot Vectors
Recall from the lesson videos that one-hot vectors don't do a good job of capturing the level of similarity between words. This is because every one-hot vector has the same Euclidean distance from any other one-hot vector.
Embedding vectors, such as GloVe vectors, provide much more useful information about the meaning of individual words.
Now, see how you can use GloVe vectors to measure the similarity between two words!
<a name='3'></a>
## 3 - Cosine Similarity
To measure the similarity between two words, you need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u \cdot v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
* $u \cdot v$ is the dot product (or inner product) of two vectors
* $||u||_2$ is the norm (or length) of the vector $u$
* $\theta$ is the angle between $u$ and $v$.
* The cosine similarity depends on the angle between $u$ and $v$.
* If $u$ and $v$ are very similar, their cosine similarity will be close to 1.
* If they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center><font color='purple'><b>Figure 1</b>: The cosine of the angle between two vectors is a measure of their similarity.</font></center></caption>
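Before implementing the graded function, here is a quick numeric illustration of formula (1) on two toy 2-D vectors (an aside, not part of the assignment code):

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
# the angle between u and v is 45 degrees, so similarity should be cos(45°)
cos_uv = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(round(cos_uv, 4))  # 0.7071
```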
<a name='ex-1'></a>
### Exercise 1 - cosine_similarity
Implement the function `cosine_similarity()` to evaluate the similarity between word vectors.
**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
#### Additional Hints
* You may find [np.dot](https://numpy.org/doc/stable/reference/generated/numpy.dot.html), [np.sum](https://numpy.org/doc/stable/reference/generated/numpy.sum.html), or [np.sqrt](https://numpy.org/doc/stable/reference/generated/numpy.sqrt.html) useful depending upon the implementation that you choose.
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
# Special case. Consider the case u = [0, 0], v=[0, 0]
if np.all(u == v):
return 1
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u, v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.sqrt(np.sum(u*u))
# Compute the L2 norm of v (≈1 line)
norm_v = np.sqrt(np.sum(v*v))
# Avoid division by 0
if np.isclose(norm_u * norm_v, 0, atol=1e-32):
return 0
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = dot/(norm_u*norm_v)
### END CODE HERE ###
return cosine_similarity
# START SKIP FOR GRADING
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
# END SKIP FOR GRADING
# PUBLIC TESTS
def cosine_similarity_test(target):
a = np.random.uniform(-10, 10, 10)
b = np.random.uniform(-10, 10, 10)
c = np.random.uniform(-1, 1, 23)
assert np.isclose(cosine_similarity(a, a), 1), "cosine_similarity(a, a) must be 1"
assert np.isclose(cosine_similarity((c >= 0) * 1, (c < 0) * 1), 0), "cosine_similarity(a, not(a)) must be 0"
assert np.isclose(cosine_similarity(a, -a), -1), "cosine_similarity(a, -a) must be -1"
assert np.isclose(cosine_similarity(a, b), cosine_similarity(a * 2, b * 4)), "cosine_similarity must be scale-independent. You must divide by the product of the norms of each input"
print("\033[92mAll test passed!")
cosine_similarity_test(cosine_similarity)
```
#### Try different words!
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
<a name='4'></a>
## 4 - Word Analogy Task
* In the word analogy task, complete this sentence:
<font color='brown'>"*a* is to *b* as *c* is to **____**"</font>.
* An example is:
<font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>.
* You're trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner:
$e_b - e_a \approx e_d - e_c$
* Measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
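With made-up 2D toy embeddings in which the "gender" offset is perfectly consistent, the two difference vectors coincide and the cosine similarity is exactly 1 (a sketch of the relation, not the graded search loop):

```
import numpy as np

# Hypothetical toy embeddings, chosen so that woman - man == queen - king
e = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([3.0, 0.0]),
    "queen": np.array([3.0, 1.0]),
}
diff_ab = e["woman"] - e["man"]    # e_b - e_a
diff_cd = e["queen"] - e["king"]   # e_d - e_c
cos = np.dot(diff_ab, diff_cd) / (np.linalg.norm(diff_ab) * np.linalg.norm(diff_cd))
```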
<a name='ex-2'></a>
### Exercise 2 - complete_analogy
Complete the code below to perform word analogies!
```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lowercase
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings e_a, e_b and e_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a],word_to_vec_map[word_b],word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
        # to avoid best_word being one of the input words, skip the input word_c
if w == word_c:
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b - e_a, word_to_vec_map[w] - e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
# PUBLIC TEST
def complete_analogy_test(target):
a = [3, 3] # Center at a
a_nw = [2, 4] # North-West oriented vector from a
a_s = [3, 2] # South oriented vector from a
c = [-2, 1] # Center at c
# Create a controlled word to vec map
word_to_vec_map = {'a': a,
'synonym_of_a': a,
'a_nw': a_nw,
'a_s': a_s,
'c': c,
'c_n': [-2, 2], # N
'c_ne': [-1, 2], # NE
'c_e': [-1, 1], # E
'c_se': [-1, 0], # SE
'c_s': [-2, 0], # S
'c_sw': [-3, 0], # SW
'c_w': [-3, 1], # W
'c_nw': [-3, 2] # NW
}
# Convert lists to np.arrays
for key in word_to_vec_map.keys():
word_to_vec_map[key] = np.array(word_to_vec_map[key])
assert(target('a', 'a_nw', 'c', word_to_vec_map) == 'c_nw')
assert(target('a', 'a_s', 'c', word_to_vec_map) == 'c_s')
assert(target('a', 'synonym_of_a', 'c', word_to_vec_map) != 'c'), "Best word cannot be input query"
assert(target('a', 'c', 'a', word_to_vec_map) == 'c')
print("\033[92mAll tests passed")
complete_analogy_test(complete_analogy)
```
Run the cell below to test your code. Patience, young grasshopper...this may take 1-2 minutes.
```
# START SKIP FOR GRADING
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad, word_to_vec_map)))
# END SKIP FOR GRADING
print(complete_analogy('man','woman','father',word_to_vec_map))
```
Once you get the output, try modifying the input cells above to test your own analogies.
**Hint**: Try to find some other analogy pairs that will work, along with some others where the algorithm doesn't give the right answer:
* For example, you can try small->smaller as big->?
## Congratulations!
You've come to the end of the graded portion of the assignment. By now, you've:
* Loaded some pre-trained word vectors
* Measured the similarity between word vectors using cosine similarity
* Used word embeddings to solve word analogy problems such as Man is to Woman as King is to __.
Cosine similarity is a relatively simple and intuitive, yet powerful, method you can use to capture nuanced relationships between words. These exercises should be helpful to you in explaining how it works, and applying it to your own projects!
<font color='blue'>
<b>What you should remember</b>:
- Cosine similarity is a good way to compare the similarity between pairs of word vectors.
- Note that L2 (Euclidean) distance also works.
- For NLP applications, using a pre-trained set of word vectors is often a great way to get started. </font>
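The note about L2 distance is not a coincidence: for unit-normalized vectors, squared Euclidean distance and cosine similarity are monotonically related via $\|\hat{u} - \hat{v}\|^2 = 2(1 - \cos(u, v))$, so either measure ranks neighbors identically after normalization. A quick check with arbitrary vectors:

```
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=50)
v = rng.normal(size=50)
u_hat = u / np.linalg.norm(u)   # normalize to unit length
v_hat = v / np.linalg.norm(v)
cos = np.dot(u_hat, v_hat)
sq_dist = np.sum((u_hat - v_hat) ** 2)
# identity for unit vectors: ||u_hat - v_hat||^2 = 2 - 2*cos(theta)
```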
Even though you've finished the graded portion, please take a look at the rest of this notebook to learn about debiasing word vectors.
<a name='5'></a>
## 5 - Debiasing Word Vectors (OPTIONAL/UNGRADED)
In the following exercise, you'll examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can certainly complete it without being an expert! Go ahead and give it a shot. This portion of the notebook is optional and is not graded...so just have fun and explore.
First, see how the GloVe word embeddings relate to gender. You'll begin by computing a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ corresponds to the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender".
You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them, but just using $e_{woman}-e_{man}$ will give good enough results for now.
```
g1 = word_to_vec_map['woman'] - word_to_vec_map['man']
g2 = word_to_vec_map['girl'] - word_to_vec_map['boy']
g3 = word_to_vec_map['mother'] - word_to_vec_map['father']
g = (g1+g2+g3)/3
print(g)
```
Now, consider the cosine similarity of different words with $g$. What does a positive value of similarity mean, versus a negative cosine similarity?
```
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
Now try with some other words:
```
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, we see “computer” is negative and is closer in value to male first names, while “literature” is positive and is closer to female first names. Ouch!
You'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender-specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You'll have to treat these two types of words differently when debiasing.
<a name='5-1'></a>
### 5.1 - Neutralize Bias for Non-Gender Specific Words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which is called $g_{\perp}$ here. In linear algebra, we say that the 49-dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49-dimensional, given the limitations of what you can draw on a 2D screen, it's illustrated using a 1-dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center><font color='purple'><b>Figure 2</b>: The word vector for "receptionist" represented before and after applying the neutralize operation.</font> </center></caption>
<a name='ex-3'></a>
### Exercise 3 - neutralize
Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist."
Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this. ;)
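Formulas (2) and (3) can be sketched with hypothetical 2D vectors; after removing the projection onto $g$, the debiased vector is exactly orthogonal to $g$:

```
import numpy as np

g = np.array([2.0, 0.0])   # made-up bias direction
e = np.array([3.0, 4.0])   # made-up embedding to neutralize
e_bias = (np.dot(e, g) / np.dot(g, g)) * g   # formula (2): projection onto g
e_debiased = e - e_bias                      # formula (3)
```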
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where : $u_B = $ and $ u_{\perp} = u - u_B $
!-->
```
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = word_to_vec_map[word]
# Compute e_biascomponent using the formula given above. (≈ 1 line)
e_biascomponent = (np.dot(e,g)/(np.sum(g*g)))*g
# Neutralize e by subtracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = e - e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
```
**Expected Output**: The second result is essentially 0, up to numerical rounding (on the order of $10^{-17}$).
<table>
<tr>
<td>
<b>cosine similarity between receptionist and g, before neutralizing:</b> :
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
<b>cosine similarity between receptionist and g, after neutralizing</b> :
</td>
<td>
-4.442232511624783e-17
</tr>
</table>
<a name='5-2'></a>
### 5.2 - Equalization Algorithm for Gender-Specific Words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralization to "babysit," you can reduce the gender stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words are equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized vectors are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. Visually, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 in the References for details.) Here are the key equations:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||_2} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||_2} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
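Equations (4)-(12) force the two corrected bias components to be exact opposites, so the equalized pair ends up with equal-magnitude, opposite-sign components along the bias axis. A 2D sketch with made-up vectors (this mirrors the exercise, but on toy data):

```
import numpy as np

g = np.array([1.0, 0.0])                              # made-up bias axis
e_w1, e_w2 = np.array([0.2, 0.8]), np.array([0.9, 0.7])  # made-up word vectors

mu = (e_w1 + e_w2) / 2                                # eq. (4)
mu_B = (mu @ g / (g @ g)) * g                         # eq. (5)
mu_orth = mu - mu_B                                   # eq. (6)
e_w1B = (e_w1 @ g / (g @ g)) * g                      # eq. (7)
e_w2B = (e_w2 @ g / (g @ g)) * g                      # eq. (8)
scale = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2)))
c1 = scale * (e_w1B - mu_B) / np.linalg.norm(e_w1 - mu_orth - mu_B)  # eq. (9)
c2 = scale * (e_w2B - mu_B) / np.linalg.norm(e_w2 - mu_orth - mu_B)  # eq. (10)
e1, e2 = c1 + mu_orth, c2 + mu_orth                   # eqs. (11)-(12)
```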
<a name='ex-4'></a>
### Exercise 4 - equalize
Implement the `equalize()` function below.
Use the equations above to get the final equalized version of the pair of words. Good luck!
**Hint**
- Use [np.linalg.norm](https://numpy.org/doc/stable/reference/generated/numpy.linalg.norm.html)
```
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = pair
e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = (e_w1 + e_w2)/2.0
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = np.divide(np.dot(mu, bias_axis),np.linalg.norm(bias_axis)**2)*bias_axis
mu_orth = mu - mu_B
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = np.divide(np.dot(e_w1, bias_axis),np.linalg.norm(bias_axis)**2)*bias_axis
e_w2B = np.divide(np.dot(e_w2, bias_axis),np.linalg.norm(bias_axis)**2)*bias_axis
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
    corrected_e_w1B = np.sqrt(np.abs(1-np.sum(mu_orth**2)))*np.divide(e_w1B-mu_B, np.linalg.norm(e_w1-mu_orth-mu_B))
    corrected_e_w2B = np.sqrt(np.abs(1-np.sum(mu_orth**2)))*np.divide(e_w2B-mu_B, np.linalg.norm(e_w2-mu_orth-mu_B))
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = corrected_e_w1B + mu_orth
e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
```
**Expected Output**:
cosine similarities before equalizing:
<table>
<tr>
<td>
<b>cosine_similarity(word_to_vec_map["man"], gender)</b> =
</td>
<td>
-0.117110957653
</td>
</tr>
<tr>
<td>
<b>cosine_similarity(word_to_vec_map["woman"], gender)</b> =
</td>
<td>
0.356666188463
</td>
</tr>
</table>
cosine similarities after equalizing:
<table>
<tr>
<td>
<b>cosine_similarity(e1, gender)</b> =
</td>
<td>
-0.942653373599985
</td>
</tr>
<tr>
<td>
<b>cosine_similarity(e2, gender)</b> =
</td>
<td>
0.9231551731025899
</td>
</tr>
</table>
Go ahead and play with the input words in the cell above, to apply equalization to other pairs of words.
Hint: Try...
These debiasing algorithms are very helpful for reducing bias, but aren't perfect and don't eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with these types of variants as well!
### Congratulations!
You have come to the end of both graded and ungraded portions of this notebook, and have seen several of the ways that word vectors can be applied and modified. Great work pushing your knowledge in the areas of neutralizing and equalizing word vectors! See you next time.
<a name='6'></a>
## 6 - References
- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to
Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
# Variational Bayes Notes
## What is variational Bayes?
A technique that allows us to compute approximations to non-closed form Bayes update equations.
## Why would we use it?
In **Bayesian parameter estimation**, to compute the intractable integration of the denominator in the parameter posterior probability:
\begin{equation}
p(\Theta \vert \Omega, Y) = \frac{p(Y, \Theta \vert \Omega)}{\int p(Y, \Theta \vert \Omega) d\Theta}
\end{equation}
and in **Bayesian model fitting**, to find the denominator of the model posterior probability (for a fixed $\Omega$):
\begin{equation}
p(\Omega \vert Y) = \frac{p(Y \vert \Omega) p( \Omega )}{\sum_{\Omega' \in \mathcal{M}} p(Y\vert \Omega')p(\Omega')}
\end{equation}
Where $\Omega$ is the candidate model, $Y = \{(x_1,d_1),\dots,(x_N,d_N)\}$ is the training data, $\Theta$ is the set of unknown model parameters (and prior hyperparameters) under $\Omega$.
Also for **Bayesian data fusion**, if we're using GMM priors with MMS likelihoods, we're trying to evaluate
\begin{equation}\label{eq:bdf}
p(X_k \vert D_k) = \frac{P(D_k \vert X_k) p( X_k )}{\int P(D_k \vert X_k) p( X_k )dX_k} = \frac{p(X_k,D_k)}{P(D_k)}
\end{equation}
Where $p(D_k \vert X_k)$ is the MMS model and
$$
p( X_k ) = p(X_{k} \vert D_{1:k-1}) = \int p(X_k \vert X_{k-1}) p(X_{k-1} \vert D_{1:k-1}) dX_{k-1}
$$
where $p(X_k \vert X_{k-1})$ is the state transition pdf and we are only fusing $D_k$ sensor data.
## Alternatives & Extensions
* Grid-based approaches
* [MCMC](https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo) (and other Monte Carlo/Particle Filter techniques)
* [Laplace Approximation](https://en.wikipedia.org/wiki/Laplace's_method)
* VBIS: Use variational Bayes outputs as the parameters for the importance distribution $q(X_k)$
* LWIS: Use the prior $p(X_k)$ as the importance distribution $q(X_k)$
## How does it work?
VB minimizes the KLD between an integrable, closed-form *variational posterior* parameter distribution $q(\Theta \vert Y, \Omega)$ and the true posterior $p(\Theta \vert Y, \Omega)$ for $\Omega \in \mathcal{M}$:
$$
KL(q\vert\vert p) = - \int q(\Theta \vert Y, \Omega) \log{\frac{p(\Theta \vert Y, \Omega)}{q(\Theta \vert Y, \Omega)}}d\Theta
$$
But, since $p(\Theta \vert Y, \Omega)$ is unavailable, we can instead minimize the KLD by maximizing a lower bound $\mathcal{L}$ to $\log{p(Y\vert\Omega)}$, where:
$$
\mathcal{L} = \int q(\Theta \vert Y, \Omega) \log{\frac{p(Y, \Theta \vert \Omega)}{q(\Theta \vert Y, \Omega)}}d\Theta
$$
and
\begin{align*}
\log{p(Y\vert\Omega)} &= \mathcal{L} + KL(q \vert\vert p) \\
&= \int q(\Theta \vert Y, \Omega) \log{\frac{p(Y, \Theta \vert \Omega)}{q(\Theta \vert Y, \Omega)}}d\Theta
- \int q(\Theta \vert Y, \Omega) \log{\frac{p(\Theta \vert Y, \Omega)}{q(\Theta \vert Y, \Omega)}}d\Theta \\
&= \int q(\Theta \vert Y, \Omega) \log{(p(Y, \Theta \vert \Omega))}
- q(\Theta \vert Y, \Omega) \log{(p(\Theta \vert Y, \Omega))}d\Theta \\
&= \int q(\Theta \vert Y, \Omega) \log{\frac{p(Y, \Theta \vert \Omega)}{p(\Theta \vert Y, \Omega)}}
d\Theta
\end{align*}
The last integral equals $\log{p(Y\vert\Omega)}$ because $p(Y, \Theta \vert \Omega)/p(\Theta \vert Y, \Omega) = p(Y \vert \Omega)$ is constant in $\Theta$ and $\int q(\Theta \vert Y, \Omega) d\Theta = 1$. Since the KLD is non-negative, $\mathcal{L}$ is indeed a lower bound on $\log{p(Y\vert\Omega)}$, and maximizing $\mathcal{L}$ over $q$ minimizes the KLD.
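The decomposition $\log{p(Y)} = \mathcal{L} + KL(q\vert\vert p)$ can be checked exactly on a toy discrete latent variable, where every quantity is a finite sum (the numbers below are arbitrary):

```
import numpy as np

# Discrete latent variable with 2 states; all probabilities made up.
p_theta = np.array([0.6, 0.4])      # prior p(theta)
p_y_given = np.array([0.2, 0.7])    # likelihood p(Y | theta)
joint = p_theta * p_y_given         # p(Y, theta)
p_y = joint.sum()                   # evidence p(Y)
post = joint / p_y                  # true posterior p(theta | Y)
q = np.array([0.5, 0.5])            # an arbitrary variational distribution
elbo = np.sum(q * np.log(joint / q))      # lower bound L
kl = np.sum(q * np.log(q / post))         # KL(q || p)
```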
## How do we use it?
Take equation \ref{eq:bdf}. We want to approximate $p(X_k,D_k)$ (analytically intractable) with an unnormalized Gaussian lower bound pdf, which leads to a *variational Bayesian* Gaussian posterior approximation $\hat{p}(X_k\vert D_k)$.
If $f(D_k,X_k)$ is an unnormalized Gaussian function that approximates the softmax likelihood $P(D_k \vert X_k)$, then
\begin{align}\label{eq:approx_joint}
p(X_k,D_k) &\approx \hat{p}(X_k,D_k) = p(X_k)f(D_k,X_k) \\
P(D_k) = C &\approx \hat{C} = \int_{-\infty}^{\infty} \hat{p}(X_k,D_k) dX_k
\end{align}
Since $p(X_k)$ is Gaussian, $\hat{p}(X_k,D_k)$ is as well (as the product of two Gaussians).
### How do we derive $f(D_k,X_k)$?
[2] derives an upper bound to the softmax denominator:
\begin{equation}\label{eq:upper}
\log\left(\sum_{c=1}^m e^{y_c}\right) \leq \alpha + \sum_{c=1}^m \frac{y_c - \alpha - \xi_c}{2} + \lambda(\xi_c)[(y_c - \alpha)^2 - \xi_c^2] + log(1 + e^{\xi_c})
\end{equation}
where $\lambda(\xi_c) = \frac{1}{2\xi_c}\left[\frac{1}{1 + e^{-\xi_c}}\right] - \frac{1}{2}$ and $y_c = w^T_cx + b_c$. $\alpha$ and $\xi_c$ are *free variational parameters*; given $y_c$, $\alpha$ and $\xi_c$ can be selected to minimize the upper bound in \ref{eq:upper}.
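The upper bound above holds for *any* choice of $\alpha$ and $\xi_c$, which can be verified numerically with arbitrary values:

```
import numpy as np

def lam(xi):
    # lambda(xi) = (sigmoid(xi) - 1/2) / (2 xi), as defined above
    return (1.0 / (1.0 + np.exp(-xi)) - 0.5) / (2.0 * xi)

rng = np.random.default_rng(0)
y = rng.normal(size=5)                       # arbitrary y_c values
alpha = 0.3                                  # arbitrary variational parameters
xi = np.abs(rng.normal(size=5)) + 0.1

lhs = np.log(np.sum(np.exp(y)))              # log-sum-exp
rhs = alpha + np.sum((y - alpha - xi) / 2
                     + lam(xi) * ((y - alpha) ** 2 - xi ** 2)
                     + np.log1p(np.exp(xi)))
```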
Assuming known $\alpha$ and $\xi_c$, we take the log of the softmax likelihood to get:
\begin{align*}
\log{P(D_k=j\vert X_k)} &= w^T_jx + b_j - \log{\left(\sum_{c=1}^m e^{w^T_cx + b_c}\right)} \\
&\geq \log{f(D_k=j,X_k)} = g_j + h^T_jx - \frac{1}{2}x^TK_jx
\end{align*}
Where
\begin{equation}\label{eq:approx_likelihood}
f(D_k=j,X_k) = \exp\{g_j + h^T_jx - \frac{1}{2}x^TK_jx\}
\end{equation}
The prior $p(X_k)$ can be expressed similarly:
\begin{equation}\label{eq:gaussian_prior}
p(X_k) = \exp\{g_p + h^T_px - \frac{1}{2}x^TK_px\}
\end{equation}
where $g_p = -\frac{1}{2}(\log{\lvert2\pi\Sigma\rvert} + \mu^TK_p\mu)$, $h_p = K_p\mu$, and $K_p = \Sigma^{-1}$. This is simply a reformulation of the equation of a Gaussian:
\begin{align*}
p(X_k) &= \frac{1}{\sqrt{\lvert 2 \pi \Sigma \rvert}} \exp{\{-\frac{1}{2}(x - \mu)^T\Sigma^{-1}(x - \mu) \}} \\
&=\exp{\{-\frac{1}{2}(x^T - \mu^T)K_p(x - \mu) -\frac{1}{2}\log{\lvert 2 \pi \Sigma \rvert} \}}\\
&=\exp{\{-\frac{1}{2}x^TK_px +\frac{1}{2}(x^TK_p\mu +\mu^TK_px) -\frac{1}{2}\mu^TK_p\mu -\frac{1}{2}\log{\lvert 2 \pi \Sigma \rvert} \}}\\
&=\exp{\{-\frac{1}{2}(\log{\lvert 2 \pi \Sigma \rvert} + \mu^TK_p\mu ) +(K_p\mu)^Tx -\frac{1}{2}x^TK_px \}}\\
&= \exp\{g_p + h^T_px - \frac{1}{2}x^TK_px\}
\end{align*}
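The two parameterizations can be checked against each other numerically (arbitrary $\mu$, $\Sigma$, and evaluation point):

```
import numpy as np

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
K_p = np.linalg.inv(Sigma)
h_p = K_p @ mu
g_p = -0.5 * (np.log(np.linalg.det(2 * np.pi * Sigma)) + mu @ K_p @ mu)

x = np.array([0.5, 0.5])
# standard (moment) form of the Gaussian pdf
pdf_standard = np.exp(-0.5 * (x - mu) @ K_p @ (x - mu)) / np.sqrt(np.linalg.det(2 * np.pi * Sigma))
# canonical (natural-parameter) form
pdf_canonical = np.exp(g_p + h_p @ x - 0.5 * x @ K_p @ x)
```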
Since equation \ref{eq:approx_joint} is simply the product of two Gaussians, it becomes:
\begin{equation}\label{eq:approx_joint_product}
\hat{p}(X_k,D_k) = p(X_k)f(D_k,X_k) = \exp\{g_l + h^T_lx - \frac{1}{2}x^TK_lx\} = \mathcal{N}(\hat{\mu}_{VB},\hat{\Sigma}_{VB})
\end{equation}
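Because the exponents add, combining the Gaussian prior with the Gaussian likelihood approximation reduces to adding natural parameters: $K_l = K_p + K_j$ and $h_l = h_p + h_j$, from which $\hat{\mu}_{VB} = K_l^{-1} h_l$ and $\hat{\Sigma}_{VB} = K_l^{-1}$. A 1D sketch with made-up likelihood-side parameters:

```
import numpy as np

mu_p, var_p = 0.0, 1.0                  # prior N(0, 1)
K_p, h_p = 1.0 / var_p, mu_p / var_p    # prior natural parameters
K_j, h_j = 4.0, 2.0                     # hypothetical likelihood natural parameters

K_l = K_p + h_j * 0 + K_j               # precisions add
h_l = h_p + h_j                         # linear terms add
mu_vb = h_l / K_l                       # posterior mean
var_vb = 1.0 / K_l                      # posterior variance
```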
### But how do we optimize $\alpha$ and $\xi_c$ ?
Minimizing the RHS of \ref{eq:upper} gives us:
\begin{align}
\xi^2_c &= y^2_c + \alpha^2 - 2\alpha y_c \label{eq:xi} \\
\alpha &= \frac{\frac{m-2}{4} + \sum_{c=1}^m\lambda(\xi_c)y_c}{\sum_{c=1}^m\lambda(\xi_c)} \label{eq:alpha}
\end{align}
But, both depend on $X_k$, which is unobserved. Instead, we minimize the *expected value* of the RHS of \ref{eq:upper} with respect to the posterior.
Apparently **(?)** this is equivalent to maximizing $\log{\hat{P}(D_k)}$, the *approximate* marginal log-likelihood of the observation $D_k = j$:
$$
\log{\hat{P}(D_k)} = \log{\hat{C}} = \log \int_{-\infty}^{\infty} \hat{p}(X_k,D_k)dX_k
$$
We can now use **expectation-maximization (EM)** to iteratively optimize $\alpha$ and $\xi_c$. We take the expectations of \ref{eq:xi} and \ref{eq:alpha} under the current $\hat{p}(X_k \vert D_k)$ estimate. Additionally, we'll need the following:
\begin{align}
\langle y_c\rangle &= w^T_c\hat{\mu}_{VB} + b_c \label{eq:y_expected} \\
\langle y^2_c\rangle &= w^T_c(\hat{\Sigma}_{VB} + \hat{\mu}_{VB}\hat{\mu}_{VB}^T)w_c + 2w^T_c\hat{\mu}_{VB}b_c + b^2_c \label{eq:y2_expected}
\end{align}
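Equation \ref{eq:y2_expected} is just $\langle y_c^2\rangle = \mathrm{Var}(y_c) + \langle y_c\rangle^2$ for $y_c = w_c^T x + b_c$ with $x \sim \mathcal{N}(\mu, \Sigma)$; the two closed forms agree exactly (arbitrary values below):

```
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)
b = 0.7
mu = rng.normal(size=3)
A = rng.normal(size=(3, 3))
Sigma = A @ A.T                        # an arbitrary PSD covariance

# eq. (24): <y^2> = w^T (Sigma + mu mu^T) w + 2 w^T mu b + b^2
ey2_formula = w @ (Sigma + np.outer(mu, mu)) @ w + 2 * (w @ mu) * b + b ** 2
# same quantity as Var(y) + <y>^2
ey2_direct = w @ Sigma @ w + (w @ mu + b) ** 2
```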
The code below demonstrates the expectation maximization algorithm.
## Issues
The approximate posterior is **optimistic** relative to the true posterior, and will be biased due to this optimism as well.
# Examples
## Variational Bayes (VB) with Softmax and Gaussian Prior
Let's restate the bayesian data fusion problem. We want to compute the following:
\begin{equation}\label{eq:vb-bdf}
p(X_k \vert D_k) = \frac{P(D_k = j \vert X_k) p( X_k )}{\int P(D_k = j \vert X_k) p( X_k )dX_k} = \frac{p(X_k,D_k)}{P(D_k)}
\end{equation}
Where, for a softmax likelihood,
\begin{equation}\label{eq:softmax}
P(D_k = j \vert X_k) = \frac{e^{w^T_jx + b_j}}{\sum_{c=1}^m e^{w^T_cx + b_c}}
\end{equation}
and, for a gaussian prior,
\begin{equation}
P(X_k) = \frac{1}{\sqrt{\lvert 2 \pi \Sigma \rvert}} \exp{\{-\frac{1}{2}(x - \mu)^T\Sigma^{-1}(x - \mu)\}}
\end{equation}
We show these two below for a one-dimensional problem of estimating the speed of a target.
```
from cops_and_robots.robo_tools.fusion.softmax import speed_model
%matplotlib inline
sm = speed_model()
sm.plot(plot_classes=False)
from __future__ import division
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
mu = 0.3
sigma = 0.1
min_x = -5
max_x = 5
res = 10000
prior = norm(loc=mu, scale=sigma)
x_space = np.linspace(min_x, max_x, res)
# Plot the frozen distribution
fig, ax = plt.subplots(1, 1)
fig.set_size_inches(8, 8)
ax.plot(x_space, prior.pdf(x_space), lw=2, label='frozen pdf', c='g')
ax.fill_between(x_space, 0, prior.pdf(x_space), alpha=0.2, facecolor='g')
ax.set_xlim([0,.4])
ax.set_ylim([0,10])
ax.set_title('Gaussian prior')
```
While the numerator of \ref{eq:vb-bdf} can be computed easily, there is no closed form solution for its denominator.
We'll use the following algorithm:
**Inputs**
* prior $\mu$ and $\Sigma$;
* $D_k = j$ with likelihood in eq.(7);
* initial $\alpha$ and $\xi_c$, for j, c ∈ {1, ...,m}
**Outputs**
* posterior mean $\hat{\mu}_{VB}$
* Posterior covariance $\hat{\Sigma}_{VB}$
**Steps**
1. E-step: for all fixed $\xi_c$ and $\alpha$,
    1. compute $\hat{\mu}_{VB}$ and $\hat{\Sigma}_{VB}$ via eq. (19);
2. compute $\langle y_c\rangle$ and $\langle y_c^2\rangle$ via eqs. (23)-(24);
2. M-step: for all fixed $\langle y_c\rangle$ and $\langle y_c^2\rangle$, <br />
**for $i = 1 : n_{lc}$ do**
1. compute all $\xi_c$ for fixed $\alpha$ via eq. (20)
2. compute $\alpha$ for all fixed $\xi_c$ via eq. (21) <br />
**end for**
3. If converged, return $\hat{C}$ via eq. (25) and stop; otherwise, return to step 1
```
from numpy.linalg import inv
import pandas as pd
np.set_printoptions(precision=2, suppress=True)
pd.set_option('precision', 3)
# SETTINGS:
n_lc = 15 # number of convergence loops
measurement = 'Medium'
tolerance = 10 ** -3 # for convergence
max_EM_steps = 1000
# INPUT: Define input priors and initial values
prior_mu = np.zeros(1)
prior_sigma = np.array([[1]])
initial_alpha = 0.5
initial_xi = np.ones(4)
# Softmax values
m = 4
w = sm.weights
b = sm.biases
j = sm.class_labels.index(measurement)
# Preparation
xis = initial_xi
alpha = initial_alpha
mu_hat = prior_mu
sigma_hat = prior_sigma
# dataframe for debugging
df = pd.DataFrame({'Alpha': alpha,
'g_j' : np.nan,
'h_j' : np.nan,
'K_j' : np.nan,
'Mu': mu_hat[0],
'Sigma': sigma_hat[0][0],
'Xi': [xis],
})
def lambda_(xi_c):
return 1 / (2 * xi_c) * ( (1 / (1 + np.exp(-xi_c))) - 0.5)
converged = False
EM_step = 0
while not converged and EM_step < max_EM_steps:
################################################################
# STEP 1 - EXPECTATION
################################################################
# PART A #######################################################
# find g_j
sum1 = 0
for c in range(m):
if c != j:
sum1 += b[c]
sum2 = 0
for c in range(m):
        sum2 += xis[c] / 2 \
            + lambda_(xis[c]) * (xis[c] ** 2 - (b[c] - alpha) ** 2) \
            - np.log(1 + np.exp(xis[c]))
g_j = 0.5 *(b[j] - sum1) + alpha * (m / 2 - 1) + sum2
# find h_j
sum1 = 0
for c in range(m):
if c != j:
sum1 += w[c]
sum2 = 0
for c in range(m):
sum2 += lambda_(xis[c]) * (alpha - b[c]) * w[c]
h_j = 0.5 * (w[j] - sum1) + 2 * sum2
# find K_j
sum1 = 0
for c in range(m):
sum1 += lambda_(xis[c]) * w[c].T .dot (w[c])
K_j = 2 * sum1
K_p = inv(prior_sigma)
g_p = -0.5 * (np.log( np.linalg.det(2 * np.pi * prior_sigma))) \
+ prior_mu.T .dot (K_p) .dot (prior_sigma)
h_p = K_p .dot (prior_mu)
g_l = g_p + g_j
h_l = h_p + h_j
K_l = K_p + K_j
mu_hat = inv(K_l) .dot (h_l)
sigma_hat = inv(K_l)
# PART B #######################################################
y_cs = np.zeros(m)
y_cs_squared = np.zeros(m)
for c in range(m):
y_cs[c] = w[c].T .dot (mu_hat) + b[c]
y_cs_squared[c] = w[c].T .dot (sigma_hat + mu_hat .dot (mu_hat.T)) .dot (w[c]) \
+ 2 * w[c].T .dot (mu_hat) * b[c] + b[c] ** 2
################################################################
# STEP 2 - MAXIMIZATION
################################################################
for i in range(n_lc):
# PART A #######################################################
# Find xi_cs
for c in range(m):
xis[c] = np.sqrt(y_cs_squared[c] + alpha ** 2 - 2 * alpha * y_cs[c])
# PART B #######################################################
# Find alpha
num_sum = 0
den_sum = 0
for c in range(m):
num_sum += lambda_(xis[c]) * y_cs[c]
den_sum += lambda_(xis[c])
alpha = ((m - 2) / 4 + num_sum) / den_sum
################################################################
# STEP 3 - CONVERGENCE CHECK
################################################################
new_df = pd.DataFrame([[alpha, g_j, h_j, K_j, mu_hat, sigma_hat,
[xis]]],
columns=('Alpha','g_j','h_j','K_j','Mu','Sigma',
'Xi',))
df = df.append(new_df, ignore_index=True)
EM_step += 1
# df
#plot results
mu_post = mu_hat[0]
sigma_post = np.sqrt(sigma_hat[0][0])
print('Mu and sigma found to be {} and {}, respectively.'.format(mu_hat[0],sigma_hat[0][0]))
ax = sm.plot_class(measurement_i, fill_between=False)
posterior = norm(loc=mu_post, scale=sigma_post)
ax.plot(x_space, posterior.pdf(x_space), lw=2, label='posterior pdf', c='b')
ax.fill_between(x_space, 0, posterior.pdf(x_space), alpha=0.2, facecolor='b')
ax.plot(x_space, prior.pdf(x_space), lw=1, label='prior pdf', c='g')
ax.set_title('Posterior distribtuion')
ax.legend()
ax.set_xlim([0, 0.4])
ax.set_ylim([0, 7])
plt.show()
```
Correct output: mu = 0.2227 and sigma = 5.926215694086777e-05.
### Comparison: discretized state space
Using a discretized state space, we get the following:
```
measurement = 'Slow'
measurement_i = sm.class_labels.index(measurement)
dx = (max_x - min_x) / res
normalizer = 0
for x in x_space:
    lh = sm.probs_at_state(x, measurement)
    if np.isnan(lh):
        lh = 1.00
    normalizer += lh * gaussian.pdf(x)
normalizer *= dx
posterior = np.zeros_like(x_space)
for i, x in enumerate(x_space):
    lh = sm.probs_at_state(x, measurement)
    if np.isnan(lh):
        lh = 1.00
    posterior[i] = lh * gaussian.pdf(x) / normalizer
ax = sm.plot_class(measurement_i, fill_between=False)
ax.plot(x_space, posterior, lw=3, label='posterior pdf', c='b')
ax.fill_between(x_space, 0, posterior, alpha=0.2, facecolor='b')
ax.plot(x_space, prior.pdf(x_space), lw=1, label='prior pdf', c='g')
ax.set_title('Posterior distribution')
ax.legend()
ax.set_xlim([0, 0.4])
plt.show()
```
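The same grid-style update can be sketched in a self-contained form. Here a Gaussian likelihood stands in for the `sm.probs_at_state` lookup, and the grid bounds, prior, and likelihood parameters are all invented for illustration:

```
import numpy as np
from scipy.stats import norm

# assumed grid over the state space
x_space = np.linspace(0, 0.4, 400)
dx = x_space[1] - x_space[0]

prior = norm(loc=0.2, scale=0.05)                     # assumed Gaussian prior
likelihood = norm(loc=0.25, scale=0.03).pdf(x_space)  # stand-in for sm.probs_at_state

# discrete Bayes rule: posterior ~ likelihood * prior, normalised on the grid
unnormalised = likelihood * prior.pdf(x_space)
posterior = unnormalised / (unnormalised.sum() * dx)
```

By construction the discretized posterior integrates to one over the grid (`posterior.sum() * dx` is 1 up to floating-point error), which is the role the `normalizer` plays in the cell above.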
## References
[1] N. Ahmed and M. Campbell, “Variational Bayesian learning of probabilistic discriminative models with latent softmax variables,” IEEE Transactions on Signal Processing, 2011.
[2] G. Bouchard, “Efficient bounds for the softmax function and applications to approximate inference in hybrid models,” in NIPS 2007 Workshop for Approximate Bayesian Inference in Continuous/Hybrid Systems, Whistler, BC, Canada, 2007.
[3] N. Ahmed, E. Sample, and M. Campbell, “Bayesian Multicategorical Soft Data Fusion for Human–Robot Collaboration,” IEEE Trans. Robot., vol. 29, no. 1, pp. 189–206, 2013.
```
from IPython.core.display import HTML
# Borrowed style from Probabilistic Programming and Bayesian Methods for Hackers
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
# Example of Data Analysis with DCD Hub Data
First, we will install the Python SDK of the DCD hub and other libraries to generate plots from the data.
In your project folder, create a "requirements.txt" file and save it with the text written below:
dcd-sdk>=0.0.22 <br />
paho-mqtt <br />
python-dotenv <br />
pyserial <br />
requests <br />
jwt>=0.6.1 <br />
dotenv <br />
numpy <br />
pandas <br />
matplotlib <br />
scipy <br />
Open the terminal (Unix) / command prompt (Windows) and enter the command "without quotes":<br /> "pip3 install -r requirements.txt --user"
Also, create a .env file in the same project folder and write your THING_ID and THING_TOKEN in the format mentioned below "without quotes":
THING_ID="YOUR THING ID"<br />
THING_TOKEN="YOUR THING TOKEN"<br />
Now, in the code, we first import the DCD hub SDK.
```
from dcd.entities.thing import Thing
```
Then, we load the environment variables that hold the Thing ID and access token (replace them with yours in the .env file):
```
from dotenv import load_dotenv
import os
load_dotenv()
```
Now we instantiate a Thing with the credentials that we stored in the .env file, and then we fetch its details:
```
THING_ID = os.environ['THING_ID']
THING_TOKEN = os.environ['THING_TOKEN']
my_thing = Thing(thing_id=THING_ID, token=THING_TOKEN)
my_thing.read()
```
What does a Thing look like? Let's see it in the output generated below by the JSON serialisation function `to_json()`:
```
my_thing.to_json()
```
Which property do we want to explore, and over which time frame? To do that, we will define `START_DATE` and `END_DATE` for our time frame:
```
from datetime import datetime

# What dates?
START_DATE = "2019-10-08 21:17:00"
END_DATE = "2019-11-08 21:25:00"

DATE_FORMAT = '%Y-%m-%d %H:%M:%S'
from_ts = datetime.timestamp(datetime.strptime(START_DATE, DATE_FORMAT)) * 1000
to_ts = datetime.timestamp(datetime.strptime(END_DATE, DATE_FORMAT)) * 1000
```
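As a quick sanity check of this conversion (the `to_millis` helper below is just an illustrative wrapper, not part of the SDK):

```
from datetime import datetime

DATE_FORMAT = '%Y-%m-%d %H:%M:%S'

def to_millis(date_str):
    # parse a local date string and convert it to a Unix timestamp in milliseconds
    return datetime.timestamp(datetime.strptime(date_str, DATE_FORMAT)) * 1000

start_ms = to_millis("2019-10-08 21:17:00")
end_ms = to_millis("2019-11-08 21:25:00")
# the window spans roughly 31 days (plus 8 minutes, give or take a DST shift)
print((end_ms - start_ms) / (1000 * 60 * 60 * 24))
```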
Let's find this property and read the data. Replace `PROPERTY_NAME` with the name of your own property that you would like to read data from. For example, to read the accelerometer values of the Thing, use `PROPERTY_NAME = "Accelerometer"`:
```
PROPERTY_NAME = "Accelerometer"
my_property = my_thing.find_property_by_name(PROPERTY_NAME)
my_property.read(from_ts, to_ts)
```
How many data points did we get?
```
print(len(my_property.values))
```
Display values
```
my_property.values
```
# From CSV
Here we will extract data from the CSV file and plot some charts.
```
from numpy import genfromtxt
import pandas as pd
data = genfromtxt('data.csv', delimiter=',')
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')), columns = ['x', 'y', 'z'])
data_frame
```
# Plot some charts with Matplotlib
In this example we plot the signal over time and a histogram of the distribution of all values and dimensions.
```
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
from numpy import ma

# masked array of the property values (columns: time, x, y, z)
data = ma.array(my_property.values)
figure(num=None, figsize=(15, 5))
t = data_frame.index
plt.plot(t, data_frame.x, t, data_frame.y, t, data_frame.z)
plt.hist(data[:,1:])
plt.show()
```
# Generate statistics with NumPy and Pandas
```
import numpy as np
from scipy.stats import kurtosis, skew
np.min(data[:,1:4], axis=0)
skew(data[:,1:4])
```
You can select a column (slice) of data, or a subset of data. In the example below we select the first 10 rows and the columns from index 1 onward (i.e. skipping the first column, which represents the time).
```
data[:10,1:]
```
Out of the box, Pandas gives you some statistics; do not forget to convert your array into a DataFrame first.
```
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')))
pd.DataFrame.describe(data_frame)
data_frame.rolling(10).std()
```
# Rolling / Sliding Window
To apply statistics on a sliding (or rolling) window, we can use the rolling() function of a data frame. In the example below, we compute a rolling standard deviation over a 2-second window and a rolling skew over a 100-sample window.
```
rolling2s = data_frame.rolling('2s').std()
plt.plot(rolling2s)
plt.show()
rolling100_data_points = data_frame.rolling(100).skew()
plt.plot(rolling100_data_points)
plt.show()
```
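As a minimal self-contained illustration of how `rolling()` behaves (a toy series, not the accelerometer data above):

```
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
# rolling mean over a 3-sample window; the first two entries have
# incomplete windows and therefore come out as NaN
roll = s.rolling(3).mean()
print(roll.tolist())  # → [nan, nan, 2.0, 3.0, 4.0]
```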
# Zero Crossing
```
plt.hist(np.where(np.diff(np.sign(data[:, 1])) != 0)[0])  # indices of the zero crossings
plt.show()
```
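The cell above works because `np.diff(np.sign(...))` is non-zero exactly where consecutive samples change sign. A small sketch on toy data:

```
import numpy as np

signal = np.array([1.0, 0.5, -0.2, -1.0, 0.3, 0.8, -0.4])
# sign flips produce a non-zero difference; np.where returns their indices
crossings = np.where(np.diff(np.sign(signal)) != 0)[0]
print(crossings.tolist())  # → [1, 3, 5]
```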
https://docs.scipy.org/doc/scipy/reference/stats.html#discrete-distributions
# Presenting SOTA results on CIMA dataset
This notebook serves as a visualisation of state-of-the-art methods on the CIMA dataset.
_Note: In case you want to get some further evaluation related to new submission, you may contact JB._
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os, sys
import glob, json
import shutil
import tqdm
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sys.path += [os.path.abspath('.'), os.path.abspath('..')] # Add path to root
from birl.utilities.data_io import update_path
from birl.utilities.evaluate import compute_ranking
from birl.utilities.drawing import RadarChart, draw_scatter_double_scale
from bm_ANHIR.generate_regist_pairs import VAL_STATUS_TRAIN, VAL_STATUS_TEST
from bm_ANHIR.evaluate_submission import COL_TISSUE
```
This notebook serves for computing extended statistics (e.g. metrics including ranks) and visualising some more statistics.
You can run the notebook to see results on both scales `10k` and `full`.
To do so, you need to unzip the particular archive in `bm_CIMA` to a separate folder and point out this path as `PATH_RESULTS` below.
```
# folder with all participants submissions
PATH_RESULTS = os.path.join(update_path('bm_CIMA'), 'size-10k')
# temporary folder for unzipping submissions
PATH_TEMP = os.path.abspath(os.path.expanduser('~/Desktop/CIMA_size-10k'))
# configuration needed for recomputing detail metrics
PATH_DATASET = os.path.join(update_path('bm_ANHIR'), 'dataset_ANHIR')
PATH_TABLE = os.path.join(update_path('bm_CIMA'), 'dataset_CIMA_10k.csv')
# landmarks provided to participants, in early ANHIR stage we provided only 20% points per image pair
PATH_LNDS_PROVIDED = os.path.join(PATH_DATASET, 'landmarks_all')
# complete landmarks dataset
PATH_LNDS_COMPLATE = os.path.join(PATH_DATASET, 'landmarks_all')
# baseline for normalization of computing time
PATH_COMP_BM = os.path.join(PATH_DATASET, 'computer-performances_cmpgrid-71.json')
FIELD_TISSUE = 'type-tissue'
# configuration for Pandas tables
pd.set_option("display.max_columns", 25)
assert os.path.isdir(PATH_TEMP)
```
Some initial replacement and name adjustments
```
# simplify the metric names according to the paper
METRIC_LUT = {'Average-': 'A', 'Rank-': 'R', 'Median-': 'M', 'Max-': 'S'}

def col_metric_rename(col):
    for m in METRIC_LUT:
        col = col.replace(m, METRIC_LUT[m])
    return col
```
## Parse and load submissions
### Extract metrics from particular submissions
All submissions are expected to be zip archives in a single folder. The archive name is the author name.
```
# Find all archives and unzip them to the same folder.
archive_paths = sorted(glob.glob(os.path.join(PATH_RESULTS, '*.zip')))
submission_dirs = []
for path_zip in tqdm.tqdm(archive_paths, desc='unzipping'):
    sub = os.path.join(PATH_TEMP, os.path.splitext(os.path.basename(path_zip))[0])
    os.system('unzip -o "%s" -d "%s"' % (path_zip, sub))
    sub_ins = glob.glob(os.path.join(sub, '*'))
    # if the zip subfolder contains only one folder, move its content up
    if len(sub_ins) == 1:
        [shutil.move(p, sub) for p in glob.glob(os.path.join(sub_ins[0], '*'))]
    submission_dirs.append(sub)
```
Parse submissions and compute the final metrics. This can be computed just once.
**NOTE:** you can skip this step if you have already computed metrics in JSON files
```
import bm_ANHIR.evaluate_submission
bm_ANHIR.evaluate_submission.REQUIRE_OVERLAP_INIT_TARGET = False
tqdm_bar = tqdm.tqdm(total=len(submission_dirs))
for path_sub in submission_dirs:
    tqdm_bar.set_description(path_sub)
    # run the evaluation with details
    path_json = bm_ANHIR.evaluate_submission.main(
        path_experiment=path_sub, path_table=PATH_TABLE, path_dataset=PATH_LNDS_PROVIDED,
        path_reference=PATH_LNDS_COMPLATE, path_comp_bm=PATH_COMP_BM, path_output=path_sub,
        min_landmarks=1., details=True, allow_inverse=True)
    # copy the metrics JSON next to the other results, named by the participant
    shutil.copy(os.path.join(path_sub, 'metrics.json'),
                os.path.join(PATH_RESULTS, os.path.basename(path_sub) + '.json'))
    tqdm_bar.update()
```
### Load parsed measures from each experiment
```
submission_paths = sorted(glob.glob(os.path.join(PATH_RESULTS, '*.json')))
submissions = {}
# loading all participants metrics
for path_sub in tqdm.tqdm(submission_paths, desc='loading'):
    with open(path_sub, 'r') as fp:
        metrics = json.load(fp)
    # rename tissue types according to the new LUT
    for case in metrics['cases']:
        metrics['cases'][case][FIELD_TISSUE] = metrics['cases'][case][FIELD_TISSUE]
    m_agg = {stat: metrics['aggregates'][stat] for stat in metrics['aggregates']}
    metrics['aggregates'] = m_agg
    submissions[os.path.splitext(os.path.basename(path_sub))[0]] = metrics
print ('Users: %r' % submissions.keys())
# split the particular fields inside the measured items
users = list(submissions.keys())
print ('Fields: %r' % submissions[users[0]].keys())
user_aggreg = {u: submissions[u]['aggregates'] for u in users}
user_computer = {u: submissions[u]['computer'] for u in users}
user_cases = {u: submissions[u]['cases'] for u in users}
print ('required-landmarks: %r' % [submissions[u]['required-landmarks'] for u in users])
tissues = set(user_cases[users[0]][cs][FIELD_TISSUE] for cs in user_cases[users[0]])
print ('found tissues: %r' % sorted(tissues))
```
Define colors and markers later used in charts
```
METHODS = sorted(submissions.keys())
# https://seaborn.pydata.org/tutorial/color_palettes.html
# https://www.codecademy.com/articles/seaborn-design-ii
COLOR_PALETTE = "Set1"
METHOD_CMAP = sns.color_palette(COLOR_PALETTE, len(submissions))
METHOD_COLORS = {m: METHOD_CMAP[i] for i, m in enumerate(METHODS)}
def list_methods_colors(methods):
    return [METHOD_COLORS[m] for m in methods]

def cmap_methods(method):
    return METHOD_COLORS[method]
# define cyclic buffer of markers for methods
# https://matplotlib.org/3.1.1/api/markers_api.html
METHOD_MARKERS = dict(zip(submissions.keys(), list('.*^v<>pPhHXdD')))
# METHOD_MARKERS = dict(zip(submissions.keys(), list('.1234+xosD^v<>')))
def list_methods_markers(methods):
    return [METHOD_MARKERS[m] for m in methods]
# display(pd.DataFrame([METHOD_COLORS, METHOD_MARKERS]).T)
```
## Compute ranked measures
Extend the aggregated statistics with rank measures: compute the ranking over all cases for each selected field and average it.
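The idea behind `compute_ranking` can be sketched in plain Python (a simplified stand-in, not the BIRL implementation; the method names and scores below are invented): within each case, methods are ranked by their score, and the per-method ranks are then averaged over cases.

```
# toy per-case scores (lower rTRE is better) for three hypothetical methods
cases = {
    'case-1': {'methodA': 0.01, 'methodB': 0.03, 'methodC': 0.02},
    'case-2': {'methodA': 0.05, 'methodB': 0.02, 'methodC': 0.06},
}

avg_rank = {}
for method in ['methodA', 'methodB', 'methodC']:
    ranks = []
    for scores in cases.values():
        # rank 1 goes to the best (lowest) score within this case
        order = sorted(scores, key=scores.get)
        ranks.append(order.index(method) + 1)
    avg_rank[method] = sum(ranks) / len(ranks)
print(avg_rank)  # → {'methodA': 1.5, 'methodB': 2.0, 'methodC': 2.5}
```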
```
for field, field_agg in [('rTRE-Median', 'Median-rTRE'),
                         ('rTRE-Max', 'Max-rTRE')]:
    # compute ranking per user in the selected metric `field` over the whole dataset
    user_cases = compute_ranking(user_cases, field)
    for user in users:
        # iterate over robust-only and all cases
        for robust in [True, False]:
            # choose only robust cases if required
            vals = [user_cases[user][cs][field + '_rank'] for cs in user_cases[user]
                    if (robust and user_cases[user][cs]['Robustness']) or (not robust)]
            s_robust = '-Robust' if robust else ''
            user_aggreg[user]['Average-Rank-' + field_agg + s_robust] = np.mean(vals)
            user_aggreg[user]['STD-Rank-' + field_agg + s_robust] = np.std(vals)
        # iterate over all tissue kinds
        for tissue in tissues:
            vals = [user_cases[user][cs][field + '_rank'] for cs in user_cases[user]
                    if user_cases[user][cs][FIELD_TISSUE] == tissue]
            user_aggreg[user]['Average-Rank-' + field_agg + '__tissue_' + tissue + '__All'] = np.mean(vals)
            user_aggreg[user]['STD-Rank-' + field_agg + '__tissue_' + tissue + '__All'] = np.std(vals)
```
Show the raw table with **global** statistics (joint training and testing/evaluation).
```
cols_all = [col for col in pd.DataFrame(user_aggreg).T.columns
if not any(n in col for n in [VAL_STATUS_TRAIN, VAL_STATUS_TEST, '_tissue_', '_any'])]
cols_general = list(filter(lambda c: not c.endswith('-Robust'), cols_all))
dfx = pd.DataFrame(user_aggreg).T.sort_values('Average-Median-rTRE')[cols_general]
display(dfx)
# Exporting results to CSV
dfx.sort_index().to_csv(os.path.join(PATH_TEMP, 'results_overall.csv'))
```
Only **robust** metrics (computed over image pairs with robustness higher than a threshold)
```
cols_robust = list(filter(lambda c: c.endswith('-Robust'), cols_all))
dfx = pd.DataFrame(user_aggreg).T.sort_values('Average-Median-rTRE')[cols_robust]
dfx.columns = list(map(lambda c: c.replace('-Robust', ''), dfx.columns))
display(dfx)
```
Define the method ordering (by average rank) which shall be used later...
```
col_ranking = 'Average-Rank-Median-rTRE'
dfx = pd.DataFrame(user_aggreg).T.sort_values(col_ranking)
# display(dfx[[col_ranking]])
users_ranked = dfx.index
print('Ordered methods by "%s": %s' % (col_ranking, list(users_ranked)))
```
## Basic visualizations
Show general results in a chart...
```
dfx = pd.DataFrame(user_aggreg)[users_ranked].T[list(filter(lambda c: not c.startswith('STD-'), cols_general))]
ax = dfx.T.plot.bar(figsize=(len(cols_general) * 0.7, 4), grid=True, logy=True, rot=75, color=list_methods_colors(dfx.index))
# ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.35),
# ncol=int(len(users) / 1.5), fancybox=True, shadow=True)
ax.legend(bbox_to_anchor=(1.1, 0.95))
ax.get_figure().tight_layout()
ax.get_figure().savefig(os.path.join(PATH_TEMP, 'bars_teams-scores.pdf'))
# plt.savefig(os.path.join(PATH_TEMP, 'fig_teams-scores.pdf'), constrained_layout=True)
for col, name in [('Average-Rank-Median-rTRE', 'ARMrTRE'),
                  ('Average-Median-rTRE', 'AMrTRE'),
                  ('Median-Median-rTRE', 'MMrTRE')]:
    plt.figure(figsize=(4, 2.5))
    dfx = pd.DataFrame(user_aggreg)[users_ranked].T[col].sort_values()
    ax = dfx.plot.bar(grid=True, rot=40, color=list_methods_colors(dfx.index))
    # ax = pd.DataFrame(user_aggreg).T.sort_values(col)[col].plot.bar(grid=True, rot=90, color='blue')
    _ = plt.ylabel(name)
    ax.get_figure().tight_layout()
    ax.get_figure().savefig(os.path.join(PATH_TEMP, 'bar_teams-scores_%s.pdf' % col))
```
Transform the per-case data into a simple flat form with extra columns for the method and the case ID, to be able to draw a violin plot later.
```
dfs_ = []
for usr in users:
    df = pd.DataFrame(user_cases[usr]).T
    df['method'] = usr
    df['case'] = df.index
    dfs_.append(df)
df_cases = pd.concat(dfs_).reset_index()
del dfs_

for col in df_cases.columns:
    try:
        df_cases[col] = pd.to_numeric(df_cases[col])
    except Exception:
        print('skip non-numerical column: "%s"' % col)
# df_cases.head()
```
### Showing several distribution plots
```
def _format_ax(ax, name, use_log=False, vmax=None):
    plt.xticks(rotation=60)
    if use_log:
        ax.set_yscale('log')
    if vmax:
        ax.set_ylim([0, vmax])
    ax.grid(True)
    ax.set_xlabel('')
    ax.set_ylabel(name)
    ax.get_figure().tight_layout()

show_metrics = [('rTRE-Median', 'MrTRE', True, None, 0.01),
                ('rTRE-Max', 'SrTRE', True, None, 0.01),
                ('Robustness', 'Robust.', False, None, 0.05),
                ('Norm-Time_minutes', 'Time [min]', True, 180, 0.1)]

for field, name, log, vmax, bw in show_metrics:
    vals_ = [df_cases[df_cases['method'] == m][field].values for m in users_ranked]
    df_ = pd.DataFrame(np.array(vals_).T, columns=users_ranked)
    fig, ax = plt.subplots(figsize=(5, 3))
    bp = df_.plot.box(ax=ax, showfliers=True, showmeans=True,
                      color=dict(boxes='b', whiskers='b', medians='g', caps='k'),
                      boxprops=dict(linestyle='-', linewidth=1),
                      flierprops=dict(linestyle='-', linewidth=1),
                      medianprops=dict(linestyle='-', linewidth=1),
                      whiskerprops=dict(linestyle='-.', linewidth=1),
                      capprops=dict(linestyle='-', linewidth=1),
                      return_type='dict')
    _format_ax(ax, name, log, vmax)
    ax.get_figure().savefig(os.path.join(PATH_TEMP, 'boxbar_teams-scores_%s.pdf' % field))

for field, name, log, vmax, bw in show_metrics:
    vals_ = [df_cases[df_cases['method'] == m][field].values for m in users_ranked]
    df_ = pd.DataFrame(np.array(vals_).T, columns=users_ranked)
    fig = plt.figure(figsize=(5, 3))
    ax = sns.violinplot(ax=plt.gca(), data=df_, inner="quartile", trim=True, cut=0.,
                        palette=COLOR_PALETTE, linewidth=1.)
    _format_ax(fig.gca(), name, log, vmax)
    fig.gca().grid(True)
    fig.savefig(os.path.join(PATH_TEMP, 'violin_teams-scores_%s.pdf' % field))
```
### Visualise global results
```
fields = ['Average-Max-rTRE', # 'Average-Max-rTRE-Robust',
'Average-Median-rTRE', # 'Average-Median-rTRE-Robust',
'Median-Median-rTRE', # 'Median-Median-rTRE-Robust',
# 'Average-Rank-Max-rTRE', 'Average-Rank-Median-rTRE',
'Average-Norm-Time', # 'Average-Norm-Time-Robust',
'Average-Robustness',]
df = pd.DataFrame(user_aggreg)[users_ranked].T[fields]
df['Average-Weakness'] = 1 - df['Average-Robustness']
del df['Average-Robustness']
radar = RadarChart(df, fig=plt.figure(figsize=(5, 4)), colors=list_methods_colors(df.index), fill_alpha=0.02)
lgd = radar.ax.legend(loc='lower center', bbox_to_anchor=(0.5, 1.15), ncol=int(len(users) / 1.5))
radar.fig.savefig(os.path.join(PATH_TEMP, 'radar_teams-scores.pdf'),
bbox_extra_artists=radar._labels + [lgd], bbox_inches='tight')
```
### Visual statistic over tissue types
Present some statistics depending on the tissue types...
```
cols_all = pd.DataFrame(user_aggreg).T.columns
col_avg_med_tissue = sorted(filter(
lambda c: 'Median-rTRE_tissue' in c and not 'Rank' in c and 'Median-Median-' not in c, cols_all))
col_robust_tissue = sorted(filter(
lambda c: 'Average-Robustness_tissue' in c and not 'Rank' in c, cols_all))
params_tuple = [
    (col_avg_med_tissue, 'Avg. Median rTRE', 'Average-Median-rTRE__tissue_{}__All', True),
    (col_robust_tissue, 'Avg. Robust', 'Average-Robustness__tissue_{}__All', False),
]
for cols, desc, drop, use_log in params_tuple:
    # print('"%s" with sample columns: %s' % (desc, cols[:3]))
    dfx = pd.DataFrame(user_aggreg)[users_ranked].T[cols]
    fig, extras = draw_scatter_double_scale(
        dfx, colors=list_methods_colors(users_ranked), ax_decs={desc: None},
        idx_markers=list_methods_markers(users_ranked), xlabel='Methods',
        figsize=(2 + len(dfx.columns) * 0.95, 3),
        legend_style=dict(bbox_to_anchor=(1.15, 0.95), ncol=1),
        x_spread=(0.4, 5))
    extras['ax1'].set_xticks(range(len(cols)))
    extras['ax1'].set_xticklabels(list(map(lambda c: col_metric_rename(c.replace(drop, '')), cols)),
                                  rotation=45, ha="center")
    _format_ax(extras['ax1'], desc, use_log, vmax=None)
    name = ''.join(filter(lambda s: s not in '(.)', desc)).replace(' ', '-')
    fig.savefig(os.path.join(PATH_TEMP, 'scat_teams-scores_tissue-%s.pdf' % name),
                bbox_extra_artists=(extras['legend'],), bbox_inches='tight')
```
# Latent Dirichlet Allocation for Text Data
In this assignment you will
* apply standard preprocessing techniques on Wikipedia text data
* use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model
* explore and interpret the results, including topic keywords and topic assignments for documents
Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of *mixed membership*. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one.
With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document.
**Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook.
## Text Data Preprocessing
We'll start by importing our familiar Wikipedia dataset.
The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read [this page](https://turi.com/download/upgrade-graphlab-create.html).
```
import graphlab as gl
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(gl.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
# import wiki data
wiki = gl.SFrame('../data/people_wiki.gl/')
wiki
```
In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a _bag of words_, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be.
Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from GraphLab Create:
```
wiki_docs = gl.text_analytics.count_words(wiki['text'])
wiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)
```
## Model fitting and interpretation
In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module.
Note: This may take several minutes to run.
```
topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)
```
GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.
```
topic_model
```
It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will
* get the top words in each topic and use these to identify topic themes
* predict topic distributions for some example documents
* compare the quality of LDA "nearest neighbors" to the NN output from the first assignment
* understand the role of model hyperparameters alpha and gamma
## Load a fitted topic model
The method used to fit the LDA model is a _randomized algorithm_, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization.
It is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model.
We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.
```
topic_model = gl.load_model('../data/lda/lda_assignment_topic_model')
```
# Identifying topic themes by top words
We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA.
In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme _and_ that all the topics are relatively distinct.
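To make this concrete, here is a toy sketch with an invented five-word vocabulary: each row of a topic-word matrix is one topic, i.e. a probability distribution over the vocabulary, and the "top words" are simply its highest-probability entries.

```
import numpy as np

vocab = np.array(['university', 'game', 'election', 'research', 'team'])
# each row is one topic: a probability distribution over the vocabulary
topic_word = np.array([
    [0.40, 0.05, 0.05, 0.45, 0.05],  # a "science"-flavoured topic
    [0.05, 0.50, 0.05, 0.05, 0.35],  # a "sports"-flavoured topic
])
assert np.allclose(topic_word.sum(axis=1), 1.0)

# top-2 words per topic, by descending probability
for row in topic_word:
    print(vocab[np.argsort(row)[::-1][:2]].tolist())
# → ['research', 'university'] then ['game', 'team']
```

Note how the same word ('university', say) carries different probability in each topic; that asymmetry is what gives each topic its theme.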
We can use the GraphLab Create function get_topics() to view the top words (along with their associated probabilities) from each topic.
__Quiz Question:__ Identify the top 3 most probable words for the first topic.
```
df=topic_model.get_topics()
df.head()
df[df['topic']==0].print_rows(10,3)
```
__Quiz Question:__ What is the sum of the probabilities assigned to the top 50 words in the 3rd topic?
```
temp = topic_model.get_topics(num_words=50)
# temp.print_rows(200, 3)
# sum the probabilities of the top 50 words in the 3rd topic (topic id 2)
temp[temp['topic'] == 2]['score'].sum()
```
Let's look at the top 10 words for each topic to see if we can identify any themes:
```
[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]
```
We propose the following themes for each topic:
- topic 0: Science and research
- topic 1: Team sports
- topic 2: Music, TV, and film
- topic 3: American college and politics
- topic 4: General politics
- topic 5: Art and publishing
- topic 6: Business
- topic 7: International athletics
- topic 8: Great Britain and Australia
- topic 9: International music
We'll save these themes for later:
```
themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \
'art and publishing','Business','international athletics','Great Britain and Australia','international music']
```
### Measuring the importance of top words
We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.
We'll do this with two visualizations of the weights for the top words in each topic:
- the weights of the top 100 words, sorted by the size
- the total weight of the top 10 words
Here's a plot for the top 100 words by weight in each topic:
```
for i in range(10):
    plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score'])
plt.xlabel('Word rank')
plt.ylabel('Probability')
plt.title('Probabilities of Top 100 Words in each Topic')
```
In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!
Next we plot the total weight assigned by each topic to its top 10 words:
```
top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)]
ind = np.arange(10)
width = 0.5
fig, ax = plt.subplots()
ax.bar(ind-(width/2),top_probs,width)
ax.set_xticks(ind)
plt.xlabel('Topic')
plt.ylabel('Probability')
plt.title('Total Probability of Top 10 Words in each Topic')
plt.xlim(-0.5,9.5)
plt.ylim(0,0.15)
plt.show()
```
Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.
Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all.
# Topic distributions for some example documents
As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic.
We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition.
Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignment variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a _distribution_ over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama:
```
obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]])
pred1 = topic_model.predict(obama, output_type='probability')
pred2 = topic_model.predict(obama, output_type='probability')
print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))
```
To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document:
```
def average_predictions(model, test_document, num_trials=100):
    avg_preds = np.zeros((model.num_topics))
    for i in range(num_trials):
        avg_preds += model.predict(test_document, output_type='probability')[0]
    avg_preds = avg_preds/num_trials
    result = gl.SFrame({'topics':themes, 'average predictions':avg_preds})
    result = result.sort('average predictions', ascending=False)
    return result

print(average_predictions(topic_model, obama, 100))
```
__Quiz Question:__ What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions.
```
bush = gl.SArray([wiki_docs[int(np.where(wiki['name']=='George W. Bush')[0])]])
print(average_predictions(topic_model, bush, 100))
```
__Quiz Question:__ What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions.
```
gerrard = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Steven Gerrard')[0])]])
print(average_predictions(topic_model, gerrard, 100))
```
# Comparing LDA to nearest neighbors for document retrieval
So far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations.
In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment.
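The distance computation behind this retrieval, cosine distance between topic-proportion vectors, can be sketched directly with NumPy (an illustration of the idea, not GraphLab Create's implementation):

```python
import numpy as np

def cosine_distance(p, q):
    """Cosine distance between two topic-proportion vectors (0 = same direction)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 1.0 - np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))

# Two documents dominated by the same topic are close; a document
# dominated by a different topic is far (topic vectors are made up here).
doc_a = [0.7, 0.1, 0.1, 0.1]
doc_b = [0.6, 0.2, 0.1, 0.1]
doc_c = [0.1, 0.1, 0.1, 0.7]
print(cosine_distance(doc_a, doc_b))  # small
print(cosine_distance(doc_a, doc_c))  # much larger
```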
We'll start by creating the LDA topic distribution representation for each document:
```
wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')
```
Next we add the TF-IDF document representations:
```
wiki['word_count'] = gl.text_analytics.count_words(wiki['text'])
wiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])
```
For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model:
```
model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
model_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'],
method='brute_force', distance='cosine')
```
Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist:
```
model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
```
Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents.
With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada.
Our LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be "close" if they share similar themes, even though they may not share many of the same keywords. For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies.
__Quiz Question:__ Using the TF-IDF representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use `mylist.index(value)` to find the index of the first instance of `value` in `mylist`.)
__Quiz Question:__ Using the LDA representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use `mylist.index(value)` to find the index of the first instance of `value` in `mylist`.)
```
rets = model_tf_idf.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)
list(rets['reference_label']).index('Mariano Rivera')
rets2 = model_lda_rep.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)
list(rets2['reference_label']).index('Mariano Rivera')
```
# Understanding the role of LDA model hyperparameters
Finally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic.
In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words.
Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model.
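The smoothing effect can be illustrated with a toy calculation. This is a sketch of the general Dirichlet-smoothing idea, where the pseudo-count `prior` plays the role of alpha (over topics) or gamma (over words); it is not GraphLab Create's internal update:

```python
import numpy as np

def smoothed_proportions(counts, prior):
    """Add a symmetric pseudo-count to raw counts before normalizing."""
    counts = np.asarray(counts, float)
    return (counts + prior) / (counts + prior).sum()

# Made-up raw counts, heavily concentrated on the first entry.
counts = np.array([90, 5, 3, 1, 1])
print(smoothed_proportions(counts, 0.1))   # low prior: close to raw proportions
print(smoothed_proportions(counts, 50.0))  # high prior: pulled toward uniform
```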
__Quiz Question:__ What was the value of alpha used to fit our original topic model?
```
topic_model
```
__Quiz Question:__ What was the value of gamma used to fit our original topic model? Remember that GraphLab Create uses "beta" instead of "gamma" to refer to the hyperparameter that influences topic distributions over words.
We'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model:
- tpm_low_alpha, a model trained with alpha = 1 and default gamma
- tpm_high_alpha, a model trained with alpha = 50 and default gamma
```
tpm_low_alpha = gl.load_model('../data/lda/lda_low_alpha')
tpm_high_alpha = gl.load_model('../data/lda/lda_high_alpha')
```
### Changing the hyperparameter alpha
Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha.
```
a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1]
b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1]
c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1]
ind = np.arange(len(a))
width = 0.3
def param_bar_plot(a, b, c, ind, width, ylim, param, xlab, ylab):
    fig = plt.figure()
    ax = fig.add_subplot(111)
    b1 = ax.bar(ind, a, width, color='lightskyblue')
    b2 = ax.bar(ind+width, b, width, color='lightcoral')
    b3 = ax.bar(ind+(2*width), c, width, color='gold')
    ax.set_xticks(ind+width)
    ax.set_xticklabels(range(10))
    ax.set_ylabel(ylab)
    ax.set_xlabel(xlab)
    ax.set_ylim(0, ylim)
    ax.legend(handles=[b1, b2, b3], labels=['low '+param, 'original model', 'high '+param])
    plt.tight_layout()

param_bar_plot(a, b, c, ind, width, ylim=1.0, param='alpha',
               xlab='Topics (sorted by weight)', ylab='Topic Probability for Obama Article')
```
Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.
__Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the **low alpha** model? Use the average results from 100 topic predictions.
```
paul = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]])
preds_low_alpha = average_predictions(tpm_low_alpha, paul, 100)
#preds['average predictions'] > 0.3
(np.array(preds_low_alpha['average predictions'] > 0.3, dtype=int)\
| np.array(preds_low_alpha['average predictions'] < 0.05, dtype=int)).sum()
```
__Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the **high alpha** model? Use the average results from 100 topic predictions.
```
preds_high_alpha = average_predictions(tpm_high_alpha, paul, 100)
(np.array(preds_high_alpha['average predictions'] > 0.3, dtype=int)\
|np.array(preds_high_alpha['average predictions'] < 0.05, dtype=int)).sum()
```
### Changing the hyperparameter gamma
Just as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models.
Now we will consider the following two models:
- tpm_low_gamma, a model trained with gamma = 0.02 and default alpha
- tpm_high_gamma, a model trained with gamma = 0.5 and default alpha
```
del tpm_low_alpha
del tpm_high_alpha
tpm_low_gamma = gl.load_model('../data/lda/lda_low_gamma')
tpm_high_gamma = gl.load_model('../data/lda/lda_high_gamma')
a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
ind = np.arange(len(a_top))
width = 0.3
param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma',
xlab='Topics (sorted by weight of top 100 words)',
ylab='Total Probability of Top 100 Words')
param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma',
xlab='Topics (sorted by weight of bottom 1000 words)',
ylab='Total Probability of Bottom 1000 Words')
```
From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary.
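The `cdf_cutoff` computation used in the questions below amounts to counting how many of the highest-weight words are needed to reach a target cumulative probability. In NumPy terms (an illustrative sketch with made-up weight vectors, not GraphLab Create's implementation):

```python
import numpy as np

def words_to_reach(weights, cutoff=0.5):
    """Number of top-weight words whose cumulative probability first reaches `cutoff`."""
    w = np.sort(np.asarray(weights, float))[::-1]   # sort weights descending
    return int(np.searchsorted(np.cumsum(w), cutoff)) + 1

peaked = [0.4, 0.3, 0.1] + 20 * [0.01]  # low-gamma-like: mass on a few words
flat = 25 * [0.04]                      # high-gamma-like: mass spread out
print(words_to_reach(peaked))  # prints 2
print(words_to_reach(flat))    # prints 13
```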
__Quiz Question:__ For each topic of the **low gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get\_topics() function from GraphLab Create with the cdf\_cutoff argument).
```
nums = [len(tpm_low_gamma.get_topics([i], num_words=100000, cdf_cutoff=0.5)) for i in range(10)]
np.array(nums).mean()
```
__Quiz Question:__ For each topic of the **high gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get\_topics() function from GraphLab Create with the cdf\_cutoff argument).
```
nums2=[len(tpm_high_gamma.get_topics([i],num_words=100000, cdf_cutoff=0.5)) for i in range(10)]
np.array(nums2).mean()
```
We have now seen how the hyperparameters alpha and gamma influence the characteristics of our LDA topic model, but we haven't said anything about what settings of alpha or gamma are best. We know that these parameters are responsible for controlling the smoothness of the topic distributions for documents and word distributions for topics, but there's no simple conversion between smoothness of these distributions and quality of the topic model. In reality, there is no universally "best" choice for these parameters. Instead, finding a good topic model requires that we be able to both explore the output (as we did by looking at the topics and checking some topic predictions for documents) and understand the impact of hyperparameter settings (as we have in this section).
# Basic Functionality
ghoul version 0.1.0
## Collapsing symbols
The purpose of ghoul is to randomly generate internally-consistent python objects.
The basic unit in ghoul is the `Symbol`. A symbol is an object in a "superposition": it contains many possible states, until it "collapses" to a concrete value.
```
from ghoul import Symbol
# create a new symbol
number = Symbol([1, 2, 3])
# check the possible values of number
number
# force number to adopt a concrete value
number.Observe()
# after this, number no longer contains other possibilities
number
```
Upon being observed, `number` collapsed to a concrete value of 2, but it could have taken any of its potential values.
## Symbolic objects
Symbols can represent superpositions of generic python objects.
```
class Fruit(object):
    pass

class Apple(Fruit):
    def __init__(self):
        self.name = 'apple'
        self.color = Symbol(['green', 'red'])

class Pear(Fruit):
    def __init__(self):
        self.name = 'pear'
        self.color = Symbol(['green', 'yellow'])
# define a superposed instance of Fruit
fruit = Symbol(Fruit)
# check the possible attributes of the fruit
print(fruit)
print(fruit.name)
print(fruit.color)
```
`fruit` is in a superposition of Apple and Pear, and this is reflected in the superposed values of its attributes `name` and `color`.
## Top-down collapsing
```
# collapse the fruit
fruit.Observe()
# inspect its new values
print(fruit)
print(fruit.name)
print(fruit.color)
```
After collapsing `fruit` to an instance of `Pear`, we know its name must be `'pear'`, but its color is still undefined, being either green or yellow. It is no longer possible for `fruit` to be red.
## Bottom-up collapsing
Critically, when any attribute of the object is observed and takes on a concrete value, all of its superposed attributes will change as necessary to maintain consistency with the observation.
### The consistency requirement:
**The state of a symbolic object will always be consistent with past observations.**
```
# create a new superposed instance of Fruit
fruit = Symbol(Fruit)
# observe the fruit's name
fruit.name.Observe()
```
After observing `fruit.name`, it collapsed to `'apple'`. `fruit` must now adjust its possible values to be consistent with this name.
`fruit` had two possible values, an `Apple` object and a `Pear` object. If `Pear` remained a possibility, then it would be possible for `fruit` to collapse to a `Pear` named `'apple'`.
This is inconsistent with the definition of the `Pear` object, so once `fruit.name` collapses to `'apple'`, `Pear` is removed as a possibility in order to preserve internal consistency.
```
# check all of the attribute values now
print(fruit)
print(fruit.name)
print(fruit.color)
```
Knowing that `fruit` is an `Apple` object still does not tell us everything about its color, because an `Apple` may still be either green or red.
## Minimal restriction
Both `Apple` and `Pear` are consistent with the fruit having a green color. Let's see what happens if we force `fruit` to be green:
```
# create a new superposed instance of Fruit
fruit = Symbol(Fruit)
# force fruit to be green
fruit.color.Collapse('green')
# check all of the attribute values now
print(fruit)
print(fruit.name)
print(fruit.color)
```
The fruit being green is not sufficient to discriminate between an `Apple` and a `Pear`. Therefore `fruit` may be either an `Apple` or a `Pear` object, and `fruit.name` can be either `'apple'` or `'pear'`.
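The restriction behaviour described above can be imitated with plain Python. The sketch below is a toy model of the idea (candidate states filtered by observations), not ghoul's implementation:

```python
import random

# Toy model: a "superposition" is just the list of candidate states that
# remain consistent with every observation made so far.
candidates = [
    {"class": "Apple", "name": "apple", "color": "green"},
    {"class": "Apple", "name": "apple", "color": "red"},
    {"class": "Pear",  "name": "pear",  "color": "green"},
    {"class": "Pear",  "name": "pear",  "color": "yellow"},
]

def collapse(states, attr, value):
    """Keep only the states consistent with observing attr == value."""
    return [s for s in states if s[attr] == value]

# Forcing color == 'green' removes red apples and yellow pears, but both
# classes survive: the observation restricts the states as little as possible.
green = collapse(candidates, "color", "green")
print(sorted({s["class"] for s in green}))  # prints ['Apple', 'Pear']
print(random.choice(green))                 # one concrete observation
```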
## Subclass restriction
Let's define a few subclasses of `Apple`:
```
class GrannySmith(Apple):
    def __init__(self):
        super().__init__()
        self.color = 'green'

class Honeycrisp(Apple):
    def __init__(self):
        super().__init__()
        self.color = 'red'
```
Now when we define `fruit` as a symbolic instance of `Fruit`, it takes on potential values of `GrannySmith`, `Honeycrisp`, or `Pear`:
```
# create a new superposed instance of Fruit
fruit = Symbol(Fruit)
print(fruit)
print(fruit.name)
print(fruit.color)
```
We force `fruit` to collapse to `Apple`. It therefore restricts its potential values to only those which are instances of `Apple` or subclasses of `Apple`:
```
fruit.Collapse(Apple)
print(fruit)
print(fruit.name)
print(fruit.color)
```
```
import sys, os, glob
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import logging
# from scipy.interpolate import UnivariateSpline, interp1d
from statsmodels.stats.multicomp import pairwise_tukeyhsd, MultiComparison
from statsmodels.stats.libqsturng import psturng
import scipy.stats as stats
# logging.basicConfig(stream=sys.stdout, format='%(asctime)s - %(levelname)s - %(message)s', level=logging.DEBUG)
logging.basicConfig(stream=sys.stdout, format='%(asctime)s - %(levelname)s - %(message)s', level=logging.INFO)
%matplotlib inline
font = {'family' : 'Arial',
'size' : 7}
matplotlib.rc('font', **font)
plt.rcParams['svg.fonttype'] = 'none'
# Make a folder if it is not already there to store exported figures
!mkdir -p ../jupyter_figures
# Facility Functions
def tukeyTest(data, groups, alpha=0.05):
    '''Perform pairwise Tukey test for the data by groups
    '''
    # pairwise comparisons using Tukey's test, calculating p-values
    res = pairwise_tukeyhsd(data, groups, alpha)
    print("Summary of test:\n", res)
    # print(dir(res))  # prints out all attributes of an object
    pVal = psturng(np.abs(res.meandiffs / res.std_pairs), len(res.groupsunique), res.df_total)
    print("p values of all pair-wise tests:\n", pVal)
    return res
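# --- Illustrative sanity check of Tukey's HSD on synthetic data ---
# (not part of the assay; group names and effect sizes here are made up)
# Three groups where only "c" has a shifted mean: the a-c and b-c
# comparisons should be flagged as significantly different.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd
_rng = np.random.RandomState(0)
_demo_data = np.concatenate([_rng.normal(m, 1.0, 30) for m in (0.0, 0.0, 3.0)])
_demo_groups = np.repeat(["a", "b", "c"], 30)
print(pairwise_tukeyhsd(_demo_data, _demo_groups, alpha=0.05))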
def plotDensityBarSwarm(groups, density, outputFigPath, yTickSpacing=30,
                        plot_order=["sg-Control", "sg1-Cdh1", "sg2-Cdh1", "sg-Itgb1"],
                        yMax=None, yTicks=None, fig_width=0.7, fig_height=1.0):
    '''plot bar and swarm plots of cell density data, save the figure to outputFigPath
    Note: error bar here is 95% confidence interval by bootstrapping
    '''
    fig = plt.figure(figsize=(fig_width, fig_height), dpi=300)
    ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
    ax = sns.swarmplot(groups, density, color="blue", size=1, alpha=.6,
                       order=plot_order)
    ax = sns.barplot(groups, density, color="Gray", alpha=1.0,
                     errwidth=.7, errcolor="k", capsize=.2, ci=95,
                     order=plot_order)
    if yMax is None:
        yMax = int(max(density)/5 + 1) * 5
    if yTicks is None:
        spacing = yTickSpacing
        yTicks = [spacing*i for i in range(int(yMax/spacing) + 1)]
    plt.ylim(0, yMax)
    plt.yticks(yTicks)
    plt.xlabel(None)
    # plt.ylabel("Attached cells / $mm^2$")
    plt.ylabel("Attached cells / mm2")
    ax.set_xticklabels(labels=plot_order, rotation=45, ha="right")
    for o in fig.findobj():
        o.set_clip_on(False)
    for o in ax.findobj():
        o.set_clip_on(False)
    plt.savefig(outputFigPath)
    return ax
# The density when 1E5 total cells evenly attach to a circular area with 35 mm diameter
DENSITY_UPPER = 1E5 / (np.pi*17.5*17.5)
DENSITY_UPPER
# This is the normalization constant in square mm
# Every field of view (FOV) is identical with 2048x2044 pixels
# The pixel size is 0.65 um
FOV_AREA = 2048 * 0.65 * 2044 * 0.65 / 1000 / 1000
FOV_AREA
# Read in and clean up the data for 2 hour fixed E-cadherin coated surface
#
# Each spreadsheet contains two columns: file name and the cell counts
#
# Each experimental condition has 3 or 4 replicates (3 or 4 wells); for each
# well we took 13 field-of-view images
#
# Nomenclature:
#
# cell_line D193 D267 D266 D301
# cell_id* 1 2 3 4
# sgRNA Control sg1-Cdh1 sg2-Cdh1 sg-Itgb1
#
# * cell_id is used for denoting the wells.
# For example, 1-1, 1-2, 1-3 and 1-4 denote 4 wells (replicates) for cell line #1, which is D193
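# The annotate-and-collect pattern repeated in the loading cells below can be
# factored into a small helper (a sketch only; names are illustrative and the
# original cells are kept as-is). It assumes counts_df has exactly
# len(well_labels) * n_fov rows, i.e. n_fov images per well.
import pandas as pd
def annotate_counts(counts_df, incubation_time, cell_line, sgRNA, well_labels, n_fov=13):
    df = counts_df.copy()
    n = len(well_labels) * n_fov
    df["incubation_time"] = n * [incubation_time]
    df["cell_line"] = n * [cell_line]
    df["sgRNA"] = n * [sgRNA]
    df["wells"] = [w for w in well_labels for _ in range(n_fov)]
    return df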
folder = "../data/cell-attachment-assay-count-data/"
fileList = glob.glob(folder + "20200205-Ecad-coating-cell-attachment-2h*.txt")
fileList.sort()
fileList
# 1. 20200205-Ecad-coating-cell-attachment-2h-D193-D301-plate3-1well-each-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[0], header=0, sep="\t")
incubation_time = 2 * 13 * ["2h"]
cell_line = 1 * 13 * ["D193"] + 1 * 13 * ["D301"]
sgRNA = 1 * 13 * ["sg-Control"] + 1 * 13 * ["sg-Itgb1"]
# This is the extra plate in which we did one extra well each for D193 and D301
wells = 13 * ["1-4"] + 13 * ["4-4"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df1 = df
# 2. 20200205-Ecad-coating-cell-attachment-2h-D193-plate1-3wells-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[1], header=0, sep="\t")
incubation_time = 3 * 13 * ["2h"]
cell_line = 3 * 13 * ["D193"]
sgRNA = 3 * 13 * ["sg-Control"]
wells = 13 * ["1-1"] + 13 * ["1-2"] + 13 * ["1-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df2 = df
# 3. 20200205-Ecad-coating-cell-attachment-2h-D266-plate2-3wells-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[2], header=0, sep="\t")
incubation_time = 3 * 13 * ["2h"]
cell_line = 3 * 13 * ["D266"]
sgRNA = 3 * 13 * ["sg2-Cdh1"]
wells = 13 * ["3-1"] + 13 * ["3-2"] + 13 * ["3-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df3 = df
# 4. 20200205-Ecad-coating-cell-attachment-2h-D267-plate1-3wells-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[3], header=0, sep="\t")
incubation_time = 3 * 13 * ["2h"]
cell_line = 3 * 13 * ["D267"]
sgRNA = 3 * 13 * ["sg1-Cdh1"]
wells = 13 * ["2-1"] + 13 * ["2-2"] + 13 * ["2-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df4 = df
# 5. 20200205-Ecad-coating-cell-attachment-2h-D301-plate2-3wells-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[4], header=0, sep="\t")
incubation_time = 3 * 13 * ["2h"]
cell_line = 3 * 13 * ["D301"]
sgRNA = 3 * 13 * ["sg-Itgb1"]
wells = 13 * ["4-1"] + 13 * ["4-2"] + 13 * ["4-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df5 = df
df = pd.concat([df1, df2, df3, df4, df5])
df.reset_index(inplace=True)
df.sort_values(by="cell_line", inplace=True)
df["cell_density"] = df.cell_number / FOV_AREA
df_Ecad_2h = df
df.groupby("sgRNA")["cell_density"].describe()
outputPrefix = "cell_attachment_Ecad_coating_2h_tall"
outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg"
plotDensityBarSwarm(df.sgRNA, df.cell_density, outputFigPath,
yTickSpacing=30, yMax=160,
fig_width=0.7, fig_height=1.5)
# Keep only counts close to the mean to select representative images
df1 = df[df.cell_number>=46]
df2 = df1[df1.cell_number<56]
df2
# # Filter out values close to the mean to select representative images
# df1 = df[df.cell_number>=0]
# df2 = df1[df1.cell_number<10]
# df2
# Read in and clean up the data for 1 hour fixed E-cadherin coated surface
#
# Each spreadsheet contains two columns: file name and the cell counts
#
# Each experimental condition has 3 replicates (3 wells); for each well we
# took 13 field-of-view images.
#
# Note that for D301, one well is in a separate plate because one well of
# the original plate had its glass coverslip fall off during coating
#
# Nomenclature:
#
# cell_line D193 D267 D266 D301
# cell_id* 1 2 3 4
# sgRNA Control sg1-Cdh1 sg2-Cdh1 sg-Itgb1
#
# * cell_id is used for denoting the wells.
# For example, 1-1, 1-2, 1-3 and 1-4 denote 4 wells (replicates) for cell line #1, which is D193
folder = "../data/cell-attachment-assay-count-data/"
fileList = glob.glob(folder + "20200205-Ecad-coating-cell-attachment-1h*.txt")
fileList.sort()
fileList
# 1. 20200205-Ecad-coating-cell-attachment-1h-D193-plate1-3wells-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[0], header=0, sep="\t")
incubation_time = 3 * 13 * ["1h"]
cell_line = 3 * 13 * ["D193"]
sgRNA = 3 * 13 * ["sg-Control"]
wells = 13 * ["1-1"] + 13 * ["1-2"] + 13 * ["1-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df1 = df
# 2. 20200205-Ecad-coating-cell-attachment-1h-D266-plate2-3wells-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[1], header=0, sep="\t")
incubation_time = 3 * 13 * ["1h"]
cell_line = 3 * 13 * ["D266"]
sgRNA = 3 * 13 * ["sg2-Cdh1"]
wells = 13 * ["3-1"] + 13 * ["3-2"] + 13 * ["3-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df2 = df
# 3. 20200205-Ecad-coating-cell-attachment-1h-D267-plate1-3wells-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[2], header=0, sep="\t")
incubation_time = 3 * 13 * ["1h"]
cell_line = 3 * 13 * ["D267"]
sgRNA = 3 * 13 * ["sg1-Cdh1"]
wells = 13 * ["2-1"] + 13 * ["2-2"] + 13 * ["2-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df3 = df
# 4. 20200205-Ecad-coating-cell-attachment-1h-D301-plate2-2wells-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[3], header=0, sep="\t")
incubation_time = 2 * 13 * ["1h"]
cell_line = 2 * 13 * ["D301"]
sgRNA = 2 * 13 * ["sg-Itgb1"]
wells = 13 * ["4-1"] + 13 * ["4-2"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df4 = df
# 5. 20200205-Ecad-coating-cell-attachment-1h-D301-plate3-1well-splitPositions-cell-counts.txt
df = pd.read_csv(fileList[4], header=0, sep="\t")
incubation_time = 1 * 13 * ["1h"]
cell_line = 1 * 13 * ["D301"]
sgRNA = 1 * 13 * ["sg-Itgb1"]
wells = 13 * ["4-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df5 = df
df = pd.concat([df1, df2, df3, df4, df5])
df.reset_index(inplace=True)
df.sort_values(by="cell_line", inplace=True)
df["cell_density"] = df.cell_number / FOV_AREA
df_Ecad_1h = df
outputPrefix = "cell_attachment_Ecad_coating_1h"
# outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg"
outputFigPath = "../jupyter_figures/" + outputPrefix + ".png"
plotDensityBarSwarm(df.sgRNA, df.cell_density, outputFigPath, yTickSpacing=10)
# Read in and clean up the data for 3 hour fixed E-cadherin coated surface
#
# Each spreadsheet contains two columns: file name and the cell counts
#
# Each experimental condition has 3 replicates (3 wells); for each well we
# took 13 field-of-view images.
#
# Nomenclature:
#
# cell_line D193 D267 D266 D301
# cell_id* 1 2 3 4
# sgRNA Control sg1-Cdh1 sg2-Cdh1 sg-Itgb1
#
# * cell_id is used for denoting the wells.
# For example, 1-1, 1-2, 1-3 and 1-4 denote 4 wells (replicates) for cell line #1, which is D193
f1 = "../data/cell-attachment-assay-count-data/20200203-D193-top-D301-bottom-Ecad-splitPositions-cell-counts.txt"
df = pd.read_csv(f1, header=0, sep="\t")
incubation_time = 2 * 3 * 13 * ["3h"]
cell_line = 3 * 13 * ["D193"] + 3 * 13 * ["D301"]
sgRNA = 3 * 13 * ["sg-Control"] + 3 * 13 * ["sg-Itgb1"]
wells = 13 * ["1-1"] + 13 * ["1-2"] + 13 * ["1-3"] + 13 * ["4-1"] + 13 * ["4-2"] + 13 * ["4-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df1 = df
f2 = "../data/cell-attachment-assay-count-data/20200203-D266-top-D267-bottom-Ecad-splitPositions-cell-counts.txt"
df = pd.read_csv(f2, header=0, sep="\t")
incubation_time = 2 * 3 * 13 * ["3h"]
cell_line = 3 * 13 * ["D266"] + 3 * 13 * ["D267"]
sgRNA = 3 * 13 * ["sg2-Cdh1"] + 3 * 13 * ["sg1-Cdh1"]
wells = 13 * ["3-1"] + 13 * ["3-2"] + 13 * ["3-3"] + 13 * ["2-1"] + 13 * ["2-2"] + 13 * ["2-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df2 = df
df = pd.concat([df1, df2])
df.reset_index(inplace=True)
df.sort_values(by="cell_line", inplace=True)
df["cell_density"] = df.cell_number / FOV_AREA
df_Ecad_3h = df
outputPrefix = "cell_attachment_Ecad_coating_3h"
# outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg"
outputFigPath = "../jupyter_figures/" + outputPrefix + ".png"
plotDensityBarSwarm(df.sgRNA, df.cell_density, outputFigPath, yTickSpacing=30)
df = df_Ecad_1h
tukeyTest(df.cell_number, df.sgRNA)
df = df_Ecad_2h
tukeyTest(df.cell_number, df.sgRNA)
df = df_Ecad_3h
tukeyTest(df.cell_number, df.sgRNA)
df = pd.concat([df_Ecad_1h, df_Ecad_2h, df_Ecad_3h])
plot_order=["sg-Control", "sg1-Cdh1", "sg2-Cdh1", "sg-Itgb1"]
sns.barplot(data=df, x="sgRNA", y="cell_density", hue="incubation_time", order=plot_order)
outputPrefix = "cell_attachment_Ecad_coating_all_time_points"
# outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg"
outputFigPath = "../jupyter_figures/" + outputPrefix + ".png"
plt.savefig(outputFigPath)
# Read in and clean up the data for 15-min fixed Matrigel (MG) coated surface
#
# Each spreadsheet contains two columns: file name and the cell counts
#
# Each experimental condition has 3 replicates (3 wells); in each well we
# took 13 field-of-view images.
#
# Nomenclature:
#
# cell_line D193 D267 D266 D301
# cell_id* 1 2 3 4
# sgRNA Control sg1-Cdh1 sg2-Cdh1 sg-Itgb1
#
# * cell_id is used for denoting the wells.
# For example, 1-1, 1-2, 1-3 and 1-4 denote 4 wells (replicates) for cell line #1, which is D193
f1 = "../data/cell-attachment-assay-count-data/20200203-D193-top-D301-bottom-MG-splitPositions-cell-counts.txt"
df = pd.read_csv(f1, header=0, sep="\t")
incubation_time = 2 * 3 * 13 * ["15_min"]
cell_line = 3 * 13 * ["D193"] + 3 * 13 * ["D301"]
sgRNA = 3 * 13 * ["sg-Control"] + 3 * 13 * ["sg-Itgb1"]
wells = 13 * ["1-1"] + 13 * ["1-2"] + 13 * ["1-3"] + 13 * ["4-1"] + 13 * ["4-2"] + 13 * ["4-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df1 = df
f2 = "../data/cell-attachment-assay-count-data/20200203-D266-top-D267-bottom-MG-splitPositions-cell-counts.txt"
df = pd.read_csv(f2, header=0, sep="\t")
incubation_time = 2 * 3 * 13 * ["15_min"]
cell_line = 3 * 13 * ["D266"] + 3 * 13 * ["D267"]
sgRNA = 3 * 13 * ["sg2-Cdh1"] + 3 * 13 * ["sg1-Cdh1"]
wells = 13 * ["3-1"] + 13 * ["3-2"] + 13 * ["3-3"] + 13 * ["2-1"] + 13 * ["2-2"] + 13 * ["2-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df2 = df
df = pd.concat([df1, df2])
df.reset_index(inplace=True)
df.sort_values(by="cell_line", inplace=True)
df["cell_density"] = df.cell_number / FOV_AREA
df_MG_15min = df
df.groupby("sgRNA")["cell_density"].describe()
outputPrefix = "cell_attachment_MG_coating_15min_tall"
outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg"
plotDensityBarSwarm(df.sgRNA, df.cell_density, outputFigPath,
yTickSpacing=30, yMax=160,
fig_width=0.7, fig_height=1.5)
# Keep only values close to the mean to select representative images
df1 = df[df.cell_number>=82]
df2 = df1[df1.cell_number<108]
df2
# Read in and clean up the data for 1 hour (60 min) fixed Matrigel (MG) coated surface
#
# Each spreadsheet contains two columns: file name and the cell counts
#
# Each experimental condition has 3 replicates (3 wells); in each well we
# took 10 field-of-view images.
#
# Nomenclature:
#
# cell_line D193 D267 D266 D301
# cell_id* 1 2 3 4
# sgRNA Control sg1-Cdh1 sg2-Cdh1 sg-Itgb1
#
# * cell_id is used for denoting the wells.
# For example, 1-1, 1-2, 1-3 and 1-4 denote 4 wells (replicates) for cell line #1, which is D193
folder = "../data/cell-attachment-assay-count-data/"
f1 = "2020-02-01-1h-fixed-MG-D193-ABA-splitPositions-cell-counts.txt"
df = pd.read_csv(folder + f1, header=0, sep="\t")
incubation_time = 3 * 10 * ["60 min"]
cell_line = 3 * 10 * ["D193"]
sgRNA = 3 * 10 * ["sg-Control"]
wells = 10 * ["1-1"] + 10 * ["1-2"] + 10 * ["1-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df1 = df
f2 = "2020-02-01-1h-fixed-MG-D266-ABA-splitPositions-cell-counts.txt"
df = pd.read_csv(folder + f2, header=0, sep="\t")
incubation_time = 3 * 10 * ["60 min"]
cell_line = 3 * 10 * ["D266"]
sgRNA = 3 * 10 * ["sg2-Cdh1"]
wells = 10 * ["3-1"] + 10 * ["3-2"] + 10 * ["3-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df2 = df
f3 = "2020-02-01-1h-fixed-MG-D267-ABA-splitPositions-cell-counts.txt"
df = pd.read_csv(folder + f3, header=0, sep="\t")
incubation_time = 3 * 10 * ["60 min"]
cell_line = 3 * 10 * ["D267"]
sgRNA = 3 * 10 * ["sg1-Cdh1"]
wells = 10 * ["2-1"] + 10 * ["2-2"] + 10 * ["2-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df3 = df
f4 = "2020-02-01-1h-fixed-MG-D301-ABA-splitPositions-cell-counts.txt"
df = pd.read_csv(folder + f4, header=0, sep="\t")
incubation_time = 3 * 10 * ["60 min"]
cell_line = 3 * 10 * ["D301"]
sgRNA = 3 * 10 * ["sg-Itgb1"]
wells = 10 * ["4-1"] + 10 * ["4-2"] + 10 * ["4-3"]
df["incubation_time"] = incubation_time
df["cell_line"] = cell_line
df["sgRNA"] = sgRNA
df["wells"] = wells
df4 = df
df = pd.concat([df1, df2, df3, df4])
df.reset_index(inplace=True)
df.sort_values(by="cell_line", inplace=True)
df["cell_density"] = df.cell_number / FOV_AREA
# Optional: drop extreme values caused by clustered cells (only 5 in total)
df.drop(df[ df.cell_number > 500 ].index, inplace=True)
df_MG_60min = df
outputPrefix = "cell_attachment_MG_coating_60min"
# outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg"
outputFigPath = "../jupyter_figures/" + outputPrefix + ".png"
plotDensityBarSwarm(df.sgRNA, df.cell_density, outputFigPath, yTickSpacing=50)
des = df_MG_60min.groupby("wells")["cell_number"].describe()
des
df = df_MG_15min
tukeyTest(df.cell_number, df.sgRNA)
df = df_MG_60min
tukeyTest(df.cell_number, df.sgRNA)
df = pd.concat([df_MG_15min, df_MG_60min])
plot_order=["sg-Control", "sg1-Cdh1", "sg2-Cdh1", "sg-Itgb1"]
sns.barplot(data=df, x="sgRNA", y="cell_density", hue="incubation_time", order=plot_order)
outputPrefix = "cell_attachment_MG_coating_both_time_points"
# outputFigPath = "../jupyter_figures/" + outputPrefix + ".svg"
outputFigPath = "../jupyter_figures/" + outputPrefix + ".png"
plt.savefig(outputFigPath)
```
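The per-file metadata assignment above repeats the same four-column pattern for every spreadsheet. It could be captured in one small helper; this is only a sketch (the `annotate_counts` name and toy numbers are hypothetical, not part of the original pipeline):

```python
import pandas as pd

# Each well contributes n_fov rows, so the per-well labels are repeated
# n_fov times, exactly as in the manual version above.
def annotate_counts(df, incubation_time, cell_lines, sgRNAs, wells, n_fov=13):
    n_wells = len(wells)
    df["incubation_time"] = n_wells * n_fov * [incubation_time]
    df["cell_line"] = [c for c in cell_lines for _ in range(n_fov)]
    df["sgRNA"] = [s for s in sgRNAs for _ in range(n_fov)]
    df["wells"] = [w for w in wells for _ in range(n_fov)]
    return df

# Toy demonstration: 2 wells, 2 images per well
toy = pd.DataFrame({"cell_number": [10, 12, 9, 11]})
toy = annotate_counts(toy, "1h", ["D301", "D301"], ["sg-Itgb1", "sg-Itgb1"],
                      ["4-1", "4-2"], n_fov=2)
print(toy["wells"].tolist())  # ['4-1', '4-1', '4-2', '4-2']
```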
```
NAME = "Robina Shaheen"
DATE = "06242020"
COLLABORATORS = ""
```
# Wildfires in California: Causes and Consequences
Rising carbon dioxide in the atmosphere is contributing to a steady increase in global temperatures. Over the last two decades, humanity has observed record-breaking extreme weather events. A comparison with historical data indicates that the frequency, magnitude, duration, and timing of many of these events around the world have changed significantly (Seneviratne et al., 2018). Wildfires are no exception, as seen in recent years in devastating blazes across the globe (the Amazon forest in 2019, California in 2017-18, and Australia in 2019-2020). In the western U.S., wildfires are projected to increase in both frequency and intensity as the planet warms (Abatzoglou and Williams, 2016). The state of California, with ~40 million residents and ~4.5 million housing units and properties, has experienced the most devastating economic, ecological, and health consequences during and after wildfires (Baylis and Boomhower, 2019; Liao and Kousky, 2020).
During the 2014 wildfires in San Diego, I volunteered to help people in the emergency shelters. Watching the devastating effects of the wildfires on children's and adults' emotional and physical health had a profound impact on me, and moved me at a deeper level to search for potential markers and predictors of these events in nature. This analysis is not only an academic endeavor but also a humble beginning toward understanding the complex interactions between the atmosphere and the biosphere, and how we can plan for a better future for those who suffered the most during these catastrophic wildfires.
The goal of this study is to understand the weather patterns that can trigger wildfires and the consequences of wildfires for atmospheric chemistry, and to explore how we can predict and plan for a better future to mitigate their disastrous consequences.
I have laid out this study in multiple sections:
1. Understanding weather patterns using advanced machine learning tools to identify markers that can be used to forecast future events.
2. Understanding changes in the chemical composition of the atmosphere and identifying markers that can trigger respiratory stress and cardiovascular diseases.
## Picture of an unprecedented wildfire near Camp Pendleton
**Pulgas Fire 2014, San Diego, CA (source= wikimedia)**
<a href="https://en.wikipedia.org/wiki/May_2014_San_Diego_County_wildfires" target="_blank">May 2014 Wildfires</a>.

## Workflow
1. Import packages and modules
2. Import datetime conversion tools between pandas and matplotlib for time series analysis
3. Download air quality data from the EPA website
4. Set working directory to "earth-analytics"
5. Define paths to download data files from data folder 'sd_fires_2014'
6. Import data into dataframes using appropriate functions (date parsing, indexing, removing missing values)
* weather data Jan-Dec. 2014
* Atmospheric gases and particulate matter data Jan - Dec. 2014
* Annual precipitation (2007-2020) San Diego, CA.
7. View the nature and type of the data
8. Use scikit-learn multivariate analysis to predict ozone levels
9. Plot data to view anomalies and identify their sources
10. Discuss plots and conclusions
## Resources
* Environmental Protection Agency, USA. <a href="https://www.epa.gov/outdoor-air-quality-data/" target="_blank">EPA website/ User Guide to download data</a>.
* Precipitation Record (2007-2020), San Diego, CA. <a href="http://www.wx4mt.com/wxraindetail.php?year=2020//" target="_blank"> San Diego Weather, CA</a>.
## Import packages/ modules and Set Working Directory
```
# Import packages/ modules
import os
import numpy as np
import pandas as pd
from pandas.plotting import autocorrelation_plot
from pandas.plotting import lag_plot
import earthpy as et
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.dates import DateFormatter
import seaborn as sns
import datetime
from textwrap import wrap
from statsmodels.formula.api import ols
# Handle date time conversions between pandas and matplotlib
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# Scikit learn to train model and make predictions.
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.linear_model import LinearRegression
# Use white grid plot background from seaborn
sns.set(font_scale=1.5, style="whitegrid")
# Set Working Directory
ea_path = os.path.join(et.io.HOME, 'earth-analytics')
# Set base path to download data
base_path = os.path.join(ea_path, "data")
base_path
```
# Data exploration and analysis
The EPA provides data for the entire state, and this large dataset often slows processing.
Therefore, it is important to select only the data required to check air quality and weather conditions.
I have selected ozone, oxides of nitrogen, and carbon monoxide, which are produced during wildfires.
Additionally, black carbon and particulate matter are emitted during wildfires and are dangerous to inhale.
These datasets will allow me to conduct a preliminary analysis of the effects of wildfires on air quality in San Diego County.
```
file_path21 = os.path.join(base_path, "output_figures",
"sandiego_2014_fires", "air_quality_csv",
"sd_chemical_composition_2014_mean_v02.csv")
# To check if path is created
os.path.exists(file_path21)
# Define relative path to files
file_path1 = os.path.join(base_path, "output_figures",
"sandiego_2014_fires", "air_quality_csv",
"sd_weather_2014_mean_values_only.csv")
# To check if path is created
os.path.exists(file_path1)
# Import csv files into dataframe and ensure date time is imported properly.
sd_weather_2014_df = pd.read_csv(file_path1, parse_dates=['Date Local'],
index_col=['Date Local'])
sd_weather_2014_df.head(3)
# Use pandas scatter matrix function to view relationships between various components of weather system.
pd.plotting.scatter_matrix(sd_weather_2014_df, s=30, figsize=[
8, 6], marker='o', color='r')
plt.show()
```
The figure above shows a Gaussian distribution of temperature and pressure, whereas the relative humidity data are skewed toward lower humidity levels. The wind pattern shows two distinct populations. Both relative humidity and the wind pattern indicate extreme weather changes during Santa Ana events.
The correlation matrix graph shows an inverse correlation between temperature and relative humidity. One can easily see how extremely low relative humidity (<30%) can damage shrubs and other vegetation. The inverse relationship between pressure, temperature, and winds is also apparent from these graphs. The time series analysis has already shown that a high-resolution record of temperature, relative humidity, and wind can serve as a useful indicator of the wildfire season.
# Autocorrelation plot and lag plot
Autocorrelation plots are often used to check for randomness in a time series. This is done by computing autocorrelations for data values at varying time lags. If the time series is random, such autocorrelations should be near zero for all time-lag separations; if it is non-random, one or more of the autocorrelations will be significantly non-zero. The horizontal lines displayed in the plot correspond to the 95% and 99% confidence bands; the dashed line is the 99% confidence band.
1. The autocorrelation plot shows that the weather parameters are strongly related to each other.
2. A lag plot is a special type of scatter plot of a variable (Y) against a lagged copy of itself (X). A lag plot is used to check for:
  a. Outliers (data points with extremely high or low values).
  b. Randomness (data without a pattern).
  c. Serial correlation (where error terms in a time series transfer from one period to another).
  d. Seasonality (periodic fluctuations in time series data that occur at regular intervals).
A lag plot of the San Diego weather data shows two distinct populations: anomalous values during wildfires in the upper-right corner, and normal values that are linearly correlated.
The lag plot helps us choose an appropriate model for machine learning.
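To make the autocorrelation idea concrete, here is a small sketch (not part of the original notebook) that computes the lag-k autocorrelation directly, which is the quantity `autocorrelation_plot` draws for every lag:

```python
import numpy as np

# Lag-k autocorrelation: covariance of the series with a k-shifted copy of
# itself, divided by the overall variance.
def autocorr(x, k):
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    return np.dot(xm[:-k], xm[k:]) / np.dot(xm, xm)

t = np.arange(200)
periodic = np.sin(2 * np.pi * t / 20)    # period of 20 samples
print(autocorr(periodic, 20))            # strongly positive at the period
print(autocorr(periodic, 10))            # strongly negative at half the period
```

A white-noise series, by contrast, gives values near zero at every lag, which is exactly the randomness check described above.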
```
# For this plot we need pandas.plotting import autocorrelation_plot function.
autocorrelation_plot(sd_weather_2014_df)
# For the lag plot we need to import pandas.plotting import lag_plot functions
plt.figure()
lag_plot(sd_weather_2014_df)
```
# The Chemical Composition of the Atmosphere
The atmosphere is composed of gases and aerosols (liquid and solid particles ranging in size from a few nanometers to micrometers).
In the cells below we explore machine learning models and statistics to understand the relations between various parameters. We will use scikit-learn's multivariate analytical tools to predict ozone concentrations. Ozone is a strong oxidant and a very irritating gas: it can enter the respiratory system and damage its thin linings by producing free radicals.
# Scikit-learn
Scikit-learn provides simple and efficient tools for predictive data analysis.
It is designed for machine learning in Python and built on NumPy, SciPy, and matplotlib.
In this notebook I have employed scikit-learn to understand the relationship between ozone and its precursors.
Ozone formation depends on the presence of oxides of nitrogen, carbon monoxide, and volatile organic compounds (VOCs).
However, VOCs are not measured at all stations, and a few stations have values for only half of the year.
These missing values cannot be predicted or filled due to the spatial variability of their sources.
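The remaining gaps in the gas data are filled below by carrying the last observation forward. As a quick sketch of that behavior (toy numbers, not the EPA data):

```python
import pandas as pd

# Forward fill: each NaN is replaced by the most recent preceding observation.
s = pd.Series([30.0, None, None, 28.0, None])
print(s.ffill().tolist())  # [30.0, 30.0, 30.0, 28.0, 28.0]
```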
```
# Import dataframe and view columns
sd_atm_df = pd.read_csv(file_path21, parse_dates=['Date Local'],
index_col=['Date Local'])
sd_atm_df.head(2)
sd_atm_df.columns
# Understanding nature of the data
sd_atm_df.describe()
# To check for empty columns; False means no empty columns
sd_atm_df.isnull().any()
# Forward-fill empty cells with the most recent preceding value
sd_atm_df = sd_atm_df.fillna(method='ffill')
# selected data frame for training
X = sd_atm_df[['NO2 (ppb)', 'CO_ppm', 'PM2.5 (ug/m3)']].values
y = sd_atm_df['O3 (ppb)'].values
X
# To view the relation between ozone and its precursor NO2.
sd_atm_df.plot(x='NO2 (ppb)', y='O3 (ppb)', style='o', c='r')
# sd_atm_df.plot(x2='NO2 (ppb)', y2='CO_ppm', style='o', c='b')
plt.title('Ozone vs Nitrogen dioxide')
plt.xlabel('NO2 (ppb)')
plt.ylabel('O3 (ppb)')
plt.show()
# plt.savefig('data/output_figures/sandiego_2014_fires/air_quality_csv/O3_NOx_relation.png')
# To view the relation between ozone and PM2.5.
sd_atm_df.plot(x='PM2.5 (ug/m3)', y='O3 (ppb)', style='o', c='b')
plt.title('Ozone vs PM2.5')
plt.xlabel('PM2.5 (ug/m3)')
plt.ylabel('O3 (ppb)')
# plt.savefig('data/output_figures/sandiego_2014_fires/air_quality_csv/O3_PM2.5_relation.png')
plt.show()
# random state is a seed for data training
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```
Our next step is to divide the data into "attributes" and "labels".
Attributes are the independent variables, while labels are the dependent variable whose values are to be predicted. Here, the attributes are the NO2, CO, and PM2.5 columns, and the label is the ozone (O3) column we want to predict.
Next, we will split 80% of the data into the training set and 20% into the test set using the code below. The test_size variable is where we specify the proportion of the test set.
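As a toy illustration of the attribute/label split (made-up numbers, not the EPA data):

```python
import pandas as pd

# Attributes (X) are the predictor columns; the label (y) is the column to predict.
toy = pd.DataFrame({
    "NO2 (ppb)": [12.0, 8.5, 20.1],
    "CO_ppm": [0.3, 0.2, 0.6],
    "O3 (ppb)": [31.0, 36.5, 24.0],
})
X = toy[["NO2 (ppb)", "CO_ppm"]].values  # independent variables
y = toy["O3 (ppb)"].values               # dependent variable
print(X.shape, y.shape)  # (3, 2) (3,)
```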
```
# random state is a seed for data training
indices = sd_atm_df.index.values
X_train, X_test, y_train, y_test, idx1, idx2 = train_test_split(
X, y, indices, test_size=0.2, random_state=0)
X
# Training Algorithm
regressor = LinearRegression()
regressor.fit(X_train, y_train)
#To retrieve the intercept:
print(regressor.intercept_)
#For retrieving the slope:
print(regressor.coef_)
y_pred = regressor.predict(X_test)
X_test
# print(y_pred)
type(y_pred)
# converted to DF
df2 = pd.DataFrame({'Date': idx2, 'Actual': y_test, 'Predicted': y_pred})
df2.head(2)
# Set index on date to identify anomalous events in the plot.
df3 = df2.set_index('Date')
df3.index = df3.index.date
# df3 = df2.set_index('Date')
df3.head(3)
# # Create figure and plot space
fig, ax = plt.subplots(figsize=(10, 6))
# Keep only the first 20 rows for a readable bar plot
df3 = df3.head(20)
df3.plot(ax=ax, kind='bar')
# plt.grid(which='major', linestyle='-', linewidth='0.5', color='green')
plt.grid(which='minor', linestyle=':', linewidth='0.5', color='black')
plt.title('Comparison of actual and predicted values of ozone')
plt.ylabel('O3 (ppb)')
# Rotate tick marks on x-axis
plt.setp(ax.get_xticklabels(), rotation=45)
# plt.savefig('data/output_figures/sandiego_2014_fires/air_quality_csv/O3_multivariate_analysis.png')
plt.show()
# Results:
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))
print('Root Mean Squared Error:', np.sqrt(
metrics.mean_squared_error(y_test, y_pred)))
```
Mean O3 (ppb) = 30.693
RMSE = 6.51
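A quick sanity check of the figures quoted above, expressing the RMSE as a fraction of the mean observed ozone level:

```python
# RMSE relative to the mean observed ozone concentration
mean_o3, rmse = 30.693, 6.51
relative_error_pct = rmse / mean_o3 * 100
print(round(relative_error_pct, 1))  # 21.2
```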
# Conclusion
1. The model's RMSE for ozone prediction is (6.51/30.69)*100 ≈ 21% of the mean observed value, still a reasonable model fit.
2. The bar graph of predicted values shows a huge deviation on May 1, 2014, the day the wildfire was at its peak.
3. Taking the wildfire days out would improve the relationship between the normal parameters and hence the MLR coefficients.
4. Ozone formation depends on temperature and pressure, and its destruction occurs via the OH radical, photolysis, and collisions with air molecules. The data for VOCs, an important source of ozone formation, are incomplete; filling this gap could significantly bring the predicted values into alignment with the observed ones.
5. A comparison with rainfall in previous years tells an interesting story: the good rainy season of 2011-12 resulted in a huge accumulation of biomass in San Diego.
6. The intense drought of 2012-2013 turned that biomass into dry fuel for fire ignition.
7. In future studies, it would be valuable to compare rain patterns with wildfires in California.
# Future Outlook
There are many exciting avenues to expand this work to more wildfire events and comparison with weather and atmospheric conditions to test our model.
# Preliminary Analysis of Rain Patterns in San Diego.
**Figure . cumulative rain and number of rainy days in San Diego California, USA.**
<img src="http://drive.google.com/uc?export=view&id=12tON_T9EbfuwqyTGbNwcp60P65VEaHov" width="400" height="200">
The figure above shows that precipitation in 2010 was above normal and resulted in rapid growth of shrubs, grasses, and other vegetation, which later served as tinder for the fires in early 2014. The black line indicates the exponential decrease in precipitation after the heavy rains of 2010.
Tinder is an easily combustible material (dried grass and shrubs) used to ignite fires.
A further analysis of the 2020 rain pattern indicated 110 mm of rain in Dec. 2020, almost half of San Diego's annual rainfall. There was no rain from July to August 2020, after the deadly wildfires, with even worse consequences for the health of local residents.
As the climate changes, populations suffer not only the immediate impact of wildfires; the lingering effects of fine particulate matter and toxic gases are even worse, especially for children and individuals suffering from respiratory diseases (Amy Maxman 2019).
# References
Abatzoglou, T. J., and Williams P. A., Proceedings of the National Academy of Sciences Oct 2016, 113 (42) 11770-11775; DOI: 10.1073/pnas.1607171113
Seneviratne, S. et al., Philos Trans A Math Phys Eng Sci. 2018 May 13; 376(2119): 20160450, doi: 10.1098/rsta.2016.0450
Baylis, P., and Boomhower, J., Moral hazard, wildfires, and the economic incidence of natural disasters, Posted: 2019, https://www.nber.org/papers/w26550.pdf
Liao, Yanjun and Kousky, Carolyn, The Fiscal Impacts of Wildfires on California Municipalities (May 27, 2020). https://ssrn.com/abstract=3612311 or http://dx.doi.org/10.2139/ssrn.3612311
Amy Maxman, California biologists are using wildfires to assess health risks of smoke. Nature 575, 15-16 (2019),doi: 10.1038/d41586-019-03345-2
# Keras Basics
Welcome to the section on deep learning! We'll be using Keras with a TensorFlow backend to perform our deep learning operations.
This means we should get familiar with some Keras fundamentals and basics!
## Imports
```
import numpy as np
```
## Dataset
We will use the Bank Authentication Data Set to start off with. This data set consists of various features derived from 400 x 400 pixel images. You should note that **the data we will be using ARE NOT ACTUAL IMAGES**; they are **features** of images. In the next lecture we will cover grabbing and working with image data with Keras. This notebook focuses on learning the basics of building a neural network with Keras.
_____
More info on the data set:
https://archive.ics.uci.edu/ml/datasets/banknote+authentication
Data were extracted from images that were taken from genuine and forged banknote-like specimens. For digitization, an industrial camera usually used for print inspection was used. The final images have 400 x 400 pixels. Due to the object lens and distance to the investigated object, gray-scale pictures with a resolution of about 660 dpi were obtained. Wavelet Transform tools were used to extract features from the images.
Attribute Information:
1. variance of Wavelet Transformed image (continuous)
2. skewness of Wavelet Transformed image (continuous)
3. curtosis of Wavelet Transformed image (continuous)
4. entropy of image (continuous)
5. class (integer)
## Reading in the Data Set
We've already downloaded the dataset; it's in the DATA folder. So let's open it up.
```
from numpy import genfromtxt
data = genfromtxt('../DATA/bank_note_data.txt', delimiter=',')
data
labels = data[:,4]
labels
features = data[:,0:4]
features
X = features
y = labels
```
## Split the Data into Training and Test
It's time to split the data into a train/test set. Keep in mind, sometimes people like to split 3 ways: train/test/validation. We'll keep things simple for now. **Remember to check out the video explanation as to why we split and what all the parameters mean!**
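Under the hood, the split boils down to shuffling row indices and slicing. A plain-NumPy sketch (assuming the banknote data's 1372 rows; with a float `test_size`, scikit-learn rounds the test share up):

```python
import numpy as np

# Shuffle the row indices with a fixed seed, then hold out ~33% for testing.
rng = np.random.default_rng(42)
n = 1372                          # rows in the banknote dataset
idx = rng.permutation(n)
n_test = int(np.ceil(n * 0.33))   # 453 rows held out
test_idx, train_idx = idx[:n_test], idx[n_test:]
print(len(train_idx), len(test_idx))  # 919 453
```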
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
X_train
X_test
y_train
y_test
```
## Standardizing the Data
Usually when using neural networks, you will get better performance when you standardize the data. Standardizing just means normalizing the values so they all fit within a certain range, like 0-1 or -1 to 1.
The scikit-learn library also provides a nice function for this.
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html
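What `MinMaxScaler` does can be sketched in a few lines (toy numbers): `fit` stores the per-column min and max of the training data, and `transform` maps values into [0, 1] via x' = (x - min) / (max - min).

```python
import numpy as np

# Manual min-max scaling, column by column
X_train = np.array([[1.0, 10.0],
                    [3.0, 30.0],
                    [5.0, 50.0]])
col_min = X_train.min(axis=0)
col_max = X_train.max(axis=0)
scaled = (X_train - col_min) / (col_max - col_min)
print(scaled[:, 0])  # [0.  0.5 1. ]
```

The test set below reuses the training min/max, which is why test values outside the training range can fall outside [0, 1].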
```
from sklearn.preprocessing import MinMaxScaler
scaler_object = MinMaxScaler()
scaler_object.fit(X_train)
scaled_X_train = scaler_object.transform(X_train)
scaled_X_test = scaler_object.transform(X_test)
```
Ok, now we have the data scaled!
```
X_train.max()
scaled_X_train.max()
X_train
scaled_X_train
```
## Building the Network with Keras
Let's build a simple neural network!
```
from keras.models import Sequential
from keras.layers import Dense
# Creates model
model = Sequential()
# 8 Neurons, expects input of 4 features.
# Play around with the number of neurons!!
model.add(Dense(4, input_dim=4, activation='relu'))
# Add another Densely Connected layer (every neuron connected to every neuron in the next layer)
model.add(Dense(8, activation='relu'))
# Last layer simple sigmoid function to output 0 or 1 (our label)
model.add(Dense(1, activation='sigmoid'))
```
### Compile Model
```
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```
## Fit (Train) the Model
```
# Play around with number of epochs as well!
model.fit(scaled_X_train,y_train,epochs=50, verbose=2)
```
## Predicting New Unseen Data
Let's see how we did by predicting on **new data**. Remember, our model has **never** seen the test data that we scaled previously! This is the exact same process you would use on totally brand new data. For example, a brand new bank note that you just analyzed.
```
scaled_X_test
# Spits out probabilities by default.
# model.predict(scaled_X_test)
model.predict_classes(scaled_X_test)
```
# Evaluating Model Performance
So how well did we do? How do we actually measure "well"? Is 95% accuracy good enough? It all depends on the situation. We also need to take into account metrics like recall and precision. Make sure to watch the video discussion on classification evaluation before running this code!
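As a small sketch (toy counts, not this model's results) of how precision and recall fall out of a 2x2 confusion matrix laid out as [[TN, FP], [FN, TP]], which is scikit-learn's convention:

```python
import numpy as np

cm = np.array([[50, 5],
               [10, 35]])
tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)  # of everything flagged positive, how much was right
recall = tp / (tp + fn)     # of all true positives, how many were caught
print(round(precision, 3), round(recall, 3))  # 0.875 0.778
```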
```
model.metrics_names
model.evaluate(x=scaled_X_test,y=y_test)
from sklearn.metrics import confusion_matrix,classification_report
predictions = model.predict_classes(scaled_X_test)
confusion_matrix(y_test,predictions)
print(classification_report(y_test,predictions))
```
## Saving and Loading Models
Now that we have a model trained, let's see how we can save and load it.
```
model.save('myfirstmodel.h5')
from keras.models import load_model
newmodel = load_model('myfirstmodel.h5')
newmodel.predict_classes(X_test)
```
Great job! You now know how to preprocess data, train a neural network, and evaluate its classification performance!
# Comparative Analysis with MELD
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scprep
import meld
import sklearn
import pickle
import cna
import time
# making sure plots & clusters are reproducible
np.random.seed(42)
```
# Test on Sepsis
```
# Import sepsis expression data
d = cna.read('/data/srlab/lrumker/MCSC_Project/sepsis_data/pbmc.h5ad')
sepsis_data = d.to_df() # Cells x genes
sepsis_metadata = d.obs
## Define phenotype of interest as ANY sepsis to match CNA
any_sepsis = np.zeros(sepsis_metadata.shape[0])
any_sepsis[np.where(sepsis_metadata['pheno']=='Bac-SEP')[0]] = 1
any_sepsis[np.where(sepsis_metadata['pheno']=='ICU-SEP')[0]] = 1
any_sepsis[np.where(sepsis_metadata['pheno']=='URO')[0]] = 1
any_sepsis[np.where(sepsis_metadata['pheno']=='Int-URO')[0]] = 1
sepsis_metadata['AnySepsis'] = any_sepsis
# PCA for dimensionality reduction with default parameters (100 components)
data_pca = scprep.reduce.pca(sepsis_data)
# Used default MELD parameters
time_a = time.time()
meld_op = meld.MELD()
sample_densities = meld_op.fit_transform(data_pca, sample_labels=sepsis_metadata['patient'])
time_b = time.time()
print(time_b-time_a)
# Save sample densities object
pickle.dump(sample_densities, open( "MELD_sample_densities_sepsis.p", "wb" ) )
# Load saved object
sample_densities = pickle.load( open( "MELD_sample_densities_sepsis.p", "rb" ) )
# Apply row-wise L1 normalization
sample_likelihoods = sample_densities/sample_densities.sum(axis=1)[:,None]
# Identify case sample colums
sepsis_samples = np.unique(sepsis_metadata['patient'][np.where(sepsis_metadata['AnySepsis']==1)[0]])
sepsis_metadata['likelihood_sepsis'] = sample_likelihoods[sepsis_samples].mean(axis=1).values
```
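The row-wise L1 normalization applied above can be illustrated with toy numbers (a sketch, not the sepsis densities): each cell's densities across samples are divided by their row sum, giving a probability-like vector per cell.

```python
import numpy as np

densities = np.array([[2.0, 1.0, 1.0],
                      [0.5, 0.5, 1.0]])
# Divide each row by its row sum
likelihoods = densities / densities.sum(axis=1)[:, None]
print(likelihoods.sum(axis=1))  # [1. 1.]
```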
## Compare per-cell values from MELD and CNA
```
# Load CNA results
cna_res = pd.read_csv("/data/srlab/lrumker/MCSC_Project/mcsc_scratch/sepsisres.csv")
# Correlation between per-cell values from MELD and CNA
np.corrcoef(cna_res['ncorrs'], sepsis_metadata['likelihood_sepsis'])[0,1]
```
## Analysis of statistical properties of MELD
```
# read sepsis data
import cna
d = cna.read('/data/srlab/lrumker/MCSC_Project/sepsis_data/pbmc.h5ad')
sepsis_data = d.to_df() # Cells x genes
sepsis_metadata = d.obs
## Define phenotype of interest as ANY sepsis
any_sepsis = np.zeros(sepsis_metadata.shape[0])
any_sepsis[np.where(sepsis_metadata['pheno']=='Bac-SEP')[0]] = 1
any_sepsis[np.where(sepsis_metadata['pheno']=='ICU-SEP')[0]] = 1
any_sepsis[np.where(sepsis_metadata['pheno']=='URO')[0]] = 1
any_sepsis[np.where(sepsis_metadata['pheno']=='Int-URO')[0]] = 1
sepsis_metadata['AnySepsis'] = any_sepsis
# initiatlize sampleXmeta for sepsis
sampleXmeta = sepsis_metadata[['patient', 'AnySepsis']].drop_duplicates().set_index('patient', drop=True)
true_pheno = sampleXmeta.AnySepsis.values.astype(bool)
# obtain memory requirement for storing NAM
cna.tl.nam(d)
d.uns['NAM.T'].values.nbytes / np.power(2, 20)
res = cna.tl.association(d, true_pheno)
# read MELD sample densities
# path = '/data/srlab/lrumker/MCSC_Project/sepsis_data/sepsis_sample_densities.p'
path = "MELD_sample_densities_sepsis.p"
sample_densities = pickle.load(open(path, 'rb'))
sample_densities /= sample_densities.sum(axis=1)[:,None] # This is termed sample "likelihoods" in the publication
color_scale_threshold = 0.0025
# perform non-null analysis
val = sample_densities[sampleXmeta.index[true_pheno]].mean(axis=1) - sample_densities.mean(axis=1)
plt.scatter(d.obsm['X_tsne'][:,0], d.obsm['X_tsne'][:,1],
alpha=0.5, c = val, cmap = "seismic", s=0.2, vmin=-color_scale_threshold, vmax=color_scale_threshold)
# create null phenotype visualizations
np.random.seed(0)
for i in range(5):
ix = np.argsort(np.random.randn(len(true_pheno)))
null = true_pheno[ix]
nullval = sample_densities[sampleXmeta.index[null]].mean(axis=1) - sample_densities.mean(axis=1)
plt.scatter(d.obsm['X_tsne'][:,0], d.obsm['X_tsne'][:,1],
alpha=0.5, c=nullval, cmap = "seismic", s=0.2, vmin=-color_scale_threshold, vmax=color_scale_threshold)
plt.show()
nullres = cna.tl.association(d, null, local_test=False)
print('CNA p =', nullres.p)
# compute null distribution for MELD scores
T = 500
null_scores = np.zeros((T, len(sample_densities)))
for i in range(T):
print('.', end='')
ix = np.argsort(np.random.randn(len(true_pheno)))
null = true_pheno[ix]
null_scores[i] = sample_densities[sampleXmeta.index[null]].mean(axis=1) - sample_densities.mean(axis=1)
null_scores = null_scores.T
# compute FDRs
fdrs = []
thresholds = np.arange(50, 100, 1)
for thresh in np.percentile(np.abs(val), thresholds):
print('.', end='')
discoveries = (np.abs(val) >= thresh).sum()
Efd = (np.abs(null_scores) >= thresh).sum(axis=0).mean()
fdrs.append(Efd/discoveries)
CNA_thresholds = np.array([(np.abs(res.ncorrs) <= t).mean()*100 for t in res.fdrs.threshold])
CNA_fdrs = res.fdrs.fdr.values
plt.plot(thresholds, fdrs, label='MELD')
plt.plot(CNA_thresholds, CNA_fdrs, label='CNA')
plt.xlabel('Per-Cell Value Threshold (%ile)')
plt.ylim(0,0.2)
plt.axhline(y=0.05, color='k', linestyle='--')
plt.ylabel('FDR')
plot_thresholds = [0.001, 0.01, 0.05, 0.1, 0.15, 0.2]
total_cells = d.obs.shape[0]
MELD_cells_passing = []
for t in plot_thresholds:
if np.min(fdrs)>t:
MELD_cells_passing.append(0)
else:
i_first_below = np.min(np.where(np.array(fdrs)<t)[0])
numcells = total_cells*(1-thresholds[i_first_below]/100)
MELD_cells_passing.append(numcells)
MELD_cells_passing = np.ceil(MELD_cells_passing).astype(int)
total_cells = d.obs.shape[0]
CNA_cells_passing = []
for t in plot_thresholds:
if np.min(CNA_fdrs)>t:
CNA_cells_passing.append(0)
else:
i_first_below = np.min(np.where(np.array(CNA_fdrs)<t)[0])
numcells = res.fdrs['num_detected'].iloc[i_first_below]
CNA_cells_passing.append(numcells)
CNA_cells_passing = np.array(CNA_cells_passing).astype(int)
x = np.arange(len(plot_thresholds)) # the label locations
width = 0.3 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, CNA_cells_passing, width, label='CNA')
rects2 = ax.bar(x + width/2, MELD_cells_passing, width, label='MELD')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('# Cells Passing FDR Threshold')
ax.set_xlabel('FDR Threshold')
ax.set_xticks(x)
ax.set_xticklabels(plot_thresholds)
ax.legend()
fig.tight_layout()
plt.show()
```
## Create Figure
```
fig = plt.figure(figsize = (10,5))
gs = fig.add_gridspec(nrows=2, ncols=15, hspace = 0.55, wspace=0.5)
ax1 = fig.add_subplot(gs[0,0:4])
ax2 = fig.add_subplot(gs[0,5:9])
ax3 = fig.add_subplot(gs[0,10:15])
ax4 = fig.add_subplot(gs[1,0:3])
ax5 = fig.add_subplot(gs[1,3:6])
ax6 = fig.add_subplot(gs[1,6:9])
ax7 = fig.add_subplot(gs[1,9:12])
ax8 = fig.add_subplot(gs[1,12:15])
ax = ax1 # Scores on true sepsis values
val = sample_densities[sampleXmeta.index[true_pheno]].mean(axis=1) - sample_densities.mean(axis=1)
ax.scatter(d.obsm['X_tsne'][:,0], d.obsm['X_tsne'][:,1],
alpha=0.5, c = val, cmap = "seismic", s=0.2, vmin=-color_scale_threshold, vmax=color_scale_threshold)
ax.set_title('Sepsis MELD Scores')
ax.axis('off')
ax = ax2 # Correlation between per-cell scores
ax.scatter(cna_res['ncorrs'], sample_densities[sampleXmeta.index[true_pheno]].mean(axis=1), s=0.2)
ax.set_xticks([-0.4, 0, 0.4])
ax.set_xlabel('CNA Neighborhood Coefficient')
ax.set_ylabel('MELD Score')
ax = ax3
x = np.arange(len(plot_thresholds)) # the label locations
width = 0.3 # the width of the bars
rects1 = ax.bar(x - width/2, CNA_cells_passing/1000, width, label='CNA')
rects2 = ax.bar(x + width/2, MELD_cells_passing/1000, width, label='MELD')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('# Sig. Cells (Thousands)')
ax.set_xlabel('FDR Threshold')
ax.set_xticks(x)
ax.set_xticklabels(plot_thresholds)
ax.legend()
np.random.seed(0)
for ax in [ax4, ax5, ax6, ax7, ax8]:
ix = np.argsort(np.random.randn(len(true_pheno)))
null = true_pheno[ix]
nullval = sample_densities[sampleXmeta.index[null]].mean(axis=1) - sample_densities.mean(axis=1)
ax.scatter(d.obsm['X_tsne'][:,0], d.obsm['X_tsne'][:,1],
alpha=0.5, c=nullval, cmap = "seismic", s=0.2, vmin=-color_scale_threshold, vmax=color_scale_threshold)
nullres = cna.tl.association(d, null, local_test=False)
ax.text(0.13, -0.05, 'CNA p = '+str(np.round(nullres.p,2)),
transform=ax.transAxes, fontsize=10, color="black")
ax.axis('off')
ax=ax6
ax.set_title('MELD Scores on Randomly-Shuffled Per-Sample Values')
plt.tight_layout()
plt.savefig('../_figs/suppfig.meld.pdf')
```
# Evolutionary Shape Prediction
An experiment in evolutionary software using *reinforcement learning* to discover interesting data objects within a given set of graph data.
```
import kglab
namespaces = {
"nom": "http://example.org/#",
"wtm": "http://purl.org/heals/food/",
"ind": "http://purl.org/heals/ingredient/",
"skos": "http://www.w3.org/2004/02/skos/core#",
}
kg = kglab.KnowledgeGraph(
name = "A recipe KG example based on Food.com",
base_uri = "https://www.food.com/recipe/",
language = "en",
namespaces = namespaces,
)
kg.load_rdf("dat/recipes.ttl")
import sys
import inspect
__name__ = "kglab"
clsmembers = inspect.getmembers(sys.modules[__name__], inspect.isclass)
clsmembers
```
## Graph measures and topological analysis
Let's measure this graph, to develop some estimators that we'll use later...
```
import pandas as pd
pd.set_option("display.max_rows", None)
measure = kglab.Measure()
measure.measure_graph(kg)
print("edges", measure.edge_count)
print("nodes", measure.node_count)
measure.s_gen.get_tally()
measure.p_gen.get_tally()
measure.o_gen.get_tally()
measure.l_gen.get_tally()
df, link_map = measure.n_gen.get_tally_map()
df
df, link_map = measure.e_gen.get_tally_map()
print(link_map)
```
## ShapeFactory and evolved shapes
```
factory = kglab.ShapeFactory(kg, measure)
subgraph = factory.subgraph
es0 = factory.new_shape()
print(es0.serialize(subgraph))
[ print(r) for r in es0.get_rdf() ];
```
Now we can use this `ShapeFactory` object to evolve a *shape* within the graph, then generate a SPARQL query to test its cardinality:
```
sparql, bindings = es0.get_sparql()
print(sparql)
print(bindings)
for row in kg.query(sparql):
print(row)
```
We can also use this library to construct a specific shape programmatically, e.g., a recipe:
```
es1 = kglab.EvoShape(kg, measure)
type_uri = "http://purl.org/heals/food/Recipe"
type_node = kglab.EvoShapeNode(type_uri, terminal=True)
es1.add_link(es1.root, kg.get_ns("rdf").type, type_node)
edge_uri = "http://purl.org/heals/food/hasIngredient"
edge_node_uri = "http://purl.org/heals/ingredient/VanillaExtract"
edge_node = kglab.EvoShapeNode(edge_node_uri)
es1.add_link(es1.root, edge_uri, edge_node)
edge_uri = "http://purl.org/heals/food/hasIngredient"
edge_node_uri = "http://purl.org/heals/ingredient/AllPurposeFlour"
edge_node = kglab.EvoShapeNode(edge_node_uri)
es1.add_link(es1.root, edge_uri, edge_node)
edge_uri = "http://purl.org/heals/food/hasIngredient"
edge_node_uri = "http://purl.org/heals/ingredient/Salt"
edge_node = kglab.EvoShapeNode(edge_node_uri)
es1.add_link(es1.root, edge_uri, edge_node)
edge_uri = "http://purl.org/heals/food/hasIngredient"
edge_node_uri = "http://purl.org/heals/ingredient/ChickenEgg"
edge_node = kglab.EvoShapeNode(edge_node_uri)
es1.add_link(es1.root, edge_uri, edge_node)
[ print(r) for r in es1.get_rdf() ]
es1.serialize(subgraph)
sparql, bindings = es1.get_sparql()
print(sparql)
print(bindings)
```
Query to find matching instances for this shape `es1` within the graph:
```
for row in kg.query(sparql, bindings=bindings):
print(row)
```
## Leaderboard which can be distributed across a cluster
We can calculate metrics to describe how these shapes `es0` and `es1` might rank on a *leaderboard*:
```
es0.get_cardinality()
es1.get_cardinality()
```
Then calculate a vector distance between `es1` and `es0` which we'd generated earlier:
```
es0.calc_distance(es1)
```
Now we can generate a compact, ordinal representation for the `es1` shape, which can be serialized as a string, transferred across a network to an actor, then deserialized as the same shape -- *as long as we use a similarly structured subgraph*.
```
import json
ser = es1.serialize(subgraph)
j_ser = json.dumps(ser)
print(j_ser)
ser = json.loads(j_ser)
ser
print(subgraph.id_list)
```
Test the deserialization:
```
es2 = kglab.EvoShape(kg, measure)
uri_map = es2.deserialize(ser, subgraph)
print(es2.root.uri)
for k, v in uri_map.items():
print(k, v)
for e in es2.root.edges:
print("obj", e.obj)
print("edge", e.pred, e.obj.uri)
for n in es2.nodes:
print(n)
print(n.uri)
[ print(r) for r in es2.get_rdf() ]
es2.serialize(subgraph)
es2.get_sparql()
```
Prototype a leaderboard:
```
leaderboard = kglab.Leaderboard()
leaderboard.df
dist = leaderboard.add_shape(es0.serialize(subgraph))
print(dist)
leaderboard.df
dist = leaderboard.add_shape(es1.serialize(subgraph))
print(dist)
leaderboard.df
es3 = kglab.EvoShape(kg, measure)
type_uri = "http://purl.org/heals/food/Recipe"
type_node = kglab.EvoShapeNode(type_uri, terminal=True)
es3.add_link(es3.root, kg.get_ns("rdf").type, type_node)
edge_uri = "http://purl.org/heals/food/hasIngredient"
edge_node_uri = "http://purl.org/heals/ingredient/Butter"
edge_node = kglab.EvoShapeNode(edge_node_uri)
es3.add_link(es3.root, edge_uri, edge_node)
shape = es3.serialize(subgraph)
shape
dist = leaderboard.add_shape(es3.serialize(subgraph))
print(dist)
leaderboard.df
```
## Generating triads from co-occurrence
## Here is the first homework! :)
We will use the Boston housing prices dataset. A short legend:
* `crim` – per capita crime rate by town;
* `zn` – proportion of residential land zoned for lots over 25,000 sq. ft.;
* `indus` – proportion of non-retail business acres per town;
* `chas` – Charles River dummy (binary) variable (= 1 if the tract bounds the river; 0 otherwise);
* `nox` – nitric oxide concentration (parts per 10 million);
* `rm` – average number of rooms per dwelling;
* `age` – proportion of owner-occupied units built before 1940;
* `dis` – weighted mean of distances to five Boston employment centers;
* `rad` – index of accessibility to radial highways;
* `tax` – full-value property tax rate per \$10,000;
* `ptratio` – pupil-teacher ratio by town;
* `black` – 1000(Bk - 0.63)^2, where Bk is the proportion of Black residents by town (I had never noticed this column until I sat down to translate);
* `lstat` – lower status of the population (percent);
* `medv` – median value of owner-occupied homes in \$1000s.
```
import pandas as pd
df = pd.read_csv('boston_data.csv')
df.head()
```
## Task 1
Look at the variables contained in the dataset and determine the type of each:
1. numeric (also called quantitative)
2. categorical ordinal
3. categorical nominal.
Assign every variable to one of these 3 categories.
## Task 2
Compute all the descriptive statistics we have covered for the `medv` variable (mean, median, mode, standard deviation, variance, quartiles, maximum, minimum, range, and interquartile range). Comment on the results and check how close the median and the mean are to each other.
**Bonus:** Plot a histogram for this variable and comment on it (what does the plot look like? Are the bars roughly equal in height (i.e. uniformly distributed), or does their height vary? Where are the tallest bars, and where, conversely, are the low values?)
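A sketch of how these statistics can be computed with pandas (assuming `df` has been loaded as above; the helper name `describe_series` is mine, not part of the assignment):

```python
import pandas as pd

def describe_series(s: pd.Series) -> dict:
    # Collect the descriptive statistics requested in the task.
    q1, q3 = s.quantile([0.25, 0.75])
    return {
        "mean": s.mean(),
        "median": s.median(),
        "mode": s.mode().iloc[0],
        "std": s.std(),
        "var": s.var(),
        "min": s.min(),
        "max": s.max(),
        "range": s.max() - s.min(),
        "iqr": q3 - q1,
    }

# With the dataset loaded as above, the call would be:
# describe_series(df["medv"])
```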
## Task 3
Compute the following descriptive statistics for the `chas` variable: mean, median, mode, standard deviation, and variance. Comment carefully on the results. Can we interpret the median and the mean here the same way we did in point 3 of this homework? If not, justify your answer.
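A hint worth verifying in code: for a 0/1 dummy variable the mean is simply the share of ones. A toy example with made-up values (not the real `chas` column):

```python
import pandas as pd

# Hypothetical 0/1 values standing in for `chas`:
chas_demo = pd.Series([0, 0, 0, 1])

# The mean of a dummy variable is the proportion of ones...
print(chas_demo.mean())    # 0.25
# ...while the median just reports which value covers more than half the data.
print(chas_demo.median())  # 0.0
```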
## Task 4
Pick any other variable from the dataset and compute for it the descriptive statistics from Task 2.
# DQN Breakout
```
from __future__ import division
import gym
import numpy as np
import random
import tensorflow as tf
import tensorflow.contrib.slim as slim
import matplotlib.pyplot as plt
import scipy.misc
import os
%matplotlib inline
```
### Load the game environment
```
env = gym.make('BreakoutDeterministic-v4')
obv = env.reset()
plt.imshow(obv)
plt.show()
## These helper functions are found on stackoverflow
from scipy.misc import imresize
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
def crop(img, cropx=None,cropy=None):
y,x = img.shape
if (cropx is None) :
cropx = min(y, x)
if (cropy is None) :
cropy = min(y, x)
startx = x//2 - cropx//2
starty = y - cropy
return img[starty:starty+cropy, startx:startx+cropx]
def processState(image):
image = image[50:, :, :]
image = rgb2gray(image)
image = imresize(image/127.5-1, (84, 84), 'cubic', 'F')
return np.reshape(image,[84*84]) # flattened 84*84 grayscale vector
plt.imshow(processState(obv).reshape((84, 84)), cmap='gray')
plt.show()
class experience_buffer():
def __init__(self, buffer_size = 50000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
def updateTargetGraph(tfVars,tau):
total_vars = len(tfVars)
op_holder = []
for idx,var in enumerate(tfVars[0:total_vars//2]):
op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value())))
return op_holder
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
```
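`updateTargetGraph` builds ops that softly move each target-network weight toward its primary-network counterpart. The per-weight arithmetic can be sanity-checked in plain NumPy (a sketch, separate from the TensorFlow graph; the arrays are made-up values):

```python
import numpy as np

tau = 0.001
main_w = np.array([1.0, 2.0])    # stands in for a primary-network weight tensor
target_w = np.array([0.0, 0.0])  # the matching target-network tensor

# Same update as tfVars[idx+total_vars//2].assign(var*tau + (1-tau)*target):
new_target = tau * main_w + (1 - tau) * target_w
print(new_target)  # [0.001 0.002]
```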
### Implementing the network itself
```
class Qnetwork():
def __init__(self,h_size):
#The network recieves a frame from the game, flattened into an array.
#It then resizes it and processes it through four convolutional layers.
#We use slim.conv2d to set up our network
self.scalarInput = tf.placeholder(shape=[None,84*84],dtype=tf.float32)
self.imageIn = tf.reshape(self.scalarInput,shape=[-1,84,84,1])
self.conv1 = slim.conv2d( \
inputs=self.imageIn,num_outputs=32,kernel_size=[8,8],stride=[4,4],padding='VALID', biases_initializer=None)
self.conv2 = slim.conv2d( \
inputs=self.conv1,num_outputs=64,kernel_size=[4,4],stride=[2,2],padding='VALID', biases_initializer=None)
self.conv3 = slim.conv2d( \
inputs=self.conv2,num_outputs=64,kernel_size=[3,3],stride=[1,1],padding='VALID', biases_initializer=None)
self.conv4 = slim.conv2d( \
inputs=self.conv3,num_outputs=h_size,kernel_size=[7,7],stride=[1,1],padding='VALID', biases_initializer=None)
################################################################################
# TODO: Implement Dueling DQN #
# We take the output from the final convolutional layer i.e. self.conv4 and #
# split it into separate advantage and value streams. #
# Output: self.Advantage, self.Value                                           #
# Hint: Refer to Fig.1 in [Dueling DQN](https://arxiv.org/pdf/1511.06581.pdf) #
# In implementation, use tf.split to split into two branches. You may #
# use xavier_initializer for initializing the two additional linear #
# layers. #
################################################################################
adv, val = tf.split(self.conv4, 2, 3)
self.Advantage = tf.layers.dense(slim.flatten(adv), env.action_space.n)
self.Value = tf.layers.dense(slim.flatten(val), 1)
################################################################################
# END OF YOUR CODE #
################################################################################
#Then combine them together to get our final Q-values.
#Please refer to Equation (9) in [Dueling DQN](https://arxiv.org/pdf/1511.06581.pdf)
self.Qout = self.Value + tf.subtract(self.Advantage,tf.reduce_mean(self.Advantage,axis=1,keep_dims=True))
self.predict = tf.argmax(self.Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.targetQ = tf.placeholder(shape=[None],dtype=tf.float32)
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,env.action_space.n,dtype=tf.float32)
################################################################################
# TODO: #
# Obtain the loss (self.loss) by taking the sum of squares difference #
# between the target and prediction Q values. #
################################################################################
predictQ = tf.reduce_sum(self.Qout*self.actions_onehot,axis = 1)
self.loss = tf.reduce_mean((predictQ - self.targetQ)**2)
################################################################################
# END OF YOUR CODE #
################################################################################
self.trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
self.updateModel = self.trainer.minimize(self.loss)
```
### Training the network
Setting all the training parameters
```
batch_size = 32 #How many experiences to use for each training step.
update_freq = 4 #How often to perform a training step.
y = .99 #Discount factor on the target Q-values
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
annealing_steps = 10000. #How many steps of training to reduce startE to endE.
num_episodes = 3000 #How many episodes of game environment to train network with.
pre_train_steps = 1000 #How many steps of random actions before training begins.
max_epLength = 500 #The max allowed length of our episode.
load_model = True #Whether to load a saved model.
path = "./dqn" #The path to save our model to.
h_size = 512 #The size of the final convolutional layer before splitting it into Advantage and Value streams.
tau = 0.001 #Rate to update target network toward primary network
```
I trained this for 5000 + 3000 = 8000 episodes, but the result is still not the best. I terminated the notebook mid-training since I ran out of time on Intel AI Devcloud. However, judging from the plot, the agent can get past 6 points very consistently, and the best it achieves is 9 points. I believe that with more training time it could perform better.
```
tf.reset_default_graph()
mainQN = Qnetwork(h_size)
targetQN = Qnetwork(h_size)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#Set the rate of random action decrease.
e = startE
stepDrop = (startE - endE)/annealing_steps
#create lists to contain total rewards and steps per episode
jList = []
rList = []
total_steps = 0
#Make a path for our model to be saved in.
if not os.path.exists(path):
os.makedirs(path)
with tf.Session() as sess:
sess.run(init)
if load_model == True:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(path)
saver.restore(sess,ckpt.model_checkpoint_path)
for i in range(num_episodes):
episodeBuffer = experience_buffer()
#Reset environment and get first new observation
s = env.reset()
s = processState(s)
d = False
rAll = 0
j = 0
#The Q-Network
while True:
j+=1
#Choose an action by greedily (with e chance of random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = np.random.randint(0,4)
else:
a = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:[s]})[0]
total_steps += 1
################################################################################
# TODO: Save the experience to our episode buffer. #
# You will need to do the following: #
# (1) Get new state s1 (resized), reward r and done d from a #
# (2) Add experience to episode buffer. Hint: experience includes #
# s, a, r, s1 and d. #
################################################################################
s1, r, d, _ = env.step(a)
s1 = processState(s1)
experience = np.expand_dims(np.array([s, a, r, s1, d]), 0)
episodeBuffer.add(experience)
################################################################################
# END OF YOUR CODE #
################################################################################
if total_steps > pre_train_steps:
if e > endE:
e -= stepDrop
if total_steps % (update_freq) == 0:
################################################################################
# TODO: Implement Double-DQN #
# (1) Get a random batch of experiences via experience_buffer class #
# #
# (2) Perform the Double-DQN update to the target Q-values #
# Hint: Use mainQN and targetQN separately to chose an action and predict #
# the Q-values for that action. #
# Then compute targetQ based on Double-DQN equation #
# #
# (3) Update the primary network with our target values #
################################################################################
batch = myBuffer.sample(batch_size)
stacked_state = np.vstack(batch[:, 3])
action_ = sess.run(mainQN.predict, feed_dict={mainQN.scalarInput: stacked_state})
Q_ = sess.run(targetQN.Qout, feed_dict={targetQN.scalarInput: stacked_state})
next_Q = Q_[range(batch_size), action_]
done_mask = 1 - batch[:, 4]
targetQ = batch[:, 2] + done_mask * y * next_Q
sess.run(mainQN.updateModel, feed_dict={mainQN.scalarInput: np.vstack(batch[:,0]),
mainQN.targetQ: targetQ,
mainQN.actions: batch[:,1]})
################################################################################
# END OF YOUR CODE #
################################################################################
updateTarget(targetOps,sess) #Update the target network toward the primary network.
rAll += r
s = s1
if d == True:
break
myBuffer.add(episodeBuffer.buffer)
jList.append(j)
rList.append(rAll)
#Periodically save the model.
if i % 2000 == 0: # i % 1000 == 0:
saver.save(sess,path+'/model-'+str(i)+'.ckpt')
print("Saved Model")
if len(rList) % 10 == 0:
print("Episode",i,"reward:",np.mean(rList[-10:]))
saver.save(sess,path+'/model-'+str(i)+'.ckpt')
print("Mean reward per episode: " + str(sum(rList)/num_episodes))
```
### Checking network learning
```
rMat = np.resize(np.array(rList),[len(rList)//100,100])
rMean = np.average(rMat,1)
plt.plot(rMean)
```
<a href="https://kaggle.com/code/ritvik1909/siamese-network" target="_blank"><img align="left" alt="Kaggle" title="Open in Kaggle" src="https://kaggle.com/static/images/open-in-kaggle.svg"></a>
# Siamese Network
A Siamese neural network (sometimes called a twin neural network) is an artificial neural network that uses the same weights while working in tandem on two different input vectors to compute comparable output vectors.
Often one of the output vectors is precomputed, thus forming a baseline against which the other output vector is compared. This is similar to comparing fingerprints but can be described more technically as a distance function for locality-sensitive hashing.
It is possible to build a structure that is functionally similar to a siamese network but implements a slightly different function. This is typically used for comparing similar instances in different type sets.
Similarity measures where a twin network might be used include recognizing handwritten checks, automatic detection of faces in camera images, and matching queries with indexed documents. Perhaps the most well-known application of twin networks is face recognition, where known images of people are precomputed and compared to an image from a turnstile or similar. It is not obvious at first, but there are two slightly different problems. One is recognizing a person among a large number of other persons; that is the facial recognition problem.
Source: [Wikipedia](https://en.wikipedia.org/wiki/Siamese_neural_network)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import random
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras import layers, utils, callbacks
```
Let's start off by creating our dataset. Our input data will consist of pairs of images, and the output will be either 1 or 0, indicating whether the pair is similar.
```
def make_pairs(images, labels, seed=19):
np.random.seed(seed)
pairImages = []
pairLabels = []
numClasses = len(np.unique(labels))
idx = [np.where(labels == i)[0] for i in range(numClasses)]
for idxA in range(len(images)):
currentImage = images[idxA]
label = labels[idxA]
idxB = np.random.choice(idx[label])
posImage = images[idxB]
pairImages.append([currentImage, posImage])
pairLabels.append([1])
negIdx = np.where(labels != label)[0]
negImage = images[np.random.choice(negIdx)]
pairImages.append([currentImage, negImage])
pairLabels.append([0])
return (np.array(pairImages), np.array(pairLabels))
```
We will be working with the `MNIST` dataset in our notebook, which comes along with the tensorflow library.
```
(trainX, trainY), (testX, testY) = mnist.load_data()
trainX = 1 - (trainX / 255.0)
testX = 1 - (testX / 255.0)
trainX = np.expand_dims(trainX, axis=-1)
testX = np.expand_dims(testX, axis=-1)
(pairTrain, labelTrain) = make_pairs(trainX, trainY)
(pairTest, labelTest) = make_pairs(testX, testY)
print(f'\nTrain Data Shape: {pairTrain.shape}')
print(f'Test Data Shape: {pairTest.shape}\n\n')
```
Let's visualize the MNIST images
```
fig, ax = plt.subplots(2, 6, figsize=(20, 6))
random.seed(19)
idx = random.choices(range(len(trainX)), k=12)
for i in range(12):
ax[i//6][i%6].imshow(np.squeeze(trainX[idx[i]]), cmap='gray')
ax[i//6][i%6].set_title(f'Label: {trainY[idx[i]]}', fontsize=18)
ax[i//6][i%6].set_axis_off()
fig.suptitle('MNIST Images', fontsize=24);
```
Here is a sample of our prepared dataset
```
fig, ax = plt.subplots(2, 6, figsize=(20, 6))
random.seed(19)
idx = random.choices(range(len(pairTrain)), k=6)
for i in range(0, 12, 2):
ax[i//6][i%6].imshow(np.squeeze(pairTrain[idx[i//2]][0]), cmap='gray')
ax[i//6][i%6+1].imshow(np.squeeze(pairTrain[idx[i//2]][1]), cmap='gray')
ax[i//6][i%6].set_title(f'Label: {labelTrain[idx[i//2]]}', fontsize=18)
ax[i//6][i%6].set_axis_off()
ax[i//6][i%6+1].set_axis_off()
fig.suptitle('Input Pair Images', fontsize=24);
```
Here we define some configurations for our model
```
class config():
IMG_SHAPE = (28, 28, 1)
EMBEDDING_DIM = 48
BATCH_SIZE = 64
EPOCHS = 500
```
Here we define a function to calculate euclidean distance between two vectors. This will be used by our model to calculate the euclidean distance between the vectors of the image pairs (image vectors will be created by the feature extractor of our model)
```
def euclidean_distance(vectors):
(featsA, featsB) = vectors
sumSquared = K.sum(K.square(featsA - featsB), axis=1, keepdims=True)
return K.sqrt(K.maximum(sumSquared, K.epsilon()))
```
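As a quick check of the formula (outside the Keras graph), the same computation in plain NumPy recovers the familiar 3-4-5 triangle:

```python
import numpy as np

featsA = np.array([[0.0, 0.0]])
featsB = np.array([[3.0, 4.0]])

# Mirrors the K.sqrt(K.sum(K.square(...))) expression above:
dist = np.sqrt(np.sum(np.square(featsA - featsB), axis=1, keepdims=True))
print(dist[0, 0])  # 5.0
```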
With Siamese Network, the two most commonly used loss functions are:
* contrastive loss
* triplet loss
We will be using contrastive loss in this notebook, i.e.:
```Contrastive loss = mean( (1-true_value) * square(prediction) + true_value * square( max(margin - prediction, 0)))```
```
def loss(margin=1):
def contrastive_loss(y_true, y_pred):
y_true = tf.cast(y_true, y_pred.dtype)
square_pred = tf.math.square(y_pred)
margin_square = tf.math.square(tf.math.maximum(margin - (y_pred), 0))
return tf.math.reduce_mean(
(1 - y_true) * square_pred + (y_true) * margin_square
)
return contrastive_loss
```
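A small NumPy mirror of the same formula makes it easy to sanity-check values by hand (`contrastive_loss_np` is my name for this sketch, not part of the notebook):

```python
import numpy as np

def contrastive_loss_np(y_true, y_pred, margin=1.0):
    # Same formula as the Keras version above, in plain NumPy.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    square_pred = np.square(y_pred)
    margin_square = np.square(np.maximum(margin - y_pred, 0.0))
    return np.mean((1 - y_true) * square_pred + y_true * margin_square)

# For y_true=1 a small predicted distance is penalized via the margin term;
# for y_true=0 the penalty is the squared distance itself:
print(contrastive_loss_np([1, 0], [0.2, 0.3]))  # ~0.365
```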
Finally we define our model architecture
* The model contains two input layers
* A feature extractor through which both the images will be passed to generate feature vectors, the feature extractor typically consists of Convolutional and Pooling Layers
* The feature vectors are passed through a custom layer to get euclidean distance between the vectors
* The final layer consists of a single sigmoid unit
```
class SiameseNetwork(Model):
def __init__(self, inputShape, embeddingDim):
super(SiameseNetwork, self).__init__()
imgA = layers.Input(shape=inputShape)
imgB = layers.Input(shape=inputShape)
featureExtractor = self.build_feature_extractor(inputShape, embeddingDim)
featsA = featureExtractor(imgA)
featsB = featureExtractor(imgB)
distance = layers.Lambda(euclidean_distance, name='euclidean_distance')([featsA, featsB])
outputs = layers.Dense(1, activation="sigmoid")(distance)
self.model = Model(inputs=[imgA, imgB], outputs=outputs)
def build_feature_extractor(self, inputShape, embeddingDim=48):
model = Sequential([
layers.Input(inputShape),
layers.Conv2D(64, (2, 2), padding="same", activation="relu"),
layers.MaxPooling2D(pool_size=2),
layers.Dropout(0.3),
layers.Conv2D(64, (2, 2), padding="same", activation="relu"),
layers.MaxPooling2D(pool_size=2),
layers.Dropout(0.3),
layers.Conv2D(128, (1, 1), padding="same", activation="relu"),
layers.Flatten(),
layers.Dense(embeddingDim, activation='tanh')
])
return model
def call(self, x):
return self.model(x)
model = SiameseNetwork(inputShape=config.IMG_SHAPE, embeddingDim=config.EMBEDDING_DIM)
model.compile(loss=loss(margin=1), optimizer="adam", metrics=["accuracy"])
es = callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1, restore_best_weights=True, min_delta=1e-4)
rlp = callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, min_lr=1e-6, mode='min', verbose=1)
history = model.fit(
[pairTrain[:, 0], pairTrain[:, 1]], labelTrain[:],
validation_data=([pairTest[:, 0], pairTest[:, 1]], labelTest[:]),
batch_size=config.BATCH_SIZE,
epochs=config.EPOCHS,
callbacks=[es, rlp]
)
fig, ax = plt.subplots(2, 6, figsize=(20, 6))
random.seed(19)
idx = random.choices(range(len(pairTest)), k=6)
preds = model.predict([pairTest[:, 0], pairTest[:, 1]])
for i in range(0, 12, 2):
ax[i//6][i%6].imshow(np.squeeze(pairTest[idx[i//2]][0]), cmap='gray')
ax[i//6][i%6+1].imshow(np.squeeze(pairTest[idx[i//2]][1]), cmap='gray')
ax[i//6][i%6].set_title(f'Label: {labelTest[idx[i//2]]}', fontsize=18)
ax[i//6][i%6+1].set_title(f'Predicted: {np.round(preds[idx[i//2]], 2)}', fontsize=18)
ax[i//6][i%6].set_axis_off()
ax[i//6][i%6+1].set_axis_off()
fig.suptitle('Test Pair Images', fontsize=24);
sns.set_style('darkgrid')
fig, ax = plt.subplots(2, 1, figsize=(20, 8))
df = pd.DataFrame(history.history)
df[['accuracy', 'val_accuracy']].plot(ax=ax[0])
df[['loss', 'val_loss']].plot(ax=ax[1])
ax[0].set_title('Model Accuracy', fontsize=12)
ax[1].set_title('Model Loss', fontsize=12)
fig.suptitle('Siamese Network: Learning Curve', fontsize=18);
```
# References
[Fisher Discriminant Triplet and Contrastive Losses for Training Siamese Networks](https://arxiv.org/pdf/2004.04674v1.pdf)
```
import os
os.chdir('..')
```
<img src="flow_2.png">
```
from flows.flows import Flows
flow = Flows(2)
path = "./data/flow_2"
files_list = ["train.csv","test.csv"]
dataframe_dict, columns_set = flow.load_data(path, files_list)
dataframe_dict, columns_set= flow.flatten_json_data(dataframe_dict)
dataframe_dict, columns_set = flow.encode_categorical_feature(dataframe_dict)
ignore_columns = ['id', 'revenue']
dataframe_dict, columns_set = flow.scale_data(dataframe_dict, ignore_columns)
flow.exploring_data(dataframe_dict, "test")
flow.comparing_statistics(dataframe_dict)
ignore_columns = ["id", "release_date", "revenue"]
columns = dataframe_dict["train"].columns
train_dataframe = dataframe_dict["train"][[x for x in columns_set["train"]["continuous"] if x not in ignore_columns]]
test_dataframe = dataframe_dict["test"][[x for x in columns_set["train"]["continuous"] if x not in ignore_columns]]
train_target = dataframe_dict["train"]["revenue"]
parameters = {
"data": {
"train": {"features": train_dataframe, "target": train_target.to_numpy()},
},
"split": {
"method": "kfold", # "method":"kfold"
"fold_nr": 5, # foldnr:5 , "split_ratios": 0.8 # "split_ratios":(0.7,0.2)
},
"model": {"type": "Ridge linear regression",
"hyperparameters": {"alpha": "optimize", # alpha:optimize
},
},
"metrics": ["r2_score", "mean_squared_error"],
"predict": {
"test": {"features": test_dataframe}
}
}
model_index_list, save_models_dir, y_test = flow.training(parameters)
parameters_lighgbm = {
"data": {
"train": {"features": train_dataframe, "target": train_target.to_numpy()},
},
"split": {
"method": "kfold", # "method":"kfold"
"fold_nr": 5, # foldnr:5 , "split_ratios": 0.8 # "split_ratios":(0.7,0.2)
},
"model": {"type": "lightgbm",
"hyperparameters": dict(objective='regression', metric='root_mean_squared_error', num_leaves=5,
boost_from_average=True,
learning_rate=0.05, bagging_fraction=0.99, feature_fraction=0.99, max_depth=-1,
num_rounds=10000, min_data_in_leaf=10, boosting='dart')
},
"metrics": ["r2_score", "mean_squared_error"],
"predict": {
"test": {"features": test_dataframe}
}
}
model_index_list, save_models_dir, y_test = flow.training(parameters_lighgbm)
parameters_xgboost = {
"data": {
"train": {"features": train_dataframe, "target": train_target.to_numpy()},
},
"split": {
"method": "kfold", # "method":"kfold"
"fold_nr": 5, # fold_nr:5 , "split_ratios": 0.3 # "split_ratios":(0.3,0.2)
},
"model": {"type": "xgboost",
"hyperparameters": {'max_depth': 5, 'eta': 1, 'eval_metric': "rmse"}
},
"metrics": ["r2_score", "mean_squared_error"],
"predict": {
"test": {"features": test_dataframe}
}
}
model_index_list, save_models_dir, y_test = flow.training(parameters_xgboost)
```
# UROP Code Stuff
#### The python translation of doit.pl
## Initializing
### Since this is a notebook, argv doesn't work as it normally would. Instead, set the values in this block
```
import sys
import numpy as np
sys.argv = ['doit.py', 1]
with open('framedist_vals', 'r') as f:
framedist = np.array(f.readline().split(), int) # the random values from Perl,
# used to make sure I get the same result across the two implementations
```
### Actual code starts here including proper command line argument usage
```
import numpy as np
from skimage import measure
from astropy.io import fits
from astropy.table import Table
import sys, glob
import datetime
import re
num_args = len(sys.argv) - 1
if num_args == 1:
    start_run = stop_run = int(sys.argv[1])
    run_str = f"### Doing run {start_run}"
elif num_args == 2:
    # cast to int before sorting so runs aren't compared lexicographically
    start_run, stop_run = sorted(int(a) for a in sys.argv[1:3])
    run_str = f"### Doing runs {start_run} to {stop_run}"
else:
    sys.exit(f"Usage: {sys.argv[0]} <start_run> [<stop_run>]")
date = datetime.datetime.now().astimezone()
date_string = date.strftime("%a %d %b %Y %I:%M:%S %p %Z").rstrip()
print("############################################################")
print(f"### Started {sys.argv[0]} on {date_string}")
print(run_str)
#//line 99
# define defaults
evtth = .1 # 100 eV for WFI Geant4 simulations
splitth = evtth # same as evtth for WFI Geant4 simulations
npixthresh = 5 # minimum number of pixels in a blob
mipthresh = 15. # minimum ionizing particle threshold in keV
clip_energy = 22. # maximum pixel value reported by WFI, in keV
skip_writes = -1 # writes FITS images for every skip_writes primary;
# set to -1 to turn off writes
evperchan = 1000. # why not? PHA here is really PI
mipthresh_chan = mipthresh * 1000. / evperchan # minimum ionizing particle threshold in PHA units
spec_maxkev = 100.
numchans = int((spec_maxkev*1000.) / evperchan)
gain_intercept = 0. # use this for Geant4 data
gain_slope = 1000. # use this for Geant4 data (pixel PH units are keV)
#gain_intercepts = (0, 0, 0, 0) # in ADU
#gain_slopes = (1000, 1000, 1000, 1000) # in eV/ADU
# rate and frame defaults
proton_flux = 4.1 * 7./5. # protons/s/cm2; 7/5 accounts for alphas, etc.
sphere_radius = 70. # radius of boundary sphere in cm
num_protons_per_run = 1.e6 # number of proton primaries
# in a simulation run (from Jonathan)
# his email said 1e7, but looking at the
# input files it's really 1e6!!!
detector_area = 4. * (130.e-4 * 512.)**2 # detector area in cm2,
# assuming 130 um pixels
texp_run = num_protons_per_run/3.14159/proton_flux/(sphere_radius**2)
# total exposure time in sec for this run
texp_frame = .005 # frame exposure time in sec
mu_per_frame = num_protons_per_run * texp_frame / texp_run
# minimum ionizing pa
print(f"### There are {num_protons_per_run} primaries in this run.")
print(f"### The run exposure time is {texp_run} sec.")
print(f"### The frame exposure time is {texp_frame} sec")
print(f"### for an expected mean of {mu_per_frame} primaries per frame.")
"""
Comments...
"""
epicpn_pattern_table = np.array([
0, 13, 3, 13, 13, 13, 13, 13, 4, 13, 8, 12, 13, 13, 13, 13,
2, 13, 7, 13, 13, 13, 11, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
1, 13, 13, 13, 13, 13, 13, 13, 5, 13, 13, 13, 13, 13, 13, 13,
6, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 9, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
10, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13
])
# hash of codes indexed by particle type indexed
ptypes = {
'proton': 0,
'gamma': 1,
'electron': 2,
'neutron': 3,
'pi+': 4,
'e+': 5,
'pi-': 6,
'nu_mu': 7,
'anti_nu_mu': 8,
'nu_e': 9,
'kaon+': 10,
'mu+': 11,
'deuteron': 12,
'kaon0L': 13,
'lambda': 14,
'kaon-': 15,
'mu-': 16,
'kaon0S': 17,
'alpha': 18,
'anti_proton': 19,
'triton': 20,
'anti_neutron': 21,
'sigma-': 22,
'sigma+': 23,
'He3': 24,
'anti_lambda': 25,
'anti_nu_e': 26,
'anti_sigma-': 27,
'xi0': 28,
'anti_sigma+': 29,
'xi-': 30,
'anti_xi0': 31,
'C12': 32,
'anti_xi-': 33,
'Li6': 34,
'Al27': 35,
'O16': 36,
'Ne19': 37,
'Mg24': 38,
'Li7': 39,
'He6': 40,
'Be8': 41,
'Be10': 42,
'unknown': 99
}
# initialize rng
rng = np.random.RandomState(1234)
# temporary variables
x, y = 0, 0
"""
Comments...
line 246
"""
# 0-511; indexed by detector number 0-3
x_offset = 513
y_offset = 513
imgsize = 1027
actmin = 0
actmax = 1026
xydep_min = -513
xydep_max = 513
def match(regex: str, string: str):
return re.compile(regex).search(string)
def indexND(array, ind, value=None):
if value is None:
return array[ind[1],ind[0]]
array[ind[1],ind[0]] = value
def wfits(data, fitsfile, ow=True, hdr=None):
if isinstance(data, tuple):
t = Table(data[1], names = data[0])
hdu = fits.table_to_hdu(t)
if hdr:
for key, val in hdr.items():
hdu.header[key] = val
else:
hdu = fits.PrimaryHDU()
hdu.data = data
hdu.writeto(fitsfile, overwrite = ow)
def which(condition):
if condition.ndim != 1:
condition = condition.flat
return np.where(condition)[0]
def whichND(condition):
return np.array(np.where(condition)[::-1])
def which_both(condition):
return which(condition), which(condition == False)
```
## Main Loop
```
#######################################
# Main loop.
# Step through Geant4 output data files.
# For each one, create a frame per primary,
# find blobs and MIPs, and then find events in that frame.
# Change start_run to parallelize things (i.e. do chunks of 10 runs
# in parallel).
# delete variables later
for this_run in range(start_run, stop_run + 1):
# see if there are files for this run
infiles = glob.glob(f"input/{this_run}_detector?")
if len(infiles) != 4:
print(f"### Found something other than 4 datafiles for {this_run}, skipping.")
continue
# initialize event piddles, which will be written out or used later
runid = np.zeros(0, dtype=int)
detectorid = np.zeros(0, dtype=int)
primid = np.zeros(0, dtype=int)
actx = np.zeros(0, dtype=int)
acty = np.zeros(0, dtype=int)
phas = np.zeros((0,25), dtype=float)
pha = np.zeros(0, dtype=float)
ptype = np.zeros(0, dtype=int)
energy = np.zeros(0, dtype = float) # in keV
evttype = np.zeros(0, dtype=int)
blobdist = np.zeros(0, dtype=float)
mipdist = np.zeros(0, dtype=float)
pattern = np.zeros(0, dtype=int)
vfaint = np.zeros(0, dtype=int)
# assign frames and times to each primary
# to start, assume mean of one primary per second
evt_time = np.zeros(0, dtype=float)
evt_frame = np.zeros(0, dtype=int)
pix_time = np.zeros(0, dtype=float)
pix_frame = np.zeros(0, dtype=int)
# initialize structures to hold the secondary particle columns
# piddles for the numeric columns; these are enough for now
run = np.zeros(0, dtype=int) # Geant4 run (* in *_detector[0123])
detector = np.zeros(0, dtype=int) # WFI detector (? in *_detector?)
eid = np.zeros(0, dtype=int) # primary ID
particleid = np.zeros(0, dtype=int) # interacting particle ID
parentid = np.zeros(0, dtype=int) # don't really need probably
# initialize piddles to hold the energy deposition (per pixel) columns
# some of this will be written out the pixel list
xdep = np.zeros(0, dtype=int)
ydep = np.zeros(0, dtype=int)
endep = np.zeros(0, dtype=float)
rundep = np.zeros(0, dtype=int)
detectordep = np.zeros(0, dtype=int)
eiddep = np.zeros(0, dtype=int)
framedep = np.zeros(0, dtype=int)
piddep = np.zeros(0, dtype=int)
ptypedep = np.zeros(0, dtype=int)
cprocdep = np.zeros(0, dtype=int)
blobid = np.zeros(0, dtype=int)
# initialize piddles to hold frame-specific things to go in FITS table
frame_frame = np.zeros(0, dtype=int)
frame_time = np.zeros(0, dtype=float)
frame_runid = np.zeros(0, dtype=int)
frame_npix = np.zeros(0, dtype=int)
frame_npixmip = np.zeros(0, dtype=int)
frame_nevt = np.zeros(0, dtype=int)
frame_nevtgood = np.zeros(0, dtype=int)
frame_nevt27 = np.zeros(0, dtype=int)
frame_nblob = np.zeros(0, dtype=int)
frame_nprim = np.zeros(0, dtype=int)
# initialize piddles to hold blob-specific things to go in FITS table
blob_frame = np.zeros(0, dtype=int)
blob_blobid = np.zeros(0, dtype=int)
blob_cenx = np.zeros(0, dtype=float)
blob_ceny = np.zeros(0, dtype=float)
blob_cenxcl = np.zeros(0, dtype=float)
blob_cenycl = np.zeros(0, dtype=float)
blob_npix = np.zeros(0, dtype=int)
blob_energy = np.zeros(0, dtype=float)
blob_energycl = np.zeros(0, dtype=float)
# initialize things for the running frames which we will
# randomly populate
# frame settings
# we know there are $num_protons_per_run, so generate enough
# random frames to hold them
##framedist = rng.poisson(mu_per_frame, int(2*num_protons_per_run/mu_per_frame))
cumframedist = framedist.cumsum()
# get the total number of frames needed to capture all the primaries;
# will write this to FITS header so we can combine runs
numtotframes = which(cumframedist >= num_protons_per_run)[0] + 1
# this is wrong, because it will remove the last bin which we need
# it's also unnecessary
#cumframedist = cumframedist[cumframedist <= num_protons_per_run]
# running variables
numevents = 0
numtotblobs = 0
# loop through the four quadrant data files for this run
# now combine all four, since single primary can produce signal
# in multiple quadrants
for infile in infiles:
print(f"### Reading {infile}")
rc = match('[0-9]+_detector([0-9]+)', infile) #extracts the detector name
this_detector = int(rc.group(1))
ptype = {}
cproc = {}
with open(infile, 'r') as IN:
# step through the input file and accumulate primaries
for line in IN: #switched to a for loop because of built-in __iter__ method
if match(r'^\s*#', line): # skip comments (allow arbitrary whitespace before '#')
continue
if match(r'^\s*$', line): # skip blank lines
continue
if not match(',', line): # could be: if ',' not in line
continue
fields = line.rstrip().split(',')
if match('[a-zA-Z]', fields[0]): # if the first column is a string, then this is a particle line
# retain the primary for this interaction
this_eid = int(float(fields[1]))
eid = np.append(eid, this_eid)
# particle type and interaction type are hashes so
# that the pixel-specific read can pick them up
# doesn't matter if the particle ID is re-used from
# primary to primary, since this will reset it
ptype[int(fields[2])] = fields[0]
cproc[int(fields[2])] = fields[4]
else: # if the first column is a number, then this is a pixel hit line
#print(fields)
if float(fields[2]) <= splitth: # skip it if less than split threshold is deposited,
continue #since that is ~ the lower threshold of pixels we'll get
tmp_x, tmp_y = int(fields[0]), int(fields[1])
if tmp_x<xydep_min or tmp_y<xydep_min or tmp_x>xydep_max or tmp_y>xydep_max:
continue # skip it if it's outside the 512x512 region of a quad
xdep = np.append(xdep, tmp_x)
ydep = np.append(ydep, tmp_y)
endep = np.append(endep, float(fields[2]))
rundep = np.append(rundep, this_run)
detectordep = np.append(detectordep, this_detector)
eiddep = np.append(eiddep, this_eid)
framedep = np.append(framedep, 0)
piddep = np.append(piddep, int(fields[3]))
# %ptype is hash of particle type strings indexed by the id
# %ptypes is (constant) hash of my own particle type IDs indexed
# by the string (confused yet?)
ptypedep = np.append(ptypedep, ptypes[ptype.get(int(fields[3]), 'unknown')])
blobid = np.append(blobid, 0)#np.zeros(0, dtype = int));
#print(f"### piddep = {fields[3]}")
#print(f"### ptype[piddep] = {ptype[fields[3]]}")
#print(f"### ptypes[ptype[piddep]] = {ptypes[ptype[fields[3]]]}")
# done loop through quadrant data files for this run
uniq_eid = np.unique(eid)
numprimaries = uniq_eid.size
primary_flux = numprimaries / texp_run / detector_area
numpixels = endep.size
print(f"### Run {this_run}: found {numprimaries} primaries that interacted.")
print(f"### Run {this_run}: that's {primary_flux} protons per sec per cm2.")
print(f"### Run {this_run}: found {numpixels} pixels with deposited energy")
# loop through unique primaries to sort them into frames
# also figure out the number of primaries with signal in
# multiple quadrants, just as a diagnostic
num_in_different_quadrants = [0, 0, 0, 0]
"""alternate
for primary in uniq_eid:
"""
for i in range(numprimaries):
primary = uniq_eid[i]
indx = which(eiddep==primary)
#print("#####################")
#print(f"Doing primary {primary})"
#print(f"indx: {indx}")
#print(f"eiddep: {eiddep[indx]}")
#print(f"OLD framedep {framedep[indx]}")
# assign each primary to a frame
# first get the frame ID (indexed starts at 0)
#print(cumframedist[-1])
frame = which(cumframedist >= primary)[0]
#print(f"THIS IS FRAME {frame}")
# then set the primary and pixel piddles
framedep[indx] = frame
#print(f"NEW framedep framedep[indx]")
num_quadrants = np.unique(detectordep[indx]).size
num_in_different_quadrants[num_quadrants-1] += 1
print(f"### Run {this_run}: number of primaries in 1 2 3 4 quads: {num_in_different_quadrants}")
# min and max X,Y values
# figure out the unique frames that are populated by
# primaries that interacted
frame_frame = np.append(frame_frame, np.unique(framedep))
numframes = frame_frame.size
# now we can append to frame piddles since we know how many there are
frame_runid = np.append(frame_runid, np.zeros(numframes) + this_run)
frame_time = np.append(frame_time, np.zeros(numframes, dtype=float))
frame_npix = np.append(frame_npix, np.zeros(numframes))
frame_npixmip = np.append(frame_npixmip, np.zeros(numframes))
frame_nevt = np.append(frame_nevt, np.zeros(numframes))
frame_nevtgood = np.append(frame_nevtgood, np.zeros(numframes))
frame_nevt27 = np.append(frame_nevt27, np.zeros(numframes))
frame_nblob = np.append(frame_nblob, np.zeros(numframes))
frame_nprim = np.append(frame_nprim, np.zeros(numframes))
pct_interact = 100. * numprimaries / num_protons_per_run
print(f"### Run {this_run}: generated {numtotframes} total frames,")
print(f"### of which {numframes} frames with the {numprimaries}")
print("### interacting primaries will be written.")
print(f"### {num_protons_per_run} total primaries were simulated.")
print(f"### {numprimaries} or {pct_interact}% of these produced a WFI interaction.")
# loop through frames and make a raw frame for each
ptype = np.zeros(0, dtype=int)
for i in range(numframes):
# set the frame ID
frame = frame_frame[i]
# set the frame time
frame_time[i] = frame * texp_frame
# keep us updated
if i % 100 == 0:
print(f"### Run {this_run}: done {i} of {numframes} frames")
# are we writing out?
if skip_writes > 0 and i % skip_writes == 0:
writeit = 1
else:
writeit = 0
#############################################
# make a raw frame
#x = np.zeros(0, float)
#y = np.zeros(0, float)
#en = np.zeros(0, float)
#pixptype = np.zeros(0, float)
pixel_indx = which(framedep==frame)
x = np.copy(xdep[pixel_indx])
y = np.copy(ydep[pixel_indx])
en = np.copy(endep[pixel_indx])
pixptype = np.copy(ptypedep[pixel_indx])
pixprimid = np.copy(eiddep[pixel_indx])
# we want to populate this so don't sever it
"""comments..."""
xoff = np.amin(x) + x_offset - 2
yoff = np.amin(y) + y_offset - 2
x -= np.amin(x)
x += 2
y -= np.amin(y)
y += 2
xdim = np.amax(x) + 3
ydim = np.amax(y) + 3
#line 542
# img is the energy (pulseheight) image
# ptypeimg is an image encoding the particle type responsible
# primidimg is an image encoding the primary responsible
# for each pixel
img = np.zeros((ydim, xdim), float)
ptypeimg = np.zeros((ydim, xdim), float)
primidimg = np.zeros((ydim, xdim), float)
img_xvals = np.fromfunction(lambda x,y: y, (ydim, xdim), dtype = int)
img_yvals = np.fromfunction(lambda x,y: x, (ydim, xdim), dtype = int)
#for j in range(en.size):
# img[x[j], y[j]] += en[j]
# ptypeimg[x[j], y[j]] += en[j]
# better way to do the above mapping to an image without a loop
# indexND wants a 2xN piddle, where N is the number of pixels
coos = np.array((x,y))
indexND(img, coos, en)
indexND(ptypeimg, coos, pixptype)
indexND(primidimg, coos, pixprimid)
#print(pixprimid, x, y, en, sep='\n')
#print(img)
# add to some frame piddles
frame_npix[i] = pixel_indx.size
frame_npixmip[i] = which(en >= mipthresh).size
frame_nprim[i] = np.unique(pixprimid).size
# write out the frame as FITS image
if writeit:
fitsfile = f"output/run{this_run}_frame{frame}_img.fits"
print(f"### Writing raw image {fitsfile}.")
wfits(img, fitsfile)
#############################################
# find blobs
# segment image into blobs
# original IDL code
# blobRegions=label_region(phimg * (phimg GE evtthresh), /ulong)
# NLR PDL 'cc8compt' is equivalentish to IDL 'label_regions'
blobimg = measure.label(img > evtth, connectivity=2)
# the first blob is 1, not 0 (which indicates no blob)
# blobadjust decrements blob IDs when we chuck one, so the IDs are continuous from 1
blobadjust = 0
for j in range(1, np.amax(blobimg) + 1):
indx = which(blobimg == j)
#indx2d = whichND(blobimg == j) dsn
# set the blob to zeros and skip it if there are too few elements
# if it contains a MIP, it's a good blob regardless
if (indx.size < npixthresh and np.amax(img.flat[indx]) < mipthresh):
blobimg.flat[indx] = 0
blobadjust += 1
continue
blobimg.flat[indx] -= blobadjust
# this is the running blobid which we need to add to blob piddles
blob_blobid = np.append(blob_blobid, numtotblobs + j - blobadjust)
blob_frame = np.append(blob_frame, frame)
blob_npix = np.append(blob_npix, indx.size)
# calculate unclipped blob centroid and summed energy
tmp_en = img.flat[indx]
tmp_toten = np.sum(tmp_en)
tmp_wtd_x = img_xvals.flat[indx] * tmp_en
tmp_wtd_y = img_yvals.flat[indx] * tmp_en
blob_cenx = np.append(blob_cenx, xoff + tmp_wtd_x.sum() / tmp_toten)
blob_ceny = np.append(blob_ceny, yoff + tmp_wtd_y.sum() / tmp_toten)
blob_energy = np.append(blob_energy, tmp_toten)
# calculate clipped blob centroid and summed energy
tmp_en = np.clip(tmp_en, None, clip_energy)
tmp_toten = tmp_en.sum()
tmp_wtd_x = img_xvals.flat[indx] * tmp_en
tmp_wtd_y = img_yvals.flat[indx] * tmp_en
blob_cenxcl = np.append(blob_cenxcl, xoff + tmp_wtd_x.sum() / tmp_toten)
blob_cenycl = np.append(blob_cenycl, yoff + tmp_wtd_y.sum() / tmp_toten)
blob_energycl = np.append(blob_energycl, tmp_toten)
# record number of blobs in this frame
frame_nblob[i] = np.amax(blobimg)
# if we found some blobs, change their IDs so they reflect the running total
# for this run, increase that running blob number
blobimg[blobimg > 0] += numtotblobs
numtotblobs = np.amax(blobimg) if np.amax(blobimg) > 0 else numtotblobs
# mark each pixel in a blob
blobid[pixel_indx] = indexND(blobimg, coos)
# reset the blobimg to ones where there are blobs, zeros
# otherwise; this way all the blobs have distance 0 automagically
blobimg[blobimg > 0] = 1
#############################################
# find MIPs
# set up some piddles
#mip_xy = np.zeros((2, 0), float);
# segment image into mips
mipimg = np.copy(img)
mipimg[mipimg < mipthresh] = 0
# reset the mipimg to ones where there are mips, zeros
# otherwise; this way all the mips have distance 0 automagically
mipimg[mipimg > 0] = 1
#############################################
# find events
# based on Bev's 'findev.pro' rev3.1, 2011-01-26
# get indices and number of pixels with PH above event threshold
# need both 1D and 2D indices, latter to ease local max finding
indx_thcross = which(img > evtth)
indx2d_thcross = whichND(img > evtth)
num_thcross = indx_thcross.size
#line 676
if num_thcross > 0:
# get piddles containing X,Y coords of threshold crossings
evtx = indx2d_thcross[0]
evty = indx2d_thcross[1]
"""comments"""
tmp_evtx = evtx + xoff
tmp_evty = evty + yoff
indx_border, indx_notborder = which_both(
(tmp_evtx<=1) | ((tmp_evtx>=510) & (tmp_evtx<=516)) | (tmp_evtx>=1025) |
(tmp_evty<=1) | ((tmp_evty>=510) & (tmp_evty<=516)) | (tmp_evty>=1025) |
(img[evty,evtx] < 0)
)
num_notborder = indx_notborder.size
if num_notborder > 0:
evtx = evtx[indx_notborder]
evty = evty[indx_notborder]
indx_thcross = indx_thcross[indx_notborder]
# find local maxima
# make a copy of the threshold crossing piddle, which we will
# subsitute with the largest neighbor value and then compare to the original
localmax = np.copy(img)
"""comments..."""
for xi,yi in [(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1),(-1,-1)]:
indx = which(img[evty + yi,evtx + xi] > localmax[evty,evtx])
localmax.flat[indx_thcross[indx]] = img[evty + yi, evtx + xi].flat[indx]
# finally compare the original central pixel pulseheight
# with the maximum neighbor; if the former is greater than
# or equal to the latter, it's a local maximum
indx_localmax = which(img.flat[indx_thcross] >= localmax.flat[indx_thcross])
num_localmax = indx_localmax.size
#line 748
if (num_localmax > 0):
evtx = evtx[indx_localmax]
evty = evty[indx_localmax]
indx_thcross = indx_thcross[indx_localmax]
# get 3x3 PHAS piddle in correct
# 1D order (central pixel first)
evt_phas = np.zeros((num_localmax, 25), float)
phas_order = iter(range(1, 25))
evt_phas[:,0] = img[evty, evtx]
for yi in range(-1,2):
for xi in range(-1,2):
if (xi, yi) != (0,0):
evt_phas[:,next(phas_order)] = img[evty + yi,evtx + xi]
# get outer 5x5 PHAS piddle
# first pad the image with zeros all around
tmp_img = np.pad(img, 1)
# now evtx and evty are too low by one, so correct for that in index
for yi in range(-1,4):
for xi in range(-1,4):
if not(-1 < xi < 3 and -1 < yi < 3):
evt_phas[:, next(phas_order)] = tmp_img[evty + yi,evtx + xi]
# set the particle type for all these events
# based on whatever ptype produced the local max
evt_ptype = np.copy(ptypeimg[evty,evtx])
evt_primid = np.copy(primidimg[evty,evtx])
else:
continue
else:
continue
else:
continue
## get X,Y coords of any pixels in a blob (don't matter which blob)
indx_blobs = whichND(blobimg > 0).T
## get X,Y coords of any pixels in a mip (don't matter which mip)
indx_mips = whichND(mipimg > 0).T
numevents_thisframe = num_localmax
#print(f"### Found {numevents_thisframe} events, processing.")
# process the detected events
# append events to running piddles
runid = np.append(runid, np.zeros(numevents_thisframe, float) + this_run)
detectorid = np.append(detectorid, np.zeros(numevents_thisframe, float) + this_run) # NOTE: this appends the run ID, not a per-event detector ID
primid = np.append(primid, evt_primid)
evt_frame = np.append(evt_frame, np.zeros(numevents_thisframe, float) + frame)
#actx = np.append(actx, np.zeros(numevents_thisframe, float))
#acty = acty->append(zeros(long, $numevents_thisframe))
#phas = phas->glue(1, zeros(long, 25, $numevents_thisframe))
actx = np.append(actx, evtx)
acty = np.append(acty, evty)
phas = np.concatenate([phas, evt_phas])
pha = np.append(pha, np.zeros(numevents_thisframe))
ptype = np.append(ptype, evt_ptype)
evttype = np.append(evttype, np.zeros(numevents_thisframe))
energy = np.append(energy, np.zeros(numevents_thisframe))
blobdist = np.append(blobdist, np.zeros(numevents_thisframe))
mipdist = np.append(mipdist, np.zeros(numevents_thisframe))
pattern = np.append(pattern, np.zeros(numevents_thisframe))
vfaint = np.append(vfaint, np.zeros(numevents_thisframe))
# step through all events to determine
# EPIC-pn pattern, summed PHA, VFAINT flag, minimum distance to a
# blob and mip.
for j in range(numevents, numevents+numevents_thisframe):
# below is already done in event finding and pasted above
# get X,Y of center pixel
x = actx[j]
y = acty[j]
# below is deprecated, since we've assumed this in event finding
## eliminate events on edges so we can use 5x5
#next if ($x<2 or $x>1021 or $y<2 or $y>1021);
# get ACIS flt grade; "which" returns the indices of
# non-central pixels greater than or equal to event threshold,
# these are used as bits to raise 2 to the power, and summed
# (this can probably be removed from the loop if I'm smart)
indx = which(phas[j,1:9] >= splitth)
fltgrade = pow(2,indx).sum()
# convert to EPIC-pn pattern (from look-up table)
pattern[j] = epicpn_pattern_table[fltgrade]
# sum 3x3 pixels over split threshold to get PHA
# this is an ACIS PHA, not Suzaku
# (this can probably be removed from the loop if I'm smart)
pha[j] = phas[j][phas[j] >= splitth].sum()
# apply gain correction for this node
# get the gain parameters from the node (0-3)
#print(f"{phas[j]} {gain_intercept} {gain_slope} {pha[j]}")
pha[j] = (pha[j] - gain_intercept) * gain_slope / evperchan
# convert pha to energy
# (this can probably be removed from the loop if I'm smart)
energy[j] = pha[j] * evperchan / 1000
#print(f"{(phas[j])} {gain_intercept} {gain_slope} {pha[j]} {energy[j]}")
# perform simple VFAINT filtering; also update the EPIC-pn pattern
# of doubles, triples, and quads based on it
# get outer 16 pixels and if any are above split flag it
noutermost = which(phas[j, 9:25] > splitth).size #maybe don't need which
if noutermost > 0:
vfaint[j] = 1
# EDM Fri Jan 4 11:24:20 EST 2019
# Reinstated the 5x5 filtering on PATTERN.
# EDM Thu May 31 13:46:28 EDT 2018
# change to remove 5x5 criterion on EPIC-pn patterns
# for doubles, triples, quads
if pattern[j] > 0:
pattern[j] = 13
# get minimum distance to a blob
# first find delta X,Y from this event to list of blob pixels
delta_blobs = indx_blobs - np.array([x,y])
#print(delta_blobs)
# square that, sum it, square root it to get the distance
if delta_blobs.shape[0] > 0:
blobdist[j] = np.amin(np.sqrt(np.square(delta_blobs).sum(axis = 1)))
# unless there aren't any blobs, in which case set blobdist to -1
else:
blobdist[j] = -1
#print(f"{blobdist[j]}")
# get minimum distance to a mip
# first find delta X,Y from this event to list of mip pixels
delta_mips = indx_mips - np.array([x,y])
#print $delta_mips;
# square that, sum it, square root it to get the distance
if delta_mips.shape[0] > 0:
mipdist[j] = np.amin(np.sqrt(np.square(delta_mips).sum(axis = 1)))
# unless there aren't any mips, in which case set mipdist to -1
else:
mipdist[j] = -1
#print(f"{mipdist[j]}")
# we really want ACTX,Y in real WFI coords in the event list,
# so fix that here;
actx[j] += xoff
acty[j] += yoff
# add info to frame piddles
frame_nevt[i] += 1
if energy[j] < mipthresh:
frame_nevtgood[i] += 1
if energy[j] >= 2 and energy[j] < 7:
frame_nevt27[i] += 1
# increment the number of events and hit the next frame
numevents += numevents_thisframe
# done loop through frames
# segregate the reds and greens as defined now by EPIC-pn pattern
# reds are bad patterns only
indx_goodpatterns, indx_badpatterns = which_both(pattern < 13)
# cyans are singles and doubles, which are "best" of the good
indx_goodbest, indx_badbest = which_both(pattern < 5)
# combine indices for the filters we want via intersection
indx_reds = indx_badpatterns
indx_greens = np.intersect1d(indx_goodpatterns, indx_badbest)
indx_cyans = np.intersect1d(indx_goodpatterns, indx_goodbest)
# determine (event) count rate in cts/cm2/s/keV in the important band
indx_goodenergy = which((energy > 2) & (energy <= 7))
flux_goodenergy = indx_goodenergy.size / texp_run / detector_area / 5.
evttype[indx_reds] = 3
evttype[indx_greens] = 4
evttype[indx_cyans] = 6
#print(f"red maskcodes: {maskcode[indx_reds]}")
#print(f"green maskcodes: {maskcode[indx_greens]}")
#print(f"indx_reds: {indx_reds}")
print(f"### Run {this_run}: number of reds: {indx_reds.size}")
print(f"### Run {this_run}: number of greens: {indx_greens.size}")
print(f"### Run {this_run}: number of cyans: {indx_cyans.size}")
print(f"### Run {this_run}: 2-7 keV count rate: {flux_goodenergy} cts/cm2/s/keV")
hdr = {'NFRAMES': numtotframes, 'NBLOBS': numtotblobs}
# write out the event list
print(f"### Writing event list for run {this_run}.")
outevtfile = f"output/geant_events_{this_run}_evt.fits"
wfits(
(['ACTX','ACTY','BLOBDIST','DETID','ENERGY','EVTTYPE','FRAME','MIPDIST','PATTERN','PHA','PHAS','PRIMID','PTYPE','RUNID','VFAINT'],
[actx, acty, blobdist, detectorid, energy, evttype, evt_frame, mipdist, pattern, pha, phas, primid, ptype, runid, vfaint]),
outevtfile, hdr=hdr)
# write out pixel list
print(f"### Writing pixel list for run {this_run}.")
outpixfile = f"output/geant_events_{this_run}_pix.fits"
xdep += x_offset
ydep += y_offset
indx_sorted = np.argsort(framedep)
wfits(
(['ACTX', 'ACTY', 'BLOBID', 'ENERGY', 'FRAME', 'PRIMID', 'PTYPE', 'RUNID'],
[xdep[indx_sorted], ydep[indx_sorted], blobid[indx_sorted], endep[indx_sorted], framedep[indx_sorted], eiddep[indx_sorted], ptypedep[indx_sorted], rundep[indx_sorted]]),
outpixfile, hdr=hdr)
# write out frame list
print(f"### Writing frame list for run {this_run}.")
outframefile = f"output/pgeant_events_{this_run}_frames.fits"
indx_sorted = np.argsort(frame_frame)
wfits(
(['FRAME', 'NBLOB', 'NEVT', 'NEVT27', 'NEVTGOOD', 'NPIX', 'NPIXMIP', 'NPRIM', 'RUNID', 'TIME'],
[frame_frame[indx_sorted], frame_nblob[indx_sorted], frame_nevt[indx_sorted], frame_nevt27[indx_sorted], frame_nevtgood[indx_sorted], frame_npix[indx_sorted], frame_npixmip[indx_sorted], frame_nprim[indx_sorted], frame_runid[indx_sorted], frame_time[indx_sorted]]),
outframefile, hdr = hdr)
# write out blob list
print(f"### Writing blob list for run {this_run}.")
outblobfile = f"output/pgeant_events_{this_run}_blobs.fits"
indx_sorted = np.argsort(blob_blobid)
wfits(
(['BLOBID', 'CENX', 'CENXCL', 'CENY', 'CENYCL', 'ENERGY', 'ENERGYCL', 'FRAME', 'NPIX'],
[blob_blobid[indx_sorted], blob_cenx[indx_sorted], blob_cenxcl[indx_sorted], blob_ceny[indx_sorted], blob_cenycl[indx_sorted], blob_energy[indx_sorted], blob_energycl[indx_sorted], blob_frame[indx_sorted], blob_npix[indx_sorted]]),
outblobfile, hdr = hdr)
print(f"### Finished run {this_run}.")
# done loop through runs
raise SystemExit # deliberate stop here; a bare `raise` outside an except block raises RuntimeError
```
**Progress**
* Finished the first version of the code
* Still need to:
* Fix the types of fits files
* Do optimizations
* Did the presentation
**Questions**
* Thursday Presentation
* What is the knowledge level of the presentees?
* Can I show more than just the poster?
* How formal should it be?
* What should I do with all my work?
* Future for AMS?
* Should we meet after you get back?
**Takeaways**
* Prioritize optimization
* UROP page for report
* Put code in Dropbox
* Eric will make a repository on GitHub
* Will set up a time next week
## Thoughts/to-do
**Optimization**
* Use sets instead of appending to arrays <br>
* Use lists instead of arrays when appending constantly <br>
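As a rough sketch of the second point: `np.append` copies the whole array on every call, so appending in a loop is O(n²) overall, while accumulating in a Python list and converting once at the end is amortized O(n). A minimal comparison (function names are illustrative):

```python
import numpy as np

def collect_with_append(n):
    # np.append returns a new array each call: O(n^2) total copying
    out = np.zeros(0, dtype=int)
    for i in range(n):
        out = np.append(out, i)
    return out

def collect_with_list(n):
    # list.append is amortized O(1); convert to an array once at the end
    out = []
    for i in range(n):
        out.append(i)
    return np.array(out, dtype=int)

# both produce the same result; the list version is far faster for large n
assert np.array_equal(collect_with_append(1000), collect_with_list(1000))
```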
**Reminders**
* Indices are reversed from Perl to Python, and flattening should be C-style
* `ptype` has two definitions: piddle and hash
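To illustrate the index-order reminder: NumPy indexes `[row, col]`, the reverse of the `(x, y)` order used in the PDL code, and `.flat` traverses in C (row-major) order. A minimal check:

```python
import numpy as np

img = np.arange(12).reshape(3, 4)  # 3 rows (y), 4 columns (x)
x, y = 1, 2                        # PDL-style (x, y) coordinates
assert img[y, x] == 9              # NumPy wants [row, col], i.e. [y, x]
# C-style (row-major) flattening: flat index = y * ncols + x
assert img.flat[y * img.shape[1] + x] == img[y, x]
```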
**Open Questions**
* Why is the syntax different on Iperl for indexing?
## Other
```
[(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1),(-1,-1)]
([-1,-1,0,1,1,1,0,-1,-1,-1][i:i+3:2] for i in range(8))
from math import atan2, cos, sin, pi  # needed by the neighbor-rotation demo below
i, j = (-1, 0)
for k in range(8):
print(i,j)
theta = atan2(j,i) - pi/4
i, j = round(cos(theta)), round(sin(theta))
img = np.fromfunction(lambda x,y:10*x + y,(10,10), dtype=int)
x, y = img.shape
ind = 56
img[ind//x,ind - ind//x * x] #use
img.reshape(-1)[ind] #get a 1d view
numpy.ravel
a.flat
import numpy as np
import matplotlib.pyplot as plt
from astropy.visualization import astropy_mpl_style
from astropy.utils.data import get_pkg_data_filename
from astropy.io import fits
def plot(fitsfile, exp=False):
plt.style.use(astropy_mpl_style)
image_data = fits.getdata(fitsfile, ext=0)
if exp:
print("expanded")
expand(image_data, 10, 100)
plt.figure()
plt.imshow(image_data, cmap='gray_r')
a = 'output/run1_frame0_img.fits'
plot(a)
print("#file name\n#eid,pid\n")
for infile, items in unk_info.items():
print(infile[6:])
for item in items:
print(f"{item[0]},{item[1]}")
print()
import numpy as np
class Parray():
def __init__(self, npc):
self.npc = np.array(npc)
def __getitem__(self, key):
return self.npc[:,key]
def __str__(self):
return str(self.npc)
def __repr__(self):
return repr(self.npc)
import numpy as np
class Farray(np.ndarray):
def __new__(cls, data):
shape = (len(data), len(data[0]))
print(shape)
return super().__new__(cls, shape, buffer=np.array(data), dtype=int)
def balog(self):
print("Baloooog")
def __getitem__(self, ind):
print(ind)
if isinstance(ind, slice):
ind = slice(ind.stop, ind.start, -1 * ind.step if ind.step else -1)
elif not isinstance(ind, int):
ind = ind[::-1]
return super().__getitem__(ind)
def ind(self, key):
return self[key[::-1]]
#return True
#def __getitem__(self, key):
# return self[:,key]
def farray(data):
shape = (len(data), len(data[0]))
#shape = (3,1)
return Farray(shape, buffer=np.array(data), dtype=int)
```
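As an aside on the flat-index arithmetic in the scratch block above, NumPy already provides `np.unravel_index` for exactly this conversion; a minimal sketch:

```python
import numpy as np

img = np.fromfunction(lambda x, y: 10 * x + y, (10, 10), dtype=int)

ind = 56
# np.unravel_index turns a flat C-order index into per-axis coordinates,
# replacing the manual ind // ncols, ind % ncols arithmetic.
row, col = np.unravel_index(ind, img.shape)
assert (row, col) == (5, 6)
assert img[row, col] == img.reshape(-1)[ind]  # reshape(-1) is a 1-D view
```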
## Perl to Python
These are some code snippets I used to help me convert between some of the syntax of the two languages.
```
text ="""$xdep = $xdep->append($tmp_x);
$ydep = $ydep->append($tmp_y);
$endep = $endep->append($fields[2]);
$rundep = $rundep->append($this_run);
$detectordep = $detectordep->append($this_detector);
$eiddep = $eiddep->append($this_eid);
$framedep = $framedep->append(0);
$piddep = $piddep->append($fields[3]);
# %ptype is hash of particle type strings indexed by the id
# %ptypes is (constant) hash of my own particle type IDs indexed
# by the string (confused yet?)
$ptypedep = $ptypedep->append($ptypes{$ptype{$fields[3]}});
$blobid = $blobid->append(pdl(long, 0));
#print "### piddep = $fields[3]\n";
#print "### ptype{piddep} = ".$ptype{$fields[3]}."\n";
#print "### ptypes{ptype{piddep}} = ".$ptypes{$ptype{$fields[3]}}."\n";"""
from re import match
lines = text.split('\n')
out = []
for line in lines:
if line and line[0] != '#':
#print(line)
var = match(r'\$([a-zA-Z]+)\s*=\s*\$([a-zA-Z]+)\s*->\s*append\(\$*([a-zA-Z0-9_\[\]]+)\)', line)
if var:
new = '%s = np.append(%s, %s)' % var.groups()
else:
new = line
out.append(new)
else:
out.append(line)
final = ''
for line in out:
final += line + '\n'
print(final)
text = """# initialize structures to hold the secondary particle columns
# piddles for the numeric columns; these are enough for now
my $run = zeros(long, 0); # Geant4 run (* in *_detector[0123])
my $detector = zeros(long, 0); # WFI detector (? in *_detector?)
my $eid = zeros(long, 0); # primary ID
my $particleid = zeros(long, 0); # interacting particle ID
my $parentid = zeros(long, 0); # don't really need probably
# initialize piddles to hold the energy deposition (per pixel) columns
# some of this will be written out the pixel list
my $xdep = zeros(long, 0);
my $ydep = zeros(long, 0);
my $endep = zeros(double, 0);
my $rundep = zeros(long, 0);
my $detectordep = zeros(long, 0);
my $eiddep = zeros(long, 0);
my $framedep = zeros(long, 0);
my $piddep = zeros(long, 0);
my $ptypedep = zeros(long, 0);
my $cprocdep = zeros(long, 0);
my $blobid = zeros(long, 0);
# initialize piddles to hold frame-specific things to go in FITS table
my $frame_frame = zeros(long, 0);
my $frame_time = zeros(double, 0);
my $frame_runid = zeros(long, 0);
my $frame_npix = zeros(long, 0);
my $frame_npixmip = zeros(long, 0);
my $frame_nevt = zeros(long, 0);
my $frame_nevtgood = zeros(long, 0);
my $frame_nevt27 = zeros(long, 0);
my $frame_nblob = zeros(long, 0);
my $frame_nprim = zeros(long, 0);
# initialize piddles to hold blob-specific things to go in FITS table
my $blob_frame = zeros(long, 0);
my $blob_blobid = zeros(long, 0);
my $blob_cenx = zeros(double, 0);
my $blob_ceny = zeros(double, 0);
my $blob_cenxcl = zeros(double, 0);
my $blob_cenycl = zeros(double, 0);
my $blob_npix = zeros(long, 0);
my $blob_energy = zeros(double, 0);
my $blob_energycl = zeros(double, 0);
"""
lines = text.split('\n')
out = []
for line in lines:
if len(line) > 0 and line[0] == 'm':
new = line.replace('zeros(long, 0);', 'np.zeros(0, dtype=int)')
new = new.replace('zeros(double, 0);', 'np.zeros(0, dtype=float)')
out.append(new[4:])
else:
out.append(line)
final = ''
for line in out:
final += line + '\n'
print(final)
```
## Outtakes
### Starts at line 255 in doit.pl
Possibly meant to graph some of the results.<br>
Won't be implemented in the final version.<br>
The following is a mix of Python and Perl, but it runs sequentially.
Python
```python
# initialize the blobdist running histogram
# this histogram is the number of pixels available in each radial bin
# linear bins, this should be the same as plot_things.pl!:
numbins_blobhist = 300
minrad_blobhist = 0
maxrad_blobhist = 600
binsize_blobhist = (maxrad_blobhist - minrad_blobhist)/ numbins_blobhist
```
Perl
```perl
# needs some value to bin, so just make it a zero
my ($blobhist_bins,$blobhist_vals) = hist(zeros(long, 1),
$minrad_blobhist,$maxrad_blobhist,$binsize_blobhist);
# really only wanted the bins, so reset the histogram
$blobhist_vals .= 0;
```
Python
```python
# initialize the mipdist running histogram
# this histogram is the number of pixels available in each radial bin
# linear bins, this should be the same as plot_things.pl!:
numbins_miphist = 300
minrad_miphist = 0.
maxrad_miphist = 600.
binsize_miphist = (maxrad_miphist - minrad_miphist)/ numbins_miphist
```
Perl
```perl
# needs some value to bin, so just make it a zero
my ($miphist_bins,$miphist_vals) = hist(zeros(long, 1),
$minrad_miphist,$maxrad_miphist,$binsize_miphist);
# really only wanted the bins, so reset the histogram
$miphist_vals .= 0;
```
### line 291
***Original Code:***
```Perl
my @infiles = `/bin/ls input/${this_run}_detector[0123]`;
...
chomp
```
***My first solution:***
```python
stream = os.popen(f"/bin/ls input/{this_run}_detector[0123]")
infiles = stream.readlines()
...
infiles = [f.rstrip('\n') for f in infiles]
```
I found out later that this only works on Linux-based systems.
***New solution***
```python
glob.glob(f"input/{this_run}_detector?")
```
Works on both systems using the `glob` library
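For completeness, a minimal sketch of the replacement (`this_run` is a placeholder here; note that, unlike the shell `ls`, `glob.glob` does not sort its results, so an explicit `sorted` keeps the original ordering):

```python
import glob

this_run = "run1"  # placeholder run ID for illustration
# glob supports the same [0123] character class as the shell; unlike `ls`,
# it does not sort its results, so sort explicitly for a stable order.
infiles = sorted(glob.glob(f"input/{this_run}_detector[0123]"))
```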
```
#to reset all variables easier than restarting the kernel.
%reset
%whos
import uproot
import xml
help(xml)
```
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem import Descriptors
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LassoCV, Lasso
from rdkit.ML.Descriptors.MoleculeDescriptors import MolecularDescriptorCalculator as Calculator
#Data Cleaning
data = pd.read_excel("compounddata.xlsx")
data['EC_value'], data['EC_error'] = zip(*data['ELE_COD'].map(lambda x: x.split('±')))
data.head()
#Setting up for molecular descriptors
n = data.shape[0]
list_of_descriptors = ['NumHeteroatoms','MolWt','NOCount','NumHDonors','RingCount','NumAromaticRings','NumSaturatedRings','NumAliphaticRings']
calc = Calculator(list_of_descriptors)
D = len(list_of_descriptors)
d = len(list_of_descriptors)*2 + 3
print(n,d)
#setting up the x and y matrices
X = []
X = np.zeros((n,d))
X[:,-3] = data['T']
X[:,-2] = data['P']
X[:,-1] = data['MOLFRC_A']
for i in range(n):
A = Chem.MolFromSmiles(data['A'][i])
B = Chem.MolFromSmiles(data['B'][i])
X[i][:D] = calc.CalcDescriptors(A)
X[i][D:2*D] = calc.CalcDescriptors(B)
new_data = pd.DataFrame(X,columns=['NumHeteroatoms_A','MolWt_A','NOCount_A','NumHDonors_A','RingCount_A','NumAromaticRings_A','NumSaturatedRings_A','NumAliphaticRings_A','NumHeteroatoms_B','MolWt_B','NOCount_B','NumHDonors_B','RingCount_B','NumAromaticRings_B','NumSaturatedRings_B','NumAliphaticRings_B','T','P','MOLFRC_A'])
y = data['EC_value']
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
#MLPClassifier
alphas = np.array([0.1,0.01,0.001,0.0001])
mlp_class = MLPClassifier(hidden_layer_sizes=(100,), activation='relu', solver='adam', alpha=0.0001, max_iter=5000, random_state=None,learning_rate_init=0.01)
gs = GridSearchCV(mlp_class, param_grid=dict(alpha=alphas))
gs.fit(X_train,y_train)
plt.figure(figsize=(4,4))
plt.scatter(y_test.values.astype(float), gs.predict(X_test))
plt.plot([0,12],[0,12],lw=4,c = 'r')
plt.show()
#MLPRegressor
alphas = np.array([5,2,5,1.5,1,0.1,0.01,0.001,0.0001,0])
mlp_regr = MLPRegressor(hidden_layer_sizes=(100,), activation='relu', solver='adam', alpha=0.0001, max_iter=5000, random_state=None,learning_rate_init=0.01)
gs = GridSearchCV(mlp_regr, param_grid=dict(alpha=alphas))
gs.fit(X_train,y_train)
plt.figure(figsize=(4,4))
plt.scatter(y_test.values.astype(float), gs.predict(X_test))
plt.plot([0,12],[0,12],lw=4,c = 'r')
plt.show()
#Lasso
alphas = np.array([0.1,0.01,0.001,0.0001])
lasso = Lasso(alpha=0.001, fit_intercept=True, normalize=False, precompute=False, copy_X=True, max_iter=10000, tol=0.001, positive=False, random_state=None, selection='cyclic')
gs = GridSearchCV(lasso, param_grid=dict(alpha=alphas))
gs.fit(X_train,y_train)
plt.figure(figsize=(4,4))
plt.scatter(y_test.values.astype(float), gs.predict(X_test))
plt.plot([0,12],[0,12],lw=4,c = 'r')
plt.show()
#SVR
svr = SVR(kernel='rbf', degree=3, gamma='auto', coef0=0.0, tol=0.001, C=1.0, epsilon=0.01, shrinking=True, cache_size=200, verbose=False, max_iter=-1)
svr = GridSearchCV(svr, cv=5, param_grid={"C": [1e0, 1e1, 1e2, 1e3],"gamma": np.logspace(-2, 2, 5)})
svr.fit(X_train,y_train)
plt.figure(figsize=(4,4))
plt.scatter(y_test.values.astype(float), svr.predict(X_test))
plt.plot([0,12],[0,12],lw=4,c = 'r')
plt.show()
```
# [Country Embedding](https://philippmuens.com/word2vec-intuition/)
```
import json
import pandas as pd
import seaborn as sns
import numpy as np
# prettier Matplotlib plots
import matplotlib.pyplot as plt
import matplotlib.style as style
style.use('seaborn')
```
# 1. Dataset
#### Download
```
%%bash
download=1
for FILE in "data/country-by-surface-area.json" "data/country-by-population.json"; do
if [[ ! -f ${FILE} ]]; then
download=0
fi
done
if [[ download -eq 0 ]]; then
mkdir -p data
wget -nc \
https://raw.githubusercontent.com/samayo/country-json/master/src/country-by-surface-area.json \
-O data/country-by-surface-area.json 2> /dev/null
wget -nc \
https://raw.githubusercontent.com/samayo/country-json/master/src/country-by-population.json \
-O data/country-by-population.json 2> /dev/null
fi
```
#### Build df
```
df_surface_area = pd.read_json("data/country-by-surface-area.json")
df_population = pd.read_json("data/country-by-population.json")
df_population.dropna(inplace=True)
df_surface_area.dropna(inplace=True)
df = pd.merge(df_surface_area, df_population, on='country')
df.set_index('country', inplace=True)
print(len(df))
df.head()
```
#### Visualize some countries
```
df_small = df[
(df['area'] > 100000) & (df['area'] < 600000) &
(df['population'] > 35000000) & (df['population'] < 100000000)
]
print(len(df_small))
df_small.head()
fig, ax = plt.subplots()
df_small.plot(
x='area',
y='population',
figsize=(10, 10),
kind='scatter', ax=ax)
for k, v in df_small.iterrows():
ax.annotate(k, v)
fig.canvas.draw()
```
# 2. Model
#### Euclidean distance
$$d(x,y)\ =\ \sqrt{\sum\limits_{i=1}^{N}(x_i\ -\ y_i)^2}$$
```
def euclidean_distance(x: (int, int), y: (int, int)) -> int:
'''
Note: cast the result into an int which makes it easier to compare
'''
x1, x2 = x
y1, y2 = y
result = np.sqrt((x1 - x2)**2 + (y1 - y2)**2)
return int(round(result, 0))
```
#### Finding similar countries based on population and area
```
from collections import defaultdict
similar_countries = defaultdict(list)
for country in df.iterrows():
name = country[0]
area = country[1]['area']
population = country[1]['population']
for other_country in df.iterrows():
other_name = other_country[0]
other_area = other_country[1]['area']
other_population = other_country[1]['population']
if other_name == name: continue
x = (area, other_area)
y = (population, other_population)
similar_countries[name].append(
(euclidean_distance(x, y), other_name))
for country in similar_countries.keys():
similar_countries[country].sort(key=lambda x: x[0], reverse=True)
# List of Vietnam similar countries based on population and area
similar_countries['Vietnam'][:10]
# List of Singapore similar countries based on population and area
similar_countries['Singapore'][:10]
```
# 1. Linear Regression
## 1.1. Univariate
Many problems in nature ask us to produce output values from a set of input data. Consider the problem of predicting property prices in a given city, as shown in Figure 1, where each point represents a property whose price depends on its size.
In **regression** problems, the goal is to estimate output values from a set of input values. For the problem above, the idea is to estimate the price of a house from its size, i.e., we would like to find the **straight line** that best fits the set of points in Figure 1.
```
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from matplotlib import pyplot
import numpy
# generate a random set of points for a linear regression problem ***
x, y = make_regression(n_samples=100, n_features=1, noise=5.7)
# plot the dataset created in the previous step ***
fig = pyplot.figure(figsize=(15,7))
pyplot.subplot(1, 2, 1)
pyplot.scatter(x,y)
pyplot.xlabel("Size ($m^2$)")
pyplot.ylabel("Price (R\$x$10^3$)")
pyplot.title("(a)")
# run the linear regressor
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0) # create train/test partitions
model = LinearRegression()
model.fit(x_train, y_train) # train the model
pyplot.subplot(1, 2, 2)
pyplot.scatter(x,y)
pyplot.plot(x, model.predict(x), color = 'red')
pyplot.xlabel("Size ($m^2$)")
pyplot.ylabel("Price (R\$x$10^3$)")
pyplot.title("(b)")
fig.tight_layout(pad=10)
fig.suptitle("Figure 1: Example real-estate dataset: (a) input data and (b) straight line estimated via linear regression.", y=0.18)
pyplot.show()
```
Let ${\cal D}=\{(x_1,y_1),(x_2,y_2),\ldots,(x_m,y_m)\}$ be a dataset such that $x_i\in\Re$ denotes the **input** data (i.e., the size of the house) and $y_i\in\Re$ represents its value. Moreover, let ${\cal D}_{tr}\subset {\cal D}$ be the so-called **training set** and ${\cal D}_{ts}\subset {\cal D}\backslash{\cal D}_{tr}$ the **test set**. Machine learning techniques are usually evaluated on disjoint training and test sets, i.e., ${\cal D}_{tr}$ and ${\cal D}_{ts}$ are called **partitions** of the original set ${\cal D}$. In our example, $x_i$ and $y_i$ correspond to the size and price of the property, respectively.
Essentially, a linear regression algorithm takes a training set as input and aims to estimate a linear function (straight line) that we call the **hypothesis function**, given by:
\begin{equation}
h_\textbf{w}(x) = w_0+w_1x,
\tag{1}
\end{equation}
where $\textbf{w}=[w_0\ w_1]$ corresponds to the model parameters, with $w_0,w_1\in\Re$. Depending on the values assumed by $\textbf{w}$, the hypothesis function can exhibit different behaviors, as illustrated in Figure 2.
```
fig = pyplot.figure(figsize=(15,7))
x = numpy.arange(-10, 10, 0.5)
pyplot.subplot(2, 2, 1)
y = 1.5 + 0*x #h_w(x) = 1.5 + 0*x
pyplot.plot(x, y, color = "red")
pyplot.title("$h_w(x) = 1.5$ $(w_0 = 1.5$ and $w_1 = 0)$")
pyplot.subplot(2, 2, 2)
y = 0 + 0.5*x #h_w(x) = 0 + 0.5*x
pyplot.plot(x, y, color = "red")
pyplot.title("$h_w(x) = 0.5x$ $(w_0 = 0$ and $w_1 = 0.5)$")
pyplot.subplot(2, 2, 3)
y = 1 + 0.5*x #h_w(x) = 1 + 0.5*x
pyplot.plot(x, y, color = "red")
pyplot.title("$h_w(x) = 1 + 0.5x$ $(w_0 = 1$ and $w_1 = 0.5)$")
pyplot.subplot(2, 2, 4)
y = 0 - 0.5*x #h_w(x) = 0 - 0.5*x
pyplot.plot(x, y, color = "red")
pyplot.title("$h_w(x) = -0.5x$ $(w_0 = 0$ and $w_1 = -0.5)$")
fig.tight_layout(pad=2)
fig.suptitle("Figure 2: Examples of different hypothesis functions.", y=0.01)
pyplot.show()
```
In general, the goal of linear regression is to find values for $\textbf{w}=[w_0\ w_1]$ such that $h_w(x_i)$ is as close as possible to $y_i$ over the training set ${\cal D}_{tr}$, $\forall i\in\{1,2,\ldots,m^\prime\}$, where $m^\prime=\left|{\cal D}_{tr}\right|$. In other words, the goal is to solve the following minimization problem:
\begin{equation}
\label{e.mse}
\underset{\textbf{w}}{\operatorname{argmin}}\frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(h_\textbf{w}(x_i)-y_i)^2.
\tag{3}
\end{equation}
This equation is also known as the **Mean Squared Error** (MSE). Another very common name is the **cost function**. Note that $h_w(x_i)$ represents the **estimated price** of the property given by the linear regression, while $y_i$ denotes its **real value** as given by the training set.
We can simplify Equation \ref{e.mse} and rewrite it as follows:
\begin{equation}
\label{e.mse_simplified}
\underset{\textbf{w}}{\operatorname{argmin}}J(\textbf{w}),
\tag{4}
\end{equation}
where $J(\textbf{w})=\frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(h_\textbf{w}(x_i)-y_i)^2$. From this premise, let us simplify the notation a bit further and assume that our hypothesis function crosses the origin of the Cartesian plane:
\begin{equation}
\label{e.hypothesis_origin}
h_w(\textbf{x}) = w_1x,
\tag{5}
\end{equation}
that is, $w_0=0$. In this case, our optimization problem reduces to finding the $w_1$ that minimizes the following equation:
\begin{equation}
\label{e.mse_simplified_origin}
\underset{w_1}{\operatorname{argmin}}J(w_1).
\tag{6}
\end{equation}
As an example, consider the training set ${\cal D}_{tr}=\{(1,1),(2,2),(3,3)\}$, illustrated in Figure 3a. As can be observed, the hypothesis function that models this training set is given by $h_\textbf{w}(x)=x$, that is, $\textbf{w}=[0\ 1]$, as shown in Figure 3b.
```
fig = pyplot.figure(figsize=(13,7))
x = numpy.arange(1, 4, 1)
pyplot.subplot(2, 2, 1)
y = x #h_w(x) = x
pyplot.scatter(x,y)
pyplot.title("(a)")
pyplot.subplot(2, 2, 2)
pyplot.scatter(x,y)
pyplot.plot(x, x, color = "red")
pyplot.title("(b)")
fig.suptitle("Figure 3: (a) training set ($m^\prime=3$) and (b) hypothesis function that fits the data.", y=0.47)
pyplot.show()
```
In practice, the idea is to try different values of $w_1$ and compute $J(w_1)$. The value that minimizes the cost function is the $w_1$ to be used in the model (hypothesis function). Suppose we take $w_1 = 1$ as the initial value, i.e., the correct "guess". In this case, we have:
\begin{equation}
\begin{split}
J(w_1) & =\frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(h_\textbf{w}(x_i)-y_i)^2 \\
& = \frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(w_1x_i-y_i)^2 \\
& = \frac{1}{2\times3}\left[(1-1)^2+(2-2)^2+(3-3)^2\right] \\
& = \frac{1}{6}\times 0 = 0.
\end{split}
\tag{7}
\end{equation}
In this case, $J(w_1) = 0$ for $w_1 = 1$, i.e., the cost is as low as possible, since we found the **exact** hypothesis function that fits the data. Now suppose we had chosen $w_1 = 0.5$. The cost function would then be computed as follows:
\begin{equation}
\begin{split}
J(w_1) & =\frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(h_\textbf{w}(x_i)-y_i)^2 \\
& = \frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(w_1x_i-y_i)^2 \\
& = \frac{1}{2\times3}\left[(0.5-1)^2+(1-2)^2+(1.5-3)^2\right] \\
& = \frac{1}{6}\times (0.25+1+2.25) \approx 0.58.
\end{split}
\tag{8}
\end{equation}
In this case, our error was slightly larger. If we keep computing $J(w_1)$ for different values of $w_1$, we obtain the plot shown in Figure 4.
```
def J(w_1, x, y):
error = numpy.zeros(len(w_1))
for i in range(len(w_1)):
error[i] = 0
for j in range(3):
error[i] = error[i] + numpy.power(w_1[i]*x[j]-y[j], 2)
return error
w_1 = numpy.arange(-7,10,1) # create a vector of candidate values
error = J(w_1, x, y)
pyplot.plot(w_1, error, color = "red")
pyplot.xlabel("$w_1$")
pyplot.ylabel("$J(w_1)$")
pyplot.title("Figure 4: Behavior of the cost function for different values of $w_1$.", y=-0.27)
pyplot.show()
```
Therefore, $w_1=1$ is the value that minimizes $J(w_1)$ for the example above. Returning to the cost function given by Equation \ref{e.mse}, the question now is: how can we find suitable values for the parameter vector $\textbf{w}=[w_0\ w_1]$? A simple approach would be to try random combinations of values for $w_0$ and $w_1$ and keep those that minimize $J(\textbf{w})$. However, this heuristic gives no guarantee of good results, especially in more complex situations.
A very common approach to this optimization problem is the technique known as **Gradient Descent** (GD), which consists of the following general steps:
1. Choose random values for $w_0$ and $w_1$.
2. Iteratively update the values of $w_0$ and $w_1$ so as to minimize $J(\textbf{w})$.
The key question now is which heuristic to use to update the vector $\textbf{w}$. GD uses the **partial derivatives** of the cost function to steer the optimization process toward the minimum of the function, by means of the following weight-update rule:
\begin{equation}
\label{e.update_rule_GD}
w^{t+1}_j = w^{t}_j - \alpha\frac{\partial J(\textbf{w})}{\partial w_j},\ j\in\{0,1\},
\tag{9}
\end{equation}
where $\alpha$ is the so-called **learning rate**.
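To make the update rule concrete, here is a minimal gradient-descent sketch for the one-parameter case, using the $\{(1,1),(2,2),(3,3)\}$ training set from the example above (the learning rate of 0.1 and the iteration count are arbitrary choices):

```python
import numpy

x = numpy.array([1.0, 2.0, 3.0])  # training inputs
y = numpy.array([1.0, 2.0, 3.0])  # training targets
m = len(x)

w1 = 5.0      # arbitrary initial guess
alpha = 0.1   # learning rate (arbitrary choice)

for _ in range(100):
    # dJ/dw1 = (1/m) * sum((w1*x_i - y_i) * x_i)
    grad = numpy.sum((w1 * x - y) * x) / m
    w1 = w1 - alpha * grad

print(round(w1, 3))  # -> 1.0, the minimizer found analytically above
```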
A very common question concerns the derivative term, i.e., how to compute it. For the sake of explanation, suppose we only have the parameter $w_1$ to optimize, that is, our hypothesis function is given by Equation \ref{e.hypothesis_origin}. In this case, the goal is to minimize $J(w_1)$ for some value of $w_1$. In practice, what does the derivative at a given point mean? Figure \ref{f.derivada} illustrates this situation.
```
# Code based on https://stackoverflow.com/questions/54961306/how-to-plot-the-slope-tangent-line-of-parabola-at-any-point
# define the parabola
def f(x):
return x**2
# define the derivative of the parabola
def slope(x):
return 2*x
# define the x data range
x = numpy.linspace(-5,5,100)
# choose the points at which to draw the tangent lines
x1 = -3
y1 = f(x1)
x2 = 3
y2 = f(x2)
x3 = 0
y3 = f(x3)
# define the x interval over which each tangent line is plotted
xrange1 = numpy.linspace(x1-1, x1+1, 10)
xrange2 = numpy.linspace(x2-1, x2+1, 10)
xrange3 = numpy.linspace(x3-1, x3+1, 10)
# define the tangent line
# y = m*(x - x1) + y1
def tangent_line(x, x1, y1):
return slope(x1)*(x - x1) + y1
# draw the figures
fig = pyplot.figure(figsize=(13,9))
pyplot.subplot2grid((2,4),(0,0), colspan = 2)
pyplot.title("Slope < 0.")
pyplot.plot(x, f(x))
pyplot.scatter(x1, y1, color='C1', s=50)
pyplot.plot(xrange1, tangent_line(xrange1, x1, y1), 'C1--', linewidth = 2)
pyplot.subplot2grid((2,4),(0,2), colspan = 2)
pyplot.title("Slope > 0.")
pyplot.plot(x, f(x))
pyplot.scatter(x2, y2, color='C1', s=50)
pyplot.plot(xrange2, tangent_line(xrange2, x2, y2), 'C1--', linewidth = 2)
pyplot.subplot2grid((2,4),(1,1), colspan = 2)
pyplot.title("Slope = 0.")
pyplot.plot(x, f(x))
pyplot.scatter(x3, y3, color='C1', s=50)
pyplot.plot(xrange3, tangent_line(xrange3, x3, y3), 'C1--', linewidth = 2)
```
First we need to download the dataset. In this case we use a dataset containing poems, so the model is trained to create its own poems.
```
from datasets import load_dataset
dataset = load_dataset("poem_sentiment")
print(dataset)
```
Before training we need to preprocess the dataset. We tokenize the entries in the dataset and remove all columns we don't need to train the adapter.
```
from transformers import GPT2Tokenizer
def encode_batch(batch):
"""Encodes a batch of input data using the model tokenizer."""
encoding = tokenizer(batch["verse_text"])
# For language modeling the labels need to be the input_ids
#encoding["labels"] = encoding["input_ids"]
return encoding
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# The GPT-2 tokenizer does not have a padding token. In order to process the data
# in batches we set one here
tokenizer.pad_token = tokenizer.eos_token
column_names = dataset["train"].column_names
dataset = dataset.map(encode_batch, remove_columns=column_names, batched=True)
```
Next we concatenate the documents in the dataset and create chunks with a length of `block_size`. This is beneficial for language modeling.
```
block_size = 50
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
dataset = dataset.map(group_texts,batched=True,)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])
```
Next we create the model and add our new adapter. Let's just call it `poem`, since it is trained to create new poems. Then we activate it and prepare it for training.
```
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("gpt2")
# add new adapter
model.add_adapter("poem")
# activate adapter for training
model.train_adapter("poem")
```
The last thing we need to do before we can start training is create the trainer. As training arguments we choose a learning rate of 5e-4. Feel free to play around with the parameters and see how they affect the result.
```
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./examples",
do_train=True,
remove_unused_columns=False,
learning_rate=5e-4,
num_train_epochs=3,
)
trainer = Trainer(
model=model,
args=training_args,
tokenizer=tokenizer,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
```
Now that we have a trained adapter, we save it for future use.
```
model.save_adapter("adapter_poem", "poem")
```
With our trained adapter we want to create some poems. In order to do this we create a `GPT2LMHeadModel`, which is best suited for language generation. Then we load our trained adapter. Finally, we have to choose the start of our poem. If you want your poem to start differently, just change `PREFIX` accordingly.
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained("gpt2")
# You can also load your locally trained adapter
model.load_adapter("adapter_poem")
model.set_active_adapters("poem")
PREFIX = "In the night"
```
For the generation we need to tokenize the prefix first and then pass it to the model. In this case we create five possible continuations for the beginning we chose.
```
PREFIX = "In the night"
encoding = tokenizer(PREFIX, return_tensors="pt")
output_sequence = model.generate(
input_ids=encoding["input_ids"],
attention_mask=encoding["attention_mask"],
do_sample=True,
num_return_sequences=5,
max_length = 50,
)
```
Lastly we want to see what the model actually created. To do this we need to decode the tokens from ids back to words and remove the end-of-sentence tokens. You can easily use this code with another dataset. Don't forget to share your adapters at [AdapterHub](https://adapterhub.ml/).
```
for generated_sequence_idx, generated_sequence in enumerate(output_sequence):
print("=== GENERATED SEQUENCE {} ===".format(generated_sequence_idx + 1))
generated_sequence = generated_sequence.tolist()
# Decode text
text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)
# Remove EndOfSentence Tokens
text = text[: text.find(tokenizer.eos_token)]
print(text)
model
```

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/20.SentenceDetectorDL_Healthcare.ipynb)
# 20. SentenceDetectorDL for Healthcare
`SentenceDetectorDL` (SDDL) is based on a general-purpose neural network model for sentence boundary detection. The task of sentence boundary detection is to identify sentences within a text. Many natural language processing tasks take a sentence as an input unit, such as part-of-speech tagging, dependency parsing, named entity recognition or machine translation.
In this model, we treated the sentence boundary detection task as a classification problem using a DL CNN architecture. We also modified the original implementation a little to cover broken sentences and some impossible end-of-line characters.
```
import json
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
%%capture
for k,v in license_keys.items():
%set_env $k=$v
!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh
import json
import os
from pyspark.ml import Pipeline,PipelineModel
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
import sparknlp
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print (sparknlp.version())
print (sparknlp_jsl.version())
documenter = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentencerDL_hc = SentenceDetectorDLModel.pretrained("sentence_detector_dl_healthcare","en","clinical/models") \
.setInputCols(["document"]) \
.setOutputCol("sentences")
sd_pipeline = PipelineModel(stages=[documenter, sentencerDL_hc])
sd_model = LightPipeline(sd_pipeline)
```
# **SentenceDetectorDL_HC** Performance and Comparison with **Spacy Sentence Splitter** on different **Clinical Texts**
```
def get_sentences_sddl(text):
print ('with Spark NLP SentenceDetectorDL_HC')
print ('=======================================')
for anno in sd_model.fullAnnotate(text)[0]["sentences"]:
print("{}\t{}\t{}\t{}".format(
anno.metadata["sentence"], anno.begin, anno.end, anno.result.replace('\n','')))
return
%%capture
!pip install spacy
!python3 -m spacy download en_core_web_sm
import spacy
import en_core_web_sm
nlp = en_core_web_sm.load()
def get_sentences_spacy(text):
print()
print ('with Spacy Sentence Detection')
print ('===================================')
    for i,sent in enumerate(nlp(text).sents):
        print(i, '\t', str(sent).replace('\n',''))  # remove \n to beautify printing
return
```
### Text 1
```
text_1 = '''He was given boluses of MS04 with some effect, he has since been placed on a PCA - he take 80mg of oxycontin at home, his PCA dose is ~ 2 the morphine dose of the oxycontin, he has also received ativan for anxiety.Repleted with 20 meq kcl po, 30 mmol K-phos iv and 2 gms mag so4 iv. LASIX CHANGED TO 40 PO BID WHICH IS SAME AS HE TAKES AT HOME - RECEIVED 40 PO IN AM - 700CC U/O TOTAL FOR FLUID NEGATIVE ~ 600 THUS FAR TODAY, ~ 600 NEG LOS. pt initially hypertensive 160s-180s gave prn doses if IV hydralazine without effect, labetalol increased from 100 mg to 200 mg PO TID which kept SBP > 160 for rest of night.Transferred to the EW found to be hypertensive BP 253/167 HR 132 ST treated with IV NTG gtt 20-120mcg/ and captopril 12.5mg sl x3. During the day pt's resp status has been very tenuous, responded to lasix in the am but then became hypotensive around 1800 tx with 500cc NS bolus and a unit of RBC did improve her BP to 90-100/60 but also became increasingly more tachypneic, RR 30-40's crackles went from bases bilaterally to [**2-14**] way up bilaterally.Lasix given 10mg x 2 during the evening without much change in her respiratory status.10units iv insulin given, 1 amps of 50%dextrose, 1amp of sodium bicard, 2gm calc gluconate.LOPRESSOR 7MG IV Q6H, ENALAPRIL 0.625MG IV Q6H TOLERATED.
ID: Continues to receive flagyl, linazolid, pipercillin, ambisome, gent, and acyclovir, the acyclovir was changed from PO to IV-to be given after dialysis.Meds- Lipitor, procardia, synthroid, lisinopril, pepcid, actonel, calcium, MVI Social- Denies tobacco and drugs.'''
get_sentences_sddl(text_1)
get_sentences_spacy(text_1)
```
### Text 2
```
text_2 = '''ST 109-120 ST. Pt had two 10 beat runs and one 9 beat run of SVT s/p PICC line placement. Stable BP, VT nonsustained. Pt denies CP/SOB. EKG and echo obtained. Cyclying enzymes first CK 69. Cardiology consulted, awaiting echo report. Pt to be started on beta blocker for treatment of NSVT. ? secondary to severe illness.K+ 3.4 IV. IVF with 20meq KCL at 200cc/hr. S/p NSVT pt rec'd 40meq po and 40 meq IV KCL. K+ 3.9 repeat K+ at 8pm. Mg and Ca repleted. Please follow electrolyte SS.
'''
get_sentences_sddl(text_2)
get_sentences_spacy(text_2)
```
### Text 3
```
text_3 = '''PT. IS A 56 Y/O FEMALE S/P CRANIOTOMY ON 7/16 FOR REMOVAL OF BENIGN CYSTIC LESION. SURGERY PERFORMED AT BIDMC.STARTED ON DILANTIN POST-OP FOR SEIZURE PROPHYLAXIS. 2 DAYS PRIOR TO ADMISSION PT DEVELOPED BILAT. EYE DISCHARGE-- SEEN BY EYE MD AND TREATED WITH SULFATE OPTHALMIC DROPS.ALSO DEVELOPED ORAL SORES AND RASH ON CHEST AND RAPIDLY SPREAD TO TRUNK, ARMS, THIGHS, BUTTOCKS, AND FACE WITHIN 24 HRS.UNABLE TO EAT DUE TO MOUTH PAIN. + FEVER, + DIARRHEA, WEAKNESS. PRESENTED TO EW ON 8/4 WITH TEMP 104 SBP 90'S.GIVEN NS FLUID BOLUS, TYLENOL FOR TEMP. SHE PUSTULAR RED RASH ON FACE, RED RASH NOTED ON TRUNK, UPPER EXTREMITIES AND THIGHS. ALSO BOTH EYES DRAINING GREENISH-YELLOW DRAINAGE. ADMITTED TO CCU ( MICU BORDER) FOR CLOSE OBSERVATION.
'''
get_sentences_sddl(text_3)
get_sentences_spacy(text_3)
```
### Text 4
```
text_4 = '''Tylenol 650mg po q6h CVS: Aspirin 121.5mg po daily for graft patency, npn 7p-7a: ccu nsg progress note: s/o: does understand and speak some eng, family visiting this eve and states that pt is oriented and appropriate for them resp--ls w/crackles approx 1/2 up, rr 20's, appeared sl sob, on 4l sat when sitting straight up 95-99%, when lying flat or turning s-s sat does drop to 88-93%, pt does not c/o feeling sob, sat does come back up when sitting up, did rec 40mg iv lasix cardiac hr 90's sr w/occ pvc's, bp 95-106/50's, did not c/o any cp during the noc, conts on hep at 600u/hr, ptt during noc 79, am labs pnd, remains off pressors, at this time no further plans to swan pt or for her to go to cath lab, gi--abd soft, non tender to palpation, (+)bs, passing sm amt of brown soft stool, tol po's w/out diff renal--u/o cont'd low during the eve, team decided to give lasix 40mg in setting of crackles, decreased u/o and sob, did diuresis well to lasix, pt approx 700cc neg today access--pt has 3 peripheral iv's in place, all working, unable to draw bloods from pt d/t poor veins, pt is going to need access to draw bloods, central line or picc line social--son & dtr in visiting w/their famlies tonight, pt awake and conversing w/them a/p: cont to monitor/asses cvs follow resp status, additional lasix f/u w/team re: plan of care for her will need iv access for blood draws keep family & pt updated w/plan, Neuro:On propofol gtt dose increased from 20 to 40mcg/kg/min,moves all extrimities to pain,awaken to stimuly easily,purposeful movements.PERL,had an episode of seizure at 1815 <1min when neuro team in for exam ,responded to 2mg ativan on keprra Iv BID.2 grams mag sulfate given, IVF bolus 250 cc started, approx 50 cc in then dc'd d/t PAD's,2.5 mg IV lopressor given x 2 without effect. CCU NURSING 4P-7P S DENIES CP/SOB O. 
SEE CAREVUE FLOWSHEET FOR COMPLETE VS 1600 O2 SAT 91% ON 5L N/C, 1 U PRBC'S INFUSING, LUNGS CRACKLES BILATERALLY, LASIX 40MG IV ORDERED, 1200 CC U/O W/IMPROVED O2 SATS ON 4L N/C, IABP AT 1:1 BP 81-107/90-117/48-57, HR 70'S SR, GROIN SITES D+I, HEPARIN REMAINS AT 950 U/HR INTEGRELIN AT 2 MCGS/KG A: IMPROVED U/O AFTER LASIX, AWAITING CARDIAC SURGERY P: CONT SUPPORTIVE CARE, REPEAT HCT POST- TRANSFUSION, CHECK LYTES POST-DIURESIS AND REPLACE AS NEEDED, AWAITING CABG-DATE.Given 50mg IV benadryl and 2mg morphine as well as one aspirin PO and inch of NT paste. When pt remained tachycardic w/ frequent ectopy s/p KCL and tylenol for temp - Orders given by Dr. [**Last Name (STitle) 2025**] to increase diltiazem to 120mg PO QID and NS 250ml given X1 w/ moderate effect.Per team, the pts IV sedation was weaned over the course of the day and now infusing @ 110mcg/hr IV Fentanyl & 9mg/hr IV Verced c pt able to open eyes to verbal stimuli and nod head appropriately to simple commands (are you in pain?). A/P: 73 y/o male remains intubated, IVF boluses x2 for CVP <12, pt continues in NSR on PO amio 400 TID, dopamine gtt weaned down for MAPs>65 but the team felt he is overall receiving about the same amt fentanyl now as he has been in past few days, as the fentanyl patch 100 mcg was added a 48 hrs ago to replace the decrease in the IV fentanyl gtt today (fent patch takes at least 24 hrs to kick in).Started valium 10mg po q6hrs at 1300 with prn IV valium as needed.'''
get_sentences_sddl(text_4)
get_sentences_spacy(text_4)
```
# **SentenceDetectorDL_HC** Performance and Comparison with **Spacy Sentence Splitter** on **Broken Clinical Sentences**
### Broken Text 1
```
random_broken_text_1 = '''He was given boluses of MS04 with some effect, he has since been placed on a PCA
- he take 80mg of oxycontin at home, his PCA dose is ~ 2 the morphine dose of the oxycontin, he has also received ativan for anxiety.Repleted with 20 meq kcl po, 30 m
mol K-phos iv and 2 gms mag so4 iv. LASIX CHANGED TO 40 PO BID WHICH IS SAME AS HE TAKES AT HOME - RECEIVED 40 PO IN AM - 700CC U/O TOTAL FOR FLUID NEGATIVE ~ 600 THUS FAR TODAY, ~ 600 NEG LOS pt initially hypertensive 160s-180s gave prn doses if IV hydralazine without effect, labetalol increased from 100 mg to 200 mg PO TID which kept SBP > 160 for rest of night.Transferred to the EW found to be hypertensive BP 253/167 HR 132 ST treated with IV NTG gtt 20-120mcg/ and captopril 12.5mg sl x3. During the day pt's resp status has been very tenuous, responded to lasix in the am but then became hypotensive around 1800 tx with 500cc NS bolus and a unit of RBC did improve her BP to 90-100/60 but also became increasingly more tachypneic, RR 30-40's crackles went from bases bilaterally to [**2-14**] way up bilaterally.Lasix given 10
mg x 2 during the evening without much change in her respiratory status.10units iv insulin given, 1 amps of 50%dextrose, 1amp of sodium bicard, 2gm calc gluconate.LOPRESSOR 7MG IV Q6H, ENALAPRIL 0.625MG IV Q6H TOLERATED.
ID: Continues to receive flagyl, linazolid, pipercillin, ambisome, gent, and acyclovir, the acyclovir was changed from PO to IV-to be given after dialysis.Meds- Lipitor, procardia, synthroid, lisinopril, pepcid, actonel, calcium, MVI Social- Denies tobacco and drugs.'''
get_sentences_sddl(random_broken_text_1)
get_sentences_spacy(random_broken_text_1)
```
### Broken Text 2
```
random_broken_text_2 = '''A 28-year-
old female with a history of gestational diabetes mellitus diagnosed eight years prior to presentation and subsequent type two diabetes mellitus (
T2DM ),
one prior episode of HTG-induced pancreatitis three years prior to presentation, associated with an acute hepatitis , and obesity with a body mass index ( BMI ) of 33.5
kg/m2 , presented with a one-
week history of polyuria , polydipsia , poor appetite , and vomiting.Two weeks prior to presentation , she was treated with a five-day course of
amoxicillin for a respiratory tract infection.She was on metformin , glipizide , and dapagliflozin for T2DM and atorvastatin and gemfibrozil for HTG . She had been on dapagliflozin for six months at the time of
presentation. Physical examination on presentation was significant for dry oral mucosa ; significantly , her abdominal examination was benign with no tenderness , guarding , or rigidity .
Pertinent laboratory findings on admission were : serum glucose 111 mg
/dl , bicarbonate 18 mmol/l ,
anion gap 20 , creatinine 0.4 mg/dL , triglycerides 508 mg
/dL , total cholesterol 122
mg/dL , glycated hemoglobin ( HbA1c
) 10% , and venous pH 7.27 .Serum lipase was normal at 43U/L .Serum acetone levels could not be assessed as blood samples kept hemolyzing due to significant lipemia.The patient was initially admitted for starvation ketosis , as she reported poor oral intake for three days prior to admission.However , serum chemistry obtained six hours after presentation revealed her glucose was
186 mg/dL , the anion gap was still elevated at 21 , serum bicarbonate was 16 mmol/L , triglyceride level peaked at 2050
mg/dL , and lipase was 52 U/L.The β-hydroxybutyrate level was obtained and found to be elevated at 5.
29
mmol/L -
the original sample was centrifuged and the chylomicron layer removed prior to analysis due to interference from turbidity caused by lipemia again.The patient was treated with an insulin drip for euDKA
and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/
dL , within 24 hours.Her euDKA was thought to be precipitated by her respiratory tract infection in the setting of SGLT2 inhibitor use .
The patient was seen by the endocrinology service and she was discharged on 40
units of insulin glargine at night ,
12 units of insulin lispro with meals , and metformin 1000 mg two times a day.It was determined that all SGLT2 inhibitors should be discontinued indefinitely .'''
get_sentences_sddl(random_broken_text_2)
get_sentences_spacy(random_broken_text_2)
```
### Broken Text 3
```
random_broken_text_3 = ''' Tylenol 650mg po q6h CVS: Aspirin 121.5mg po daily for graft patency.npn 7p-7a: ccu nsg progress note: s/o: does understand and speak some eng, family visiting this eve and states that pt is oriented and appropriate for them resp--ls w/crackles approx 1/2 up, rr 20's, appeared sl sob, on 4l sat when sitting straight up 95-99%, when lying flat or turning s-s sat does drop to 88-93%, pt does not c/o feeling sob,
sat does come back up when sitting up, did rec 40
mg iv lasix cardiac--hr 90's sr w/occ pvc's, bp 95-
106/50's, did not c/o any cp during the noc, conts on hep at 600u/
hr, ptt during noc 79, am labs pnd, remains off pressors, at this time no further plans to swan pt or for her to go to cath lab, gi--abd soft, non tender to palpation, (+)bs, passing sm amt of brown soft stool, tol po's w/out diff renal--u/o cont'd low during the eve,
team decided to give lasix 40mg in setting of crackles, decreased u/o
and sob, did diuresis well to lasix, pt approx 700cc neg today access--pt has 3 peripheral iv's in place, all working, unable to draw bloods from pt d/t poor veins, pt is going to need access to draw bloods, ?central line or picc line social--son & dtr in visiting w/their famlies tonight, pt awake and conversing w/them a/p: cont to monitor/asses cvs follow resp status, additional
lasix f/u w/team re: plan of care for her will need iv access for blood draws keep family & pt updated w/plan, Neuro:On propofol gtt dose increased from 20 to 40mcg/kg/min,moves all extrimities to pain,awaken to stimuly easily,purposeful movements.PERL,had an episode of seizure at 1815 <1min when neuro team in for exam ,responded to 2mg ativan on keprra Iv BID.2 grams mag sulfate given, IVF bolus 250 cc started, approx 50 cc in then dc'd d/t PAD's,
2.5 mg IV lopressor given x 2 without effect. CCU NURSING 4P-7P S DENIES CP/SOB O. SEE CAREVUE FLOWSHEET FOR COMPLETE VS 1600 O2 SAT 91% ON 5L N/C, 1 U PRBC'S INFUSING, LUNGS CRACKLES BILATERALLY, LASIX 40MG IV ORDERED, 1200 CC U/O W/IMPROVED O2 SATS ON 4L N/C, IABP AT 1:1 BP 81-107/90-117/48-57, HR 70'S SR, GROIN SITES D+I, HEPARIN REMAINS AT 950 U/HR INTEGRELIN AT 2 MCGS/KG A: IMPROVED U/O AFTER LASIX, AWAITING CARDIAC SURGERY P: CONT SUPPORTIVE CARE,
REPEAT HCT POST-TRANSFUSION, CHECK LYTES POST-DIURESIS AND REPLACE AS NEEDED, AWAITING CABG -DATE.Given 50mg IV benadryl and 2mg morphine as well as one aspirin PO and inch of NT paste. When pt remained tachycardic w/ frequent ectopy s/p KCL and tylenol for temp - Orders given by Dr. [**Last Name (STitle) 2025**] to increase diltiazem to 120mg PO QID and NS 250ml given X1 w/ moderate effect.Per team, the pts IV sedation was weaned over the course of the
day and now infusing @ 110mcg/hr IV Fentanyl & 9mg/hr IV Verced c pt able to open eyes to verbal stimuli and nod head appropriately to simple commands (are you in pain?) . A/P: 73 y/o male remains intubated, IVF boluses x2 for CVP <12, pt continues in NSR on PO amio 400 TID, dopamine gtt weaned down for MAPs>65 but the team felt he is overall receiving about the same amt fentanyl now as he has been in past few days, as the fentanyl patch 100
mcg was added a 48 hrs ago to replace the decrease in the IV fentanyl gtt today (fent patch takes at least 24 hrs to kick in).Started valium 10mg po q6hrs at 1300 with prn IV valium as needed.'''
get_sentences_sddl(random_broken_text_3)
get_sentences_spacy(random_broken_text_3)
```
# Test post compute 3D
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import subprocess as sp
import sys
import os
import glob
import pickle
import itertools
from matplotlib.colors import LogNorm, PowerNorm, Normalize
from ipywidgets import *
%matplotlib widget
### Transformation functions for image pixel values
def f_transform(x):
return 2.*x/(x + 4.) - 1.
def f_invtransform(s):
return 4.*(1. + s)/(1. - s)
```
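The two transforms above are algebraic inverses of each other on the domain `x > -4` (substituting one into the other simplifies back to `x`). A quick self-contained sanity check:

```python
import numpy as np

def f_transform(x):
    # map pixel values from [0, inf) into [-1, 1)
    return 2. * x / (x + 4.) - 1.

def f_invtransform(s):
    # inverse map from [-1, 1) back to [0, inf)
    return 4. * (1. + s) / (1. - s)

x = np.array([0., 1., 10., 1000.])
s = f_transform(x)
assert np.all(s >= -1) and np.all(s < 1)      # outputs stay in [-1, 1)
assert np.allclose(f_invtransform(s), x)      # round trip recovers the inputs
```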
## Histogram modules
```
def f_batch_histogram(img_arr,bins,norm,hist_range):
''' Compute histogram statistics for a batch of images'''
## Extracting the range. This is important to ensure that the different histograms are compared correctly
    if hist_range is None: ulim,llim=np.max(img_arr),np.min(img_arr)
    else: ulim,llim=hist_range[1],hist_range[0]
# print(ulim,llim)
### array of histogram of each image
hist_arr=np.array([np.histogram(arr.flatten(), bins=bins, range=(llim,ulim), density=norm) for arr in img_arr]) ## range is important
hist=np.stack(hist_arr[:,0]) # First element is histogram array
# print(hist.shape)
bin_list=np.stack(hist_arr[:,1]) # Second element is bin value
### Compute statistics over histograms of individual images
mean,err=np.mean(hist,axis=0),np.std(hist,axis=0)/np.sqrt(hist.shape[0])
bin_edges=bin_list[0]
centers = (bin_edges[:-1] + bin_edges[1:]) / 2
return mean,err,centers
def f_pixel_intensity(img_arr,bins=25,label='validation',mode='avg',normalize=False,log_scale=True,plot=True, hist_range=None):
'''
Module to compute and plot histogram for pixel intensity of images
Has 2 modes : simple and avg
simple mode: No errors. Just flatten the input image array and compute histogram of full data
avg mode(Default) :
- Compute histogram for each image in the image array
- Compute errors across each histogram
'''
norm=normalize # Whether to normalize the histogram
if plot:
plt.figure()
plt.xlabel('Pixel value')
plt.ylabel('Counts')
plt.title('Pixel Intensity Histogram')
if log_scale: plt.yscale('log')
if mode=='simple':
hist, bin_edges = np.histogram(img_arr.flatten(), bins=bins, density=norm, range=hist_range)
centers = (bin_edges[:-1] + bin_edges[1:]) / 2
if plot: plt.errorbar(centers, hist, fmt='o-', label=label)
return hist,None
elif mode=='avg':
### Compute histogram for each image.
mean,err,centers=f_batch_histogram(img_arr,bins,norm,hist_range)
if plot: plt.errorbar(centers,mean,yerr=err,fmt='o-',label=label)
return mean,err
def f_compare_pixel_intensity(img_lst,label_lst=['img1','img2'],bkgnd_arr=[],log_scale=True, normalize=True, mode='avg',bins=25, hist_range=None):
'''
Module to compute and plot histogram for pixel intensity of images
Has 2 modes : simple and avg
simple mode: No errors. Just flatten the input image array and compute histogram of full data
avg mode(Default) :
- Compute histogram for each image in the image array
- Compute errors across each histogram
bkgnd_arr : histogram of this array is plotting with +/- sigma band
'''
norm=normalize # Whether to normalize the histogram
def f_batch_histogram(img_arr,bins,norm,hist_range):
''' Compute histogram statistics for a batch of images'''
## Extracting the range. This is important to ensure that the different histograms are compared correctly
        if hist_range is None: ulim,llim=np.max(img_arr),np.min(img_arr)
        else: ulim,llim=hist_range[1],hist_range[0]
# print(ulim,llim)
### array of histogram of each image
hist_arr=np.array([np.histogram(arr.flatten(), bins=bins, range=(llim,ulim), density=norm) for arr in img_arr]) ## range is important
hist=np.stack(hist_arr[:,0]) # First element is histogram array
# print(hist.shape)
bin_list=np.stack(hist_arr[:,1]) # Second element is bin value
### Compute statistics over histograms of individual images
mean,err=np.mean(hist,axis=0),np.std(hist,axis=0)/np.sqrt(hist.shape[0])
bin_edges=bin_list[0]
centers = (bin_edges[:-1] + bin_edges[1:]) / 2
# print(bin_edges,centers)
return mean,err,centers
plt.figure()
## Plot background distribution
if len(bkgnd_arr):
if mode=='simple':
hist, bin_edges = np.histogram(bkgnd_arr.flatten(), bins=bins, density=norm, range=hist_range)
centers = (bin_edges[:-1] + bin_edges[1:]) / 2
plt.errorbar(centers, hist, color='k',marker='*',linestyle=':', label='bkgnd')
elif mode=='avg':
### Compute histogram for each image.
mean,err,centers=f_batch_histogram(bkgnd_arr,bins,norm,hist_range)
plt.plot(centers,mean,linestyle=':',color='k',label='bkgnd')
plt.fill_between(centers, mean - err, mean + err, color='k', alpha=0.4)
### Plot the rest of the datasets
for img,label,mrkr in zip(img_lst,label_lst,itertools.cycle('>^*sDHPdpx_')):
if mode=='simple':
hist, bin_edges = np.histogram(img.flatten(), bins=bins, density=norm, range=hist_range)
centers = (bin_edges[:-1] + bin_edges[1:]) / 2
plt.errorbar(centers, hist, fmt=mrkr+'-', label=label)
elif mode=='avg':
### Compute histogram for each image.
mean,err,centers=f_batch_histogram(img,bins,norm,hist_range)
# print('Centers',centers)
plt.errorbar(centers,mean,yerr=err,fmt=mrkr+'-',label=label)
    if log_scale:
        plt.yscale('log')
        plt.xscale('symlog', linthresh=50)  # matplotlib >= 3.3 renamed 'linthreshx' to 'linthresh'
plt.legend()
plt.xlabel('Pixel value')
plt.ylabel('Counts')
plt.title('Pixel Intensity Histogram')
```
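The core of `f_batch_histogram` is: compute one histogram per image over a shared bin range, then take the mean and standard error across images. A minimal self-contained sketch of that computation on synthetic data (shapes and names here are illustrative, not from the original notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
img_arr = rng.random((8, 16, 16))   # batch of 8 synthetic 16x16 "images" in [0, 1)

bins, llim, ulim = 25, 0.0, 1.0     # a shared range makes the histograms comparable
hists = np.stack([np.histogram(img.ravel(), bins=bins, range=(llim, ulim))[0]
                  for img in img_arr])
mean = hists.mean(axis=0)                           # average histogram over the batch
err = hists.std(axis=0) / np.sqrt(hists.shape[0])   # standard error across images
edges = np.histogram_bin_edges(img_arr, bins=bins, range=(llim, ulim))
centers = (edges[:-1] + edges[1:]) / 2              # bin centers for plotting

assert mean.shape == err.shape == centers.shape == (bins,)
assert np.isclose(mean.sum(), 16 * 16)  # every pixel falls in exactly one bin
```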
### Spectral modules
```
## numpy code
def f_radial_profile_3d(data, center=None):
    ''' Module to compute radial profile of a 3D image '''
    z, y, x = np.indices((data.shape)) # Get a grid of z, y and x values
    if not center:
        center = np.array([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0, (z.max()-z.min())/2.0]) # compute centers
    # get radial distance of every voxel from the center
    r = np.sqrt((x - center[0])**2 + (y - center[1])**2 + (z - center[2])**2)
    r = r.astype(int)  # np.int is deprecated; use the builtin int
    # Compute histogram of r values
    tbin = np.bincount(r.ravel(), data.ravel())
    nr = np.bincount(r.ravel())
    radialprofile = tbin / nr
    return radialprofile[1:-1]
def f_compute_spectrum_3d(arr):
'''
compute spectrum for a 3D image
'''
# GLOBAL_MEAN=1.0
# arr=((arr - GLOBAL_MEAN)/GLOBAL_MEAN)
y1=np.fft.fftn(arr)
y1=np.fft.fftshift(y1)
# print(y1.shape)
y2=abs(y1)**2
z1=f_radial_profile_3d(y2)
return(z1)
def f_batch_spectrum_3d(arr):
batch_pk=np.array([f_compute_spectrum_3d(i) for i in arr])
return batch_pk
### Code ###
def f_image_spectrum_3d(x,num_channels):
    '''
    Compute spectrum when image has a channel index
    Data has to be in the form (batch, channel, x, y, z)
    '''
mean=[[] for i in range(num_channels)]
sdev=[[] for i in range(num_channels)]
for i in range(num_channels):
arr=x[:,i,:,:,:]
# print(i,arr.shape)
batch_pk=f_batch_spectrum_3d(arr)
# print(batch_pk)
mean[i]=np.mean(batch_pk,axis=0)
sdev[i]=np.var(batch_pk,axis=0)
mean=np.array(mean)
sdev=np.array(sdev)
return mean,sdev
def f_plot_spectrum_3d(img_arr,plot=False,label='input',log_scale=True):
'''
Module to compute Average of the 1D spectrum for a batch of 3d images
'''
num = img_arr.shape[0]
Pk = f_batch_spectrum_3d(img_arr)
#mean,std = np.mean(Pk, axis=0),np.std(Pk, axis=0)/np.sqrt(Pk.shape[0])
mean,std = np.mean(Pk, axis=0),np.std(Pk, axis=0)
k=np.arange(len(mean))
if plot:
plt.figure()
plt.plot(k, mean, 'k:')
plt.plot(k, mean + std, 'k-',label=label)
plt.plot(k, mean - std, 'k-')
# plt.xscale('log')
if log_scale: plt.yscale('log')
plt.ylabel(r'$P(k)$')
plt.xlabel(r'$k$')
plt.title('Power Spectrum')
plt.legend()
return mean,std
def f_compare_spectrum_3d(img_lst,label_lst=['img1','img2'],bkgnd_arr=[],log_scale=True):
'''
Compare the spectrum of 2 sets s:
img_lst contains the set of images arrays, Each is of the form (num_images,height,width)
label_lst contains the labels used in the plot
'''
plt.figure()
## Plot background distribution
if len(bkgnd_arr):
Pk= f_batch_spectrum_3d(bkgnd_arr)
mean,err = np.mean(Pk, axis=0),np.std(Pk, axis=0)/np.sqrt(Pk.shape[0])
k=np.arange(len(mean))
plt.plot(k, mean,color='k',linestyle='-',label='bkgnd')
plt.fill_between(k, mean - err, mean + err, color='k',alpha=0.8)
for img_arr,label,mrkr in zip(img_lst,label_lst,itertools.cycle('>^*sDHPdpx_')):
Pk= f_batch_spectrum_3d(img_arr)
mean,err = np.mean(Pk, axis=0),np.std(Pk, axis=0)/np.sqrt(Pk.shape[0])
k=np.arange(len(mean))
# print(mean.shape,std.shape)
plt.fill_between(k, mean - err, mean + err, alpha=0.4)
plt.plot(k, mean, marker=mrkr, linestyle=':',label=label)
if log_scale: plt.yscale('log')
plt.ylabel(r'$P(k)$')
plt.xlabel(r'$k$')
plt.title('Power Spectrum')
plt.legend()
```
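As a quick self-contained check of the radial-averaging idea (a simplified standalone variant of `f_radial_profile_3d`, without trimming the first and last bins), the power spectrum of a white-noise cube should come out roughly flat:

```python
import numpy as np

def radial_profile_3d(data):
    # bin values by integer radius measured from the cube center
    z, y, x = np.indices(data.shape)
    c = (np.array(data.shape) - 1) / 2.0
    r = np.sqrt((x - c[2])**2 + (y - c[1])**2 + (z - c[0])**2).astype(int)
    tbin = np.bincount(r.ravel(), data.ravel())   # sum of power per radius bin
    nr = np.maximum(np.bincount(r.ravel()), 1)    # voxel count per bin (guarded against empty bins)
    return tbin / nr

rng = np.random.default_rng(1)
cube = rng.standard_normal((16, 16, 16))          # white-noise test cube
power = np.abs(np.fft.fftshift(np.fft.fftn(cube)))**2
pk = radial_profile_3d(power)

assert pk.ndim == 1 and np.all(np.isfinite(pk))
```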
### Read data
```
# fname='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/3d/20210111_104029_3d_/images/best_hist_epoch-8_step-13530.npy'
fname='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/3d/20210210_060657_3d_l0.5_80k/images/gen_img_epoch-15_step-37730.npy'
a1=np.load(fname)[:,0,:,:,:]
fname='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/3d_data/dataset1_smoothing_const_params_64cube_100k/val.npy'
val_arr=np.load(fname,mmap_mode='r')[-500:,0,:,:,:]
print(a1.shape,val_arr.shape)
val_arr=f_transform(val_arr)
np.max(val_arr),np.max(a1)
```
### Histogram
```
_,_=f_pixel_intensity(a1)
img_lst=[a1,val_arr]
label_lst=['a1','val']
f_compare_pixel_intensity(img_lst,label_lst=label_lst,bkgnd_arr=[],log_scale=True, normalize=True, mode='avg',bins=25, hist_range=None)
```
## Power spectrum
```
a1.shape
# f_image_spectrum_3d(a1,1)
f_compute_spectrum_3d(a1[0])
# f_image_spectrum_3d(a1,1)
_,_=f_plot_spectrum_3d(val_arr[:50],plot=True)
img_lst=[a1,val_arr]
f_compare_spectrum_3d(img_lst)
f_plot_spectrum_3d(a1,plot=False,label='input',log_scale=True)
f_batch_spectrum_3d(a1)
a1.shape
print(a1.shape)
f_plot_spectrum_3d(a1,plot=False,label='input',log_scale=True)
f_compute_spectrum_3d(a1[0])
a1[0].shape
```
```
# Imports / Requirements
import json
import numpy as np
import torch
import matplotlib.pyplot as plt
import torch.nn.functional as F
import torchvision
from torch import nn, optim
from torchvision import datasets, transforms, models
from torch.autograd import Variable
from collections import OrderedDict
from PIL import Image
%matplotlib inline
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
!python --version
print(f"PyTorch Version {torch.__version__}")
```
# Developing an AI application
Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories; you can see a few examples below.
<img src='assets/Flowers.png' width=500px>
The project is broken down into multiple steps:
* Load and preprocess the image dataset
* Train the image classifier on your dataset
* Use the trained classifier to predict image content
We'll lead you through each part which you'll implement in Python.
When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.
First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.
## Load the data
Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook; otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts: training, validation, and testing. For the training set, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize, leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.
The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.
The pre-trained networks you'll use were trained on the ImageNet dataset, where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 with roughly unit standard deviation.
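The arithmetic behind `transforms.Normalize` is just a per-channel shift and scale. A minimal sketch with a random stand-in image (numpy instead of torch, purely for illustration):

```python
import numpy as np

# ImageNet per-channel statistics quoted above
mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

rng = np.random.default_rng(0)
img = rng.random((3, 224, 224))      # stand-in for a ToTensor()-scaled image in [0, 1)
normed = (img - mean) / std          # what transforms.Normalize applies per channel

assert normed.shape == (3, 224, 224)
# the red channel mean lands near (0.5 - 0.485) / 0.229, i.e. close to 0
assert abs(normed[0].mean()) < 0.2
```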
```
data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'
# pre-trained network expectations
# see: https://pytorch.org/docs/stable/torchvision/models.html
expected_means = [0.485, 0.456, 0.406]
expected_std = [0.229, 0.224, 0.225]
max_image_size = 224
batch_size = 32
# DONE: Define your transforms for the training, validation, and testing sets
data_transforms = {
"training": transforms.Compose([transforms.RandomHorizontalFlip(p=0.25),
transforms.RandomRotation(25),
transforms.RandomGrayscale(p=0.02),
transforms.RandomResizedCrop(max_image_size),
transforms.ToTensor(),
transforms.Normalize(expected_means, expected_std)]),
"validation": transforms.Compose([transforms.Resize(max_image_size + 1),
transforms.CenterCrop(max_image_size),
transforms.ToTensor(),
transforms.Normalize(expected_means, expected_std)]),
"testing": transforms.Compose([transforms.Resize(max_image_size + 1),
transforms.CenterCrop(max_image_size),
transforms.ToTensor(),
transforms.Normalize(expected_means, expected_std)])
}
# DONE: Load the datasets with ImageFolder
image_datasets = {
"training": datasets.ImageFolder(train_dir, transform=data_transforms["training"]),
"validation": datasets.ImageFolder(valid_dir, transform=data_transforms["validation"]),
"testing": datasets.ImageFolder(test_dir, transform=data_transforms["testing"])
}
# DONE: Using the image datasets and the transforms, define the dataloaders
dataloaders = {
"training": torch.utils.data.DataLoader(image_datasets["training"], batch_size=batch_size, shuffle=True),
"validation": torch.utils.data.DataLoader(image_datasets["validation"], batch_size=batch_size),
"testing": torch.utils.data.DataLoader(image_datasets["testing"], batch_size=batch_size)
}
```
### Label mapping
You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.
```
import json
with open('cat_to_name.json', 'r') as f:
cat_to_name = json.load(f)
print(f"Images are labeled with {len(cat_to_name)} categories.")
```
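If the `json` module is unfamiliar, the load above amounts to the following (a toy example with an inline JSON string standing in for the real `cat_to_name.json`; the two entries are illustrative):

```python
import json

# Toy stand-in for cat_to_name.json: integer-encoded categories as string keys
raw = '{"1": "pink primrose", "2": "hard-leaved pocket orchid"}'
cat_to_name = json.loads(raw)

assert cat_to_name["1"] == "pink primrose"
assert len(cat_to_name) == 2
```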
# Building and training the classifier
Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.
We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students! You can also ask questions on the forums or join the instructors in office hours.
Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:
* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)
* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout
* Train the classifier layers using backpropagation using the pre-trained network to get the features
* Track the loss and accuracy on the validation set to determine the best hyperparameters
We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!
When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.
```
# DONE: Build and train your network
from collections import OrderedDict

import torch
from torch import nn, optim
from torchvision import models
# Get model Output Size = Number of Categories
output_size = len(cat_to_name)
# Using VGG16.
nn_model = models.vgg16(pretrained=True)
# Input size from current classifier
input_size = nn_model.classifier[0].in_features
hidden_size = [
(input_size // 8),
(input_size // 32)
]
# Prevent backpropagation through the pre-trained parameters
for param in nn_model.parameters():
param.requires_grad = False
# Create nn.Module with Sequential using an OrderedDict
# See https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential
# Create nn.Module with Sequential using an OrderedDict
# See https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential
# Note: keys must be unique; duplicate names in an OrderedDict
# silently collapse into a single entry.
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_size[0])),
('relu1', nn.ReLU()),
('dropout1', nn.Dropout(p=0.15)),
('fc2', nn.Linear(hidden_size[0], hidden_size[1])),
('relu2', nn.ReLU()),
('dropout2', nn.Dropout(p=0.15)),
('output', nn.Linear(hidden_size[1], output_size)),
# LogSoftmax is needed by NLLLoss criterion
('softmax', nn.LogSoftmax(dim=1))
]))
# Replace classifier
nn_model.classifier = classifier
# Select the preferred device for training
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# hyperparameters
# https://en.wikipedia.org/wiki/Hyperparameter
epochs = 5
learning_rate = 0.001
chk_every = 50
# Start clean by setting gradients of all parameters to zero.
nn_model.zero_grad()
# The negative log likelihood loss as criterion.
criterion = nn.NLLLoss()
# Adam: A Method for Stochastic Optimization
# https://arxiv.org/abs/1412.6980
optimizer = optim.Adam(nn_model.classifier.parameters(), lr=learning_rate)
# Move model to the preferred device.
nn_model = nn_model.to(device)
data_set_len = len(dataloaders["training"].batch_sampler)
total_val_images = len(dataloaders["validation"].batch_sampler) * dataloaders["validation"].batch_size
print(f'Using the {device} device to train.')
print(f'Training on {data_set_len} batches of {dataloaders["training"].batch_size}.')
print(f'Displaying average loss and accuracy for epoch every {chk_every} batches.')
for e in range(epochs):
e_loss = 0
prev_chk = 0
total = 0
correct = 0
print(f'\nEpoch {e+1} of {epochs}\n----------------------------')
for ii, (images, labels) in enumerate(dataloaders["training"]):
# Move images and labels to the preferred
# device if they are not already there
images = images.to(device)
labels = labels.to(device)
# Set gradients of all parameters to zero.
optimizer.zero_grad()
# Propagate forward and backward
outputs = nn_model.forward(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# Keep a running total of loss for
# this epoch
e_loss += loss.item()
# Accuracy
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
# Print running averages every chk_every batches
itr = (ii + 1)
if itr % chk_every == 0:
avg_loss = f'avg. loss: {e_loss/itr:.4f}'
acc = f'accuracy: {(correct/total) * 100:.2f}%'
print(f' Batches {prev_chk:03} to {itr:03}: {avg_loss}, {acc}.')
prev_chk = (ii + 1)
# Validate Epoch
e_valid_correct = 0
e_valid_total = 0
# Disabling gradient calculation
with torch.no_grad():
for ii, (images, labels) in enumerate(dataloaders["validation"]):
# Move images and labels to the preferred
# device if they are not already there
images = images.to(device)
labels = labels.to(device)
outputs = nn_model(images)
_, predicted = torch.max(outputs.data, 1)
e_valid_total += labels.size(0)
e_valid_correct += (predicted == labels).sum().item()
print(f"\n\tValidating for epoch {e+1}...")
correct_perc = 0
if e_valid_correct > 0:
correct_perc = (100 * e_valid_correct // e_valid_total)
print(f'\tAccurately classified {correct_perc:d}% of {total_val_images} images.')
print('Done...')
```
## Testing your network
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
```
# DONE: Do validation on the test set
correct = 0
total = 0
total_images = len(dataloaders["testing"].batch_sampler) * dataloaders["testing"].batch_size
# Disabling gradient calculation
with torch.no_grad():
for ii, (images, labels) in enumerate(dataloaders["testing"]):
# Move images and labels to the preferred
# device if they are not already there
images = images.to(device)
labels = labels.to(device)
outputs = nn_model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print(f'Accurately classified {(100 * correct // total):d}% of {total_images} images.')
```
## Save the checkpoint
Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.
```model.class_to_idx = image_datasets['train'].class_to_idx```
Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.
```
# DONE: Save the checkpoint
def save_checkpoint(model_state, file='checkpoint.pth'):
torch.save(model_state, file)
nn_model.class_to_idx = image_datasets['training'].class_to_idx
model_state = {
'epoch': epochs,
'state_dict': nn_model.state_dict(),
'optimizer_dict': optimizer.state_dict(),
'classifier': classifier,
'class_to_idx': nn_model.class_to_idx,
}
save_checkpoint(model_state, 'checkpoint.pth')
```
## Loading the checkpoint
At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
```
# DONE: Write a function that loads a checkpoint and rebuilds the model
def load_checkpoint(file='checkpoint.pth'):
# Loading weights for a CPU model that was trained on GPU
# https://discuss.pytorch.org/t/loading-weights-for-cpu-model-while-trained-on-gpu/1032
model_state = torch.load(file, map_location=lambda storage, loc: storage)
model = models.vgg16(pretrained=True)
model.classifier = model_state['classifier']
model.load_state_dict(model_state['state_dict'])
model.class_to_idx = model_state['class_to_idx']
return model
chkp_model = load_checkpoint()
```
# Inference for classification
Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
First you'll need to handle processing the input image such that it can be used in your network.
## Image Preprocessing
You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training.
First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.resize) methods. Then you'll need to crop out the center 224x224 portion of the image.
Color channels of images are typically encoded as integers 0-255, but the model expects floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`.
As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation.
And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.
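Assuming `np.array(pil_image)` yields an H x W x 3 uint8 array, the last three steps above (scale to 0-1, normalize per channel, move the color channel first) can be sketched in plain NumPy; the `normalize_image` helper and the synthetic array are illustrative, not part of the project code:

```python
import numpy as np

def normalize_image(np_image):
    """Scale 0-255 ints to 0-1 floats, normalize per channel, move channels first."""
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    img = np_image.astype(np.float32) / 255.0  # 0-255 -> 0-1
    img = (img - mean) / std                   # broadcasts over H x W x 3
    return img.transpose((2, 0, 1))            # H x W x C -> C x H x W

# Synthetic 224x224 RGB "image" standing in for a real center crop
fake = np.full((224, 224, 3), 128, dtype=np.uint8)
out = normalize_image(fake)
print(out.shape)  # (3, 224, 224)
```

The `transforms.Compose` pipeline in the next cell performs the same steps (plus resize and crop) in one call.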
```
def process_image(image):
''' Scales, crops, and normalizes a PIL image for a PyTorch model;
returns a PyTorch tensor.
'''
expects_means = [0.485, 0.456, 0.406]
expects_std = [0.229, 0.224, 0.225]
pil_image = Image.open(image).convert("RGB")
# Any reason not to let transforms do all the work here?
in_transforms = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(expects_means, expects_std)])
pil_image = in_transforms(pil_image)
return pil_image
# DONE: Process a PIL image for use in a PyTorch model
chk_image = process_image(valid_dir + '/1/image_06739.jpg')
type(chk_image)
```
To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions).
```
def imshow(image, ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
# PyTorch tensors assume the color channel is the first dimension,
# but matplotlib assumes it is the third dimension
image = image.transpose((1, 2, 0))
# Undo preprocessing
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
# Image needs to be clipped between 0 and 1 or it looks like noise when displayed
image = np.clip(image, 0, 1)
ax.imshow(image)
return ax
imshow(chk_image.numpy())
```
## Class Prediction
Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.
To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.
Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
```
def predict(image_path, model, topk=5):
''' Predict the class (or classes) of an image using a trained deep learning model.
'''
# DONE: Implement the code to predict the class from an image file
# evaluation mode
# https://pytorch.org/docs/stable/nn.html#torch.nn.Module.eval
model.eval()
# cpu mode
model.cpu()
# load image as torch.Tensor
image = process_image(image_path)
# Unsqueeze returns a new tensor with a dimension of size one
# https://pytorch.org/docs/stable/torch.html#torch.unsqueeze
image = image.unsqueeze(0)
# Disabling gradient calculation
# (eval mode alone does not disable autograd, so this still helps)
with torch.no_grad():
output = model.forward(image)
top_prob, top_labels = torch.topk(output, topk)
# Calculate the exponentials
top_prob = top_prob.exp()
class_to_idx_inv = {model.class_to_idx[k]: k for k in model.class_to_idx}
mapped_classes = list()
for label in top_labels.numpy()[0]:
mapped_classes.append(class_to_idx_inv[label])
return top_prob.numpy()[0], mapped_classes
```
## Sanity Checking
Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:
<img src='assets/inference_example.png' width=300px>
You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.
```
# DONE: Display an image along with the top 5 classes
chk_image_file = valid_dir + '/55/image_04696.jpg'
correct_class = cat_to_name['55']
top_prob, top_classes = predict(chk_image_file, chkp_model)
label = top_classes[0]
fig = plt.figure(figsize=(6,6))
sp_img = plt.subplot2grid((15,9), (0,0), colspan=9, rowspan=9)
sp_prd = plt.subplot2grid((15,9), (9,2), colspan=5, rowspan=5)
image = Image.open(chk_image_file)
sp_img.axis('off')
sp_img.set_title(f'{cat_to_name[label]}')
sp_img.imshow(image)
labels = []
for class_idx in top_classes:
labels.append(cat_to_name[class_idx])
yp = np.arange(5)
sp_prd.set_yticks(yp)
sp_prd.set_yticklabels(labels)
sp_prd.set_xlabel('Probability')
sp_prd.invert_yaxis()
sp_prd.barh(yp, top_prob, xerr=0, align='center', color='blue')
plt.show()
print(f'Correct classification: {correct_class}')
print(f'Correct prediction: {correct_class == cat_to_name[label]}')
```
# Exercise 1
**(1)** Forecasting with linear models:
> **(a)** Estimate four linear models using the OLS estimator
> **(b)** Forecast n steps ahead using the estimated models
> **(c)** Forecast n steps ahead (recursively) using the estimated models
> **(d)** Compute confidence intervals for both **(b)** and **(c)** forecasts
```
library(data.table)
library(readr)
library(here)
```
## (a) Estimate four linear models using the OLS estimator
The linear regression model (LRM) can be written as follows:
$
\mathbf{y} = X\mathbf{\beta} + \mathbf{\epsilon}
$
where $\mathbf{y}$ is a Tx1 vector of dependent variables, $X$ is a Txk matrix of independent variables, $\mathbf{\beta}$ is a kx1 vector of parameters, and $\mathbf{\epsilon}$ is a Tx1 vector of independent error terms.
The LRM assumptions are:
> A1. $\mathbf{E}[\mathbf{\epsilon}]=0$
> A2. $\mathbf{E}[\mathbf{\epsilon}^{'}\mathbf{\epsilon}]=\sigma^2\mathbf{I}$
> A3. $X \perp \!\!\! \perp \mathbf{\epsilon}$
> A4. $X^{'}X$ is non singular
> A5. $X$ is weakly stationary
Furthermore, the parameters $\mathbf{\beta}$ can be estimated using the ordinary least squares (OLS) method. This method seeks to find the vector $\mathbf{\beta}$ that solves the following optimization problem:
$
argmin_{\mathbf{\beta}} SSE(X, \mathbf{y}; \mathbf{\beta}) \quad SSE = \mathbf{\epsilon}^{'}\mathbf{\epsilon} = (\mathbf{y} - X\mathbf{\beta})^{'}(\mathbf{y} - X\mathbf{\beta})
$
which has the following solution:
$
\hat{\mathbf{\beta}} = (X^{'}X)^{-1}X^{'}\mathbf{y}
$
For the variance, the OLS estimator is given by:
$
\hat{\mathbf{\sigma}}^2 = \frac{\hat{\mathbf{\epsilon}}^{'}\hat{\mathbf{\epsilon}}}{T-k} \quad where \quad \hat{\mathbf{\epsilon}} = \mathbf{y} - X\hat{\mathbf{\beta}}
$
It is possible to show that, under assumptions A1-A5, the OLS estimator is the best linear unbiased estimator (BLUE): it has the lowest variance in the class of linear unbiased estimators.
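The closed-form estimators above can be checked numerically. The notebook itself uses R, but the same computation is easy to sketch in Python with synthetic data (all names and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 200, 3
X = rng.normal(size=(T, k))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=T)

# beta_hat = (X'X)^{-1} X'y, via a linear solve rather than an explicit inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# sigma_hat^2 = e'e / (T - k)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (T - k)
print(beta_hat.round(2))  # close to [ 1. -2.  0.5]
```

Solving the normal equations directly (instead of inverting $X^{'}X$) is the numerically preferred route.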
```
df = readr::read_csv(here("src", "data", "ex2_regress_gdp.csv"))
head(df)
par(mfrow=c(3, 2))
plot(df$y, type = 'l', main = 'GDP')
plot(df$pr, type = 'l', main = 'Inflation rate')
plot(df$sr, type = 'l', main = 'Consumer sentiment index')
plot(df$su, type = 'l', main = 'Eurostoxx')
plot(df$ipr, type = 'l', main = 'Industrial production')
```
A quick look at the series under analysis suggests that some of the LRM assumptions may be violated. For instance, all the series seem to have a variance that is not constant over time, violating assumption A2. Furthermore, the GDP series in particular has an outlier in its history, which may undermine the OLS standard error estimates.
Despite these facts, we estimate four specifications of the LRM as follows:
$
gdp_t = \beta_1 ipr_t + \beta_2 su_t + \beta_3 pr_t + \beta_4 sr_t + \epsilon_t
$
$
gdp_t = \beta_1 ipr_t + \beta_2 su_t + \beta_3 sr_t + \epsilon_t
$
$
gdp_t = \beta_1 ipr_t + \beta_2 su_t + \epsilon_t
$
$
gdp_t = \beta_1 ipr_t + \beta_2 pr_t + \beta_3 sr_t + \epsilon_t
$
where:
> $ipr_t$ is the industrial production for the euro area at time $t$
> $su_t$ is the Eurostoxx returns at $t$
> $pr_t$ is the euro area inflation rate at $t$
> $sr_t$ is the euro area consumer sentiment index at $t$
All series are in monthly frequency.
```
gdp.fit <- list()
gdp.formula <- c('y ~ ipr + su + pr + sr', 'y ~ ipr + su + sr',
'y ~ ipr + su', 'y ~ ipr + pr + sr')
for (model in 1:4) {
gdp.fit[[model]] <- lm(gdp.formula[model], data = df)
}
est_df = df[1:44,] # estimation window
fore_df = df[45:70,] # forecast window
gdp.est <- list()
for (model in 1:4) {
gdp.est[[model]] <- lm(gdp.formula[model], data = est_df)
summary(gdp.est[[model]])
}
```
The model estimation summaries are:
```
summary(gdp.fit[[1]])
summary(gdp.fit[[2]])
summary(gdp.fit[[3]])
summary(gdp.fit[[4]])
```
One common statistics for model evaluation is the coefficient of determination or $R^2$:
$
R^2 = 1-\frac{\mathbf{\epsilon}^{'}\mathbf{\epsilon}}{\mathbf{y}^{'}\mathbf{y}}
$
The $R^2$ compares the variability in the dependent variable $\mathbf{y}$ explained by the model with the variability of $\mathbf{\epsilon}$, which is the unexplained amount of variability.
Although it is naive to analyze the output of a regression in terms of the $R^2$ alone, we see that the first model has the highest value among the tested models ($R^2$=0.8008).
One clearly undesirable property of the $R^2$ is that it can only increase with the number of parameters, so the adjusted $R^2$ is typically a better measure.
$
\bar{R}^2 = 1-\frac{\mathbf{\epsilon}^{'}\mathbf{\epsilon}}{T-k}\frac{T-1}{\mathbf{y}^{'}\mathbf{y}}
$
Note that although the first model has the highest $R^2$=0.8008, its adjusted version is $\bar{R}^2=0.7886$, which is quite close to the second model's $\bar{R}^2=0.7866$. It therefore seems that the unadjusted R-squared of the first model has been inflated by the number of variables, and we should prefer the second model instead.
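The two statistics, as defined above (uncentered, matching the formulas rather than `lm`'s centered convention), can be sketched in Python on a synthetic fit; the `r_squared` helper is illustrative:

```python
import numpy as np

def r_squared(y, resid, k):
    """R^2 and adjusted R^2 using the (uncentered) formulas above."""
    T = len(y)
    sse = resid @ resid
    tss = y @ y
    r2 = 1 - sse / tss
    r2_adj = 1 - (sse / (T - k)) * ((T - 1) / tss)
    return r2, r2_adj

rng = np.random.default_rng(1)
T, k = 50, 4
X = rng.normal(size=(T, k))
y = X @ rng.normal(size=k) + rng.normal(size=T)
resid = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
r2, r2_adj = r_squared(y, resid, k)
print(r2 > r2_adj)  # the degrees-of-freedom penalty can only lower R^2 when k > 1
```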
## (b) Forecast n steps ahead using the estimated models
## +
## (c) Forecast n steps ahead (recursively) using the estimated models
The OLS regression forecast is given by:
$
\hat{\mathbf{y}}_{T+h} = X_{T+h}\hat{\mathbf{\beta}}
$
and the forecast error is simply:
$
e_{T+h} = \mathbf{y}_{T+h} - \hat{\mathbf{y}}_{T+h}
$
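The batch scheme of (b) and the expanding-window scheme of (c) can be sketched in Python with synthetic data; the 44/26 split mirrors the estimation and forecast windows used below, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
T, k, split = 70, 2, 44
X = rng.normal(size=(T, k))
y = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.2, size=T)

def ols(Xm, ym):
    return np.linalg.solve(Xm.T @ Xm, Xm.T @ ym)

# (b) batch: estimate once on the first `split` points, forecast the rest
beta = ols(X[:split], y[:split])
batch_fore = X[split:] @ beta

# (c) recursive: re-estimate on an expanding window before each forecast
rec_fore = np.empty(T - split)
for i in range(split, T):
    rec_fore[i - split] = X[i] @ ols(X[:i], y[:i])

mse = lambda f: np.mean((y[split:] - f) ** 2)
print(mse(batch_fore), mse(rec_fore))
```

The recursive forecaster sees more data before each prediction, which is why it tends to (but need not always) beat the batch forecaster.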
```
gdp.fore = list()
gdp.rec = list()
for (model in 1:4) {
gdp.fore[[model]] = predict(gdp.est[[model]], newdata = fore_df)
gdp.rec[[model]] = rep(0, 26)
for (i in 1:26) {
ols.rec = lm(gdp.formula[model], data = df[1:(43 + i),])
gdp.rec[[model]][i] = predict(ols.rec, newdata = df[44 + i,])
}
}
plot.label <- 2007:2013
par(mfrow=c(2, 2))
for (model in 1:4) {
gdp.plot = cbind(data.table(fore_df$y),
data.table(gdp.rec[[model]]),
data.table(gdp.fore[[model]]))
setnames(gdp.plot, c('Y', paste0('YRFOREG', model),
paste0('YFOREG', model)))
plot(gdp.plot[[1]], type = 'n', xlab = '', ylab = '',
xaxt = 'n', ylim = c(-3, 2))
axis(1, at = c(0:6) * 4 + 2, labels = plot.label)
for (l in 1:3) {
lines(gdp.plot[[l]], lty = l)
}
legend('bottomright', legend = colnames(gdp.plot), inset = 0.05, lty = 1:3)
}
error_fore = list()
error_rec = list()
for (model in 1:4) {
gdp.plot = cbind(data.table(fore_df$y),
data.table(gdp.rec[[model]]),
data.table(gdp.fore[[model]]))
error_fore[model] = sum((gdp.plot[,1]-gdp.plot[,3])^2) / dim(gdp.plot[,3])[1]
error_rec[model] = sum((gdp.plot[,1]-gdp.plot[,2])^2) / dim(gdp.plot[,2])[1]
}
error_fore
sum(unlist(error_fore))
error_rec
sum(unlist(error_rec))
```
It can be seen from the above that model two has the lowest forecast error (0.14307) among the models under the batch forecast method, and the lowest error (0.12327) under the recursive forecast method.
Furthermore, as expected, the recursive method consistently beats the batch forecast method in terms of mean squared error across all models.
**(d) Compute confidence intervals for both (b) and (c) forecasts**
A typical parametric assumption of the LRM is:
$
\epsilon \sim N(0, \sigma^2 \mathbf{I})
$
this implies that the distribution of the OLS estimator can be written as:
$
\sqrt{T}(\hat{\beta}-\beta) \sim N(0, \sigma^2(\frac{X^{'}X}{T})^{-1})
$
and the forecast errors have the following distribution:
$
\frac{\mathbf{y}_{T+h} - \hat{\mathbf{y}}_{T+h}}{\sqrt{Var(e_{T+h})}} \sim N(0,1) \implies \mathbf{y}_{T+h} \sim N(\hat{\mathbf{y}}_{T+h}, Var(e_{T+h}))
$
This means that we can use the above density to construct confidence intervals for our forecasts. In particular, a $[1-\alpha]$% forecast interval is represented by:
$
[\hat{\mathbf{y}}_{T+h}-c_{\alpha/2}\sqrt{Var(e_{T+h})} ; \hat{\mathbf{y}}_{T+h}+c_{\alpha/2}\sqrt{Var(e_{T+h})}]
$
Therefore, for a 95% area interval we have that:
```
gdp.fore.ic.se = list()
gdp.fore.ic.up = list()
gdp.fore.ic.low = list()
for (i in 1:length(gdp.est)){
gdp.fore.ic.se[[i]] = sqrt(sum(gdp.est[[i]]$residuals^2) / gdp.est[[i]]$df.residual)
gdp.fore.ic.up[[i]] = gdp.fore[[i]] + 1.96 * gdp.fore.ic.se[[i]]
gdp.fore.ic.low[[i]] = gdp.fore[[i]] - 1.96 * gdp.fore.ic.se[[i]]
}
par(mfrow=c(2,2))
for (i in 1:length(gdp.est)){
dat = cbind(gdp.fore.ic.up[[i]], gdp.fore.ic.low[[i]], gdp.fore[[i]], fore_df$y)
colnames(dat) = names = c('up', 'lower', 'predict', 'real')
matplot(dat, type = c("b"),pch=1,col = 1:4)
legend("topleft", legend=names, col=1:4, pch=1)
}
```
<a href="https://colab.research.google.com/github/IanCostello/tools/blob/ValidationTool/import-validation-helper/ImportValidatorMaster.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Import Validation Helper
This Colab notebook introduces a few tools to check your template MCF, StatVars, and CSV.
A summary of features is as follows.
* MCF format checking (no improperly defined nodes).
* StatVar reference checking (makes sure that all references either exist locally or in the knowledge graph).
* TMCF and CSV column validation.
* Description spell checking.
* ASCII encoding checking.
### Usage summary:
1. Runtime -> Run All
2. Authenticate with BigQuery in second cell
3. Scroll to bottom, select three files to validate from your local computer.
# 1) At the top of the page, go to "Runtime -> Run All".
```
import re
import pandas as pd
!pip install --upgrade -q pyspellchecker
!pip install --upgrade -q pygsheets
from spellchecker import SpellChecker
import subprocess
from google.colab import auth
from google.cloud import bigquery
import gspread
from oauth2client.client import GoogleCredentials
```
# 2) Authenticate BQ here.
BigQuery is used to check your used references against the KG.
```
#@title Do you have BigQuery Access? (Internal Googler)
bq_access = True
if bq_access:
auth.authenticate_user()
```
## Helper Functions
```
# Setup BQ client
client = None
if bq_access:
project_id = "google.com:datcom-store-dev"
client = bigquery.Client(project=project_id)
# Setup logging
# gc = gspread.authorize(GoogleCredentials.get_application_default())
# Enum definition
from enum import Enum
class PrecheckError(Enum):
CRITICAL = "Critical"
WARN = "Warn"
# Helpers
cache = {}
def validateNodeStructure(mcf_contents):
# See if node has been processed
hash_of_contents = "validateNodeStructure_" + str(hash(mcf_contents))
if hash_of_contents in cache:
return cache[hash_of_contents]
# Nodes in an MCF file are separated by a blank line
mcf_nodes_text = mcf_contents.split("\n\n")
# Lines in an MCF file are separated as property: constraint
mcf_line = re.compile(r"^(\w+): (.*)$")
mcf_nodes = []
errors = []
for node in mcf_nodes_text:
current_mcf_node = {}
for line in node.split('\n'):
# Ignore blank lines if multiple spaces between lines
if len(line) == 0:
continue
parsed_line = mcf_line.match(line)
if parsed_line is None:
errors.append((PrecheckError.CRITICAL, "MalformedLine", f"Malformed MCF Line '{line}'"))
else:
# Property = Constraint
current_mcf_node[parsed_line.group(1)] = parsed_line.group(2)
if len(current_mcf_node) > 0:
mcf_nodes.append(current_mcf_node)
# Add to cache
cache[hash_of_contents] = (mcf_nodes, errors)
return mcf_nodes, errors
def get_nodes_with_property(mcf_contents, prop, constraint):
mcf_nodes, errors = validateNodeStructure(mcf_contents)
matching_nodes = []
for node in mcf_nodes:
if prop in node and node[prop] == constraint:
matching_nodes.append(node)
return matching_nodes
def remove_prefix(s):
"""Removes prefixes 'dcs:', 'dcid:' and 'schema:' to ease node comparison."""
s = s.strip()
if s.startswith('dcs:'):
return s[4:]
if s.startswith('dcid:'):
return s[5:]
if s.startswith('schema:'):
return s[7:]
return s
def cmp_nodes(n1, n2):
"""Compares two nodes, ignoring prefixes such as in remove_prefix()"""
return remove_prefix(n1) == remove_prefix(n2)
def get_newly_defined_nodes(mcf_contents, typeOf = ""):
mcf_nodes, errors = validateNodeStructure(mcf_contents)
new_nodes = []
for node in mcf_nodes:
if "Node" in node and "typeOf" in node and \
(typeOf == "" or typeOf == remove_prefix(node['typeOf'])):
new_nodes.append(node['Node'].replace("dcs:","").replace("dcid:",""))
return new_nodes
```
## Definition of Tests
```
class TriplesChecks():
"""Defines the various tests that run on the combined contents of TMCF,
uploaded csv, and statistical variable files.
To add a test: Make a new method with the following args.
Args:
df -> Dataframe of uploaded CSV
tmcf_contents -> String of TMCF text content
stat_vars -> String of Statistical Variables file
Yields:
Yields tuple of the precheck error level enum, error name, and an error message
"""
def ensure_ascii(_, tmcf_contents, stat_vars_content):
"""Checks to ensure that files contents are solely ascii characters."""
ascii_character_match = re.compile(r"^[\x00-\x7F]+$")
for file_name, contents in \
[("TMCF", tmcf_contents), ("Statistical Variables", stat_vars_content)]:
if ascii_character_match.match(contents) == None:
yield (PrecheckError.CRITICAL, "NonAsciiInFile",
f"{file_name} file contains non-ascii characters.")
def tmcf_csv_column_checks(df, tmcf_contents, stat_vars_content):
"""Handles column inconsistencies between tmcf and csv."""
column_matches = re.compile(r"C:\w+->(\w+)")
tmcf_columns = column_matches.findall(tmcf_contents)
csv_columns = df.columns
for column in tmcf_columns:
if column not in csv_columns:
yield (PrecheckError.CRITICAL, "ColInTMCFMissingFromCSV",
f"Referenced column {column} in TMCF not found in CSV.")
for column in csv_columns:
if column not in tmcf_columns:
yield (PrecheckError.WARN, "UnusedColumnPresent",
f"Unused column {column} present in CSV.")
def ensure_mcf_not_malformed(_, tmcf_contents, stat_vars_content):
"""Ensures lines of MCF files are property defined.
Passes: Node: E:WorldBank->E0
Fails: Node E:WorldBank->E0
"""
# Grab error field of tuple
for error in validateNodeStructure(tmcf_contents)[1]:
yield error
for error in validateNodeStructure(stat_vars_content)[1]:
yield error
def ensure_nodes_properly_referenced(_, tmcf_contents, stat_vars_content):
"""Ensures that constraint field of mcf files are references or constants."""
tmcf_nodes, _ = validateNodeStructure(tmcf_contents)
stat_var_nodes, _ = validateNodeStructure(stat_vars_content)
# Ensure that each property is a string, integer, boolean, tmcf reference, or schema reference
tmcf_match = re.compile(r"^(\"[^\"]+\")|(E:\w+->E\d+)|(C:\w+->\w+)|(\d+)|(True)|(False)|(((dcs)|(dcid)|(schema)):\w+)$")
# Ensure that each property is a string, integer, boolean, tmcf reference, schema reference, or quantity range
stvr_match = re.compile(r"^(\"[^\"]+\")|(\d+)|(True)|(False)|((((dcs)|(dcid)|(schema)):[A-Za-z0-9_\-\/]+,? ?)+)|(\[\S+ ((\d+)|(\d+ \d+)|(\d+ \+)|(\- \d+))\])$")
tmcf_node_match = re.compile(r"^E:\S+->E\d+$")
stvr_node_match = re.compile(r"^dcid:\S+$")
for node_list, node_prop_regex, property_regex in \
[(tmcf_nodes, tmcf_node_match, tmcf_match),
(stat_var_nodes, stvr_node_match, stvr_match)]:
for node in node_list:
for prop, constraint in node.items():
if prop == "Node":
if node_prop_regex.match(constraint) == None:
yield (PrecheckError.CRITICAL, "MalformedNode",
f"Malformed Node Property '{prop}: {constraint}'")
# Validate properties of TMCF
elif prop[0].islower():
if property_regex.match(constraint) == None:
yield (PrecheckError.WARN, "MisformedReference",
f"Misformed Reference: '{prop}: {constraint}'")
# All properties besides Node should be lower case
else:
yield (PrecheckError.WARN, "LowerProperties",
f"All MCF Properties besides Node should be lowercase. Triggered for '{prop}'.")
def spell_check_descriptions(_, tmcf_contents, stat_vars_content):
"""Provides spell checking on all description fields."""
description_field_parser = re.compile("description: \"([^\"]*)\"")
spell = SpellChecker()
sets_to_check = [("TMCF", tmcf_contents),
("Statistical Variables", stat_vars_content)]
for set_name, text in sets_to_check:
potential_mispellings = set()
for description in description_field_parser.findall(text):
potential_mispellings = potential_mispellings.union(
spell.unknown(spell.split_words(description))
)
if len(potential_mispellings) != 0:
yield (PrecheckError.WARN, "Misspelling",
f"Potential Misspelling(s) in {set_name}: {list(potential_mispellings)})")
def ensure_all_references_exist(df, tmcf_contents, stat_vars_content):
if not bq_access:
return
# Get locally defined instances.
new_references = get_newly_defined_nodes(stat_vars_content)
# Get all references instances in stat vars
ref_finder = re.compile(r"(?:(?:dcs)|(?:dcid)):(\S+)")
references = list(set(ref_finder.findall(stat_vars_content)))
# Get all stat vars that are not locally defined
global_references = []
for ref in references:
if len(ref) != 0 and ref not in new_references:
global_references.append(ref)
# Query database
instance_query = """
SELECT distinct id
FROM `google.com:datcom-store-dev.dc_v3_clustered.Instance`
WHERE id IN ({str})
"""
obj_instances = client.query(instance_query.replace("{str}",
str(global_references).lstrip("[").rstrip("]"))).to_dataframe()['id'].values
missing_references = []
for ref in global_references:
if ref not in obj_instances:
missing_references.append(ref)
if len(missing_references) != 0:
yield (PrecheckError.WARN, "UndefinedReference",
f"Potential Undefined References: {missing_references}")
def ensure_all_statvars_defined(df, tmcf_contents, stat_vars_content):
direct_ref = re.compile(r'variableMeasured:\s(\w+)$')
indirect_ref = re.compile(r'variableMeasured:\sC:\w+->(\w+)')
defined_nodes = get_newly_defined_nodes(stat_vars_content)
for ref in direct_ref.findall(tmcf_contents):
if not any([cmp_nodes(ref, n) for n in defined_nodes]):
yield (PrecheckError.CRITICAL, "TMCFNodeRefNotInMCF",
f"Node '{ref}' referenced in TMCF, undefined in MCF.")
for col_ref in indirect_ref.findall(tmcf_contents):
if col_ref not in df.columns:
yield (PrecheckError.CRITICAL, "ColInTMCFNotInCSV",
f"Column '{col_ref}' referenced in TMCF, not in CSV.")
else:
for ref in df[col_ref].unique():
if not any([cmp_nodes(ref, n) for n in defined_nodes]):
yield (PrecheckError.CRITICAL, "ReferencedFieldNotInMcf",
f"Node '{ref}' referenced in TMCF through '{col_ref}' column in CSV, undefined in MCF.")
def ensure_dcid_not_too_long(_, __, stat_vars_content):
for line in stat_vars_content.split("\n"):
if 'Node:' in line and len(line) - len('Node: ') > 256:
yield (PrecheckError.CRITICAL, "MalformedNode",
f"The following node is too long: '{line.strip()}'\nMax dcid length is 256.")
from optparse import OptionParser
import inspect
def validate_prechecks(df, tmcf_contents, stat_vars_content, demo=False):
"""Runs validation checks on provided triples.
Args:
df -> Dataframe of uploaded CSV
tmcf_contents -> String of TMCF text content
stat_vars_content -> String of Statistical Variables file
"""
# Log usage to find common errors
# process = subprocess.Popen("gcloud config get-value account", shell=True, stdout=subprocess.PIPE)
# username = process.stdout.read().decode("utf-8")
# gc = gspread.authorize(GoogleCredentials.get_application_default())
# workbook = gc.open_by_url('https://docs.google.com/spreadsheets/d/1l4YqvkhzRBKtab5lVCuoR0guGf2qIARK-A995DFZ6Nw/edit?usp=sharing')
# sheet = workbook.worksheet('Usage')
# if demo:
# sheet.append_row([username, "RanDemo"])
for function_name, function in inspect.getmembers(TriplesChecks, predicate=inspect.isfunction):
errors = list(function(df, tmcf_contents, stat_vars_content))
if len(errors) != 0:
print(f"Error In Test {function_name}")
for error in errors:
# if not demo:
# sheet.append_row([username, str(error[0].value), error[1], error[2]])
print(f"{error[0].value} - {error[2]}")
print("")
```
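The reference extraction in `ensure_all_references_exist` hinges on a single regex. Its behavior can be checked in isolation on a small MCF snippet (the snippet below is invented for illustration):

```python
import re

# same pattern as in ensure_all_references_exist above
ref_finder = re.compile(r"(?:(?:dcs)|(?:dcid)):(\S+)")

snippet = '''
Node: dcid:Tourism
typeOf: dcs:TravelPurposeEnum
name: "no reference on this line"
'''
refs = sorted(set(ref_finder.findall(snippet)))
print(refs)  # ['Tourism', 'TravelPurposeEnum']
```

Note that the pattern captures everything up to the next whitespace, so a quoted string value that happens to contain `dcid:` would also be picked up as a reference.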
# Sample Upload Demonstrating Common Errors
```
# Sample Input
stat_vars_content = \
"""
Node: dcid:Tourism
name: “Tourism“
typeOf: dcid:TravelPurposeEnum
description: "Ptential mispeling in my description."
Node: dcid:ThisIsAnExtremelyLongStatVarName__Count_MortalityEvent_From75To79Years_MalignantImmunoproliferativeDiseasesAndCertainOtherB-CellLymphomas_MultipleMyelomaAndMalignantPlasmaCellNeoplasms_OtherAndUnspecifiedMalignantNeoplasmsOfLymphoid_HematopoieticAndRelatedTissue_Female_AsFractionOf_Count_Person
name: “Tourism“
typeOf: dcid:TravelPurposeEnum
description: "Ptential mispeling in my description."
"""
tmcf_contents = \
"""
Node E:WorldBank->E0
typeOf: dcs:StatVarObservation
variableMeasured: C:WorldBank->StatisticalVariable
observationDate: C:WorldBank->Year
observationPeriod: "P1Y"
observationAbout: E:WorldBank->E1
value: C:WorldBank->Value
Node: E:WorldBank->E1
typeOf: dcs:Country
countryAlpha3Code: C:WorldBank->IsoCode
BadProperty: someFieldThatShouldBeAReferenceButIsInterpretedAsAString
"""
df = pd.DataFrame.from_dict({"Value": [4], "IsoCode": ["USA"], "Year":[2018], "Foo": ['bar']})
df
validate_prechecks(df, tmcf_contents, stat_vars_content, demo=True)
```
# Real Validation Code
Upload three files.
- StatisticalVariable -> Needs to end in .mcf
- Template MCF -> Needs to end in .tmcf
- CSV File -> Needs to end in .csv
```
from google.colab import files
from io import StringIO
LARGE_FILE_FLAG = False # Set this to true for large files
selected_files = files.upload()
# Parse out files
df, tmcf_text, stat_var_text = None, None, None
for file, contents in selected_files.items():
if ".csv" in file:
if not LARGE_FILE_FLAG:
df = pd.read_csv(StringIO(contents.decode()))
else:
df = pd.read_csv(StringIO(contents.decode()), nrows=100)
elif ".tmcf" in file:
tmcf_text = contents.decode()
elif ".mcf" in file:
stat_var_text = contents.decode()
validate_prechecks(df, tmcf_text, stat_var_text)
```
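The `validate_prechecks` driver discovers its check functions by reflection over a class, with each check yielding `(severity, name, message)` tuples. The pattern can be exercised on its own with a toy checks class (all names below are invented for illustration):

```python
import inspect

class ToyChecks:
    @staticmethod
    def check_positive(value):
        # yields an error tuple only when the check fails
        if value <= 0:
            yield ("CRITICAL", "NotPositive", f"{value} is not positive")

    @staticmethod
    def check_small(value):
        if value > 100:
            yield ("WARN", "TooLarge", f"{value} exceeds 100")

def run_checks(value):
    # collect the yielded errors from every check function on the class
    errors = []
    for name, fn in inspect.getmembers(ToyChecks, predicate=inspect.isfunction):
        errors.extend(fn(value))
    return errors

print(run_checks(-5))   # the NotPositive error fires
print(run_checks(50))   # no errors
```

Because checks are generators, a passing check simply yields nothing, which keeps the driver loop uniform.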
```
import pandas as pd
import os
import json
from functools import singledispatch
date = "2022-02-12"
STATE_NAMES = {
'AP': 'Andhra Pradesh',
'AR': 'Arunachal Pradesh',
'AS': 'Assam',
'BR': 'Bihar',
'CT': 'Chhattisgarh',
'GA': 'Goa',
'GJ': 'Gujarat',
'HR': 'Haryana',
'HP': 'Himachal Pradesh',
'JH': 'Jharkhand',
'KA': 'Karnataka',
'KL': 'Kerala',
'MP': 'Madhya Pradesh',
'MH': 'Maharashtra',
'MN': 'Manipur',
'ML': 'Meghalaya',
'MZ': 'Mizoram',
'NL': 'Nagaland',
'OR': 'Odisha',
'PB': 'Punjab',
'RJ': 'Rajasthan',
'SK': 'Sikkim',
'TN': 'Tamil Nadu',
'TG': 'Telangana',
'TR': 'Tripura',
'UT': 'Uttarakhand',
'UP': 'Uttar Pradesh',
'WB': 'West Bengal',
'AN': 'Andaman and Nicobar Islands',
'CH': 'Chandigarh',
'DN': 'Dadra and Nagar Haveli and Daman and Diu',
'DL': 'Delhi',
'JK': 'Jammu and Kashmir',
'LA': 'Ladakh',
'LD': 'Lakshadweep',
'PY': 'Puducherry',
'TT': 'India',
# [UNASSIGNED_STATE_CODE]: 'Unassigned',
}
data_min_json={}
for k,v in STATE_NAMES.items():
data_min_json[k]={"districts":{},"delta":{},"delta7":{},"delta21_14":{},"meta":{},"total":{}}
def number_generation(df,col_name,col_value,field):
value=df.loc[df[col_name]==col_value,field]
value.reset_index(inplace=True,drop=True)
if value.isna().all():
value = "null"
    try:
        value = int(value[0])
    except (ValueError, TypeError):
        # non-numeric cells (e.g. the "null" placeholder) fall through unchanged
        value = value[0]
return value
import json
import traceback
from datetime import datetime
import time
def addLogging(logDict:dict):
loggingsFile = '../log.json'
with open(loggingsFile) as f:
data = json.load(f)
#data = []
data.append(logDict)
with open(loggingsFile, 'w') as f:
json.dump(data, f)
def currentTimeUTC(date):
# return time.mktime(datetime.strptime(datetime.now().strftime('%d/%m/%Y'),'%d/%m/%Y').timetuple())
return int(time.mktime(datetime.strptime(date,'%Y-%m-%d').timetuple()))
#return datetime.now().strftime('%d/%m/%Y %H:%M:%S')
def updateJSONLog(stateName,ttValue,stateValue,date):
addLogging({
"update": stateName + ":\n State totals for confirmed cases reported on https://www.mygov.in/covid-19 ("+str(ttValue)+") different from district total for confirmed cases reported in state government bulletin ("+str(stateValue)+")",
"timestamp": currentTimeUTC(date)
})
# State totals for confirmed cases reported on https://www.mygov.in/covid-19 (xxx) different from district total for confirmed cases reported in state government bulletin (yyy)
for k,v in data_min_json.items():
print(k)
df = pd.read_csv("../RAWCSV/"+date+"/"+k+"_final.csv")
df_tt = pd.read_csv("../RAWCSV/"+date+"/TT_final.csv")
# df = pd.read_csv('datamin.csv',header=[1])
for district in df["District"]:
try:
district_dict={district:{"delta":{"confirmed":number_generation(df,"District",district,"deltaConfirmedForDistrict"),
"recovered":number_generation(df,"District",district,"deltaRecoveredForDistrict"),
"deceased":number_generation(df,"District",district,"deltaDeceasedForDistrict"),
"tested":number_generation(df,"District",district,"deltaTestedForDistrict"),
"vaccinated1":number_generation(df,"District",district,"deltaVaccinated1ForDistrict"),
"vaccinated2":number_generation(df,"District",district,"deltaVaccinated2ForDistrict"),
"vaccinated3":number_generation(df,"District",district,"deltaVaccinated3ForDistrict")
},
"delta21_14":{"confirmed": number_generation(df,"District",district,"delta21_14confirmedForDistrict")},
# },#}
# district_dict={district:{
"delta7":{
"confirmed": number_generation(df,"District",district,"7DmaConfirmedForDistrict"),
"deceased": number_generation(df,"District",district,"7DmaDeceasedForDistrict"),
"recovered": number_generation(df,"District",district,"7DmaRecoveredForDistrict"),
"tested": number_generation(df,"District",district,"7DmaTestedForDistrict"),
"Other": number_generation(df,"District",district,"7DmaOtherForDistrict"),
"vaccinated1": number_generation(df,"District",district,"7DmaVaccinated1ForDistrict"),
"vaccinated2": number_generation(df,"District",district,"7DmaVaccinated2ForDistrict"),
"vaccinated3": number_generation(df,"District",district,"7DmaVaccinated3ForDistrict")
},
# }
# }
"meta":{"population": number_generation(df,"District",district,"districtPopulation"),
"tested": {
"last_updated": number_generation(df,"State/UTCode",k,'last_updated'),
"source": number_generation(df,"State/UTCode",k,'tested_source_state'),
},
"notes": number_generation(df,"State/UTCode",k,'notesForDistrict')
},
"total":{
"confirmed": number_generation(df,"District",district,'cumulativeConfirmedNumberForDistrict'),
"deceased": number_generation(df,"District",district,'cumulativeDeceasedNumberForDistrict'),
"recovered": number_generation(df,"District",district,'cumulativeRecoveredNumberForDistrict'),
"other": number_generation(df,"District",district,'cumulativeOtherNumberForDistrict'),
"tested": number_generation(df,"District",district,'cumulativeTestedNumberForDistrict'),
"vaccinated1": number_generation(df,"District",district,'cumulativeVaccinated1NumberForDistrict'),
"vaccinated2": number_generation(df,"District",district,'cumulativeVaccinated2NumberForDistrict'),
"vaccinated3": number_generation(df,"District",district,'cumulativeVaccinated3NumberForDistrict'),
}}
}
except KeyError as e:
print(district,e)
pass
else:
data_min_json[k]["districts"].update(district_dict)
print("State:"+str(number_generation(df,"State/UTCode",k,'cumulativeConfirmedNumberForState')))
print("TT :"+str(number_generation(df_tt,"District",STATE_NAMES[k],'cumulativeConfirmedNumberForDistrict')))
if k != "TT":
if number_generation(df,"State/UTCode",k,'cumulativeConfirmedNumberForState') != number_generation(df_tt,"District",STATE_NAMES[k],'cumulativeConfirmedNumberForDistrict'):
updateJSONLog(STATE_NAMES[k],number_generation(df_tt,"District",STATE_NAMES[k],'cumulativeConfirmedNumberForDistrict'),number_generation(df,"State/UTCode",k,'cumulativeConfirmedNumberForState'),date)
# pass
if k != "TT":
data_min_json[k]["delta"]["confirmed"]=number_generation(df_tt,"District",STATE_NAMES[k],'deltaConfirmedForDistrict')
data_min_json[k]["delta"]["deceased"]=number_generation(df_tt,"District",STATE_NAMES[k],'deltaDeceasedForDistrict')
data_min_json[k]["delta"]["recovered"]=number_generation(df_tt,"District",STATE_NAMES[k],'deltaRecoveredForDistrict')
else:
data_min_json[k]["delta"]["confirmed"]=number_generation(df,"State/UTCode",k,'deltaConfirmedForState')
data_min_json[k]["delta"]["deceased"]=number_generation(df,"State/UTCode",k,'deltaDeceasedForState')
data_min_json[k]["delta"]["recovered"]=number_generation(df,"State/UTCode",k,'deltaRecoveredForState')
data_min_json[k]["delta"]["vaccinated1"]=number_generation(df,"State/UTCode",k,'deltaVaccinated1ForState')
data_min_json[k]["delta"]["vaccinated2"]=number_generation(df,"State/UTCode",k,'deltaVaccinated2ForState')
data_min_json[k]["delta"]["vaccinated3"]=number_generation(df,"State/UTCode",k,'deltaVaccinated3ForState')
data_min_json[k]["delta"]["tested"]=number_generation(df,"State/UTCode",k,'deltaTestedForState')
# if k != "TT":
# data_min_json[k]["delta"]["tested"]=number_generation(df,"State/UTCode",k,'deltaTestedForState')
# else:
# data_min_json[k]["delta"]["tested"]=0
data_min_json[k]["delta7"]["confirmed"]=number_generation(df,"State/UTCode",k,'7DmaConfirmedForState')
data_min_json[k]["delta7"]["deceased"]=number_generation(df,"State/UTCode",k,'7DmaDeceasedForState')
data_min_json[k]["delta7"]["recovered"]=number_generation(df,"State/UTCode",k,'7DmaRecoveredForState')
data_min_json[k]["delta7"]["vaccinated1"]=number_generation(df,"State/UTCode",k,'7DmaVaccinated1ForState')
data_min_json[k]["delta7"]["vaccinated2"]=number_generation(df,"State/UTCode",k,'7DmaVaccinated2ForState')
data_min_json[k]["delta7"]["vaccinated3"]=number_generation(df,"State/UTCode",k,'7DmaVaccinated3ForState')
data_min_json[k]["delta7"]["tested"]=number_generation(df,"State/UTCode",k,'7DmaTestedForState')
data_min_json[k]["delta7"]["other"]=number_generation(df,"State/UTCode",k,'7DmaOtherForState')
data_min_json[k]["delta21_14"]={"confirmed":number_generation(df,"State/UTCode",k,'delta21_14confirmedForState')}
data_min_json[k]["meta"]={"date":number_generation(df,"State/UTCode",k,'Date'),
"last_updated":number_generation(df,"State/UTCode",k,'last_updated'),
"notes":number_generation(df,"State/UTCode",k,'notesForState'),
"population":number_generation(df,"State/UTCode",k,'statePopulation'),
"tested":{"date":number_generation(df,"State/UTCode",k,'last_updated'),
"source":number_generation(df,"State/UTCode",k,'tested_source_state')
}
}
if k != "TT":
if k == "NL" or k == "HP":
data_min_json[k]["total"]["confirmed"] = number_generation(df,"State/UTCode",k,'cumulativeConfirmedNumberForState')
data_min_json[k]["total"]["deceased"] = number_generation(df,"State/UTCode",k,'cumulativeDeceasedNumberForState')
data_min_json[k]["total"]["recovered"] = number_generation(df,"State/UTCode",k,'cumulativeRecoveredNumberForState')
data_min_json[k]["total"]["other"] = number_generation(df,"District",district,'cumulativeOtherNumberForState')
else:
data_min_json[k]["total"]["confirmed"] = number_generation(df_tt,"District",STATE_NAMES[k],'cumulativeConfirmedNumberForDistrict')
data_min_json[k]["total"]["deceased"] = number_generation(df_tt,"District",STATE_NAMES[k],'cumulativeDeceasedNumberForDistrict')
data_min_json[k]["total"]["recovered"] = number_generation(df_tt,"District",STATE_NAMES[k],'cumulativeRecoveredNumberForDistrict')
data_min_json[k]["total"]["other"] = number_generation(df,"District",district,'cumulativeOtherNumberForState')
else:
data_min_json[k]["total"]["confirmed"] = number_generation(df,"State/UTCode",k,'cumulativeConfirmedNumberForState')
data_min_json[k]["total"]["deceased"] = number_generation(df,"State/UTCode",k,'cumulativeDeceasedNumberForState')
data_min_json[k]["total"]["recovered"] = number_generation(df,"State/UTCode",k,'cumulativeRecoveredNumberForState')
data_min_json[k]["total"]["tested"] = number_generation(df,"State/UTCode",k,'cumulativeTestedNumberForState')
data_min_json[k]["total"]["vaccinated1"] = number_generation(df,"State/UTCode",k,'cumulativeVaccinated1NumberForState')
data_min_json[k]["total"]["vaccinated2"] = number_generation(df,"State/UTCode",k,'cumulativeVaccinated2NumberForState')
data_min_json[k]["total"]["vaccinated3"] = number_generation(df,"State/UTCode",k,'cumulativeVaccinated3NumberForState')
@singledispatch
def remove_null_bool(ob):
return ob
@remove_null_bool.register(list)
def _process_list(ob):
return [remove_null_bool(v) for v in ob]
@remove_null_bool.register(dict)
def _process_dict(ob):
    # use value comparisons, not `is`: identity checks do not reliably match 0, 'n', or {}
    return {k: remove_null_bool(v) for k, v in ob.items()
            if v is not None and v != 0 and v != 'n' and v != {}}
with open('data.min_swi.json', 'w') as json_file:
json.dump(remove_null_bool(data_min_json), json_file)
```
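A quick sanity check of the `singledispatch`-based scrubber above, re-declared here so the cell runs standalone. Value comparisons (`!=`) are used, since `is not` identity checks do not reliably filter `0`, `'n'`, or `{}`:

```python
from functools import singledispatch

@singledispatch
def remove_null_bool(ob):
    return ob

@remove_null_bool.register(list)
def _process_list(ob):
    return [remove_null_bool(v) for v in ob]

@remove_null_bool.register(dict)
def _process_dict(ob):
    # drop keys whose values are None, 0, "n", or an empty dict
    return {k: remove_null_bool(v) for k, v in ob.items()
            if v is not None and v != 0 and v != "n" and v != {}}

sample = {"delta": {"confirmed": 5, "tested": 0, "note": None},
          "districts": {}, "total": {"confirmed": 5}}
print(remove_null_bool(sample))
```

Note the filter runs before recursion, so a nested dict that only becomes empty after scrubbing is still kept.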
# Stop here
```
import requests, json
url = requests.get("https://data.covid19india.org/v4/min/data.min.json")
text = url.text
C19I_data = json.loads(text)
compare_json_data(C19I_data['RJ'],remove_null_bool(data_min_json))
print('Compare JSON result is: {0}'.format(
compare_json_data(C19I_data['RJ'],remove_null_bool(data_min_json))
))
!pip install jsondiff
```
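`compare_json_data` is not defined anywhere in this notebook (and `jsondiff` is only installed after it is called). A minimal stand-in with the assumed behavior — reporting dotted paths where two nested dicts disagree — might look like:

```python
def compare_json_data(a, b, path=""):
    """Return a list of dotted paths where two nested dicts disagree."""
    diffs = []
    if isinstance(a, dict) and isinstance(b, dict):
        for key in sorted(set(a) | set(b)):
            sub = f"{path}.{key}" if path else str(key)
            if key not in a or key not in b:
                diffs.append(sub)          # key present on one side only
            else:
                diffs.extend(compare_json_data(a[key], b[key], sub))
    elif a != b:
        diffs.append(path)                 # leaf values differ
    return diffs

print(compare_json_data({"total": {"confirmed": 10}},
                        {"total": {"confirmed": 12}}))
```

An empty list means the two structures agree everywhere.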
```
#import packages
import numpy as np
from numpy import loadtxt
import pylab as pl
from IPython import display
from RcTorch import *
from matplotlib import pyplot as plt
from scipy.integrate import odeint
import time
import matplotlib.gridspec as gridspec
#this method will ensure that the notebook can use multiprocessing (train multiple
#RC's in parallel) on jupyterhub or any other linux based system.
import re                        # used by covert_ode_coefs below
import torch
import multiprocessing as mp
try:
    mp.set_start_method("spawn")
except RuntimeError:
    # the start method was already set for this interpreter
    pass
torch.set_default_tensor_type(torch.FloatTensor)
%matplotlib inline
start_time = time.time()
lineW = 3
lineBoxW=2
plt.rcParams['text.usetex'] = True
```
### This notebook demonstrates how to use RcTorch to find optimal hyperparameters for the differential equation $\dot y + q(t) y = f(t)$.
Simple population: <font color='blue'>$\dot y + y = 0$ </font>
* Analytical solution: <font color='green'>$y = y_0 e^{-t}$</font>
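Before reaching for the RC, the analytical solution can be sanity-checked against a plain numerical integration, independent of RcTorch (forward Euler is used here so the cell needs nothing beyond the standard library):

```python
import math

# dy/dt = -y with y(0) = y0; the analytical solution is y0 * exp(-t)
y0, dt, T = 2.0, 1e-3, 5.0
y = y0
for _ in range(int(T / dt)):
    y += dt * (-y)                 # forward Euler step
analytic = y0 * math.exp(-T)
print(abs(y - analytic))           # small O(dt) discretization error
```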
```
# define a reparameterization function; empirically we find that g = 1 - e^(-t) works well
def reparam(t, order = 1):
    exp_t = torch.exp(-t)
    g = 1 - exp_t
    g_dot = 1 - g          # dg/dt = e^(-t) = 1 - g
    return g, g_dot
def plot_predictions(RC, results, integrator_model, ax = None):
"""plots a RC prediction and integrator model prediction for comparison
Parameters
----------
RC: RcTorchPrivate.esn
the RcTorch echostate network to evaluate. This model should already have been fit.
results: dictionary
the dictionary of results returned by the RC after fitting
    integrator_model: function
the model to be passed to odeint which is a gold standard integrator numerical method
for solving ODE's written in Fortran. You may find the documentation here:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html
ax: matplotlib.axes._subplots.AxesSubplot
If provided, the function will plot on this subplot axes
"""
X = RC.X.cpu()
if not ax:
fig, ax = plt.subplots(1,1, figsize = (6,6))
for i, y in enumerate(results["ys"]):
y = y.cpu()
if not i:
labels = ["RC", "Integrator Solution"]
else:
labels = [None, None]
ax.plot(X, y, color = "dodgerblue", label = labels[0], linewidth = lineW + 1, alpha = 0.9)
#calculate the integrator prediction:
int_sol = odeint(integrator_model, y0s[i], np.array(X.cpu().squeeze()))
int_sol = torch.tensor(int_sol)
#plot the integrator prediction
ax.plot(X, int_sol, '--', color = "red", alpha = 0.9, label = labels[1], linewidth = lineW)
plt.ylabel(r'$y(t)$');
ax.legend();
ax.tick_params(labelbottom=False)
plt.tight_layout()
def covert_ode_coefs(t, ode_coefs):
""" converts coefficients from the string 't**n' or 't^n' where n is any float
Parameters
----------
t: torch.tensor
input time tensor
ode_coefs: list
list of associated floats. List items can either be (int/floats) or ('t**n'/'t^n')
Returns
-------
ode_coefs
"""
type_t = type(t)
for i, coef in enumerate(ode_coefs):
        if type(coef) == str:
            if coef[0] == "t" and (coef[1] == "^" or coef[1:3] == "**"):
                pow_ = float(re.sub("[^0-9.-]+", "", coef))
                ode_coefs[i] = t ** pow_
                print("altering ode_coefs")
elif type(coef) in [float, int, type_t]:
pass
else:
assert False, "ode_coefs must be a list floats or strings of the form 't^pow', where pow is a real number."
return ode_coefs
def plot_rmsr(RC, results, force, ax = None):
"""plots the residuals of a RC prediction directly from the loss function
Parameters
----------
RC: RcTorchPrivate.esn
the RcTorch echostate network to evaluate. This model should already have been fit.
results: dictionary
the dictionary of results returned by the RC after fitting
force: function
the force function describing the force term in the population equation
ax: matplotlib.axes._subplots.AxesSubplot
If provided, the function will plot on this subplot axes
"""
if not ax:
fig, ax = plt.subplots(1,1, figsize = (10, 4))
X = RC.X.cpu()
ys, ydots = results["ys"], results["ydots"]
residuals = []
force_t = force(X)
for i, y in enumerate(ys):
ydot = ydots[i]
y = y.cpu()
ydot = ydot.cpu()
ode_coefs = covert_ode_coefs(t = X, ode_coefs = RC.ode_coefs)
resids = custom_loss(X, y, ydot, None,
force_t = force_t,
ode_coefs = RC.ode_coefs,
mean = False)
if not i:
resids_tensor = resids
label = r'{Individual Trajectory RMSR}'
else:
resids_tensor = torch.cat((resids_tensor, resids), axis = 1)
label = None
resids_specific_rmsr = torch.sqrt(resids/1)
ax.plot(X, resids_specific_rmsr, color = "orangered", alpha = 0.4, label = label, linewidth = lineW-1)
residuals.append(resids)
mean_resid = torch.mean(resids_tensor, axis =1)
rmsr = torch.sqrt(mean_resid)
ax.plot(X, rmsr,
color = "blue",
alpha = 0.9,
label = r'{RMSR}',
linewidth = lineW-0.5)
ax.legend(prop={"size":16});
ax.set_xlabel(r'$t$')
ax.set_yscale("log")
ax.set_ylabel(r'{RMSR}')
# common cv arguments:
cv_declaration_args = {"interactive" : True,
"batch_size" : 8, #batch size is parallel
"cv_samples" : 2, #number of cv_samples, random start points
"initial_samples" : 50, #number of random samples before optimization starts
"validate_fraction" : 0.3, #validation prop of tr+val sets
"log_score" : True, #log-residuals
"random_seed" : 209, # random seed
"ODE_order" : 1, #order of eq
#see turbo ref:
"length_min" : 2 ** (-7),#2 **(-7),
"success_tolerance" : 10}
```
## Task 1: cross-check burn-in for all three experiments (burn-in should be embedded into hps)
```
def driven_force(X, A = 1):
""" a force function, specifically f(t) = sin(t)
Parameters
----------
    X: torch.tensor
        the input time tensor
    A: float
        amplitude of the sinusoidal force (default 1)
Returns
-------
the force, a torch.tensor of equal dimension to the input time tensor.
"""
return A*torch.sin(X)
def no_force(X):
""" a force function (returns 0)
Parameters
----------
X: torch.tensor
the input time tensor
Returns
-------
the force, in this case 0.
"""
return 0
lam = 1
def custom_loss(X , y, ydot, out_weights, lam = lam, force_t = None, reg = False,
ode_coefs = None, init_conds = None,
enet_alpha = None, enet_strength =None, mean = True):
""" The loss function of the ODE (in this case the population equation loss)
Parameters
----------
X: torch.tensor
The input (in the case of ODEs this is time t)
y: torch.tensor
The response variable
ydot: torch.tensor
The time derivative of the response variable
enet_strength: float
the magnitude of the elastic net regularization parameter. In this case there is no e-net regularization
enet_alpha: float
the proportion of the loss that is L2 regularization (ridge). 1-alpha is the L1 proportion (lasso).
ode_coefs: list
this list represents the ODE coefficients. They can be numbers or t**n where n is some real number.
    force_t: torch.tensor
        the force term f(t), pre-evaluated on the input time tensor
    reg: bool
        if applicable (not in the case below) this will toggle the elastic net regularization on and off
init_conds: list
the initial conditions of the ODE.
mean: bool
if true return the cost (0 dimensional float tensor) else return the residuals (1 dimensional tensor)
Returns
-------
the residuals or the cost depending on the mean argument (see above)
"""
    # with reparameterization
L = ydot + lam * y - force_t
# if reg:
# #assert False
# weight_size_sq = torch.mean(torch.square(out_weights))
# weight_size_L1 = torch.mean(torch.abs(out_weights))
# L_reg = enet_strength*(enet_alpha * weight_size_sq + (1- enet_alpha) * weight_size_L1)
# L = L + 0.1 * L_reg
L = torch.square(L)
if mean:
L = torch.mean(L)
return L
#declare the initial conditions (each initial condition corresponds to a different curve)
y0s = np.arange(0.1, 2.1, 0.1)
len(y0s)
```
### Simple population
```
#declare the bounds dict. We search for the variables within the specified bounds.
# if a variable is declared as a float or integer like n_nodes or dt, these variables are fixed.
bounds_dict = {"connectivity" : (-2.2, -0.12), #log space
"spectral_radius" : (1, 10), #lin space
"n_nodes" : 250,
"regularization" : (-4, 4), #log space
"leaking_rate" : (0, 1), #linear space
"dt" : -2.5, #log space
"bias": (-0.75,0.75) #linear space
}
#set up data
x0, xf = 0, 5
nsteps = int(abs(xf - x0)/(10**bounds_dict["dt"]))
xtrain = torch.linspace(x0, xf, nsteps, requires_grad=False).view(-1,1)
int(xtrain.shape[0] * 0.5)
%%time
#declare the esn_cv optimizer: this class will run bayesian optimization to optimize the bounds dict.
#for more information see the github.
esn_cv = EchoStateNetworkCV(bounds = bounds_dict,
esn_burn_in = 500, #states to throw away before calculating output
subsequence_length = int(xtrain.shape[0] * 0.8), #combine len of tr + val sets
**cv_declaration_args
)
#optimize the network:
simple_pop_hps = esn_cv.optimize(x = xtrain,
reparam_f = reparam,
ODE_criterion = custom_loss,
init_conditions = [y0s],
force = no_force,
ode_coefs = [1, 1],
n_outputs = 1,
reg_type = "simple_pop")
%%time
pop_RC = EchoStateNetwork(**simple_pop_hps,
random_state = 209,
dtype = torch.float32)
train_args = {"X" : xtrain.view(-1,1),
"burn_in" : 500,
"ODE_order" : 1,
"force" : no_force,
"reparam_f" : reparam,
"ode_coefs" : [1, 1]}
pop_results = pop_RC.fit(init_conditions = [y0s,1],
SOLVE = True,
train_score = True,
ODE_criterion = custom_loss,
**train_args)
def simple_pop(y, t, t_pow = 0, force_k = 0, k = 1):
dydt = -k * y *t**t_pow + force_k*np.sin(t)
return dydt
#TODO: show results outside BO range
# some particularly good runs:
# simple_pop_hps = {'dt': 0.0031622776601683794,
# 'n_nodes': 250,
# 'connectivity': 0.13615401772200952,
# 'spectral_radius': 4.1387834548950195,
# 'regularization': 0.00028325262824591835,
# 'leaking_rate': 0.2962796092033386,
# 'bias': -0.5639935731887817}
# opt_hps = {'dt': 0.0031622776601683794,
# 'n_nodes': 250,
# 'connectivity': 0.7170604557008349,
# 'spectral_radius': 1.5755887031555176,
# 'regularization': 0.00034441529823729916,
# 'leaking_rate': 0.9272222518920898,
# 'bias': 0.1780446171760559}
# opt_hps = {'dt': 0.0017782794100389228,
# 'n_nodes': 250,
# 'connectivity': 0.11197846061157432,
# 'spectral_radius': 1.7452095746994019,
# 'regularization': 0.00012929296298723957,
# 'leaking_rate': 0.7733328938484192,
# 'bias': 0.1652531623840332}
fig = plt.figure(figsize = (9, 7)); gs1 = gridspec.GridSpec(3, 3);
ax = plt.subplot(gs1[:-1, :])
gts = plot_predictions(RC = pop_RC,
results = pop_results,
integrator_model = simple_pop,
ax = ax)
ax = plt.subplot(gs1[-1, :])
plot_data = plot_rmsr(pop_RC,
results = pop_results,
force = no_force,
ax = ax)
```
### Driven population:
```
#declare the bounds dict. We search for the variables within the specified bounds.
# if a variable is declared as a float or integer like n_nodes or dt, these variables are fixed.
bounds_dict = {"connectivity" : (-2, -0.12), #log space
"spectral_radius" : (1, 10), #lin space
"n_nodes" : 400,
"regularization" : (-4, 4), #log space
"leaking_rate" : (0, 1), #linear space
"dt" : -2.5, #log space
"bias": (-0.75,0.75) #linear space
}
%%time
#declare the esn_cv optimizer: this class will run bayesian optimization to optimize the bounds dict.
#for more information see the github.
esn_cv = EchoStateNetworkCV(bounds = bounds_dict,
esn_burn_in = 500, #states to throw away before calculating output
subsequence_length = int(xtrain.shape[0] * 0.8), #combine len of tr + val sets
**cv_declaration_args
)
#optimize the network:
driven_pop_hps = esn_cv.optimize(x = xtrain,
reparam_f = reparam,
ODE_criterion = custom_loss,
init_conditions = [y0s],
force = driven_force,
ode_coefs = [1, 1],
n_outputs = 1,
reg_type = "driven_pop")
y0s = np.arange(-10, 10.1, 1)
len(y0s)
%%time
driven_RC = EchoStateNetwork(**driven_pop_hps,
random_state = 209,
dtype = torch.float32)
train_args = {"X" : xtrain.view(-1,1),
"burn_in" : 500,
"ODE_order" : 1,
"force" : driven_force,
"reparam_f" : reparam,
"ode_coefs" : [1, 1]}
driven_results = driven_RC.fit(init_conditions = [y0s,1],
SOLVE = True,
train_score = True,
ODE_criterion = custom_loss,
**train_args)
def driven_pop(y, t, t_pow = 0, force_k = 1, k = 1):
dydt = -k * y *t**t_pow + force_k*np.sin(t)
return dydt
driven_pop_hps
fig = plt.figure(figsize = (9, 7)); gs1 = gridspec.GridSpec(3, 3);
ax = plt.subplot(gs1[:-1, :])
gts = plot_predictions(RC = driven_RC,
results = driven_results,
integrator_model = driven_pop,
ax = ax)
ax = plt.subplot(gs1[-1, :])
plot_data = plot_rmsr(driven_RC,
results = driven_results,
force = driven_force,
ax = ax)
```
#### Driven t^2 Population:
```
#declare the initial conditions (each initial condition corresponds to a different curve)
y0s = np.arange(-10, 10.1, 0.1)
len(y0s)
np.log10(0.005)
#declare the bounds dict. We search for the variables within the specified bounds.
# if a variable is declared as a float or integer like n_nodes or dt, these variables are fixed.
t2_hps = {'n_nodes': 500,
'connectivity': 0.09905712745750006,
'spectral_radius': 1.8904799222946167,
'regularization': 714.156090350679,
'leaking_rate': 0.031645022332668304,
'bias': -0.24167031049728394,
'dt' : 0.005}
bounds_dict = {"connectivity" : (-1.1, -0.9), #log space
"spectral_radius" : (1.8, 2.0), #lin space
"n_nodes" : 500,
"regularization" : (2.5, 3.5), #log space
"leaking_rate" : (0.02, .04), #linear space
"dt" : -2.3, #log space
"bias": (0,1) #linear space
}
%%time
#declare the esn_cv optimizer: this class will run bayesian optimization to optimize the bounds dict.
#for more information see the github.
esn_cv = EchoStateNetworkCV(bounds = bounds_dict,
esn_burn_in = 1000, #states to throw away before calculating output
subsequence_length = int(xtrain.shape[0] * 0.8), #combine len of tr + val sets
**cv_declaration_args
)
#optimize the network:
t2_pop_hps = esn_cv.optimize(x = xtrain,
reparam_f = reparam,
ODE_criterion = custom_loss,
init_conditions = [y0s],
force = driven_force,
ode_coefs = ["t^2", 1],
n_outputs = 1,
reg_type = "driven_pop")
#solution run:
# t2_hps = {'n_nodes': 500,
# 'connectivity': 0.09905712745750006,
# 'spectral_radius': 1.8904799222946167,
# 'regularization': 714.156090350679,
# 'leaking_rate': 0.031645022332668304,
# 'bias': -0.24167031049728394,
# 'dt' : 0.005}
def t2_pop(y, t, t_pow = 2, force_k = 1, k = 1):
dydt = -k * y *t**t_pow + force_k*np.sin(t)
return dydt
%%time
t2_RC = EchoStateNetwork(**t2_pop_hps,
random_state = 209,
dtype = torch.float32)
train_args = {"X" : xtrain.view(-1,1),
"burn_in" : 1000,
"ODE_order" : 1,
"force" : driven_force,
"reparam_f" : reparam,
"ode_coefs" : ["t^2", 1]}
t2_results = t2_RC.fit(init_conditions = [y0s,1],
SOLVE = True,
train_score = True,
ODE_criterion = custom_loss,
**train_args)
t2_RC.ode_coefs[0]
fig = plt.figure(figsize = (9, 7)); gs1 = gridspec.GridSpec(3, 3);
ax = plt.subplot(gs1[:-1, :])
gts = plot_predictions(RC = t2_RC,
results = t2_results,
integrator_model = t2_pop,
ax = ax)
ax = plt.subplot(gs1[-1, :])
plot_data = plot_rmsr(t2_RC,
results = t2_results,
force = driven_force,
ax = ax)
end_time = time.time()
print(f'Total notebook runtime: {end_time - start_time:.2f} seconds')
```
<a href="https://colab.research.google.com/github/parekhakhil/pyImageSearch/blob/main/1402_opencv_template_matching.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# OpenCV Template Matching ( cv2.matchTemplate )
### by [PyImageSearch.com](http://www.pyimagesearch.com)
## Welcome to **[PyImageSearch Plus](http://pyimg.co/plus)** Jupyter Notebooks!
This notebook is associated with the [OpenCV Template Matching ( cv2.matchTemplate )](https://www.pyimagesearch.com/2021/03/22/opencv-template-matching-cv2-matchtemplate/) blog post published on 03-22-21.
Only the code for the blog post is here. Most codeblocks have a 1:1 relationship with what you find in the blog post with two exceptions: (1) Python classes are not separate files as they are typically organized with PyImageSearch projects, and (2) Command Line Argument parsing is replaced with an `args` dictionary that you can manipulate as needed.
We recommend that you execute (press ▶️) the code block-by-block, as-is, before adjusting parameters and `args` inputs. Once you've verified that the code is working, you are welcome to hack with it and learn from manipulating inputs, settings, and parameters. For more information on using Jupyter and Colab, please refer to these resources:
* [Jupyter Notebook User Interface](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html#notebook-user-interface)
* [Overview of Google Colaboratory Features](https://colab.research.google.com/notebooks/basic_features_overview.ipynb)
As a reminder, these PyImageSearch Plus Jupyter Notebooks are not for sharing; please refer to the **Copyright** directly below and **Code License Agreement** in the last cell of this notebook.
Happy hacking!
*Adrian*
<hr>
***Copyright:*** *The contents of this Jupyter Notebook, unless otherwise indicated, are Copyright 2021 Adrian Rosebrock, PyimageSearch.com. All rights reserved. Content like this is made possible by the time invested by the authors. If you received this Jupyter Notebook and did not purchase it, please consider making future content possible joining PyImageSearch Plus at [http://pyimg.co/plus/](http://pyimg.co/plus) today.*
### Download the code zip file
```
!wget https://pyimagesearch-code-downloads.s3-us-west-2.amazonaws.com/opencv-template-matching/opencv-template-matching.zip
!unzip -qq opencv-template-matching.zip
%cd opencv-template-matching
```
## Blog Post Code
### Import Packages
```
# import the necessary packages
import matplotlib.pyplot as plt
import cv2
```
### Function to display images in Jupyter Notebooks and Google Colab
```
def plt_imshow(title, image):
# convert the image frame BGR to RGB color space and display it
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
plt.title(title)
plt.grid(False)
plt.show()
```
### Implementing template matching with OpenCV
```
# # construct the argument parser and parse the arguments
# ap = argparse.ArgumentParser()
# ap.add_argument("-i", "--image", type=str, required=True,
# help="path to input image where we'll apply template matching")
# ap.add_argument("-t", "--template", type=str, required=True,
# help="path to template image")
# args = vars(ap.parse_args())
# since we are using Jupyter Notebooks we can replace our argument
# parsing code with *hard coded* arguments and values
args = {
"image": "images/coke_bottle.png",
"template": "images/coke_logo.png"
}
# load the input image and template image from disk, then display
# them to our screen
print("[INFO] loading images...")
image = cv2.imread(args["image"])
template = cv2.imread(args["template"])
plt_imshow("Image", image)
plt_imshow("Template", template)
# convert both the image and template to grayscale
imageGray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
templateGray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
# perform template matching
print("[INFO] performing template matching...")
result = cv2.matchTemplate(imageGray, templateGray,
cv2.TM_CCOEFF_NORMED)
(minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(result)
# determine the starting and ending (x, y)-coordinates of the
# bounding box
(startX, startY) = maxLoc
endX = startX + template.shape[1]
endY = startY + template.shape[0]
# draw the bounding box on the image
cv2.rectangle(image, (startX, startY), (endX, endY), (255, 0, 0), 3)
# show the output image
plt_imshow("Output", image)
```
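For intuition about what `cv2.TM_CCOEFF_NORMED` computes: it slides the template over the image and, at each offset, scores a mean-subtracted, normalized correlation between the template and the window under it. Below is a naive NumPy-only sketch of that idea (a simplification for illustration, not OpenCV's actual implementation, and far slower):

```python
import numpy as np

def ccoeff_normed(image, template):
    """Naive TM_CCOEFF_NORMED-style score map for 2D float arrays."""
    th, tw = template.shape
    t = template - template.mean()
    out_h = image.shape[0] - th + 1
    out_w = image.shape[1] - tw + 1
    out = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            out[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return out

# the score peaks at 1.0 exactly where the template occurs in the image
template = np.arange(9, dtype=float).reshape(3, 3)
image = np.zeros((10, 10))
image[3:6, 4:7] = template
score = ccoeff_normed(image, template)
peak = np.unravel_index(score.argmax(), score.shape)  # -> (3, 4), i.e. (startY, startX)
```

This is why `cv2.minMaxLoc` on the result gives us `maxLoc` as the top-left corner of the best match: the peak of the score map is where the window correlates most strongly with the template.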
For a detailed walkthrough of the concepts and code, be sure to refer to the full tutorial, [*OpenCV Template Matching ( cv2.matchTemplate )*](https://www.pyimagesearch.com/2021/03/22/opencv-template-matching-cv2-matchtemplate/) published on 03-22-21.
---
# Code License Agreement
```
Copyright (c) 2021 PyImageSearch.com
SIMPLE VERSION
Feel free to use this code for your own projects, whether they are
purely educational, for fun, or for profit. THE EXCEPTION BEING if
you are developing a course, book, or other educational product.
Under *NO CIRCUMSTANCE* may you use this code for your own paid
educational or self-promotional ventures without written consent
from Adrian Rosebrock and PyImageSearch.com.
LONGER, FORMAL VERSION
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files
(the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
Notwithstanding the foregoing, you may not use, copy, modify, merge,
publish, distribute, sublicense, create a derivative work, and/or
sell copies of the Software in any work that is designed, intended,
or marketed for pedagogical or instructional purposes related to
programming, coding, application development, or information
technology. Permission for such use, copying, modification, and
merger, publication, distribution, sub-licensing, creation of
derivative works, or sale is expressly withheld.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
# Regular Expressions in Python
> Some people, when confronted with a problem, think, "I know, I'll use regular expressions." Now they have two problems. - Jamie Zawinski
## First day: quick overview
This first day we will explore the basics of the `re` (standard library) module so you can start adding this powerful skill to your Python toolkit.
```
import re
```
### When not to use regexes?
Basically when regular string manipulations will do, for example:
```
text = 'Awesome, I am doing the #100DaysOfCode challenge'
```
Does text start with 'Awesome'?
```
text.startswith('Awesome')
```
Does text end with 'challenge'?
```
text.endswith('challenge')
```
Does text contain '100daysofcode' (case insensitive)?
```
'100daysofcode' in text.lower()
```
I am bold and want to do 200 days (note strings are immutable, so `replace` returns a new string)
```
text.replace('100', '200')
```
### Regex == Meta language
But what if you need to do some more tricky things, say matching any #(int)DaysOfCode? Here you want to use a regex pattern. Regular expressions are a (meta) language on their own and I highly encourage you to read through [this HOWTO](https://docs.python.org/3.7/howto/regex.html#regex-howto) to become familiar with their syntax.
### `search` vs `match`
The main methods you want to know about are `search` and `match`: the former scans for a match anywhere in the string, the latter only matches at the beginning of the string (use `fullmatch` if you need the pattern to cover the entire string). I always write my regex as a raw string (`r''`) to avoid having to double-escape special sequences like \d (digit), \w (word character), \s (whitespace), \S (non-whitespace), etc. (I think \\\d and \\\s clutter up the regex)
```
text = 'Awesome, I am doing the #100DaysOfCode challenge'
re.search(r'I am', text)
re.match(r'I am', text)
re.match(r'Awesome.*challenge', text)
```
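To see the `search` vs `match` difference concretely: `match` returns `None` for a pattern that occurs mid-string, while `search` finds it. A small self-contained sketch using the same example string:

```python
import re

text = 'Awesome, I am doing the #100DaysOfCode challenge'

# search scans the whole string; match is anchored at position 0
assert re.search(r'challenge', text) is not None
assert re.match(r'challenge', text) is None

# match does NOT require consuming the whole string; fullmatch does
assert re.match(r'Awesome', text) is not None
assert re.fullmatch(r'Awesome', text) is None
```

This is why the third example above (`r'Awesome.*challenge'`) needs the `.*` in the middle: `match` anchors the pattern at the start, but nothing forces it to reach the end unless the pattern itself does.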
### Capturing strings
A common task is to retrieve a match, you can use _capturing () parenthesis_ for that:
```
hundred = 'Awesome, I am doing the #100DaysOfCode challenge'
two_hundred = 'Awesome, I am doing the #200DaysOfCode challenge'
m = re.match(r'.*(#\d+DaysOfCode).*', hundred)
m.groups()[0]
m = re.search(r'(#\d+DaysOfCode)', two_hundred)
m.groups()[0]
```
### `findall` is your friend
What if you want to match multiple instances of a pattern? `re` has the convenient `findall` method I use a lot. For example in [our 100 Days Of Code](https://github.com/pybites/100DaysOfCode/blob/master/LOG.md) we used the `re` module for the following days - how would I extract the days from this string?
```
text = '''
$ python module_index.py |grep ^re
re | stdlib | 005, 007, 009, 015, 021, 022, 068, 080, 081, 086, 095
'''
re.findall(r'\d+', text)
```
How cool is that?! Just because we can, look at how you can find the most common word combining `findall` with `Counter`:
```
text = """Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been
the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and
scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into
electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of
Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus
PageMaker including versions of Lorem Ipsum"""
text.split()[:5]
```
Of course you can do the same with `text.split()` but if you have more requirements you might fit it in the same regex, for example let's only count words that start with a capital letter.
I am using two _character classes_ here (= pattern inside `[]`), the first to match a capital letter, the second to match 0 or more common word characters.
Note I am escaping the single quote (') inside the second character class, because the regex pattern is wrapped inside single quotes as well:
```
from collections import Counter
cnt = Counter(re.findall(r'[A-Z][A-Za-z0-9\']*', text))
cnt.most_common(5)
```
### Compiling regexes
If you want to run the same regex multiple times, say in a for loop it is best practice to define the regex one time using `re.compile`, here is an example:
```
movies = '''1. Citizen Kane (1941)
2. The Godfather (1972)
3. Casablanca (1942)
4. Raging Bull (1980)
5. Singin' in the Rain (1952)
6. Gone with the Wind (1939)
7. Lawrence of Arabia (1962)
8. Schindler's List (1993)
9. Vertigo (1958)
10. The Wizard of Oz (1939)'''.split('\n')
movies
```
Let's find movie titles that have exactly 2 words, just for exercise's sake. Before peeking at the solution, how would _you_ define such a regex?
OK here is one way to do it, I am using `re.VERBOSE` which ignores spaces and comments so I can explain what each part of the regex does (really nice!):
```
pat = re.compile(r'''
^ # start of string
\d+ # one or more digits
\. # a literal dot
\s+ # one or more spaces
(?: # non-capturing parenthesis, so I don't want store this match in groups()
[A-Za-z']+\s # character class (note inclusion of ' for "Schindler's"), followed by a space
) # closing of non-capturing parenthesis
{2} # exactly 2 of the previously grouped subpattern
\( # literal opening parenthesis
\d{4} # exactly 4 digits (year)
\) # literal closing parenthesis
$ # end of string
''', re.VERBOSE)
```
As we've seen before, if the regex matches it returns a match object (`re.Match`; shown as `_sre.SRE_Match` on older Python versions), otherwise it returns `None`
```
for movie in movies:
print(movie, pat.match(movie))
```
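If you also want to collect the matching titles rather than just test them, a capturing variant of the same pattern works. This sketch assumes the same movie-list format and uses the walrus operator, so it needs Python 3.8+:

```python
import re

# same structure as pat above, but the two-word title is now captured
pat2 = re.compile(r"^\d+\.\s+((?:[A-Za-z']+\s){2})\(\d{4}\)$")

movies = ["1. Citizen Kane (1941)",
          "2. The Godfather (1972)",
          "9. Vertigo (1958)"]

two_word = [m.group(1).strip() for title in movies
            if (m := pat2.match(title))]
# two_word -> ['Citizen Kane', 'The Godfather']
```

The non-capturing `(?: ... )` group is still there so `{2}` repeats it; the outer capturing parentheses just grab the whole two-word span in one group.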
### Advanced string replacing
As shown before `str.replace` probably covers a lot of your needs, for more advanced usage there is `re.sub`:
```
text = '''Awesome, I am doing #100DaysOfCode, #200DaysOfDjango and of course #365DaysOfPyBites'''
# I want all challenges to be 100 days, I need a break!
text.replace('200', '100').replace('365', '100')
```
`re.sub` makes this easy:
```
re.sub(r'\d+', '100', text)
```
Or what if I want to change all the #nDaysOf... to #nDaysOfPython? You can use `re.sub` for this. Note how I use the capturing parenthesis to port over the matching part of the string to the replacement (2nd argument) where I use `\1` to reference it:
```
re.sub(r'(#\d+DaysOf)\w+', r'\1Python', text)
```
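`re.sub` also accepts a function as the replacement; it is called with each match object, which is handy when the replacement depends on the matched text. A small sketch on the same string:

```python
import re

text = ('Awesome, I am doing #100DaysOfCode, #200DaysOfDjango '
        'and of course #365DaysOfPyBites')

# double every day count instead of replacing with a constant
doubled = re.sub(r'\d+', lambda m: str(int(m.group()) * 2), text)
# doubled -> '... #200DaysOfCode, #400DaysOfDjango ... #730DaysOfPyBites'
```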
And that's a wrap. I only showed you some of the common `re` features I use day-to-day, but there is much more. I hope you got a taste for writing regexes in Python.
## Second day: solidify what you've learned
A. We recommend reading [10 Tips to Get More out of Your Regexes](https://pybit.es/mastering-regex.html) + watching the Al Sweigart's PyCon talk: _Yes, It's Time to Learn Regular Expressions_, linked at the end.
If you still have time check out [the mentioned HOWTO](https://docs.python.org/3.7/howto/regex.html#regex-howto) and the [docs](https://docs.python.org/3.7/library/re.html).
B. Write some regexes interactively using an online tool like [regex101](https://regex101.com/#python).
## Third day: put your new skill to the test!
A. Take [Bite 2. Regex Fun](https://codechalleng.es/bites/2/) which should be within reach after studying the materials. It lets you write 3 regexes. Like to work on your desktop? Maybe you can do [blog challenge 42 - Mastering Regular Expressions](https://codechalleng.es/challenges/42/) which is similar but lets you solve 6 regex problems!
B. More fun: `wget` or `request.get` your favorite site and use regex on the output to parse out data (fun trivia: a similar exercise is where [our code challenges started](https://pybit.es/js_time_scraper_ch.html)).
Good luck and remember:
> Keep calm and code in Python
### Time to share what you've accomplished!
Be sure to share your last couple of days work on Twitter or Facebook. Use the hashtag **#100DaysOfCode**.
Here are [some examples](https://twitter.com/search?q=%23100DaysOfCode) to inspire you. Consider including [@talkpython](https://twitter.com/talkpython) and [@pybites](https://twitter.com/pybites) in your tweets.
*See a mistake in these instructions? Please [submit a new issue](https://github.com/talkpython/100daysofcode-with-python-course/issues) or fix it and [submit a PR](https://github.com/talkpython/100daysofcode-with-python-course/pulls).*
## Eng+Wales well-mixed example model
This is the inference notebook. There are various model variants as encoded by `expt_params_local` and `model_local`, which are shared by the notebooks in a given directory.
Outputs of this notebook:
* `ewMod-inf.pik` : result of inference computation
* `ewMod-hess.npy` : hessian matrix of log-posterior
NOTE carefully : `Im` compartment is cumulative deaths, this is called `D` elsewhere
### Start notebook
(the following line is for efficient parallel processing)
```
%env OMP_NUM_THREADS=1
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import pyross
import time
import pandas as pd
import matplotlib.image as mpimg
import pickle
import os
import pprint
import scipy.stats
# comment these before commit
#print(pyross.__file__)
#print(os.getcwd())
from ew_fns import *
import expt_params_local
import model_local
```
### switches etc
```
verboseMod=False ## print ancillary info about the model? (would usually be False, for brevity)
## Calculate things, or load from files ?
doInf = False ## do inference, or load it ?
doHes = False ## compute Hessian? (may take a few minutes)
## time unit is one week
daysPerWeek = 7.0
## these are params that might be varied in different expts
exptParams = expt_params_local.getLocalParams()
pprint.pprint(exptParams)
## this is used for filename handling throughout
pikFileRoot = exptParams['pikFileRoot']
```
### convenient settings
```
np.set_printoptions(precision=3)
pltAuto = True
plt.rcParams.update({'figure.autolayout': pltAuto})
plt.rcParams.update({'font.size': 14})
```
## LOAD MODEL
```
loadModel = model_local.loadModel(exptParams,daysPerWeek,verboseMod)
## should use a dictionary but...
[ numCohorts, fi, N, Ni, model_spec, estimator, contactBasis, interventionFn,
modParams, priorsAll, initPriorsLinMode, obsDeath, fltrDeath,
simTime, deathCumulativeDat ] = loadModel
```
### Inspect most likely trajectory for model with prior mean params
```
x0_lin = estimator.get_mean_inits(initPriorsLinMode, obsDeath[0], fltrDeath)
guessTraj = estimator.integrate( x0_lin, exptParams['timeZero'], simTime, simTime+1)
## plots
yesPlot = model_spec['classes'].copy()
yesPlot.remove('S')
plt.yscale('log')
for lab in yesPlot :
indClass = model_spec['classes'].index(lab)
totClass = np.sum(guessTraj[:,indClass*numCohorts:(indClass+1)*numCohorts],axis=1)
plt.plot( N * totClass,'-',lw=3,label=lab)
plt.plot(N*np.sum(obsDeath,axis=1),'X',label='data')
plt.legend(fontsize=14,bbox_to_anchor=(1, 1.0))
plt.xlabel('time in weeks')
plt.ylabel('class population')
plt.show() ; plt.close()
indClass = model_spec['classes'].index('Im')
plt.yscale('log')
for coh in range(numCohorts):
plt.plot( N*guessTraj[:,coh+indClass*numCohorts],label='m{c:d}'.format(c=coh) )
plt.xlabel('time in weeks')
plt.ylabel('cumul deaths by age cohort')
plt.legend(fontsize=8,bbox_to_anchor=(1, 1.0))
plt.show() ; plt.close()
```
## INFERENCE
parameter count
* 32 for age-dependent Ai and Af (or beta and Af)
* 2 (step-like) or 3 (NPI-with-easing) for lockdown time and width (+easing param)
* 1 for projection of initial condition along mode
* 5 for initial condition in oldest cohort
* 5 for the gammas
* 1 for beta in late stage
total: 46 (step-like) or 47 (with-easing)
The following computation with CMA-ES takes some minutes depending on compute power, it should use multiple CPUs efficiently, if available. The result will vary (slightly) according to the random seed, can be controlled by passing `cma_random_seed` to `latent_infer`
```
def runInf() :
infResult = estimator.latent_infer(obsDeath, fltrDeath, simTime,
priorsAll,
initPriorsLinMode,
generator=contactBasis,
intervention_fun=interventionFn,
tangent=False,
verbose=True,
enable_global=True,
enable_local =True,
**exptParams['infOptions'],
)
return infResult
if doInf:
## do the computation
elapsedInf = time.time()
infResult = runInf()
elapsedInf = time.time() - elapsedInf
print('** elapsed time',elapsedInf/60.0,'mins')
# save the answer
opFile = pikFileRoot + "-inf.pik"
print('opf',opFile)
with open(opFile, 'wb') as f:
pickle.dump([infResult,elapsedInf],f)
else:
## load a saved computation
print(' Load data')
    # load the saved inference result
ipFile = pikFileRoot + "-inf.pik"
print('ipf',ipFile)
with open(ipFile, 'rb') as f:
[infResult,elapsedInf] = pickle.load(f)
```
#### unpack results
```
epiParamsMAP = infResult['params_dict']
conParamsMAP = infResult['control_params_dict']
x0_MAP = infResult['x0']
CM_MAP = contactBasis.intervention_custom_temporal( interventionFn,
**conParamsMAP)
logPinf = -estimator.minus_logp_red(epiParamsMAP, x0_MAP, obsDeath, fltrDeath, simTime,
CM_MAP, tangent=False)
print('** measuredLikelihood',logPinf)
print('** logPosterior ',infResult['log_posterior'])
print('** logLikelihood',infResult['log_likelihood'])
```
#### MAP dominant trajectory
```
estimator.set_params(epiParamsMAP)
estimator.set_contact_matrix(CM_MAP)
trajMAP = estimator.integrate( x0_MAP, exptParams['timeZero'], simTime, simTime+1)
yesPlot = model_spec['classes'].copy()
yesPlot.remove('S')
plt.yscale('log')
for lab in yesPlot :
indClass = model_spec['classes'].index(lab)
totClass = np.sum(trajMAP[:,indClass*numCohorts:(indClass+1)*numCohorts],axis=1)
plt.plot( N * totClass,'-',lw=3,label=lab)
plt.plot(N*np.sum(obsDeath,axis=1),'X',label='data')
plt.xlabel('time in weeks')
plt.ylabel('class population')
plt.legend(fontsize=14,bbox_to_anchor=(1, 1.0))
plt.show() ; plt.close()
fig,axs = plt.subplots(1,2,figsize=(10,4.5))
cohRanges = [ [x,x+4] for x in range(0,75,5) ]
#print(cohRanges)
cohLabs = ["{l:d}-{u:d}".format(l=low,u=up) for [low,up] in cohRanges ]
cohLabs.append("75+")
ax = axs[0]
ax.set_title('MAP (average dynamics)')
mSize = 3
minY = 0.12
maxY = 1.0
indClass = model_spec['classes'].index('Im')
ax.set_yscale('log')
ax.set_ylabel('cumulative M (by cohort)')
ax.set_xlabel('time/weeks')
for coh in reversed(list(range(numCohorts))) :
ax.plot( N*trajMAP[:,coh+indClass*numCohorts],'o-',label=cohLabs[coh],ms=mSize )
maxY = np.maximum( maxY, np.max(N*trajMAP[:,coh+indClass*numCohorts]))
#ax.legend(fontsize=8,bbox_to_anchor=(1, 1.0))
maxY *= 1.6
ax.set_ylim(bottom=minY,top=maxY)
#plt.show() ; plt.close()
ax = axs[1]
ax.set_title('data')
ax.set_xlabel('time/weeks')
indClass = model_spec['classes'].index('Im')
ax.set_yscale('log')
for coh in reversed(list(range(numCohorts))) :
ax.plot( N*obsDeath[:,coh],'o-',label=cohLabs[coh],ms=mSize )
## keep the same as other panel
ax.set_ylim(bottom=minY,top=maxY)
ax.legend(fontsize=10,bbox_to_anchor=(1, 1.0))
#plt.show() ; plt.close()
#plt.savefig('ageMAPandData.png')
plt.show(fig)
```
#### sanity check : plot the prior and inf value for one or two params
```
(likFun,priFun,dim) = pyross.evidence.latent_get_parameters(estimator,
obsDeath, fltrDeath, simTime,
priorsAll,
initPriorsLinMode,
generator=contactBasis,
intervention_fun=interventionFn,
tangent=False,
)
def showInfPrior(xLab) :
fig = plt.figure(figsize=(4,4))
dimFlat = np.size(infResult['flat_params'])
## magic to work out the index of this param in flat_params
jj = infResult['param_keys'].index(xLab)
xInd = infResult['param_guess_range'][jj]
## get the range
xVals = np.linspace( *priorsAll[xLab]['bounds'], 100 )
#print(infResult['flat_params'][xInd])
pVals = []
checkVals = []
for xx in xVals :
flatP = np.zeros( dimFlat )
flatP[xInd] = xx
pdfAll = np.exp( priFun.logpdf(flatP) )
pVals.append( pdfAll[xInd] )
#checkVals.append( scipy.stats.norm.pdf(xx,loc=0.2,scale=0.1) )
plt.plot(xVals,pVals,'-',label='prior')
infVal = infResult['flat_params'][xInd]
infPdf = np.exp( priFun.logpdf(infResult['flat_params']) )[xInd]
plt.plot([infVal],[infPdf],'ro',label='inf')
plt.xlabel(xLab)
upperLim = 1.05*np.max(pVals)
plt.ylim(0,upperLim)
#plt.plot(xVals,checkVals)
plt.legend()
plt.show(fig) ; plt.close()
#print('**params\n',infResult['flat_params'])
#print('**logPrior\n',priFun.logpdf(infResult['flat_params']))
showInfPrior('gammaE')
```
## Hessian matrix of log-posterior
(this can take a few minutes, it does not make use of multiple cores)
```
if doHes:
## this eps amounts to a perturbation of approx 1% on each param
## (1/4) power of machine epsilon is standard for second deriv
xx = infResult['flat_params']
eps = 100 * xx*( np.spacing(xx)/xx )**(0.25)
#print('**params\n',infResult['flat_params'])
#print('** rel eps\n',eps/infResult['flat_params'])
CM_MAP = contactBasis.intervention_custom_temporal( interventionFn,
**conParamsMAP)
estimator.set_params(epiParamsMAP)
estimator.set_contact_matrix(CM_MAP)
start = time.time()
hessian = estimator.latent_hessian(obs=obsDeath, fltr=fltrDeath,
Tf=simTime, generator=contactBasis,
infer_result=infResult,
intervention_fun=interventionFn,
eps=eps, tangent=False, fd_method="central",
inter_steps=0)
end = time.time()
print('time',(end-start)/60,'mins')
opFile = pikFileRoot + "-hess.npy"
print('opf',opFile)
with open(opFile, 'wb') as f:
np.save(f,hessian)
else :
print('Load hessian')
    # load the saved hessian
ipFile = pikFileRoot + "-hess.npy"
try:
print('ipf',ipFile)
with open(ipFile, 'rb') as f:
hessian = np.load(f)
except (OSError, IOError) :
print('... error loading hessian')
hessian = None
#print(hessian)
print("** param vals")
print(infResult['flat_params'],'\n')
if hessian is not None :
print("** naive uncertainty v1 : reciprocal sqrt diagonal elements (x2)")
print( 2/np.sqrt(np.diagonal(hessian)) ,'\n')
print("** naive uncertainty v2 : sqrt diagonal elements of inverse (x2)")
print( 2*np.sqrt(np.diagonal(np.linalg.inv(hessian))) ,'\n')
```
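The two "naive" uncertainty estimates above coincide only when the Hessian is diagonal; when parameters are correlated, the inverse-based (marginal) estimate is wider than the reciprocal-diagonal one. A toy NumPy illustration with a hypothetical 2x2 Hessian (not the model's):

```python
import numpy as np

# strongly correlated parameter pair (positive-definite toy Hessian)
hess = np.array([[4.0, 1.9],
                 [1.9, 1.0]])

v1 = 2 / np.sqrt(np.diagonal(hess))                  # ignores correlations
v2 = 2 * np.sqrt(np.diagonal(np.linalg.inv(hess)))   # marginal widths

# v1 -> [1., 2.] while v2 is noticeably larger in both components
```

So when v2 is much larger than v1 for some parameter, that is a hint that the parameter is strongly correlated with others in the posterior, not that the fit is poor.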
# AllSides sources & bias crawler
Get and save a list of rated news sources as left or right and in between.
A CSV file will be created with the following columns:
- Source
- Label
- Agree
- Disagree
- Publisher URL
- Publisher site
```
%pip install aiohttp bs4 requests
import asyncio
import csv
import logging
import re
import urllib.parse as urlparse
import aiohttp
import bs4
import requests
url_tpl = "https://www.allsides.com/media-bias/media-bias-ratings?field_featured_bias_rating_value=All&field_news_source_type_tid%5B1%5D=1&field_news_source_type_tid%5B2%5D=2&field_news_source_type_tid%5B3%5D=3&field_news_bias_nid_1%5B1%5D=1&field_news_bias_nid_1%5B2%5D=2&field_news_bias_nid_1%5B3%5D=3&title=&customFilter=1&page={}"
html_parser = "html5lib"
csv_header = [
"source",
"label",
"agree",
"disagree",
"publisher",
"site",
]
dump_path = "media-bias.csv"
encoding = "utf-8"
skip_blocked_sites = True
verbose = True # make it True to see debugging messages
level = logging.DEBUG if verbose else logging.INFO
logging.root.handlers.clear()
logging.basicConfig(
format="%(levelname)s - %(name)s - %(asctime)s - %(message)s",
level=level
)
site_adapter = {
"www.huffingtonpost.com": "www.huffpost.com",
"www.cnn.com": "edition.cnn.com",
"online.wsj.com": "wsj.com",
}
async def get_soup(session, url):
abs_url = urlparse.urljoin(url_tpl, url)
text = await (await session.get(abs_url)).text()
# resp.raise_for_status()
soup = bs4.BeautifulSoup(text, html_parser)
return soup
def _adapt_site(url, netloc):
site = site_adapter.get(netloc)
if site:
url = url.replace(netloc, site)
netloc = site
return url, netloc
async def get_publisher_url(session, src_url, source_name):
# import code; code.interact(local={**globals(), **locals()})
logging.debug("Getting publisher's URL for %r.", source_name)
soup = await get_soup(session, src_url)
div = soup.find("div", class_="dynamic-grid")
if not div:
return None
url = div.find("a").get("href").strip()
parsed = urlparse.urlparse(url)
if not parsed.netloc:
return None
return _adapt_site(url, parsed.netloc)
async def save_pages(bias_writer, csvfile):
async with aiohttp.ClientSession() as session:
page = 0 # custom page if you want
while True:
logging.info("Crawling page %d...", page)
url = url_tpl.format(page)
soup = await get_soup(session, url)
pub_coros = []
extras = []
table = soup.find("table")
if not table or "no record" in table.find("tbody").find("tr").text.lower():
logging.info("Reached empty table -> end of results/pages.")
break
for row in table.find("tbody").find_all("tr"):
src_a = row.find("td", class_="source-title").find("a")
src_url = src_a.get("href")
source_name = src_a.text
label_alt = row.find("td", class_="views-field-field-bias-image").find("img").get("alt")
label = label_alt.split(":")[-1].strip()
feedback = row.find("td", class_="community-feedback")
agree = int(feedback.find("span", class_="agree").text)
disagree = int(feedback.find("span", class_="disagree").text)
extras.append([source_name, label, agree, disagree])
# import code; code.interact(local={**globals(), **locals()})
pub_coros.append(get_publisher_url(session, src_url, source_name))
publisher_details_list = await asyncio.gather(*pub_coros)
for idx, publisher_details in enumerate(publisher_details_list):
if not publisher_details:
if skip_blocked_sites:
continue
else:
publisher_details = ("", "")
# print(source_name, label, f"{agree}/{disagree}")
bias_writer.writerow(extras[idx] + list(publisher_details))
page += 1
csvfile.flush()
async def main():
with open(dump_path, "w", newline="", encoding=encoding) as csvfile:
bias_writer = csv.writer(csvfile)
bias_writer.writerow(csv_header)
await save_pages(bias_writer, csvfile)
await main()
```
Some publishers are blocked (no websites offered by AllSides), therefore fewer results in the CSV file.
Now let's find a good way of associating a side with a website in case multiple candidates are available.
```
side_dict = {}
with open(dump_path, newline="") as stream:
reader = csv.reader(stream)
print(next(reader))
for row in reader:
side_dict.setdefault(row[5], []).append((row[0], row[1], row[2]))
for site, sides in side_dict.items():
if len(sides) > 1:
print(site, sides)
```
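One simple tie-break (an assumption for illustration, not AllSides' own methodology): when a site has multiple rated sources, keep the label whose community feedback has the most "agree" votes. A sketch on the in-memory structure built above; note the agree counts come out of the CSV as strings, so they must be cast to `int` before comparing:

```python
def pick_label(sides):
    """sides: list of (source, label, agree) tuples for one site."""
    return max(sides, key=lambda s: int(s[2]))[1]

sides = [("WSJ - News", "Center", "1200"),
         ("WSJ - Opinion", "Lean Right", "800")]
pick_label(sides)  # -> 'Center'
```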
# Summary of transfer outcomes for practice M85092
**Context**
We would like to see a summary of transfer outcomes where the sending practice is M85092 for April data (if available), otherwise March data.
NB: Upon finding there were only 8 relevant transfers, we also used March data (which contained 2600 transfers)
**Scope**
- Breakdown of transfers out, per month
- Show outcome of each transfer
- Show which practices they were sent to
```
import pandas as pd
# Import transfer files to extract whether message creator is sender or requester
transfer_file_location = "s3://prm-gp2gp-data-sandbox-dev/transfers-sample-5/"
transfer_files = [
"2020-9-transfers.parquet",
"2020-10-transfers.parquet",
"2020-11-transfers.parquet",
"2020-12-transfers.parquet",
"2021-1-transfers.parquet",
"2021-2-transfers.parquet",
"2021-3-transfers.parquet",
"2021-4-transfers.parquet",
]
#april_data="s3://prm-gp2gp-data-sandbox-dev/duplicate-fix-14-day-cut-off/2021/04/transfers.parquet"
transfer_input_files = [transfer_file_location + f for f in transfer_files] #+[april_data]
transfers_raw = pd.concat((
pd.read_parquet(f)
for f in transfer_input_files
))
# In the data from the PRMT-1742-duplicates-analysis branch, these columns have been added , but contain only empty values.
transfers_raw = transfers_raw.drop(["sending_supplier", "requesting_supplier"], axis=1)
transfers = transfers_raw.copy()
# Correctly interpret certain sender errors as failed.
# This is explained in PRMT-1974. Eventually this will be fixed upstream in the pipeline.
pending_sender_error_codes=[6,7,10,24,30,23,14,99]
transfers_with_pending_sender_code_bool=transfers['sender_error_code'].isin(pending_sender_error_codes)
transfers_with_pending_with_error_bool=transfers['status']=='PENDING_WITH_ERROR'
transfers_which_need_pending_to_failure_change_bool=transfers_with_pending_sender_code_bool & transfers_with_pending_with_error_bool
transfers.loc[transfers_which_need_pending_to_failure_change_bool,'status']='FAILED'
# Add integrated Late status
eight_days_in_seconds=8*24*60*60
transfers_after_sla_bool=transfers['sla_duration']>eight_days_in_seconds
transfers_with_integrated_bool=transfers['status']=='INTEGRATED'
transfers_integrated_late_bool=transfers_after_sla_bool & transfers_with_integrated_bool
transfers.loc[transfers_integrated_late_bool,'status']='INTEGRATED LATE'
# If the record integrated after 28 days, change the status back to pending.
# This is to handle each month consistently and to always reflect a transfers status 28 days after it was made.
# TBD how this is handled upstream in the pipeline
fourteen_days_in_seconds=14*24*60*60
transfers_after_month_bool=transfers['sla_duration']>fourteen_days_in_seconds
transfers_pending_at_month_bool=transfers_after_month_bool & transfers_integrated_late_bool
transfers.loc[transfers_pending_at_month_bool,'status']='PENDING'
# early error = a sender error code is present OR there are intermediate error codes
transfers_with_early_error_bool=(~transfers.loc[:,'sender_error_code'].isna()) | (transfers.loc[:,'intermediate_error_codes'].apply(len)>0)
transfers.loc[transfers_with_early_error_bool & transfers_pending_at_month_bool,'status']='PENDING_WITH_ERROR'
# Supplier name mapping
supplier_renaming = {
"EGTON MEDICAL INFORMATION SYSTEMS LTD (EMIS)":"EMIS",
"IN PRACTICE SYSTEMS LTD":"Vision",
"MICROTEST LTD":"Microtest",
"THE PHOENIX PARTNERSHIP":"TPP",
None: "Unknown"
}
asid_lookup_file = "s3://prm-gp2gp-data-sandbox-dev/asid-lookup/asidLookup-Mar-2021.csv.gz"
asid_lookup = pd.read_csv(asid_lookup_file)
lookup = asid_lookup[["ASID", "MName", "NACS","OrgName"]]
transfers = transfers.merge(lookup, left_on='requesting_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'requesting_supplier', 'ASID': 'requesting_supplier_asid', 'NACS': 'requesting_ods_code','OrgName':'requesting_practice_name'}, axis=1)
transfers = transfers.merge(lookup, left_on='sending_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'sending_supplier', 'ASID': 'sending_supplier_asid', 'NACS': 'sending_ods_code','OrgName':'sending_practice_name'}, axis=1)
transfers["sending_supplier"] = transfers["sending_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
transfers["requesting_supplier"] = transfers["requesting_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
# Making the status to be more human readable here
transfers["status"] = transfers["status"].str.replace("_", " ").str.title()
transfers['Month of Transfer Request']=transfers['date_requested'].dt.year.astype(str) + '-' + transfers['date_requested'].dt.month.astype(str)
# Select the transfers where the sending practice is the practice of interest
practice_of_interest_bool = transfers["sending_ods_code"] == "M85092"
practice_transfers = transfers[practice_of_interest_bool]
# Create a table showing numbers of transfers to each practice and the status (at 14 days)
# Both the practice (rows) and status (columns) are ordered by most common first
ordered_requesting_practice_names=practice_transfers['requesting_practice_name'].value_counts().index
ordered_status=practice_transfers['status'].value_counts().index
ordered_dates=practice_transfers['Month of Transfer Request'].drop_duplicates().values
ordered_columns=[(date,status) for date in ordered_dates for status in ordered_status]
practice_transfers_count_table=practice_transfers.pivot_table(index='requesting_practice_name',columns=['Month of Transfer Request','status'],values='conversation_id',aggfunc='count')
ordered_columns=[column for column in ordered_columns if column in practice_transfers_count_table.columns]
practice_transfers_count_table=practice_transfers_count_table.loc[ordered_requesting_practice_names,ordered_columns].fillna(0).astype(int)
practice_transfers_count_table.to_csv( "s3://prm-gp2gp-data-sandbox-dev/notebook-outputs/38b-PRMT-2076-M85092-14-day-transfers-out.csv")
pd.DataFrame(practice_transfers['Month of Transfer Request'].value_counts()[ordered_dates].rename('Total Transfer Requests'))
# Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter('PRMT-2076-M85092-outcomes.xlsx', engine='xlsxwriter')
# Write each dataframe to a different worksheet.
for month in ordered_dates:
    practice_transfers_count_table[month].to_excel(writer, sheet_name=month)
# Close the Pandas Excel writer and output the Excel file.
writer.save()
```
| github_jupyter |
```
num_classes = 2
ultrasound_size = 128
data_folder = r"QueensToChildrens"
notebook_save_folder = r"SavedNotebooks"
model_save_folder = r"SavedModels"
ultrasound_file = r"ultrasound.npy"
segmentation_file = r"segmentation.npy"
test_ultrasound_file = r"ultrasound-test.npy"
test_segmentation_file = r"segmentation-test.npy"
test_prediction_file=r"prediction-test.npy"
# Augmentation parameters
max_rotation_angle = 10
# Model parameters
filter_multiplier = 8
# Learning parameters
num_epochs = 20
batch_size = 24
max_learning_rate = 0.002
min_learning_rate = 0.00001
# Other parameters
num_show = 2
import datetime
import numpy as np
import os
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras import backend as keras
from keras.preprocessing.image import ImageDataGenerator
from local_vars import root_folder
data_fullpath = os.path.join(root_folder, data_folder)
ultrasound_fullname = os.path.join(data_fullpath, ultrasound_file)
segmentation_fullname = os.path.join(data_fullpath, segmentation_file)
print("Reading ultrasound images from: {}".format(ultrasound_fullname))
print("Reading segmentations from: {}".format(segmentation_fullname))
ultrasound_data = np.load(ultrasound_fullname)
segmentation_data = np.load(segmentation_fullname)
num_ultrasound = ultrasound_data.shape[0]
num_segmentation = segmentation_data.shape[0]
print("\nFound {} ultrasound images and {} segmentations".format(num_ultrasound, num_segmentation))
test_ultrasound_fullname = os.path.join(data_fullpath, test_ultrasound_file)
test_segmentation_fullname = os.path.join(data_fullpath, test_segmentation_file)
print("Reading test ultrasound from: {}".format(test_ultrasound_fullname))
print("Reading test segmentation from : {}".format(test_segmentation_fullname))
test_ultrasound_data = np.load(test_ultrasound_fullname)
test_segmentation_data = np.load(test_segmentation_fullname)
num_test_ultrasound = test_ultrasound_data.shape[0]
num_test_segmentation = test_segmentation_data.shape[0]
print("\nFound {} test ultrasound images and {} segmentations".format(num_test_ultrasound, num_test_segmentation))
import keras.utils
import scipy.ndimage
class UltrasoundSegmentationBatchGenerator(keras.utils.Sequence):
def __init__(self,
x_set,
y_set,
batch_size,
image_dimensions=(ultrasound_size, ultrasound_size),
shuffle=True,
n_channels=1,
n_classes=2):
self.x = x_set
self.y = y_set
self.batch_size = batch_size
self.image_dimensions = image_dimensions
self.shuffle = shuffle
self.n_channels = n_channels
self.n_classes = n_classes
self.number_of_images = self.x.shape[0]
self.indexes = np.arange(self.number_of_images)
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __len__(self):
return int(np.floor(self.number_of_images / self.batch_size))
def on_epoch_end(self):
self.indexes = np.arange(self.number_of_images)
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __getitem__(self, index):
batch_indexes = self.indexes[index*self.batch_size : (index+1)*self.batch_size]
x = np.empty((self.batch_size, *self.image_dimensions, self.n_channels))
y = np.empty((self.batch_size, *self.image_dimensions))
for i in range(self.batch_size):
flip_flag = np.random.randint(2)
if flip_flag == 1:
x[i,:,:,:] = np.flip(self.x[batch_indexes[i],:,:,:], axis=1)
y[i,:,:] = np.flip(self.y[batch_indexes[i],:,:], axis=1)
else:
x[i,:,:,:] = self.x[batch_indexes[i],:,:,:]
y[i,:,:] = self.y[batch_indexes[i],:,:]
angle = np.random.randint(-max_rotation_angle, max_rotation_angle)
x_rot = scipy.ndimage.interpolation.rotate(x, angle, (1,2), False, mode="constant", cval=0, order=0)
y_rot = scipy.ndimage.interpolation.rotate(y, angle, (1,2), False, mode="constant", cval=0, order=0)
x_rot = np.clip(x_rot, 0.0, 1.0)
y_rot = np.clip(y_rot, 0.0, 1.0)
y_onehot = keras.utils.to_categorical(y_rot, self.n_classes)
return x_rot, y_onehot
# Prepare dilated output
def dilateStack(segmentation_data, iterations):
    return np.array([scipy.ndimage.binary_dilation(y, iterations=iterations) for y in segmentation_data])
width = 1
segmentation_dilated = dilateStack(segmentation_data[:, :, :, 0], width)
# The next line overrides the dilation; comment it out if you want dilated labels
segmentation_dilated[:, :, :] = segmentation_data[:, :, :, 0]
# Testing batch generator
tgen = UltrasoundSegmentationBatchGenerator(ultrasound_data, segmentation_dilated, batch_size, shuffle=False)
bx, by = tgen.__getitem__(0)
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(18,4*num_show))
for i in range(num_show):
a1 = fig.add_subplot(num_show,3,i*3+1)
img1 = a1.imshow(bx[i, :, :, 0], vmin=0.0, vmax=1.0)
a1.set_title("Ultrasound #{}".format(i))
c = fig.colorbar(img1)
a2 = fig.add_subplot(num_show,3,i*3+2)
img2 = a2.imshow(by[i, :, :, 0], vmin=0.0, vmax=1.0)
a2.set_title("Class 0 #{}".format(i))
c = fig.colorbar(img2)
a3 = fig.add_subplot(num_show,3,i*3+3)
img3 = a3.imshow(by[i, :, :, 1], vmin=0.0, vmax=1.0)
a3.set_title("Class 1 #{}".format(i))
c = fig.colorbar(img3)
# Construct a U-net model
def nvidia_unet(patch_size=ultrasound_size, num_classes=num_classes):
input_ = Input((patch_size, patch_size, 1))
skips = []
output = input_
num_layers = int(np.floor(np.log2(patch_size)))
down_conv_kernel_sizes = np.zeros([num_layers], dtype=int)
down_filter_numbers = np.zeros([num_layers], dtype=int)
up_conv_kernel_sizes = np.zeros([num_layers], dtype=int)
up_filter_numbers = np.zeros([num_layers], dtype=int)
for layer_index in range(num_layers):
down_conv_kernel_sizes[layer_index] = int(3)
down_filter_numbers[layer_index] = int( (layer_index + 1) * filter_multiplier + num_classes )
up_conv_kernel_sizes[layer_index] = int(4)
up_filter_numbers[layer_index] = int( (num_layers - layer_index - 1) * filter_multiplier + num_classes )
print("Number of layers: {}".format(num_layers))
print("Filters in layers down: {}".format(down_filter_numbers))
print("Filters in layers up: {}".format(up_filter_numbers))
for shape, filters in zip(down_conv_kernel_sizes, down_filter_numbers):
skips.append(output)
output= Conv2D(filters, (shape, shape), strides=2, padding="same", activation="relu")(output)
for shape, filters in zip(up_conv_kernel_sizes, up_filter_numbers):
output = keras.layers.UpSampling2D()(output)
skip_output = skips.pop()
output = concatenate([output, skip_output], axis=3)
if filters != num_classes:
    output = Conv2D(filters, (shape, shape), activation="relu", padding="same")(output)
    output = BatchNormalization(momentum=0.9)(output)
else:
    output = Conv2D(filters, (shape, shape), activation="softmax", padding="same")(output)
assert len(skips) == 0
return Model([input_], [output])
model = nvidia_unet(ultrasound_size, num_classes)
# model.summary()
print("Model built with {} parameters".format(model.count_params()))
learning_rate_decay = (max_learning_rate - min_learning_rate) / num_epochs
model.compile(optimizer=Adam(lr=max_learning_rate, decay=learning_rate_decay),
loss= "binary_crossentropy",
metrics=["accuracy"])
print("Learning rate decay = {}".format(learning_rate_decay))
training_generator = UltrasoundSegmentationBatchGenerator(ultrasound_data, segmentation_dilated, batch_size)
test_generator = UltrasoundSegmentationBatchGenerator(test_ultrasound_data, test_segmentation_data[:, :, :, 0], batch_size)
training_time_start = datetime.datetime.now()
training_log = model.fit_generator(training_generator,
validation_data=test_generator,
epochs=num_epochs,
verbose=1)
training_time_stop = datetime.datetime.now()
print("Training started at: {}".format(training_time_start))
print("Training stopped at: {}".format(training_time_stop))
print("Total training time: {}".format(training_time_stop-training_time_start))
y_pred = model.predict(test_ultrasound_data)
# Saving prediction for further evaluation
test_prediction_fullname = os.path.join(data_fullpath, test_prediction_file)
np.save(test_prediction_fullname, y_pred)
print("Predictions saved to: {}".format(test_prediction_fullname))
from random import sample
num_test = test_ultrasound_data.shape[0]
num_show = 5
indices = [i for i in range(num_test)]
sample_indices = sample(indices, num_show)
fig = plt.figure(figsize=(18, num_show*5))
for i in range(num_show):
a0 = fig.add_subplot(num_show,3,i*3+1)
img0 = a0.imshow(test_ultrasound_data[sample_indices[i], :, :, 0].astype(np.float32))
a0.set_title("Ultrasound #{}".format(sample_indices[i]))
a1 = fig.add_subplot(num_show,3,i*3+2)
img1 = a1.imshow(test_segmentation_data[sample_indices[i], :, :, 0], vmin=0.0, vmax=1.0)
a1.set_title("Segmentation #{}".format(sample_indices[i]))
c = fig.colorbar(img1, fraction=0.046, pad=0.04)
a2 = fig.add_subplot(num_show,3,i*3+3)
img2 = a2.imshow(y_pred[sample_indices[i], :, :, 1], vmin=0.0, vmax=1.0)
a2.set_title("Prediction #{}".format(sample_indices[i]))
c = fig.colorbar(img2, fraction=0.046, pad=0.04)
# Display training loss and accuracy curves over epochs
plt.plot(training_log.history['loss'], 'bo--')
plt.plot(training_log.history['val_loss'], 'ro-')
plt.ylabel('Loss')
plt.xlabel('Epochs (n)')
plt.legend(['Training loss', 'Validation loss'])
plt.show()
plt.plot(training_log.history['acc'], 'bo--')
plt.plot(training_log.history['val_acc'], 'ro-')
plt.ylabel('Accuracy')
plt.xlabel('Epochs (n)')
plt.legend(['Training accuracy', 'Validation accuracy'])
plt.show()
import time
time.sleep(3)
# Archive model and notebook with unique filenames based on timestamps
import datetime
timestamp = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
saved_models_fullpath = os.path.join(root_folder, model_save_folder)
if not os.path.exists(saved_models_fullpath):
os.makedirs(saved_models_fullpath)
print("Creating folder: {}".format(saved_models_fullpath))
model_file_name = "model_" + timestamp + ".h5"
model_fullname = os.path.join(saved_models_fullpath, model_file_name)
model.save(model_fullname)
print("Model saved to: {}".format(model_fullname))
saved_notebooks_fullpath = os.path.join(root_folder, notebook_save_folder)
if not os.path.exists(saved_notebooks_fullpath):
os.makedirs(saved_notebooks_fullpath)
print("Creating folder: {}".format(saved_notebooks_fullpath))
notebook_file_name = "notebook_" + timestamp + ".html"
notebook_fullname = os.path.join(saved_notebooks_fullpath, notebook_file_name)
time.sleep(30)
# If figures are missing from the saved notebook, run this cell again after some time has passed
os.system("jupyter nbconvert --to html Segmentation2-QueensToChildrens --output " + notebook_fullname)
print("Notebook saved to: {}".format(notebook_fullname))
```
<a href="https://colab.research.google.com/github/mnslarcher/cs224w-slides-to-code/blob/main/notebooks/04-link-analysis-pagerank.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Link Analysis: PageRank
```
import random
from typing import Optional
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
def seed_everything(seed: Optional[int] = None) -> None:
random.seed(seed)
np.random.seed(seed)
seed_everything(42)
```
# Example: Flow Equations & M
```
def get_stochastic_adjacency_matrix(G: nx.Graph) -> np.ndarray:
nodes = list(G.nodes())
num_nodes = len(nodes)
M = np.zeros((num_nodes, num_nodes))
for j, node_j in enumerate(nodes):
for in_edge in G.in_edges(node_j):
node_i = in_edge[0]
i = nodes.index(node_i)
M[j, i] += 1.0 / G.out_degree(node_i)
return M
# Or, more concise but slower (note the transpose: M[j, i] must be
# 1 / out_degree(i) for each edge i -> j):
# def get_stochastic_adjacency_matrix(G: nx.Graph) -> np.ndarray:
#     A = nx.adjacency_matrix(G).todense().astype(float)
#     out_degrees = np.array([degree[1] for degree in G.out_degree()])
#     return np.divide(A.T, out_degrees, out=np.zeros_like(A.T), where=out_degrees != 0)
edge_list = [("y", "a"), ("y", "y"), ("a", "m"), ("a", "y"), ("m", "a")]
G = nx.DiGraph(edge_list)
M = get_stochastic_adjacency_matrix(G)
plt.figure(figsize=(4, 3))
# Note: networkx draws self-loops and parallel edges poorly
nx.draw(G, node_color="tab:red", node_size=1500, with_labels=True)
plt.show()
print(f"\nStochastic Adjacency Matrix M (nodes {G.nodes()}):")
print(M)
```
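The stochastic matrix above is what power iteration consumes: the PageRank vector is the leading eigenvector of $M$. A minimal sketch, with $M$ for the y/a/m graph hard-coded in an assumed node order (y, a, m) for illustration, since `G.nodes()` ordering may differ:

```python
import numpy as np

def power_iterate(M, num_iters=100):
    # Repeatedly apply r <- M r from a uniform start; because M is
    # column-stochastic, r stays a probability distribution.
    r = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(num_iters):
        r = M @ r
    return r

M_toy = np.array([[0.5, 0.5, 0.0],   # y <- y, a
                  [0.5, 0.0, 1.0],   # a <- y, m
                  [0.0, 0.5, 0.0]])  # m <- a
r = power_iterate(M_toy)
print(r)  # converges toward [0.4, 0.4, 0.2]
```

The self-loop on `y` makes the chain aperiodic, so the iteration converges without needing the teleport term used in full PageRank.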
# Summary: PageRank Variants
```
def pagerank_example(
personalization: Optional[dict] = None,
spring_layout_k: float = 5.0,
label_rank_threshold: float = 0.02,
cmap_name: str = "viridis",
node_size_factor: float = 2e4,
width: float = 1.5,
font_size: int = 16,
seed: Optional[int] = 42,
) -> None:
edge_list = [
("B", "C"),
("C", "B"),
("D", "A"),
("D", "B"),
("E", "B"),
("E", "D"),
("E", "F"),
("F", "B"),
("F", "E"),
("G", "B"),
("G", "E"),
("H", "B"),
("H", "E"),
("I", "B"),
("I", "E"),
("J", "E"),
("K", "E"),
]
G = nx.DiGraph(edge_list)
ranks = nx.pagerank(G, personalization=personalization)
max_rank = max(ranks.values())
node_sizes = [max(100.0, node_size_factor * rank / max_rank) for node, rank in ranks.items()]
cmap = plt.get_cmap(cmap_name)
node_colors = [cmap(rank / max_rank) for node, rank in ranks.items()]
node_labels = {
    node: f"{node}\n{100 * ranks[node]:.1f}" if ranks[node] > label_rank_threshold else "" for node in G.nodes
}
pos = nx.spring_layout(G, k=spring_layout_k, seed=seed)
nx.draw(
    G,
    pos=pos,
    node_color=node_colors,
    labels=node_labels,
    edgecolors="black",
    node_size=node_sizes,
    width=width,
    font_size=font_size,
)
```
## PageRank
```
personalization = None # Equivalent to {"A": 1 / num_nodes, "B": 1 / num_nodes, ...}
plt.figure(figsize=(6, 6))
pagerank_example(personalization=personalization)
plt.title("PageRank", fontsize=16)
plt.show()
```
## Personalized PageRank
```
personalization = {"A": 0.1, "D": 0.2, "G": 0.5, "J": 0.2}
plt.figure(figsize=(6, 6))
pagerank_example(personalization=personalization)
plt.title(f"Personalized PageRank\n(personalization = {personalization})", fontsize=16)
plt.show()
```
## Random Walk with Restarts
```
personalization = {"E": 1.0}
plt.figure(figsize=(6, 6))
pagerank_example(personalization=personalization)
plt.title(f"Random Walk with Restarts\n(personalization = {personalization})", fontsize=16)
plt.show()
```
# Background
This notebook walks through the creation of a fastai [DataBunch](https://docs.fast.ai/basic_data.html#DataBunch) object. This object contains a pytorch dataloader for the train, valid and test sets. From the documentation:
```
Bind train_dl,valid_dl and test_dl in a data object.
It also ensures all the dataloaders are on device and applies to them dl_tfms as batch are drawn (like normalization). path is used internally to store temporary files, collate_fn is passed to the pytorch Dataloader (replacing the one there) to explain how to collate the samples picked for a batch.
```
Because we are training the language model, we want our dataloader to construct the target variable from the input data. The target variable for a language model is the next word in the sequence. Furthermore, there are other optimizations with regard to the sequence length and concatenating texts together that avoid wasteful padding. Luckily the [TextLMDataBunch](https://docs.fast.ai/text.data.html#TextLMDataBunch) does all this work for us (and more) automatically.
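As a toy illustration (pure Python, not fastai's actual implementation), next-word targets are just the input sequence shifted by one position over a concatenated token stream:

```python
# Sketch of language-model batching: the target sequence is the input
# sequence shifted one token to the right over one concatenated stream.
def lm_batches(token_stream, seq_len):
    """Return (input, target) pairs; target is the input shifted by one."""
    pairs = []
    for start in range(0, len(token_stream) - seq_len, seq_len):
        x = token_stream[start:start + seq_len]
        y = token_stream[start + 1:start + seq_len + 1]
        pairs.append((x, y))
    return pairs

stream = [3, 7, 1, 9, 4, 2, 8, 5, 6]  # toy token ids
batches = lm_batches(stream, seq_len=4)
print(batches[0])  # ([3, 7, 1, 9], [7, 1, 9, 4])
```

Concatenating documents into a single stream before slicing is what lets the loader avoid padding each text to a common length.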
```
from fastai.text import TextLMDataBunch as lmdb
from fastai.text.transform import Tokenizer
import pandas as pd
from pathlib import Path
```
### Read in Data
You can download the above saved dataframes (in HDF format) from Google Cloud Storage:
**train_df.hdf (9GB)**:
`https://storage.googleapis.com/issue_label_bot/pre_processed_data/2_partitioned_df/train_df.hdf`
**valid_df.hdf (1GB)**
`https://storage.googleapis.com/issue_label_bot/pre_processed_data/2_partitioned_df/valid_df.hdf`
```
# note: download the data and place in right directory before running this code!
valid_df = pd.read_hdf(Path('../data/2_partitioned_df/valid_df.hdf'))
train_df = pd.read_hdf(Path('../data/2_partitioned_df/train_df.hdf'))
print(f'rows in train_df:, {train_df.shape[0]:,}')
print(f'rows in valid_df:, {valid_df.shape[0]:,}')
train_df.head(3)
```
## Create The [DataBunch](https://docs.fast.ai/basic_data.html#DataBunch)
#### Instantiate The Tokenizer
```
def pass_through(x):
return x
# The only change is that pre_rules is a pass-through, since we have already applied all of the pre-rules.
# You don't want to accidentally apply pre-rules again, otherwise it will corrupt the data.
tokenizer = Tokenizer(pre_rules=[pass_through], n_cpus=31)
```
Specify path for saving language model artifacts
```
path = Path('../model/lang_model/')
```
#### Create The Language Model Data Bunch
**Warning**: this step builds the vocabulary and tokenizes the data. This procedure consumes an enormous amount of memory. It took 1 hour on a machine with 72 cores and 400GB of memory.
```
# Note you want your own tokenizer, without pre-rules
data_lm = lmdb.from_df(path=path,
train_df=train_df,
valid_df=valid_df,
text_cols='text',
tokenizer=tokenizer,
chunksize=6000000)
data_lm.save() # saves to self.path/data_save.hdf
```
### Location of Saved DataBunch
The databunch object is available here:
`https://storage.googleapis.com/issue_label_bot/model/lang_model/data_save.hdf`
It is a massive file of 27GB, so proceed with caution when downloading this file.
# Project 3: Smart Beta Portfolio and Portfolio Optimization
## Instructions
Each problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity.
## Packages
When you implement the functions, you'll only need to use the [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/) packages. Don't import any other packages, otherwise the grader won't be able to run your code.
The other packages that we're importing are `helper` and `project_tests`. These are custom packages built to help you solve the problems. The `helper` module contains utility functions and graph functions. The `project_tests` module contains the unit tests for all the problems.
### Install Packages
```
import sys
!{sys.executable} -m pip install -r requirements.txt
```
### Load Packages
```
import pandas as pd
import numpy as np
import helper
import project_tests
```
## Market Data
The data source we'll be using is the [Wiki End of Day data](https://www.quandl.com/databases/WIKIP) hosted at [Quandl](https://www.quandl.com). This contains data for many stocks, but we'll just be looking at the S&P 500 stocks. We'll also make things a little easier to solve by narrowing our range of time from 2007-06-30 to 2017-09-30.
### Set API Key
Set the `quandl.ApiConfig.api_key ` variable to your Quandl api key. You can find your Quandl api key [here](https://www.quandl.com/account/api).
```
import quandl
# TODO: Add your Quandl API Key
quandl.ApiConfig.api_key = ''
```
### Download Data
```
import os
snp500_file_path = 'data/tickers_SnP500.txt'
wiki_file_path = 'data/WIKI_PRICES.csv'
start_date, end_date = '2013-07-01', '2017-06-30'
use_columns = ['date', 'ticker', 'adj_close', 'adj_volume', 'ex-dividend']
if not os.path.exists(wiki_file_path):
with open(snp500_file_path) as f:
tickers = f.read().split()
print('Downloading data...')
helper.download_quandl_dataset('WIKI', 'PRICES', wiki_file_path, use_columns, tickers, start_date, end_date)
print('Data downloaded')
else:
print('Data already downloaded')
```
### Load Data
```
df = pd.read_csv(wiki_file_path, index_col=['ticker', 'date'])
```
### Create the Universe
We'll be selecting high dollar volume stocks for our stock universe. This universe is similar to large market cap stocks, because they are highly liquid.
```
percent_top_dollar = 0.2
high_volume_symbols = helper.large_dollar_volume_stocks(df, 'adj_close', 'adj_volume', percent_top_dollar)
df = df.loc[high_volume_symbols, :]
```
### 2-D Matrices
In the previous projects, we used a [multiindex](https://pandas.pydata.org/pandas-docs/stable/advanced.html) to store all the data in a single dataframe. As you work with larger datasets, it becomes infeasible to store all the data in memory. Starting with this project, we'll be storing all our data as 2-D matrices to match what you'll be expecting in the real world.
```
close = df.reset_index().pivot(index='ticker', columns='date', values='adj_close')
volume = df.reset_index().pivot(index='ticker', columns='date', values='adj_volume')
ex_dividend = df.reset_index().pivot(index='ticker', columns='date', values='ex-dividend')
```
### View Data
To see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix.
```
helper.print_dataframe(close)
```
# Part 1: Smart Beta Portfolio
In Part 1 of this project, you'll build a smart beta portfolio using dividend yield. To see how well it performs, you'll compare this portfolio to an index.
## Index Weights
After building the smart beta portfolio, you should compare it to a similar strategy or index.
Implement `generate_dollar_volume_weights` to generate the weights for this index. For each date, generate the weights based on dollar volume traded for that date. For example, assume the following is dollar volume traded data:
| | 10/02/2010 | 10/03/2010 |
|----------|------------|------------|
| **AAPL** | 2 | 2 |
| **BBC** | 5 | 6 |
| **GGL** | 1 | 2 |
| **ZGB** | 6 | 5 |
The weights should be the following:
| | 10/02/2010 | 10/03/2010 |
|----------|------------|------------|
| **AAPL** | 0.142 | 0.133 |
| **BBC** | 0.357 | 0.400 |
| **GGL** | 0.071 | 0.133 |
| **ZGB** | 0.428 | 0.333 |
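As a quick sanity check (toy data only; the graded function receives `close` and `volume` separately and must first form the dollar volume), normalizing the example table by each date's column total reproduces the weights above:

```python
import pandas as pd

dollar_volume = pd.DataFrame(
    {"10/02/2010": [2, 5, 1, 6], "10/03/2010": [2, 6, 2, 5]},
    index=["AAPL", "BBC", "GGL", "ZGB"])
# Divide each column by its total so the weights for a date sum to 1
weights = dollar_volume / dollar_volume.sum()
print(weights.round(3))
```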
```
def generate_dollar_volume_weights(close, volume):
"""
Generate dollar volume weights.
Parameters
----------
close : DataFrame
Close price for each ticker and date
volume : DataFrame
Volume for each ticker and date
Returns
-------
dollar_volume_weights : DataFrame
The dollar volume weights for each ticker and date
"""
assert close.index.equals(volume.index)
assert close.columns.equals(volume.columns)
#TODO: Implement function
return None
project_tests.test_generate_dollar_volume_weights(generate_dollar_volume_weights)
```
### View Data
Let's generate the index weights using `generate_dollar_volume_weights` and view them using a heatmap.
```
index_weights = generate_dollar_volume_weights(close, volume)
helper.plot_weights(index_weights, 'Index Weights')
```
## ETF Weights
Now that we have the index weights, it's time to build the weights for the smart beta ETF. Let's build an ETF portfolio that is based on dividends. This is a common factor used to build portfolios. Unlike most portfolios, we'll be using a single factor for simplicity.
Implement `calculate_dividend_weights` to return the weights for each stock based on its total dividend yield over time. This is similar to generating the weights for the index, but with dividend data instead.
```
def calculate_dividend_weights(ex_dividend):
"""
Calculate dividend weights.
Parameters
----------
ex_dividend : DataFrame
Ex-dividend for each stock and date
Returns
-------
dividend_weights : DataFrame
Weights for each stock and date
"""
#TODO: Implement function
return None
project_tests.test_calculate_dividend_weights(calculate_dividend_weights)
```
### View Data
Let's generate the ETF weights using `calculate_dividend_weights` and view them using a heatmap.
```
etf_weights = calculate_dividend_weights(ex_dividend)
helper.plot_weights(etf_weights, 'ETF Weights')
```
## Returns
Implement `generate_returns` to generate the returns. Note these aren't log returns. Since we're not dealing with volatility, we don't have to use log returns.
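For intuition, here is the difference on a toy price series (assumed example, not project data): simple returns are price ratios minus one, while log returns are differences of log prices:

```python
import numpy as np

prices = np.array([10.0, 11.0, 9.9])
simple_returns = prices[1:] / prices[:-1] - 1  # (p_t / p_{t-1}) - 1
log_returns = np.diff(np.log(prices))          # log(p_t) - log(p_{t-1})
print(simple_returns)  # approximately [ 0.1, -0.1]
print(log_returns)     # approximately [ 0.0953, -0.1054]
```

For small moves the two nearly agree, but only log returns add across time, which is why they appear in volatility work.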
```
def generate_returns(close):
"""
Generate returns for ticker and date.
Parameters
----------
close : DataFrame
Close price for each ticker and date
Returns
-------
returns : Dataframe
The returns for each ticker and date
"""
#TODO: Implement function
return None
project_tests.test_generate_returns(generate_returns)
```
### View Data
Let's generate the closing returns using `generate_returns` and view them using a heatmap.
```
returns = generate_returns(close)
helper.plot_returns(returns, 'Close Returns')
```
## Weighted Returns
With the returns of each stock computed, we can use them to compute the returns for an index or ETF. Implement `generate_weighted_returns` to create weighted returns using the returns and weights for an index or ETF.
```
def generate_weighted_returns(returns, weights):
"""
Generate weighted returns.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
weights : DataFrame
Weights for each ticker and date
Returns
-------
weighted_returns : DataFrame
Weighted returns for each ticker and date
"""
assert returns.index.equals(weights.index)
assert returns.columns.equals(weights.columns)
#TODO: Implement function
return None
project_tests.test_generate_weighted_returns(generate_weighted_returns)
```
### View Data
Let's generate the etf and index returns using `generate_weighted_returns` and view them using a heatmap.
```
index_weighted_returns = generate_weighted_returns(returns, index_weights)
etf_weighted_returns = generate_weighted_returns(returns, etf_weights)
helper.plot_returns(index_weighted_returns, 'Index Returns')
helper.plot_returns(etf_weighted_returns, 'ETF Returns')
```
## Cumulative Returns
Implement `calculate_cumulative_returns` to calculate the cumulative returns over time.
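The idea on a toy series (assumed example): period returns compound multiplicatively, so the cumulative return after $t$ periods is the running product of $(1 + r)$ minus one:

```python
import numpy as np

period_returns = np.array([0.10, -0.05, 0.02])
cumulative = np.cumprod(1 + period_returns) - 1
print(cumulative)  # final value: 1.10 * 0.95 * 1.02 - 1 = 0.0659
```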
```
def calculate_cumulative_returns(returns):
"""
Calculate cumulative returns.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
Returns
-------
cumulative_returns : Pandas Series
Cumulative returns for each date
"""
#TODO: Implement function
return None
project_tests.test_calculate_cumulative_returns(calculate_cumulative_returns)
```
### View Data
Let's generate the etf and index cumulative returns using `calculate_cumulative_returns` and compare the two.
```
index_weighted_cumulative_returns = calculate_cumulative_returns(index_weighted_returns)
etf_weighted_cumulative_returns = calculate_cumulative_returns(etf_weighted_returns)
helper.plot_benchmark_returns(index_weighted_cumulative_returns, etf_weighted_cumulative_returns, 'Smart Beta ETF vs Index')
```
## Tracking Error
In order to check the performance of the smart beta portfolio, we can compare it against the index. Implement `tracking_error` to return the tracking error between the etf and index over time.
```
def tracking_error(index_weighted_cumulative_returns, etf_weighted_cumulative_returns):
"""
Calculate the tracking error.
Parameters
----------
index_weighted_cumulative_returns : Pandas Series
The weighted index Cumulative returns for each date
etf_weighted_cumulative_returns : Pandas Series
The weighted etf Cumulative returns for each date
Returns
-------
tracking_error : Pandas Series
The tracking error for each date
"""
assert index_weighted_cumulative_returns.index.equals(etf_weighted_cumulative_returns.index)
#TODO: Implement function
return None
project_tests.test_tracking_error(tracking_error)
```
### View Data
Let's generate the tracking error using `tracking_error` and graph it over time.
```
smart_beta_tracking_error = tracking_error(index_weighted_cumulative_returns, etf_weighted_cumulative_returns)
helper.plot_tracking_error(smart_beta_tracking_error, 'Smart Beta Tracking Error')
```
# Part 2: Portfolio Optimization
In Part 2, you'll optimize the index you created in Part 1. You'll use `cvxopt` to solve the convex problem of finding the optimal weights for the portfolio. Just like before, we'll compare these results to the index.
## Covariance
Implement `get_covariance` to calculate the covariance of `returns` and `weighted_index_returns`. We'll use this to feed into our convex optimization function. By using covariance, we can prevent the optimizer from going all in on a few stocks.
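To fix intuition about the shapes involved (variable names and the dates-by-tickers orientation are assumed for illustration; this is not the graded implementation): with `X` a matrix of stock returns and `y` the weighted index return series, the normal-equation pieces are `X.T @ X` and `X.T @ y`:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.01, size=(5, 3))   # 5 dates x 3 tickers (toy data)
y = X @ np.array([0.5, 0.3, 0.2])        # index as a weighted combination
xtx = X.T @ X                            # (tickers x tickers), symmetric
xty = X.T @ y                            # (tickers,)
print(xtx.shape, xty.shape)  # (3, 3) (3,)
```

Because `xtx` is symmetric positive semidefinite, it is a valid `P` matrix for the quadratic program in the next section.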
```
def get_covariance(returns, weighted_index_returns):
"""
Calculate covariance matrices.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
weighted_index_returns : DataFrame
Weighted index returns for each ticker and date
Returns
-------
xtx, xty : (2 dimensional Ndarray, 1 dimensional Ndarray)
"""
assert returns.index.equals(weighted_index_returns.index)
assert returns.columns.equals(weighted_index_returns.columns)
#TODO: Implement function
return None, None
project_tests.test_get_covariance(get_covariance)
```
### View Data
```
xtx, xty = get_covariance(returns, index_weighted_returns)
xtx = pd.DataFrame(xtx, returns.index, returns.index)
xty = pd.Series(xty, returns.index)
helper.plot_covariance(xty, xtx)
```
## Quadratic Programming
Now that you have the covariance, we can use it to optimize the weights. Implement `solve_qp` to return the optimal `x` in the convex function with the following constraints:
- Sum of all x is 1
- x >= 0
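For intuition only (not a substitute for `cvxopt`): if we keep just the sum-to-one equality constraint and the minimizer happens to be nonnegative, the KKT conditions reduce to a single linear solve; the inequality x >= 0 is exactly what requires a real QP solver.

```python
import numpy as np

def solve_eq_qp(P, q):
    # KKT system for  min 0.5 x'Px - q'x  s.t.  sum(x) = 1:
    #   P x + lam * 1 = q   and   1'x = 1
    n = len(q)
    KKT = np.block([[P, np.ones((n, 1))],
                    [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([q, [1.0]])
    return np.linalg.solve(KKT, rhs)[:n]

x_opt = solve_eq_qp(np.array([[2.0, 0.0], [0.0, 2.0]]), np.zeros(2))
print(x_opt)  # [0.5 0.5]
```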
```
import cvxopt
def solve_qp(P, q):
"""
Find the solution for minimizing 0.5 * x^T * P * x - q^T * x with the following constraints:
- sum of all x equals to 1
- All x are greater than or equal to 0
Parameters
----------
P : 2 dimensional Ndarray
q : 1 dimensional Ndarray
Returns
-------
x : 1 dimensional Ndarray
The solution for x
"""
assert len(P.shape) == 2
assert len(q.shape) == 1
assert P.shape[0] == P.shape[1] == q.shape[0]
#TODO: Implement function
return None
project_tests.test_solve_qp(solve_qp)
```
### View Data
```
raw_optim_etf_weights = solve_qp(xtx.values, xty.values)
raw_optim_etf_weights_per_date = np.tile(raw_optim_etf_weights, (len(returns.columns), 1))
optim_etf_weights = pd.DataFrame(raw_optim_etf_weights_per_date.T, returns.index, returns.columns)
```
## Optimized Portfolio
With our optimized etf weights built using quadratic programming, let's compare it to the index. Run the next cell to calculate the optimized etf returns and compare them to the index returns.
```
optim_etf_returns = generate_weighted_returns(returns, optim_etf_weights)
optim_etf_cumulative_returns = calculate_cumulative_returns(optim_etf_returns)
helper.plot_benchmark_returns(index_weighted_cumulative_returns, optim_etf_cumulative_returns, 'Optimized ETF vs Index')
optim_etf_tracking_error = tracking_error(index_weighted_cumulative_returns, optim_etf_cumulative_returns)
helper.plot_tracking_error(optim_etf_tracking_error, 'Optimized ETF Tracking Error')
```
## Rebalance Portfolio
The optimized etf portfolio used different weights for each day. After accounting for transaction fees, this amount of turnover in the portfolio can reduce the total returns. Let's find the optimal times to rebalance the portfolio instead of doing it every day.
Implement `rebalance_portfolio` to rebalance a portfolio.
```
def rebalance_portfolio(returns, weighted_index_returns, shift_size, chunk_size):
"""
Get weights for each rebalancing of the portfolio.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
weighted_index_returns : DataFrame
Weighted index returns for each ticker and date
shift_size : int
The number of days between each rebalance
chunk_size : int
The number of days to look in the past for rebalancing
Returns
-------
all_rebalance_weights : list of Ndarrays
The etf weights for each point they are rebalanced
"""
assert returns.index.equals(weighted_index_returns.index)
assert returns.columns.equals(weighted_index_returns.columns)
assert shift_size > 0
assert chunk_size >= 0
#TODO: Implement function
return None
project_tests.test_rebalance_portfolio(rebalance_portfolio)
```
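As a hedged sketch of one possible approach (not the project solution — it works on plain NumPy arrays rather than DataFrames, and uses `np.linalg.lstsq` as a stand-in for the project's `solve_qp`), the rebalancing loop might look like this:

```python
import numpy as np

def rebalance_portfolio_sketch(returns, weighted_index_returns, shift_size, chunk_size):
    """For each rebalance date, solve for weights using the trailing `chunk_size`
    days of data, stepping forward `shift_size` days between rebalances."""
    all_rebalance_weights = []
    n_days = len(returns)
    for end in range(chunk_size, n_days + 1, shift_size):
        window_returns = returns[end - chunk_size:end]
        window_index = weighted_index_returns[end - chunk_size:end].sum(axis=1)
        # Least-squares fit of ETF returns to the index return over the window
        weights, *_ = np.linalg.lstsq(window_returns, window_index, rcond=None)
        all_rebalance_weights.append(weights)
    return all_rebalance_weights

rng = np.random.RandomState(0)
rets = rng.normal(0, 0.01, (30, 3))          # 30 days, 3 tickers
idx = rets * np.array([0.5, 0.3, 0.2])       # index holds fixed weights
weights = rebalance_portfolio_sketch(rets, idx, shift_size=5, chunk_size=10)
print(len(weights))  # (30 - 10) / 5 + 1 = 5 rebalances
```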
### View Data
```
chunk_size = 250
shift_size = 5
all_rebalance_weights = rebalance_portfolio(returns, index_weighted_returns, shift_size, chunk_size)
```
## Portfolio Rebalance Cost
With the portfolio rebalanced, we need a metric to measure the cost of rebalancing it. Implement `get_rebalance_cost` to calculate the rebalance cost.
```
def get_rebalance_cost(all_rebalance_weights, shift_size, rebalance_count):
"""
Get the cost of all the rebalancing.
Parameters
----------
all_rebalance_weights : list of Ndarrays
ETF Returns for each ticker and date
shift_size : int
The number of days between each rebalance
rebalance_count : int
Number of times the portfolio was rebalanced
Returns
-------
rebalancing_cost : float
The cost of all the rebalancing
"""
assert shift_size > 0
assert rebalance_count > 0
#TODO: Implement function
return None
project_tests.test_get_rebalance_cost(get_rebalance_cost)
```
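One common cost proxy — an assumption here, not necessarily the project's grading definition — is the total turnover between consecutive rebalances, annualized over the period the rebalances cover:

```python
import numpy as np

def get_rebalance_cost_sketch(all_rebalance_weights, shift_size, rebalance_count,
                              trading_days=252):
    """Sum of absolute weight changes between consecutive rebalances,
    divided by the number of years the rebalances span."""
    weights = np.array(all_rebalance_weights)
    total_turnover = np.abs(np.diff(weights, axis=0)).sum()
    n_years = shift_size * rebalance_count / trading_days
    return total_turnover / n_years

weights = [np.array([0.5, 0.5]), np.array([0.6, 0.4]), np.array([0.6, 0.4])]
cost = get_rebalance_cost_sketch(weights, shift_size=5, rebalance_count=len(weights))
print(round(cost, 4))  # turnover of 0.2 over 15/252 years -> 3.36
```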
### View Data
```
unconstrained_costs = get_rebalance_cost(all_rebalance_weights, shift_size, returns.shape[1])
```
## Submission
Now that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a pass or not passed grade. You can continue to the next section while you wait for feedback.
# Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were [first introduced](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
* [Pix2Pix](https://affinelayer.com/pixsrv/)
* [CycleGAN](https://github.com/junyanz/CycleGAN)
* [A whole list](https://github.com/wiseodd/generative-models)
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks _as close as possible_ to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.

The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
```
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
```
## Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input `inputs_real` and the generator input `inputs_z`. We'll assign them the appropriate sizes for each of the networks.
>**Exercise:** Finish the `model_inputs` function below. Create the placeholders for `inputs_real` and `inputs_z` using the input sizes `real_dim` and `z_dim` respectively.
```
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(shape=(None, real_dim), dtype=tf.float32, name='inputs_real')
inputs_z = tf.placeholder(shape=(None, z_dim), dtype=tf.float32, name='inputs_z')
return inputs_real, inputs_z
```
## Generator network

Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
#### Variable Scope
Here we need to use `tf.variable_scope` for two reasons. Firstly, we're going to make sure all the variable names start with `generator`. Similarly, we'll prepend `discriminator` to the discriminator variables. This will help out later when we're training the separate networks.
We could just use `tf.name_scope` to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also _sample from it_ as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the `reuse` keyword for `tf.variable_scope` to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use `tf.variable_scope`, you use a `with` statement:
```python
with tf.variable_scope('scope_name', reuse=False):
# code here
```
Here's more from [the TensorFlow documentation](https://www.tensorflow.org/programmers_guide/variable_scope#the_problem) to get another look at using `tf.variable_scope`.
#### Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one ourselves. For this you can just take the outputs from a linear fully connected layer and pass them to `tf.maximum`. Typically, a parameter `alpha` sets the magnitude of the output for negative values. So, the output for negative input values (`x`) is `alpha*x`, and the output for positive `x` is `x`:
$$
f(x) = \max(\alpha x, x)
$$
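Before wiring this into the graph, the formula is easy to sanity-check in plain NumPy:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # max(alpha * x, x): identity for positive x, scaled down for negative x
    return np.maximum(alpha * x, x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))
```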
#### Tanh Output
The generator has been found to perform best with a $\tanh$ activation on its output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
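That rescaling is a single affine transform, `x*2 - 1` (the training loop later in this notebook uses the same form):

```python
import numpy as np

images = np.random.rand(4, 784)      # stand-in for an MNIST batch in [0, 1]
scaled = images * 2 - 1              # now in [-1, 1], matching the tanh range
print(scaled.min() >= -1, scaled.max() <= 1)
```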
>**Exercise:** Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the `reuse` keyword argument from the function to `tf.variable_scope`.
```
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out : tanh output of the generator
'''
with tf.variable_scope(name_or_scope='generator', reuse=reuse):
# Hidden layer with leaky ReLU activation (alpha sets the negative-side slope)
h1 = tf.layers.dense(inputs=z, units=n_units, activation=None)
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(inputs=h1, units=out_dim, activation=None)
out = tf.tanh(logits)
return out
```
## Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
>**Exercise:** Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the `reuse` keyword argument from the function arguments to `tf.variable_scope`.
```
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits : sigmoid output and raw logits of the discriminator
'''
with tf.variable_scope(name_or_scope='discriminator', reuse=reuse):
# Hidden layer with leaky ReLU activation (alpha sets the negative-side slope)
h1 = tf.layers.dense(inputs=x, units=n_units, activation=None)
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(inputs=h1, units=1, activation=None)
out = tf.sigmoid(logits)
return out, logits
```
## Hyperparameters
```
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
```
## Build network
Now we're building the network from the functions defined above.
First, we get our inputs, `input_real, input_z`, from `model_inputs` using the input and z sizes.
Then, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as `g_model`. So the real data discriminator is `discriminator(input_real)` while the fake discriminator is `discriminator(g_model, reuse=True)`.
>**Exercise:** Build the network from the functions you defined earlier.
```
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, reuse=False)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True)
```
## Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_with_logits`. We'll also wrap that in `tf.reduce_mean` to get the mean for all the images in the batch. So the losses will look something like
```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```
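Under the hood, `tf.nn.sigmoid_cross_entropy_with_logits` computes `max(x, 0) - x*z + log(1 + exp(-|x|))`, a numerically stable form of the binary cross-entropy on logits `x` with labels `z`. A quick NumPy check of the equivalence:

```python
import numpy as np

def sigmoid_xent_with_logits(logits, labels):
    # Numerically stable form used by TensorFlow
    x, z = logits, labels
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

def naive_xent(logits, labels):
    # Direct binary cross-entropy on sigmoid probabilities
    p = 1 / (1 + np.exp(-logits))
    return -labels * np.log(p) - (1 - labels) * np.log(1 - p)

x = np.array([-3.0, -0.5, 0.5, 3.0])
z = np.array([0.0, 1.0, 1.0, 0.9])   # note: 0.9 is a smoothed "real" label
print(np.allclose(sigmoid_xent_with_logits(x, z), naive_xent(x, z)))  # True
```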
For the real image logits, we'll use `d_logits_real` which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter `smooth`. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like `labels = tf.ones_like(tensor) * (1 - smooth)`
The discriminator loss for the fake data is similar. The logits are `d_logits_fake`, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses `d_logits_fake`, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
>**Exercise:** Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
```
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
```
## Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use `tf.trainable_variables()`. This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with `generator`. So, we just need to iterate through the list from `tf.trainable_variables()` and keep variables that start with `generator`. Each variable object has an attribute `name` which holds the name of the variable as a string (`var.name == 'weights_0'` for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with `discriminator`.
Then, in the optimizer we pass the variable lists to the `var_list` keyword argument of the `minimize` method. This tells the optimizer to only update the listed variables. Something like `tf.train.AdamOptimizer().minimize(loss, var_list=var_list)` will only train the variables in `var_list`.
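The name-prefix filtering works on plain strings, so it can be sketched without building a graph (the variable names below are made up for illustration):

```python
t_var_names = ['generator/dense/kernel:0', 'generator/dense/bias:0',
               'discriminator/dense/kernel:0', 'discriminator/dense/bias:0']

# Keep only the variables belonging to each sub-network
g_names = [name for name in t_var_names if name.startswith('generator')]
d_names = [name for name in t_var_names if name.startswith('discriminator')]
print(g_names)  # ['generator/dense/kernel:0', 'generator/dense/bias:0']
```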
>**Exercise:** Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using `AdamOptimizer`, create an optimizer for each network that updates the network variables separately.
```
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts by name prefix
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
```
## Training
```
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
```
## Training loss
Here we'll check out the training losses for the generator and discriminator.
```
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
```
## Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
```
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
```
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
```
_ = view_samples(-1, samples)
```
Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
```
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
```
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then it learns 5 and 3.
## Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
```
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
```
# Welcome to fastai
```
from fastai import *
from fastai.vision import *
from fastai.gen_doc.nbdoc import *
from fastai.core import *
from fastai.basic_train import *
```
The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research into deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.html#vision), [`text`](/text.html#text), [`tabular`](/tabular.html#tabular), and [`collab`](/collab.html#collab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)):
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit(1)
jekyll_note("""This documentation is all built from notebooks;
that means that you can try any of the code you see in any notebook yourself!
You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the
<a href="https://github.com/fastai/fastai">fastai</a> repo. For instance,
<a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a>
is the notebook source of what you're reading now.""")
```
## Installation and updating
To install or update fastai, we recommend `conda`:
```
conda install -c pytorch -c fastai fastai pytorch-nightly cuda92
```
For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md).
## Reading the docs
To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.
We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this:
### An example function
```
show_doc(rotate)
```
---
Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.
For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters.
## Module structure
### Imports
fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.
In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as:
```
from fastai import *
from fastai.vision import *
```
That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.html#vision), in this case), e.g. creating a [`DataBunch`](/basic_data.html#DataBunch) from an image folder and training a convolutional neural network (with [`create_cnn`](/vision.learner.html#create_cnn)), are also imported.
If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance:
```
Learner
```
### Dependencies
At the base of everything are the two modules [`core`](/core.html#core) and [`torch_core`](/torch_core.html#torch_core) (we're not including the `fastai.` prefix when naming modules in these docs). They define the basic functions we use in the library; [`core`](/core.html#core) only relies on general modules, whereas [`torch_core`](/torch_core.html#torch_core) requires pytorch. Most type-hinting shortcuts are defined there too (at least the ones that don't depend on fastai classes defined later). Nearly all modules below import [`torch_core`](/torch_core.html#torch_core).
Then, there are three modules directly on top of [`torch_core`](/torch_core.html#torch_core):
- [`data`](/vision.data.html#vision.data), which contains the class that will take a [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) or pytorch [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) to wrap it in a [`DeviceDataLoader`](/basic_data.html#DeviceDataLoader) (a class that sits on top of a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) and is in charge of putting the data on the right device as well as applying transforms such as normalization) and regroup them in a [`DataBunch`](/basic_data.html#DataBunch).
- [`layers`](/layers.html#layers), which contains basic functions to define custom layers or groups of layers
- [`metrics`](/metrics.html#metrics), which contains all the metrics
This takes care of the basics, then we regroup a model with some data in a [`Learner`](/basic_train.html#Learner) object to take care of training. More specifically:
- [`callback`](/callback.html#callback) (depends on [`data`](/vision.data.html#vision.data)) defines the basis of callbacks and the [`CallbackHandler`](/callback.html#CallbackHandler). Those are functions that will be called every step of the way of the training loop and can allow us to customize what is happening there;
- [`basic_train`](/basic_train.html#basic_train) (depends on [`callback`](/callback.html#callback)) defines [`Learner`](/basic_train.html#Learner) and [`Recorder`](/basic_train.html#Recorder) (which is a callback that records training stats) and has the training loop;
- [`callbacks`](/callbacks.html#callbacks) (depends on [`basic_train`](/basic_train.html#basic_train)) is a submodule defining various callbacks, such as for mixed precision training or 1cycle annealing;
- `learn` (depends on [`callbacks`](/callbacks.html#callbacks)) defines helper functions to invoke the callbacks more easily.
From [`data`](/vision.data.html#vision.data) we can split on one of the four main *applications*, each of which has its own module: [`vision`](/vision.html#vision), [`text`](/text.html#text), [`collab`](/collab.html#collab), or [`tabular`](/tabular.html#tabular). Each of those submodules is built in the same way with:
- a submodule named <code>transform</code> that handles the transformations of our data (data augmentation for computer vision, numericalizing and tokenizing for text and preprocessing for tabular)
- a submodule named <code>data</code> that contains the class that will create datasets specific to this application and the helper functions to create [`DataBunch`](/basic_data.html#DataBunch) objects.
- a submodule named <code>models</code> that contains the models specific to this application.
- optionally, a submodule named <code>learn</code> that will contain a [`Learner`](/basic_train.html#Learner) specific to the application.
Here is a graph of the key module dependencies:

```
print("hello world")
a = 10
print(a)
b = 10 * a
# jupyter will automatically print the last value in the block
# and by the way: this is how a comment looks.
b
if b > 50:
print("b is greater than 50")
```
Python can handle numbers of arbitrary length :)
What's actually $2^{2048}$?
```
2**2048
```
## define a function
```
def square(x):
return x * x
print(f"square(10) is {square(10)}")
# or as lambda statement
cubic = lambda x: x * x * x
print(f"cubic(10) is {cubic(10)}")
```
## loops and formatted strings
```
for i in range(1,10):
print(f"square({i}) = {square(i)}")
```
## Which numbers appear in both a and b?
```
a = [1,2,3,4,5]
b = [4,5,6,7,8]
result = []
for number in a:
if number in b:
result.append(number)
result
```
## and a more compact version with list comprehensions
```
[number for number in a if number in b]
```
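For larger lists, membership tests against a set are O(1) on average, so the same result can be computed with a set intersection:

```python
a = [1, 2, 3, 4, 5]
b = [4, 5, 6, 7, 8]

# Intersect as sets, then sort to restore a stable order
print(sorted(set(a) & set(b)))  # [4, 5]
```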
how does this work?
```
[i for i in range(1,10)]
[square(i) for i in range(1,10)]
# this will produce the same output
list(map(square,range(1,10)))
# map applies the function (square) to every element of a sequence (numbers 1 to 10)
```
and this works for strings too
```
[char for char in "hello world"]
```
## let's plot something
To plot a graph, we need to import matplotlib
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (12, 9)
data = [square(i) for i in range(1,100)]
l = plt.plot(data, 'ro')
plt.show()
cat = ["bored", "happy", "bored", "bored", "happy", "bored"]
dog = ["happy", "happy", "happy", "happy", "bored", "bored"]
activity = ["combing", "drinking", "feeding", "napping", "playing", "washing"]
fig, ax = plt.subplots()
ax.plot(activity, dog, label="dog")
ax.plot(activity, cat, label="cat")
ax.legend()
plt.show()
```
# Classes and Objects
Python supports object-oriented programming. Packing your code into classes will help you keep your code organized and maintainable, and it also improves reusability. You can read about classes in the [Python Docs](https://docs.python.org/3/tutorial/classes.html).
Ok, let's define a simple example class for a data loader. Since all deep learning models need data, data loaders are very common and you'll find them in various forms.
```
class SensorData:
"""A simple class that manages sensor data"""
def __init__(self, url):
# this is how a constructor in python looks
self.data_url = url # data url will be a member or class variable
def load_data(self):
print(f"Todo: load data from {self.data_url}")
# let's create an instance of our new class
someSensorData = SensorData("http://somewehere-over-the-rainbow.com")
someSensorData.load_data()
```
If you are familiar with object-oriented programming this should look familiar to you.
## Ok, let's add the code to load some data
Now we will extend the class to download some sensor data. The file is CSV formatted and contains a list of time, temperature and humidity values. This is what one line with one data point looks like:
```2018-10-03T11:28:35.325Z;23.0;17.0```
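Each line can be split on the `;` delimiter and converted field by field, which is what the class below automates:

```python
import datetime as dt

line = "2018-10-03T11:28:35.325Z;23.0;17.0"
time_str, temp_str, hum_str = line.split(';')

# Parse the ISO-like timestamp and convert the measurements to floats
timestamp = dt.datetime.strptime(time_str, "%Y-%m-%dT%H:%M:%S.%fZ")
temperature, humidity = float(temp_str), float(hum_str)
print(timestamp.year, temperature, humidity)  # 2018 23.0 17.0
```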
```
import csv
import urllib
class SensorData:
"""A simple class that loads sensor data"""
def __init__(self, url):
# this is how a constructor in python looks
self.data_url = url # dataUrl will be a member or class variable
def load_data(self):
response = urllib.request.urlopen(self.data_url) # create a http request
csv_lines = response.read().decode('utf-8').splitlines() # read utf-8 text
reader = csv.reader(csv_lines, delimiter=';') # read csv
self.data = [row for row in reader] # convert csv to array
print(f"{reader.line_num} lines loaded")
# let's create an instance of our new class
temp_hum_data = SensorData("https://www2.htw-dresden.de/~guhr/dist/sensor-data.csv")
temp_hum_data.load_data()
```
We added some code to load the data via http from a webserver. To download the data we imported `urllib.request`; this package handles the http communication for us. Since our data is stored in a csv file, we imported the `csv` package to read the data.
## Plot the data
Now that we've downloaded the data, we can plot it. The data property of our class contains a list with our
**time** - **temperature** - **humidity** values. And the first entry looks like this:
```
temp_hum_data.data[0]
type(temp_hum_data.data[0][0])
```
All three values are strings; in order to plot them we need to convert them into datetime and float values.
```
import csv
import urllib.request
import datetime as dt

class SensorData:
    """A simple class that loads sensor data"""

    def __init__(self, url):
        # this is how a constructor looks in Python
        self.data_url = url  # data_url will be a member (instance) variable

    def load_data(self):
        response = urllib.request.urlopen(self.data_url)  # create an HTTP request
        csv_lines = response.read().decode('utf-8').splitlines()  # read utf-8 text
        reader = csv.reader(csv_lines, delimiter=';')  # parse the csv
        data = [row for row in reader]  # convert csv rows to a list
        self.data = SensorData.parse_data(data)  # since parse_data is a static method, we call it via the class name
        print(f"{reader.line_num} lines loaded")

    @staticmethod
    def parse_data(data):  # a static method: no self parameter
        # this lambda converts a time string into a python datetime object
        parse_time = lambda time_string: dt.datetime.strptime(time_string, "%Y-%m-%dT%H:%M:%S.%fZ")
        data = [[parse_time(value[0]), float(value[1]), float(value[2])] for value in data]
        return data

# let's create an instance of our new class
temp_hum_data = SensorData("https://www2.htw-dresden.de/~guhr/dist/sensor-data.csv")
temp_hum_data.load_data()
temp_hum_data.data[0]
```
Our data is now formatted as native Python data types, so now we can plot it:
```
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# take the last 5000 values
plotdata = temp_hum_data.data[-5000:]
time = [value[0] for value in plotdata]
temp = [value[1] for value in plotdata]
hum = [value[2] for value in plotdata]
fig, ax = plt.subplots()
date_formatter = mdates.DateFormatter('%d %b - %H:%M')
ax.xaxis.set_major_formatter(date_formatter)
ax.plot(time, temp, label="Temperature in °C")
ax.plot(time, hum, label="Humidity in %")
ax.legend()
plt.xticks(rotation=45, ha='right')
plt.xlabel('Time')
plt.title(f'Temperature Graph for the last {len(plotdata)} data points')
plt.show()
```
### Of course, there are ready-made packages for loading CSV files:
```
import pandas as pd
data_frame = pd.read_table("https://www2.htw-dresden.de/~guhr/dist/sensor-data.csv", sep = ";", names=["Date", "Temperature", "Humidity"])
data_frame.head()
data_frame[:5000].plot()
```
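The `read_table` call above leaves the `Date` column as plain strings. One refinement is to let pandas parse the timestamps and use them as the index (`parse_dates` and `index_col` are standard `pandas.read_csv` options); the sketch below applies this to a couple of inline sample rows so it runs without the network:

```python
import io
import pandas as pd

# two sample rows in the same layout as the sensor file: time;temperature;humidity
sample = io.StringIO(
    "2018-10-03T11:28:35.325Z;23.0;17.0\n"
    "2018-10-03T11:29:35.325Z;23.5;16.5\n"
)
data_frame = pd.read_csv(
    sample,
    sep=";",
    names=["Date", "Temperature", "Humidity"],
    parse_dates=["Date"],  # convert the first column to datetime objects
    index_col="Date",      # use the timestamps as the index
)
print(data_frame.dtypes)
```

With the timestamps as the index, `data_frame.plot()` labels the x-axis with times instead of row numbers; the same keyword arguments work with the sensor-data URL.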
# Tutorials
If you are new to Python, here is a list of online tutorials that you might find useful:
*Learn the Basics*
- [Hello, World!](https://www.learnpython.org/en/Hello%2C_World!)
- [Variables and Types](https://www.learnpython.org/en/Variables_and_Types)
- [Lists](https://www.learnpython.org/en/Lists)
- [Basic Operators](https://www.learnpython.org/en/Basic_Operators)
- [String Formatting](https://www.learnpython.org/en/String_Formatting)
- [Basic String Operations](https://www.learnpython.org/en/Basic_String_Operations)
- [Conditions](https://www.learnpython.org/en/Conditions)
- [Loops](https://www.learnpython.org/en/Loops)
- [Functions](https://www.learnpython.org/en/Functions)
- [Classes and Objects](https://www.learnpython.org/en/Classes_and_Objects)
- [Dictionaries](https://www.learnpython.org/en/Dictionaries)
- [Modules and Packages](https://www.learnpython.org/en/Modules_and_Packages)
##################################################
## Assignment 1
## Problem 2(a)
## Samin Yeasar Arnob
## McGill ID: 260800927
## COMP 767- Reinforcement Learning Winter 2019
#################################################
# Grid World Environment
The original grid world environment was taken from: https://github.com/dennybritz/reinforcement-learning \
I modified it with:
(1) Terminal states at:
- Upper left (T1) with reward 1.0
- Upper right (T2) with reward 10.0
- Everywhere else reward 0.0
(2) A stochastic action condition:
- it takes an input "prob_p = P", the probability of taking the policy action
- and with probability (1 - P) takes a random action
Environment directory: gridworld.py
```
T1  o  o  T2
o   o  o  o
o   o  o  o
x   o  o  o
```
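Since the environment and solver scripts (`gridworld.py`, `Value_Iteration.py`) are not included in this notebook, here is a minimal self-contained sketch of value iteration on a small grid of this shape: terminals in the two top corners with rewards 1.0 and 10.0, and the chosen action executed with probability `p`, otherwise a uniformly random action. The grid size, discount factor and exact noise model are illustrative assumptions, not the assignment's actual settings:

```python
import numpy as np

def value_iteration(n=4, p=0.9, gamma=0.9, tol=1e-8):
    """Value iteration on an n x n grid with terminals at the two top corners.

    With probability p the chosen action is executed; with probability (1 - p)
    one of the four actions is taken uniformly at random, so the chosen action
    is effectively executed with probability p + (1 - p) / 4.
    """
    T1, T2 = (0, 0), (0, n - 1)                 # terminal cells
    rewards = {T1: 1.0, T2: 10.0}               # reward collected on entry
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def step(s, a):
        r, c = s
        dr, dc = moves[a]
        # moves off the grid leave the agent in place (clamped)
        return (min(max(r + dr, 0), n - 1), min(max(c + dc, 0), n - 1))

    V = np.zeros((n, n))
    while True:
        V_new = np.zeros_like(V)
        for r in range(n):
            for c in range(n):
                s = (r, c)
                if s in rewards:
                    continue  # terminal states keep value 0
                q = []
                for a in range(4):
                    val = 0.0
                    for b in range(4):
                        prob = (p if b == a else 0.0) + (1 - p) / 4
                        s2 = step(s, b)
                        val += prob * (rewards.get(s2, 0.0) + gamma * V[s2])
                    q.append(val)
                V_new[r, c] = max(q)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V = value_iteration()
print(np.round(V, 2))
```

As the reports below observe, the values are highest near the upper-right terminal, and they fall off with distance from it.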
# Report
I have written my findings from the experiments after each section.
# For grid size 5x5
```
#############################
#
# Modified Policy Iteration
#
##############################
"""
grid_size = modifies the grid shape (default = 5)
prob_p = probability of taking the policy action (default = 0.9)
stop_itr = finite number of backups to the value function before an improvement step (default = 10)
Final_Result = if True, prints the final policy and value function (default = False)
"""
!python Modified_Policy_Iteration.py --grid_size=5 --prob_p=0.9 --stop_itr=10 --Final_Result=True
!python Modified_Policy_Iteration.py --grid_size=5 --prob_p=0.7 --stop_itr=10 --Final_Result=True
```
##########################
# Report:
##########################
### Even though the policy isn't optimal, it finds its way to a terminal state because the value function around the terminal states is higher. When we choose a stochastic action probability of 0.7, the states around the upper-right terminal get higher values than the other states. For p = 0.7 and 0.9 the policy tries to end up at the upper-right terminal.
```
#####################
#
# Policy Iteration
#
#####################
!python Policy_Iteration.py --grid_size=5 --prob_p=0.9 --Final_Result=True
!python Policy_Iteration.py --grid_size=5 --prob_p=0.7 --Final_Result=True
```
#################
# Report:
#################
### Compared with the other algorithms, Policy Iteration gives the optimal policy when the probability of taking the stochastic action is 0.9. For p = 0.7, for some reason, the policy indicates terminating at the upper-left terminal, even though the upper-right terminal state has a higher value.
```
#########################
#
# Value Iteration
#
##########################
!python Value_Iteration.py --grid_size=5 --prob_p=0.9 --Final_Result=True
!python Value_Iteration.py --grid_size=5 --prob_p=0.7 --Final_Result=True
```
################
# Report
################
### When P = 0.7, both terminals seem equally likely from the value function, whereas for P = 0.9 the upper-right terminal state gets a higher value, which makes the policy more likely to terminate with the higher reward.
# Grid size 50X50
```
###########################
# Modified Policy Iteration
###########################
!python Modified_Policy_Iteration.py --grid_size=50 --prob_p=0.9 --stop_itr=10 --Final_Result=True
!python Modified_Policy_Iteration.py --grid_size=50 --prob_p=0.7 --stop_itr=10 --Final_Result=True
###################
# Policy Iteration
###################
!python Policy_Iteration.py --grid_size=50 --prob_p=0.9 --Final_Result=True
!python Policy_Iteration.py --grid_size=50 --prob_p=0.7 --Final_Result=True
###################
#
# Value Iteration
#
####################
!python Value_Iteration.py --grid_size=50 --prob_p=0.9 --Final_Result=True
!python Value_Iteration.py --grid_size=50 --prob_p=0.7 --Final_Result=True
```
######################
# Report:
#######################
### When we consider the 50x50 grid, the value function of the upper-right terminal state and the states around it is higher, and this persists across all experiments. However, since our grid world is now large, whenever the agent ends up near the upper-left corner it will terminate at the upper-left terminal, even though that corner's value function is low compared to the upper-right terminal.
```
import numpy as np
from sklearn.metrics import silhouette_score
from scipy.spatial.distance import euclidean
class Kmeans:
    '''
    K-means is a clustering algorithm that finds convex clusters.
    The user specifies the number of clusters a priori.'''

    def __init__(self, K=2, init='k++', random_state=42):
        self.K_ = K
        self.init_ = init
        self._seed = random_state
        self.centroid_init_ = None
        self.centroid_loc_ = None
        self._centroid_test = None
        self._nrows = None
        self._nfeatures = None
        self.count_ = None
        self.inertia_ = None
        self.silhouette_ = None

    def _k_plus_plus(self, X, k):
        '''k++ implementation for cluster initialization
        Input:
            X = numpy data matrix
            k = number of centroids
        Output:
            k++ selected centroid indices
        '''
        n_clust = 0
        idx = np.arange(len(X))
        while n_clust < k:
            if n_clust == 0:
                # initialize: the first centroid is chosen uniformly at random
                choice = np.random.choice(idx)
                cluster_idx = np.array([choice])
            else:
                distances = np.array([euclidean(X[choice], X[i]) for i in range(len(X))])
                for i, _ in enumerate(distances):
                    if i in cluster_idx:
                        distances[i] = 0  # already-chosen points get probability 0
                total_distance = np.sum(distances)
                prob = distances / total_distance
                choice = np.random.choice(idx, p=prob)
                if choice not in cluster_idx:
                    cluster_idx = np.append(cluster_idx, choice)
            n_clust = len(cluster_idx)
        return cluster_idx

    def _initialize_centroids(self, X, seed=True):
        '''Randomly initialize centroids.
        Input:
            X = numpy data matrix
        Output:
            K centroid locations
        '''
        if seed:
            np.random.seed(self._seed)
        self._nrows = X.shape[0]
        self._nfeatures = X.shape[1]
        assert self.init_ in ('random', 'k++', 'forgy'), "choose 'random', 'k++' or 'forgy' for init"
        if self.init_ == 'random':
            centroid_locs = [np.random.randint(low=np.min(X),
                                               high=np.max(X),
                                               size=self._nfeatures) for _ in range(self.K_)]
        elif self.init_ == 'k++':
            centroid_locs = X[self._k_plus_plus(X, self.K_)]
        elif self.init_ == 'forgy':
            centroid_locs = X[np.random.choice(self._nrows, replace=False, size=self.K_)]
        self.centroid_loc_ = centroid_locs
        self.centroid_init_ = np.array(centroid_locs).reshape(self.K_, -1)

    def _calc_distance(self, X):
        '''Calculate the distance between data points and centroids.
        Input:
            X = numpy data matrix
        Output:
            matrix of distance between each data point and each cluster
        '''
        return np.array([euclidean(X[i], self.centroid_loc_[j])
                         for i in range(self._nrows)
                         for j in range(self.K_)]).reshape(self._nrows, self.K_)

    def _update_cluster_loc(self, X):
        '''Update centroid locations for each iteration of fitting.
        Input:
            X = numpy data matrix
        Output:
            updated centroid location
        '''
        predictions = self.predict(X)
        idx = set(predictions)
        assert len(idx) == self.K_, "Bad initialization: use 'k++' or 'forgy' init"
        self.centroid_loc_ = np.array([np.mean(X[predictions == i], axis=0)
                                       for i in range(self.K_)]).reshape(len(idx), -1)

    def fit(self, X):
        '''Calculate centroid positions given training data.
        Input:
            X = numpy data matrix
        Output:
            fitted centroid locations
        '''
        self._initialize_centroids(X, seed=True)
        self.count_ = 0
        while True:
            self.count_ += 1
            self._centroid_test = self.centroid_loc_
            self._update_cluster_loc(X)
            if np.all(self._centroid_test == self.centroid_loc_):
                self._inertia(X)
                self._silhouette_score(X)
                break

    def predict(self, X):
        '''Assign data points to cluster number.
        Input:
            X = numpy data matrix
        Output:
            cluster ID
        '''
        return np.argmin(self._calc_distance(X), axis=1)

    def _inertia(self, X):
        '''Calculates the total inertia after fitting.'''
        labels = self.predict(X)
        self.inertia_ = np.sum([euclidean(X[labels == j][i], self.centroid_loc_[j])**2
                                for j in range(self.K_)
                                for i in range(X[labels == j].shape[0])])

    # Need to write this part from scratch
    def _silhouette_score(self, X):
        '''Calculates the silhouette score after fitting.'''
        self.silhouette_ = silhouette_score(X, self.predict(X))

# Extensions
# 1. Add multiple initializations to find better clusters
# 2. Create plot methods
```
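Extension 1 (multiple initializations) can be sketched independently of the class above. The sketch below uses scikit-learn's `KMeans` restricted to one initialization per run; the helper name `best_of_n_inits` is purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def best_of_n_inits(X, k, n_runs=10, seed=0):
    """Fit k-means n_runs times with different random initializations
    and keep the model with the lowest inertia (within-cluster SSE)."""
    rng = np.random.RandomState(seed)
    best = None
    for _ in range(n_runs):
        km = KMeans(n_clusters=k, init='random', n_init=1,
                    random_state=rng.randint(2**31 - 1))
        km.fit(X)
        if best is None or km.inertia_ < best.inertia_:
            best = km
    return best

X = np.random.RandomState(42).rand(100, 2)
model = best_of_n_inits(X, k=3)
print(model.inertia_)
```

scikit-learn's own `n_init` parameter does exactly this internally; rolling it by hand just makes the "keep the lowest inertia" rule explicit.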
---
```
X = np.random.randint(0,30,90).reshape(30,3).astype('float')
X[10,:]
km = Kmeans(K=3, random_state=33, init='forgy')
km.fit(X)
km.centroid_init_
km.centroid_loc_
km.predict(X)
km.count_
km.inertia_
km.silhouette_
d = [(i,j, euclidean(km.centroid_loc_[i], km.centroid_loc_[j]))
for i in range(km.K_) for j in range(km.K_) if i!=j]
d
for i, tup in enumerate(d):
    for j in range(1, km.K_):
        print(i, j, tup)
euclidean(km.centroid_loc_[2], km.centroid_loc_[0])
```
---
```
km2 = Kmeans(random_state=33, init='k++')
km2.fit(X)
km2.centroid_init_
km2.centroid_loc_
km2.predict(X)
km2.count_
km2.inertia_
km2.silhouette_
```
---
```
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2, init='k-means++')
kmeans.fit(X)
kmeans.cluster_centers_
kmeans.predict(X)
kmeans.n_iter_
kmeans.inertia_
silhouette_score(X, kmeans.predict(X))
```
```
from __future__ import division
import numpy as np
import epgcpmg as epg
import time
import matplotlib.pyplot as plt
%matplotlib inline
def numerical_gradient(myfun, myparams, e=1e-5):
    # central finite difference: (f(x + e) - f(x - e)) / (2 e) per parameter
    initial_params = myparams.copy()
    num_grad = np.zeros(initial_params.shape)
    perturb = np.zeros(initial_params.shape)
    for p in range(len(initial_params)):
        perturb[p] = e
        loss2 = myfun(myparams + perturb)
        loss1 = myfun(myparams - perturb)
        num_grad[p] = (loss2 - loss1) / (2 * e)
        perturb[p] = 0.
    return num_grad
def read_angles(fliptable):
    f = open(fliptable, 'r')
    angles = []
    for line in f.readlines():
        angles.append(float(line))
    f.close()
    return np.array(angles)
T1 = 1000
T2 = 100
TE = 5
B1hom = .8
T = 81
# 0.26947104
# 0.52563555
# 0.76898634
echo_times = np.arange(TE, TE*(T+1), TE)
angles_rad = 120 * np.ones((T,)) * np.pi/180
angles_rad = np.pi/180 * read_angles('flipangles.txt.814192544')[:T]
z1 = epg.FSE_signal_ex(np.pi/2, angles_rad, TE, T1, T2, B1hom)
tic = time.time()
w1 = epg.FSE_signal_ex_prime_T1(np.pi/2, angles_rad, TE, T1, T2, B1hom)
w2 = epg.FSE_signal_ex_prime_T2(np.pi/2, angles_rad, TE, T1, T2, B1hom)
wb1 = epg.FSE_signal_ex_prime_B1(np.pi/2, angles_rad, TE, T1, T2, B1hom)
toc = time.time()
print('prop grad time:', toc - tic)
# w2_th = echo_times / (T2**2) * exp(-echo_times/T2)
tic = time.time()
w1_num = np.zeros((T,))
w2_num = np.zeros((T,))
wb1_num = np.zeros((T,))
for i in range(T):
    w2_num[i] = numerical_gradient(lambda x: epg.FSE_signal_ex(np.pi/2, angles_rad, TE, T1, x, B1hom)[i], np.array([T2]))
    w1_num[i] = numerical_gradient(lambda x: epg.FSE_signal_ex(np.pi/2, angles_rad, TE, x, T2, B1hom)[i], np.array([T1]))
    wb1_num[i] = numerical_gradient(lambda x: epg.FSE_signal_ex(np.pi/2, angles_rad, TE, T1, T2, x)[i], np.array([B1hom]))
toc = time.time()
print('num grad time:', toc - tic)

print('T1 numerical vs prop gradient error:', np.linalg.norm(w1_num - w1.T))
print('T2 numerical vs prop gradient error:', np.linalg.norm(w2_num - w2.T))
print('B1 numerical vs prop gradient error:', np.linalg.norm(wb1_num - wb1.T))
plt.figure()
plt.plot(echo_times, z1)
plt.title('FSE signal')
plt.figure()
plt.plot(echo_times, w2, echo_times, w2_num, 'o--')
plt.title('Derivative w.r.t. T2')
plt.figure()
plt.plot(echo_times, w1, echo_times, w1_num, 'o--')
plt.title('Derivative w.r.t. T1')
plt.figure()
plt.plot(echo_times, wb1, echo_times, wb1_num, 'o--')
plt.title('Derivative w.r.t. B1')
```
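Since the central-difference helper above depends only on NumPy, it can be sanity-checked against a function with a known analytic derivative. This standalone sketch repeats the definition so it runs on its own; the test function mirrors the commented-out `w2_th` check in the notebook above:

```python
import numpy as np

def numerical_gradient(myfun, myparams, e=1e-5):
    """Central finite difference: (f(x + e) - f(x - e)) / (2 e) per parameter."""
    num_grad = np.zeros(myparams.shape)
    perturb = np.zeros(myparams.shape)
    for p in range(len(myparams)):
        perturb[p] = e
        num_grad[p] = (myfun(myparams + perturb) - myfun(myparams - perturb)) / (2 * e)
        perturb[p] = 0.
    return num_grad

# d/dT2 of exp(-t/T2) is (t / T2**2) * exp(-t/T2) -- the same form as the
# commented-out w2_th check above
t, T2 = 50.0, 100.0
f = lambda x: np.exp(-t / x[0])
num = numerical_gradient(f, np.array([T2]))
analytic = t / T2**2 * np.exp(-t / T2)
print(num[0], analytic)
```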