The signal is defined as 100 Hz SSSR | signal = np.zeros((32,200,int(sfreq*0.2)))
xt = np.linspace(0, 0.2, int(sfreq*0.2))
for iChannel in range(32):
    for iTrial in range(200):
        signal[iChannel,iTrial,:] = np.sin(xt*100*2*np.pi+phase_list[iChannel])
# plot first two channels to show the phase differences
plt.plot(xt,signal[0:2,0,:].transpose()) | <string>:6: DeprecationWarning: object of type <class 'float'> cannot be safely interpreted as an integer.
| MIT | mtcplv_discrepancy.ipynb | HaoLu-a/cPCA-erratum |
The signal to noise ratio (SNR) in the simulated data was set to -40 dB for all channels | std = 10**(40/20)*np.sqrt((signal**2).mean())
noise = np.random.normal(0,std,signal.shape) | _____no_output_____ | MIT | mtcplv_discrepancy.ipynb | HaoLu-a/cPCA-erratum |
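As a quick sanity check on the -40 dB figure (a sketch with a stand-in signal, not the simulation above): a noise standard deviation of $10^{40/20} = 100$ times the signal RMS corresponds to $20\log_{10}(1/100) = -40$ dB:

```python
import numpy as np

# Stand-in periodic signal (any waveform works for this check)
signal = np.sin(np.linspace(0, 2 * np.pi, 1000))
signal_rms = np.sqrt((signal ** 2).mean())

# Noise std set to 10**(40/20) = 100x the signal RMS, as in the cell above
noise_std = 10 ** (40 / 20) * signal_rms
snr_db = 20 * np.log10(signal_rms / noise_std)
print(round(snr_db, 6))  # -40.0
```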
The simulated data was analyzed through the code from the function anlffr.spectral.mtcplv | params = dict(Fs = sfreq, tapers = [1,1], fpass = [80, 120], itc = 0, pad = 1)
x=signal + noise
#codes from the dpss tool of anlffr to make sure the multitaper part is consistent
if(len(x.shape) == 3):
    timedim = 2
    trialdim = 1
    ntrials = x.shape[trialdim]
    nchans = x.shape[0]
nfft, f, fInd = spectral._get_freq_vector(x, params, timedim)
ntaps = params['tapers'][1]
TW = params['tapers'][0]
w, conc = dpss.dpss_windows(x.shape[timedim], TW, ntaps)
# the original version of mtcplv
plv = np.zeros((ntaps, len(fInd)))
for k, tap in enumerate(w):
    xw = np.fft.rfft(tap * x, n=nfft, axis=timedim)
    if params['itc']:
        C = (xw.mean(axis=trialdim) /
             (abs(xw).mean(axis=trialdim))).squeeze()
    else:
        C = (xw / abs(xw)).mean(axis=trialdim).squeeze()
    for fi in np.arange(0, C.shape[1]):
        Csd = np.outer(C[:, fi], C[:, fi].conj())
        vals = linalg.eigh(Csd, eigvals_only=True)
        plv[k, fi] = vals[-1] / nchans
# Average over tapers and squeeze to pretty shapes
plv = (plv.mean(axis=0)).squeeze()
plv = plv[fInd] | _____no_output_____ | MIT | mtcplv_discrepancy.ipynb | HaoLu-a/cPCA-erratum |
The mtcplv did capture the 100 Hz component | plt.plot(f,plv)
plt.xlabel('frequency')
plt.ylabel('output of mtcPLV') | _____no_output_____ | MIT | mtcplv_discrepancy.ipynb | HaoLu-a/cPCA-erratum |
However, the output of mtcplv perfectly overlaps with the average of squared single-channel PLV stored in matrix C | plt.plot(f,abs(C**2).mean(0)[fInd], label='average of square', alpha=0.5)
plt.plot(f,plv,label = 'mtcplv', alpha = 0.5)
plt.plot(f,abs(C**2).mean(0)[fInd] - plv, label='difference')
plt.legend()
plt.xlabel('frequency')
plt.ylabel('PLV') | _____no_output_____ | MIT | mtcplv_discrepancy.ipynb | HaoLu-a/cPCA-erratum |
We then check the eigenvalue decomposition around the 100 Hz peak and there is only one non-zero eigenvalue, as expected | fi = np.argmax(plv)+np.argwhere(fInd==True).min()
Csd = np.outer(C[:, fi], C[:, fi].conj())
vals = linalg.eigh(Csd, eigvals_only=True)
plt.bar(np.arange(32),vals[::-1])
plt.xlabel('Principal components')
plt.ylabel('Eigenvalues') | _____no_output_____ | MIT | mtcplv_discrepancy.ipynb | HaoLu-a/cPCA-erratum |
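The single non-zero eigenvalue is expected: `Csd` is the outer product $c c^H$ of a single vector, a rank-1 Hermitian matrix, so it has exactly one non-zero eigenvalue, equal to $\lVert c \rVert^2$. A minimal sketch with a stand-in vector (the vector here is random, not the actual `C[:, fi]`):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
c = rng.normal(size=32) + 1j * rng.normal(size=32)  # stand-in for C[:, fi]
Csd = np.outer(c, c.conj())  # rank-1 Hermitian matrix

vals = linalg.eigh(Csd, eigvals_only=True)
# Only the largest eigenvalue is (numerically) non-zero,
# and it equals ||c||^2 = trace(Csd)
print(np.allclose(vals[:-1], 0), np.isclose(vals[-1], (abs(c) ** 2).sum()))  # True True
```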
Statistics IntroductionIn this chapter, you'll learn about how to do statistics with code. We already saw some statistics in the chapter on probability and random processes: here we'll focus on computing basic statistics and using statistical tests. We'll make use of the excellent [*pingouin*](https://pingouin-stats.org/index.html) statistics package and its documentation for many of the examples and methods in this chapter {cite}`vallat2018pingouin`. This chapter also draws on Open Intro Statistics {cite}`diez2012openintro`. Notation and basic definitionsGreek letters, like $\beta$, are the truth and represent parameters. Modified Greek letters are an estimate of the truth, for example $\hat{\beta}$. Sometimes Greek letters will stand in for vectors of parameters. Most of the time, upper case Latin characters such as $X$ will represent random variables (which could have more than one dimension). Lower case letters from the Latin alphabet denote realised data, for instance $x$ (which again could be multi-dimensional). Modified Latin alphabet letters denote computations performed on data, for instance $\bar{x} = \frac{1}{n} \displaystyle\sum_{i} x_i$ where $n$ is number of samples. Parameters are given following a vertical bar, for example if $f(x|\mu, \sigma)$ is a probability density function, the vertical line indicates that its parameters are $\mu$ and $\sigma$. The set of distributions with densities $f_\theta(x)$, $\theta \in \Theta$ is called a parametric family, eg there is a family of different distributions that are parametrised by $\theta$.A **statistic** $T(x)$ is a function of the data $x=(x_1, \dots, x_n)$. An **estimator** of a parameter $\theta$ is a function $T=T(x)$ which is used to estimate $\theta$ based on observations of data. $T$ is an unbiased estimator if $\mathbb{E}(T) = \theta$.If $X$ has PDF $f(x|\theta)$ then, given the observed value $x$ of $X$, the **likelihood** of $\theta$ is defined by $\text{lik}(\theta) = f(x | \theta)$. 
For independent and identically distributed observed values, $\text{lik}(\theta) = f(x_1, \dots, x_n| \theta) = \Pi_{i=1}^n f(x_i | \theta)$. The $\hat{\theta}$ such that this function attains its maximum value is the **maximum likelihood estimator (MLE)** of $\theta$. Given an MLE $\hat{\theta}$ of $\theta$, $\hat{\theta}$ is said to be **consistent** if $\mathbb{P}(|\hat{\theta} - \theta| > \epsilon) \rightarrow 0$ as $n\rightarrow \infty$. An estimator *W* is **efficient** relative to another estimator $V$ if $\text{Var}(W) < \text{Var}(V)$. Let $\alpha$ be the 'significance level' of a test statistic $T$. Let $\gamma(X)$ and $\delta(X)$ be two statistics satisfying $\gamma(X) < \delta(X)$ for all $X$. If, on observing $X = x$, the inference can be made that $\gamma(x) \leq \theta \leq \delta(x)$, then $[\gamma(x), \delta(x)]$ is an **interval estimate** and $[\gamma(X), \delta(X)]$ is an **interval estimator**. The random interval (random because the *endpoints* are random variables) $[\gamma(X), \delta(X)]$ is called a $100\cdot(1-\alpha)\%$ **confidence interval** for $\theta$. Of course, there is a true $\theta$, so either it is in this interval or it is not. But if the confidence interval were constructed many times over using samples, $\theta$ would be contained within it $100\cdot(1-\alpha)\%$ of the time. A **hypothesis test** is a conjecture about the distribution of one or more random variables, and a test of a hypothesis is a procedure for deciding whether or not to reject that conjecture. The **null hypothesis**, $H_0$, is only ever conservatively rejected and represents the default position. The **alternative hypothesis**, $H_1$, is the conclusion contrary to this. A type I error occurs when $H_0$ is rejected when it is true, ie when a *true* null hypothesis is rejected. 
Mistakenly failing to reject a false null hypothesis is called a type II error. In the simplest situations, the upper bound on the probability of a type I error is called the size or **significance level** of the *test*. The **p-value** of a random variable $X$ is the smallest value of the significance level (denoted $\alpha$) for which $H_0$ would be rejected on the basis of seeing $x$. The p-value is sometimes called the significance level of $X$. The probability that a test will reject the null when it is false is called the power of the test. The probability of a type II error is equal to 1 minus the power of the test. Recall that there are two types of statistics out there: parametrised, eg by $\theta$, and non-parametrised. The latter are often distribution free (ie don't involve a PDF) or don't require parameters to be specified. Imports First we need to import the packages we'll be using | import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import pandas as pd
import pingouin as pg
import statsmodels.formula.api as smf
from numpy.random import Generator, PCG64
# Set seed for random numbers
seed_for_prng = 78557
prng = Generator(PCG64(seed_for_prng)) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
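As an aside, the MLE defined in the notation section can be illustrated numerically: for i.i.d. normal data with known variance, the likelihood is maximised at the sample mean. A minimal sketch (the data, seed, and optimiser settings here are illustrative choices, not from the text):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(42)
x = rng.normal(loc=3.0, scale=1.0, size=500)

# Negative log-likelihood of N(mu, 1) for the observed data
def neg_log_lik(mu):
    return -stats.norm.logpdf(x, loc=mu, scale=1.0).sum()

# The maximiser of the likelihood coincides with the sample mean
res = optimize.minimize_scalar(neg_log_lik, bounds=(0, 10), method='bounded')
print(np.isclose(res.x, x.mean(), atol=1e-3))  # True
```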
Basic statisticsLet's start with computing the simplest statistics you can think of using some synthetic data. Many of the functions have lots of extra options that we won't explore here (like weights or normalisation); remember that you can see these using the `help()` method. We'll generate a vector with 100 entries: | data = np.array(range(100))
data
from myst_nb import glue
import sympy
import warnings
warnings.filterwarnings("ignore")
dict_fns = {'mean': np.mean(data),
            'std': np.std(data),
            'mode': stats.mode([0, 1, 2, 3, 3, 3, 5])[0][0],
            'median': np.median(data)}
for name, eval_fn in dict_fns.items():
    glue(name, f'{eval_fn:.1f}')
# Set max rows displayed for readability
pd.set_option('display.max_rows', 6)
# Plot settings
plt.style.use('plot_style.txt') | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Okay, let's see how some basic statistics are computed. The mean is `np.mean(data)=` {glue:}`mean`, the standard deviation is `np.std(data)=` {glue:}`std`, and the median is given by `np.median(data)=` {glue:}`median`. The mode is given by `stats.mode([0, 1, 2, 3, 3, 3, 5])[0]=` {glue:}`mode` (access the counts using `stats.mode(...)[1]`).Less famous quantiles than the median are given by, for example for $q=0.25$, | np.quantile(data, 0.25) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
As with **pandas**, **numpy** and **scipy** work on scalars, vectors, matrices, and tensors: you just need to specify the axis that you'd like to apply a function to: | data = np.fromfunction(lambda i, j: i + j, (3, 6), dtype=int)
data
np.mean(data, axis=0) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Remember that, for discrete data points, the $k$th (unnormalised) moment is$$\hat{m}_k = \frac{1}{n}\displaystyle\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^k$$To compute this use scipy's `stats.moment(a, moment=1)`. For instance for the kurtosis ($k=4$), it's | stats.moment(data, moment=4, axis=1) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Covariances are found using `np.cov`. | np.cov(np.array([[0, 1, 2], [2, 1, 0]])) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Note that, as expected, the $C_{01}$ term is -1 as the vectors are anti-correlated. Parametric testsReminder: parametric tests assume that data are effectively drawn from a probability distribution that can be described with fixed parameters. One-sample t-testThe one-sample t-test tells us whether a given parameter for the mean, i.e. a suspected $\mu$, is likely to be consistent with the sample mean. The null hypothesis is that $\mu = \bar{x}$. Let's see an example using the default `tail='two-sided'` option. Imagine we have data on the number of hours people spend working each day and we want to test the (alternative) hypothesis that $\bar{x}$ is not $\mu=$8 hours: | x = [8.5, 5.4, 6.8, 9.6, 4.2, 7.2, 8.8, 8.1]
pg.ttest(x, 8).round(2) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
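As a cross-check, the T-statistic in that table can be reproduced directly from the definition $T = \sqrt{n}(\bar{x}-\mu)/\hat{\sigma}$, with $\hat{\sigma}$ the ddof=1 sample standard deviation; a quick sketch against scipy (not pingouin's internals):

```python
import numpy as np
from scipy import stats

x = np.array([8.5, 5.4, 6.8, 9.6, 4.2, 7.2, 8.8, 8.1])
mu = 8

# T = sqrt(n) * (xbar - mu) / sigma_hat, with sigma_hat the ddof=1 sample std
t_manual = np.sqrt(len(x)) * (x.mean() - mu) / x.std(ddof=1)
t_scipy, p_scipy = stats.ttest_1samp(x, mu)
print(np.isclose(t_manual, t_scipy))  # True
```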
(The returned object is a **pandas** dataframe.) We only have 8 data points, and so that is a great big confidence interval! It's worth remembering what a t-statistic and t-test really are. In this case, the statistic that is constructed to test whether the sample mean is different from a known parameter $\mu$ is$$T = \frac{\sqrt{n}(\bar{x}-\mu)}{\hat{\sigma}} \thicksim t_{n-1}$$where $t_{n-1}$ is the student's t-distribution and $n-1$ is the number of degrees of freedom. The $100\cdot(1-\alpha)\%$ test interval in this case is given by$$1 - \alpha = \mathbb{P}\left(-t_{n-1, \alpha/2} \leq \frac{\sqrt{n}(\bar{x} - \mu)}{\hat{\sigma}} \leq t_{n-1,\alpha/2}\right)$$where we define $t_{n-1, \alpha/2}$ such that $\mathbb{P}(T > t_{n-1, \alpha/2}) = \alpha/2$. For $\alpha=0.05$, implying confidence intervals of 95%, this looks like: | import scipy.stats as st
def plot_t_stat(x, mu):
    T = np.linspace(-7, 7, 500)
    pdf_vals = st.t.pdf(T, len(x)-1)
    sigma_hat = np.sqrt(np.sum((x - np.mean(x))**2)/(len(x)-1))
    actual_T_stat = (np.sqrt(len(x))*(np.mean(x) - mu))/sigma_hat
    alpha = 0.05
    T_alpha_over_2 = st.t.ppf(1.0 - alpha/2, len(x)-1)
    interval_T = T[((T > -T_alpha_over_2) & (T < T_alpha_over_2))]
    interval_y = pdf_vals[((T > -T_alpha_over_2) & (T < T_alpha_over_2))]
    fig, ax = plt.subplots()
    ax.plot(T, pdf_vals, label=f'Student t: dof={len(x)-1}', zorder=2)
    ax.fill_between(interval_T, 0, interval_y, alpha=0.2, label=r'95% interval', zorder=1)
    ax.plot(actual_T_stat, st.t.pdf(actual_T_stat, len(x)-1), 'o', ms=15,
            label=r'$\sqrt{n}(\bar{x} - \mu)/\hat{\sigma}$',
            color='orchid', zorder=4)
    ax.vlines(actual_T_stat, 0, st.t.pdf(actual_T_stat, len(x)-1), color='orchid', zorder=3)
    ax.set_xlabel('Value of statistic T')
    ax.set_ylabel('PDF')
    ax.set_xlim(-7, 7)
    ax.set_ylim(0., 0.4)
    ax.legend(frameon=False)
    plt.show()
mu = 8
plot_t_stat(x, mu) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
In this case, we cannot reject the null hypothesis. You can see why from the plot; the test statistic we have constructed lies within the interval where we cannot reject the null hypothesis. $\bar{x}-\mu$ is close enough to zero that we cannot rule out $\mu=8$. (You can also see from the plot why this is a two-tailed test: we don't care if $\bar{x}$ is greater or less than $\mu$, just that it's different--and so the test statistic could appear in either tail of the distribution for us to accept $H_1$.) We fail to reject the null here, but what if there were many more data points? Let's try adding some generated data (pretend it is from making extra observations). | # 'Observe' extra data
extra_data = prng.uniform(5.5, 8.5, size=(30))
# Add it in to existing vector
x_prime = np.concatenate((np.array(x), extra_data), axis=None)
# Run t-test
pg.ttest(x_prime, 8).round(2) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Okay, what happened? Our extra observations have seen the confidence interval shrink considerably, and the p-value is effectively 0. There's a large negative t-statistic too. Unsurprisingly, as we chose a uniform distribution that only just included 8 but was centred on $(5.5+8.5)/2=7$ *and* we had more points, the test now rejects the null hypothesis that $\mu=8$. Because the alternative hypothesis is just $\mu\neq8$, and these tests are conservative, we haven't got an estimate of what the mean actually is; we just know that our test rejects that it's $8$. We can see this in a new version of the chart that uses the extra data: | plot_t_stat(x_prime, mu)
Now our test statistic is safely outside the interval. Connection to linear regressionNote that testing if $\mu\neq0$ is equivalent to having the alternative hypothesis that a single, non-zero scalar value is a good expected value for $x$, i.e. that $\mathbb{E}(x) \neq 0$. Which may sound familiar if you've run **linear regression** and, indeed, this t-test has an equivalent linear model! It's just regressing $X$ on a constant--a single, non-zero scalar value. In general, t-tests appear in linear regression to test whether any coefficient $\beta \neq 0$. We can see this connection by running a hypothesis test of whether the sample mean is not zero. Note the confidence interval, t-statistic, and p-value. | pg.ttest(x, 0).round(3) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
And, as an alternative, regressing x on a constant, again noting the interval, t-stat, and p-value: | import statsmodels.formula.api as smf
df = pd.DataFrame(x, columns=['x'])
res = smf.ols(formula='x ~ 1', data=df).fit()
# Show only the info relevant to the intercept (there are no other coefficients)
print(res.summary().tables[1]) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Many tests have an equivalent linear model. Other information provided by **Pingouin** testsWe've covered the degrees of freedom, the T statistic, the p-value, and the confidence interval. So what's all that other gunk in our t-test? Cohen's d is a measure of whether the difference being measured in our test is large or not (this is important; you can have statistically significant differences that are so small as to be inconsequential). Cohen suggested that $d = 0.2$ be considered a 'small' effect size, 0.5 represents a 'medium' effect size and 0.8 a 'large' effect size. BF10 represents the Bayes factor, the ratio (given the data) of the likelihood of the alternative hypothesis relative to the null hypothesis. Values greater than unity therefore favour the alternative hypothesis. Finally, power is the achieved power of the test, which is $1 - \mathbb{P}(\text{type II error})$. A common default to have in mind is a power greater than 0.8. Two-sample t-testThe two-sample t-test is used to determine if two population means are equal (with the null being that they *are* equal). Let's look at an example with synthetic data of equal length, which means we can use the *paired* version of this. We'll imagine we are looking at an intervention with a pre- and post- dataset. | pre = [5.5, 2.4, 6.8, 9.6, 4.2, 5.9]
post = [6.4, 3.4, 6.4, 11., 4.8, 6.2]
pg.ttest(pre, post, paired=True, tail='two-sided').round(2) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
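As an aside, a paired t-test is equivalent to a one-sample t-test of the within-pair differences against zero; a quick check with scipy:

```python
import numpy as np
from scipy import stats

pre = np.array([5.5, 2.4, 6.8, 9.6, 4.2, 5.9])
post = np.array([6.4, 3.4, 6.4, 11.0, 4.8, 6.2])

t_rel, p_rel = stats.ttest_rel(pre, post)            # paired t-test
t_1samp, p_1samp = stats.ttest_1samp(pre - post, 0)  # same test on the differences
print(np.isclose(t_rel, t_1samp), np.isclose(p_rel, p_1samp))  # True True
```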
In this case, we cannot reject the null hypothesis that the means are the same pre- and post-intervention. Pearson correlationThe Pearson correlation coefficient measures the linear relationship between two datasets. Strictly speaking, it requires that each dataset be normally distributed. | mean, cov = [4, 6], [(1, .5), (.5, 1)]
x, y = prng.multivariate_normal(mean, cov, 30).T
# Compute Pearson correlation
pg.corr(x, y).round(3) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Welch's t-testIn the case where you have two samples with unequal variances (or, really, unequal sample sizes too), Welch's t-test is appropriate. With `correction='true'`, it assumes that variances are not equal. | x = prng.normal(loc=7, size=20)
y = prng.normal(loc=6.5, size=15)
pg.ttest(x, y, correction=True) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
One-way ANOVAAnalysis of variance (ANOVA) is a technique for testing hypotheses about means, for example testing the equality of the means of $k>2$ groups. The model would be$$X_{ij} = \mu_i + \epsilon_{ij} \quad j=1, \dots, n_i \quad i=1, \dots, k.$$so that the $i$th group has $n_i$ observations. The null hypothesis of one-way ANOVA is that $H_0: \mu_1 = \mu_2 = \dots = \mu_k$, with the alternative hypothesis that this is *not* true. | df = pg.read_dataset('mixed_anova')
df.head()
# Run the ANOVA
pg.anova(data=df, dv='Scores', between='Group', detailed=True) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Multiple pairwise t-testsThere's a problem with running multiple t-tests: if you run enough of them, something is bound to come up as significant! As such, some *post-hoc* adjustments exist that correct for the fact that multiple tests are occurring simultaneously. In the example below, multiple pairwise comparisons are made between the scores by time group. There is a corrected p-value, `p-corr`, computed using the Benjamini/Hochberg FDR correction. | pg.pairwise_ttests(data=df, dv='Scores', within='Time', subject='Subject',
parametric=True, padjust='fdr_bh', effsize='hedges').round(3) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
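For intuition, the Benjamini/Hochberg step-up adjustment behind `p-corr` can be sketched in a few lines of numpy (an illustration, not pingouin's implementation; the function name `bh_adjust` is ours):

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini/Hochberg FDR-adjusted p-values (step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    # Scale the i-th smallest p-value by n/i
    ranked = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p-value down, cap at 1
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1].clip(max=1)
    out = np.empty(n)
    out[order] = adjusted
    return out

print(bh_adjust([0.01, 0.02, 0.03, 0.04]))  # all adjusted to 0.04
```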
One-way ANCOVAAnalysis of covariance (ANCOVA) is a general linear model which blends ANOVA and regression. ANCOVA evaluates whether the means of a dependent variable (dv) are equal across levels of a categorical independent variable (between) often called a treatment, while statistically controlling for the effects of other continuous variables that are not of primary interest, known as covariates or nuisance variables (covar). | df = pg.read_dataset('ancova')
df.head()
pg.ancova(data=df, dv='Scores', covar='Income', between='Method') | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Power calculationsOften, it's quite useful to know what sample size is needed to avoid certain types of testing errors. **Pingouin** offers ways to compute effect sizes and test powers to help with these questions. As an example, let's assume we have a new drug (`x`) and an old drug (`y`) that are both intended to reduce blood pressure. The standard deviation of the reduction in blood pressure of those receiving the old drug is 12 units. The null hypothesis is that the new drug is no more effective than the old drug. But it will only be worth switching production to the new drug if it reduces blood pressure by more than 3 units versus the old drug. In this case, the effect size of interest is 3 units. Let's assume for a moment that the true difference is 3 units and we want to perform a test with $\alpha=0.05$. The problem is that, for small differences in the effect, the distribution of effects under the null and the distribution of effects under the alternative have a great deal of overlap. So the chances of making a Type II error - accepting the null hypothesis when it is actually false - are quite high. Let's say we'd ideally have at most a 20% chance of making a Type II error: what sample size do we need? We can compute this, but we need an extra piece of information first: a normalised version of the effect size called Cohen's $d$. We need to transform the difference in means to compute this. For independent samples, $d$ is:$$ d = \frac{\overline{X} - \overline{Y}}{\sqrt{\frac{(n_{1} - 1)\sigma_{1}^{2} + (n_{2} - 1)\sigma_{2}^{2}}{n_1 + n_2 - 2}}}$$(If you have real data samples, you can compute this using `pg.compute_effsize`.) For this case, $d$ is $-3/12 = -1/4$ if we assume the standard deviations are the same across the old (`y`) and new (`x`) drugs. So we will plug that $d$ in and look at a range of possible sample sizes along with a standard value for $\alpha$ of 0.05. In the below, `tail='less'` tests the alternative that `x` has a smaller mean than `y`. 
| cohen_d = -0.25 # Fixed effect size
sample_size_array = np.arange(1, 500, 50) # Incrementing sample size
# Compute the achieved power
pwr = pg.power_ttest(d=cohen_d, n=sample_size_array, alpha=0.05,
contrast='two-samples', tail='less')
fig, ax = plt.subplots()
ax.plot(sample_size_array, pwr, 'ko-.')
ax.axhline(0.8, color='r', ls=':')
ax.set_xlabel('Sample size')
ax.set_ylabel('Power (1 - type II error)')
ax.set_title('Achieved power of a T-test')
plt.show() | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
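To connect this back to the pooled-standard-deviation formula for $d$ above: a minimal implementation (with real samples you could instead use `pg.compute_effsize`), applied to synthetic groups built to mimic the drug example — means 3 units apart with a common standard deviation of 12, as in the text; the group sizes and seed are illustrative:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled std."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Synthetic groups mimicking the drug example: means 3 units apart, common std 12
rng = np.random.default_rng(1)
new_drug = rng.normal(loc=117, scale=12, size=5000)
old_drug = rng.normal(loc=120, scale=12, size=5000)
print(cohens_d(new_drug, old_drug))  # should be close to -3/12 = -0.25
```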
From this, we can see we need a sample size of at least 200 in order to have a power of 0.8. The `pg.power_ttest` function takes any three of the four of `d`, `n`, `power`, and `alpha` (ie leave one of these out), and then returns what the missing parameter should be. We passed in `d`, `n`, and `alpha`, and so the `power` was returned. Non-parametric testsReminder: non-parametric tests do not make any assumptions about the distribution from which data are drawn or that it can be described by fixed parameters. Wilcoxon Signed-rank TestThis tests the null hypothesis that two related paired samples come from the same distribution. It is the non-parametric equivalent of the paired t-test. | x = [20, 22, 19, 20, 22, 18, 24, 20, 19, 24, 26, 13]
y = [38, 37, 33, 29, 14, 12, 20, 22, 17, 25, 26, 16]
pg.wilcoxon(x, y, tail='two-sided').round(2) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Mann-Whitney U Test (aka Wilcoxon rank-sum test)The Mann–Whitney U test is a non-parametric test of the null hypothesis that it is equally likely that a randomly selected value from one sample will be less than or greater than a randomly selected value from a second sample. It is the non-parametric version of the two-sample T-test.Like many non-parametric **pingouin** tests, it can take values of tail that are 'two-sided', 'one-sided', 'greater', or 'less'. Below, we ask if a randomly selected value from `x` is greater than one from `y`, with the null that it is not. | x = prng.uniform(low=0, high=1, size=20)
y = prng.uniform(low=0.2, high=1.2, size=20)
pg.mwu(x, y, tail='greater') | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
Spearman CorrelationThe Spearman correlation coefficient is the Pearson correlation coefficient between the rank variables, and does not assume normality of data. | mean, cov = [4, 6], [(1, .5), (.5, 1)]
x, y = prng.multivariate_normal(mean, cov, 30).T
pg.corr(x, y, method="spearman").round(2) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
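That definition can be verified directly: rank-transform the data with scipy and take the Pearson correlation of the ranks (a quick sketch with illustrative data; exact equality holds when there are no ties):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(size=50)
y = x + rng.normal(size=50)

# Spearman's rho = Pearson correlation of the rank-transformed data
rho_manual = np.corrcoef(stats.rankdata(x), stats.rankdata(y))[0, 1]
rho_scipy, _ = stats.spearmanr(x, y)
print(np.isclose(rho_manual, rho_scipy))  # True
```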
Kruskal-WallaceThe Kruskal-Wallis H-test tests the null hypothesis that the population median of all of the groups are equal. It is a non-parametric version of ANOVA. The test works on 2 or more independent samples, which may have different sizes. | df = pg.read_dataset('anova')
df.head()
pg.kruskal(data=df, dv='Pain threshold', between='Hair color') | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
The Chi-Squared TestThe chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. This test can be used to evaluate the quality of a categorical variable in a classification problem or to check the similarity between two categorical variables.There are two conditions for a chi-squared test:- Independence: Each case that contributes a count to the table must be independent of all the other cases in the table.- Sample size or distribution: Each particular case (ie cell count) must have at least 5 expected cases.Let's see an example from the **pingouin** docs: whether gender is a good predictor of heart disease. First, let's load the data and look at the gender split in total: | chi_data = pg.read_dataset('chi2_independence')
chi_data['sex'].value_counts(ascending=True) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
If gender is *not* a predictor, we would expect a roughly similar split between those who have heart disease and those who do not. Let's look at the observed versus the expected split once we categorise by gender and 'target' (heart disease or not). | expected, observed, stats = pg.chi2_independence(chi_data, x='sex', y='target')
observed - expected | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
So we have fewer in the 0, 0 and 1, 1 buckets than expected but more in the 0, 1 and 1, 0 buckets. Let's now see how the test interprets this: | stats.round(3) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
From these, it is clear we can reject the null and therefore it seems like gender is a good predictor of heart disease. Shapiro-Wilk Test for NormalityNote that the null here is that the distribution *is* normal, so normality is only rejected when the p-value is sufficiently small. | x = prng.normal(size=20)
pg.normality(x) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
The test can also be run on multiple variables in a dataframe: | df = pg.read_dataset('ancova')
pg.normality(df[['Scores', 'Income', 'BMI']], method='normaltest').round(3) | _____no_output_____ | MIT | econmt-statistics.ipynb | lnsongxf/coding-for-economists |
If you don't want to train the network skip the cell righ below and dowload the pre-trained model. After downloading the pre-trained model run the cell below to the immediate below cell. | batch_size = 32
epoch_num = 50
saving_path = 'K:/autoencoder_color_to_gray/SavedModel/AutoencoderColorToGray.ckpt'
saver_ = tf.train.Saver(max_to_keep = 3)
batch_img = dataset_source[0:batch_size]
batch_out = dataset_target[0:batch_size]
num_batches = num_images//batch_size
sess = tf.Session()
sess.run(init)
for ep in range(epoch_num):
    for batch_n in range(num_batches):  # batches loop
        # select the batch first, then train on it
        batch_start = batch_n * batch_size
        batch_img = dataset_source[batch_start: batch_start + batch_size]
        batch_out = dataset_target[batch_start: batch_start + batch_size]
        _, c = sess.run([train_op, loss], feed_dict = {ae_inputs: batch_img, ae_target: batch_out})
        print("Epoch: {} - cost = {:.5f}".format(ep + 1, c))
    saver_.save(sess, saving_path, global_step = ep)
recon_img = sess.run([ae_outputs], feed_dict = {ae_inputs: batch_img})
sess.close()
saver = tf.train.Saver()
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
saver.restore(sess, 'K:/autoencoder_color_to_gray/SavedModel/AutoencoderColorToGray.ckpt-49')
import glob as gl
filenames = gl.glob('flower_images/*.png')
test_data = []
for file in filenames[0:100]:
    test_data.append(np.array(cv2.imread(file)))
test_dataset = np.asarray(test_data)
print(test_dataset.shape)
# Running the test data on the autoencoder
batch_imgs = test_dataset
gray_imgs = sess.run(ae_outputs, feed_dict = {ae_inputs: batch_imgs})
print(gray_imgs.shape)
for i in range(gray_imgs.shape[0]):
    cv2.imwrite('gen_gray_images/gen_gray_' + str(i) + '.jpeg', gray_imgs[i]) | (100, 128, 128, 1)
| CC-BY-2.0 | RGB_to_GRAY_scale_Autoencoder.ipynb | NeoBoy/RGB_to_GRAYSCALE_Autoencoder- |
CarND Object Detection LabLet's get started! | import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from PIL import ImageDraw
from PIL import ImageColor
import time
from scipy.stats import norm
%matplotlib inline
plt.style.use('ggplot') | /root/miniconda3/envs/carnd-term1/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
/root/miniconda3/envs/carnd-term1/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
| MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
MobileNets[*MobileNets*](https://arxiv.org/abs/1704.04861), as the name suggests, are neural networks constructed for the purpose of running very efficiently (high FPS, low memory footprint) on mobile and embedded devices. *MobileNets* achieve this with 3 techniques:1. Perform a depthwise convolution followed by a 1x1 convolution rather than a standard convolution. The 1x1 convolution is called a pointwise convolution if it's following a depthwise convolution. The combination of a depthwise convolution followed by a pointwise convolution is sometimes called a separable depthwise convolution.2. Use a "width multiplier" - reduces the size of the input/output channels, set to a value between 0 and 1.3. Use a "resolution multiplier" - reduces the size of the original input, set to a value between 0 and 1.These 3 techniques reduce the cumulative number of parameters and therefore the computation required. Of course, models with more parameters generally achieve a higher accuracy. *MobileNets* are no silver bullet: while they perform very well, larger models will outperform them. ** *MobileNets* are designed for mobile devices, NOT cloud GPUs**. The reason we're using them in this lab is that automotive hardware is closer to mobile or embedded devices than beefy cloud GPUs. Convolutions Vanilla ConvolutionBefore we get into the *MobileNet* convolution block, let's take a step back and recall the computational cost of a vanilla convolution. There are $N$ kernels of size $D_k * D_k$. Each of these kernels goes over the entire input, which is a $D_f * D_f * M$ sized feature map or tensor (if that makes more sense). The computational cost is:$$D_g * D_g * M * N * D_k * D_k$$Let $D_g * D_g$ be the size of the output feature map. Then a standard convolution takes in a $D_f * D_f * M$ input feature map and returns a $D_g * D_g * N$ feature map as output.(*Note*: In the MobileNets paper, you may notice the above equation for computational cost uses $D_f$ instead of $D_g$. 
In the paper, they assume the output and input have the same spatial dimensions due to a stride of 1 and padding, so the choice does not make a difference there, but you would want $D_g$ when the input and output dimensions differ.) Depthwise ConvolutionA depthwise convolution acts on each input channel separately with a different kernel. $M$ input channels implies there are $M$ $D_k * D_k$ kernels. Also notice this results in $N$ being set to 1. If this doesn't make sense, think about the shape a kernel would have to be to act upon an individual channel.Computation cost:$$D_g * D_g * M * D_k * D_k$$ Pointwise ConvolutionA pointwise convolution performs a 1x1 convolution; it's the same as a vanilla convolution except the kernel size is $1 * 1$.Computation cost:$$D_k * D_k * D_g * D_g * M * N =1 * 1 * D_g * D_g * M * N =D_g * D_g * M * N$$Thus the total computation cost for a separable depthwise convolution is:$$D_g * D_g * M * D_k * D_k + D_g * D_g * M * N$$which results in a $\frac{1}{N} + \frac{1}{D_k^2}$ reduction in computation:$$\frac {D_g * D_g * M * D_k * D_k + D_g * D_g * M * N} {D_g * D_g * M * N * D_k * D_k} = \frac {D_k^2 + N} {D_k^2*N} = \frac {1}{N} + \frac{1}{D_k^2}$$*MobileNets* use a 3x3 kernel, so assuming a large enough $N$, separable depthwise convnets are ~9x more computationally efficient than vanilla convolutions! Width MultiplierThe 2nd technique for reducing the computational cost is the "width multiplier" which is a hyperparameter inhabiting the range [0, 1] denoted here as $\alpha$. $\alpha$ reduces the number of input and output channels proportionally:$$D_f * D_f * \alpha M * D_k * D_k + D_f * D_f * \alpha M * \alpha N$$ Resolution MultiplierThe 3rd technique for reducing the computational cost is the "resolution multiplier" which is a hyperparameter inhabiting the range [0, 1] denoted here as $\rho$. 
$\rho$ reduces the size of the input feature map:$$\rho D_f * \rho D_f * M * D_k * D_k + \rho D_f * \rho D_f * M * N$$ Combining the width and resolution multipliers results in a computational cost of:$$\rho D_f * \rho D_f * \alpha M * D_k * D_k + \rho D_f * \rho D_f * \alpha M * \alpha N$$Training *MobileNets* with different values of $\alpha$ and $\rho$ will result in different speed vs. accuracy tradeoffs. The folks at Google have run these experiments; the results are shown in the graphic below: MACs (M) represents the number of multiplication-add operations in the millions. Exercise 1 - Implement Separable Depthwise ConvolutionIn this exercise you'll implement a separable depthwise convolution block and compare the number of parameters to a standard convolution block. For this exercise we'll assume the width and resolution multipliers are set to 1.Docs:* [depthwise convolution](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d) | def vanilla_conv_block(x, kernel_size, output_channels):
"""
Vanilla Conv -> Batch Norm -> ReLU
"""
x = tf.layers.conv2d(
x, output_channels, kernel_size, (2, 2), padding='SAME')
x = tf.layers.batch_normalization(x)
return tf.nn.relu(x)
# TODO: implement MobileNet conv block
def mobilenet_conv_block(x, kernel_size, output_channels):
"""
Depthwise Conv -> Batch Norm -> ReLU -> Pointwise Conv -> Batch Norm -> ReLU
"""
pass | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
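Before filling in the block, it can help to predict by hand what the upcoming parameter comparison should report. The sketch below is a back-of-the-envelope count under stated assumptions (conv bias terms included, and batch normalization contributing one gamma and one beta per channel); the lab's exact totals may differ slightly depending on how the depthwise layer is constructed.

```python
def vanilla_block_params(kernel_size, in_ch, out_ch):
    """Parameters in Conv -> BatchNorm (bias plus gamma/beta assumed trainable)."""
    conv = kernel_size * kernel_size * in_ch * out_ch + out_ch  # weights + bias
    bn = 2 * out_ch                                             # gamma, beta
    return conv + bn

def mobilenet_block_params(kernel_size, in_ch, out_ch):
    """Parameters in DepthwiseConv -> BN -> ReLU -> 1x1 Conv -> BN -> ReLU."""
    # depthwise: one k x k kernel per input channel, plus bias and BN
    depthwise = kernel_size * kernel_size * in_ch + in_ch + 2 * in_ch
    # pointwise: a plain 1x1 convolution mixing channels, plus bias and BN
    pointwise = 1 * 1 * in_ch * out_ch + out_ch + 2 * out_ch
    return depthwise + pointwise

print(vanilla_block_params(3, 32, 512))    # 148992
print(mobilenet_block_params(3, 32, 512))  # 18304
```

With the constants used later in this lab (kernel 3, 32 input channels, 512 output channels), almost all of the separable block's parameters sit in the pointwise convolution, matching the observation in the text.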
**[Sample solution](./exercise-solutions/e1.py)**Let's compare the number of parameters in each block. | # constants but you can change them so I guess they're not so constant :)
INPUT_CHANNELS = 32
OUTPUT_CHANNELS = 512
KERNEL_SIZE = 3
IMG_HEIGHT = 256
IMG_WIDTH = 256
with tf.Session(graph=tf.Graph()) as sess:
# input
x = tf.constant(np.random.randn(1, IMG_HEIGHT, IMG_WIDTH, INPUT_CHANNELS), dtype=tf.float32)
with tf.variable_scope('vanilla'):
vanilla_conv = vanilla_conv_block(x, KERNEL_SIZE, OUTPUT_CHANNELS)
with tf.variable_scope('mobile'):
mobilenet_conv = mobilenet_conv_block(x, KERNEL_SIZE, OUTPUT_CHANNELS)
vanilla_params = [
(v.name, np.prod(v.get_shape().as_list()))
for v in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'vanilla')
]
mobile_params = [
(v.name, np.prod(v.get_shape().as_list()))
for v in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'mobile')
]
print("VANILLA CONV BLOCK")
total_vanilla_params = sum([p[1] for p in vanilla_params])
for p in vanilla_params:
print("Variable {0}: number of params = {1}".format(p[0], p[1]))
print("Total number of params =", total_vanilla_params)
print()
print("MOBILENET CONV BLOCK")
total_mobile_params = sum([p[1] for p in mobile_params])
for p in mobile_params:
print("Variable {0}: number of params = {1}".format(p[0], p[1]))
print("Total number of params =", total_mobile_params)
print()
print("{0:.3f}x parameter reduction".format(total_vanilla_params /
total_mobile_params)) | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
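The printed parameter reduction can also be cross-checked against the $\frac{1}{N} + \frac{1}{D_k^2}$ computational reduction derived earlier. A pure-arithmetic sketch (the feature-map size $D_g$ is an arbitrary choice here and cancels out of the ratio):

```python
def vanilla_cost(Dg, M, N, Dk):
    """Multiply-accumulate count of a standard convolution."""
    return Dg * Dg * M * N * Dk * Dk

def separable_cost(Dg, M, N, Dk):
    """Depthwise pass plus 1x1 pointwise pass."""
    return Dg * Dg * M * Dk * Dk + Dg * Dg * M * N

Dg, M, N, Dk = 128, 32, 512, 3
ratio = separable_cost(Dg, M, N, Dk) / vanilla_cost(Dg, M, N, Dk)
print(ratio, 1 / N + 1 / Dk ** 2)  # the two values agree
```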
Your solution should show the majority of the parameters in the *MobileNet* block stem from the pointwise convolution. *MobileNet* SSDIn this section you'll use a pretrained *MobileNet* [SSD](https://arxiv.org/abs/1512.02325) model to perform object detection. You can download the *MobileNet* SSD and other models from the [TensorFlow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) (*note*: we'll provide links to specific models further below). [This paper](https://arxiv.org/abs/1611.10012) compares several object detection models.Alright, let's get into SSD! Single Shot Detection (SSD)Many previous works in object detection involve more than one training phase. For example, the [Faster-RCNN](https://arxiv.org/abs/1506.01497) architecture first trains a Region Proposal Network (RPN) which decides which regions of the image are worth drawing a box around. The RPN is then merged with a pretrained model for classification (it classifies the regions). The image below is an RPN: The SSD architecture is a single convolutional network which learns to predict bounding box locations and classify the locations in one pass. Put differently, SSD can be trained end to end while Faster-RCNN cannot. The SSD architecture consists of a base network followed by several convolutional layers: **NOTE:** In this lab the base network is a MobileNet (instead of VGG16). Detecting BoxesSSD operates on feature maps to predict bounding box locations. Recall a feature map is of size $D_f * D_f * M$. For each feature map location $k$ bounding boxes are predicted. Each bounding box carries with it the following information:* 4 bounding box **offset** values $(cx, cy, w, h)$* $C$ class probabilities $(c_1, c_2, ..., c_p)$SSD **does not** predict the shape of the box, rather just where the box is. The $k$ bounding boxes each have a predetermined shape. 
This is illustrated in the figure below:The shapes are set prior to actual training. For example, in figure (c) in the above picture there are 4 boxes, meaning $k$ = 4. Exercise 2 - SSD Feature MapsIt would be a good exercise to read the SSD paper prior to answering the following questions.***Q: Why does SSD use several differently sized feature maps to predict detections?*** A: Your answer here**[Sample answer](./exercise-solutions/e2.md)** The current approach leaves us with thousands of bounding box candidates; clearly the vast majority of them are nonsensical. Exercise 3 - Filtering Bounding Boxes***Q: What are some ways in which we can filter nonsensical bounding boxes?*** A: Your answer here**[Sample answer](./exercise-solutions/e3.md)** LossWith the final set of matched boxes we can compute the loss:$$L = \frac {1} {N} * ( L_{class} + L_{box})$$where $N$ is the total number of matched boxes, $L_{class}$ is a softmax loss for classification, and $L_{box}$ is an L1 smooth loss representing the error of the matched boxes with respect to the ground truth boxes. L1 smooth loss is a modification of L1 loss which is more robust to outliers. In the event $N$ is 0 the loss is set to 0. SSD Summary* Starts from a base model pretrained on ImageNet. * The base model is extended by several convolutional layers.* Each feature map is used to predict bounding boxes. Diversity in feature map size allows object detection at different resolutions.* Boxes are filtered by IoU metrics and hard negative mining.* Loss is a combination of classification (softmax) and detection (smooth L1)* Model can be trained end to end. Object Detection InferenceIn this part of the lab you'll detect objects using pretrained object detection models. 
You can download the latest pretrained models from the [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md), although do note that you may need a newer version of TensorFlow (such as v1.8) in order to use the newest models.We are providing the download links for the below noted files to ensure compatibility between the included environment file and the models.[SSD_Mobilenet 11.6.17 version](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz)[RFCN_ResNet101 11.6.17 version](http://download.tensorflow.org/models/object_detection/rfcn_resnet101_coco_11_06_2017.tar.gz)[Faster_RCNN_Inception_ResNet 11.6.17 version](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017.tar.gz)Make sure to extract these files prior to continuing! | # Frozen inference graph files. NOTE: change the path to where you saved the models.
SSD_GRAPH_FILE = 'ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb'
RFCN_GRAPH_FILE = 'rfcn_resnet101_coco_11_06_2017/frozen_inference_graph.pb'
FASTER_RCNN_GRAPH_FILE = 'faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/frozen_inference_graph.pb' | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
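Two ingredients from the SSD discussion above, the IoU metric used to match and filter boxes and the smooth L1 term in the localization loss, are small enough to sketch directly. This is plain NumPy, with boxes assumed to be `(x1, y1, x2, y2)` corner tuples (an illustration, not the lab's internal representation):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def smooth_l1(x):
    """Smooth L1 loss: quadratic near zero, linear for |x| >= 1."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.1429
```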
Below are utility functions. The main purpose of these is to draw the bounding boxes back onto the original image. | # Colors (one for each class)
cmap = ImageColor.colormap
print("Number of colors =", len(cmap))
COLOR_LIST = sorted([c for c in cmap.keys()])
#
# Utility funcs
#
def filter_boxes(min_score, boxes, scores, classes):
"""Return boxes with a confidence >= `min_score`"""
n = len(classes)
idxs = []
for i in range(n):
if scores[i] >= min_score:
idxs.append(i)
filtered_boxes = boxes[idxs, ...]
filtered_scores = scores[idxs, ...]
filtered_classes = classes[idxs, ...]
return filtered_boxes, filtered_scores, filtered_classes
def to_image_coords(boxes, height, width):
"""
The original box coordinate output is normalized, i.e [0, 1].
This converts it back to the original coordinate based on the image
size.
"""
box_coords = np.zeros_like(boxes)
box_coords[:, 0] = boxes[:, 0] * height
box_coords[:, 1] = boxes[:, 1] * width
box_coords[:, 2] = boxes[:, 2] * height
box_coords[:, 3] = boxes[:, 3] * width
return box_coords
def draw_boxes(image, boxes, classes, thickness=4):
"""Draw bounding boxes on the image"""
draw = ImageDraw.Draw(image)
for i in range(len(boxes)):
bot, left, top, right = boxes[i, ...]
class_id = int(classes[i])
color = COLOR_LIST[class_id]
draw.line([(left, top), (left, bot), (right, bot), (right, top), (left, top)], width=thickness, fill=color)
def load_graph(graph_file):
"""Loads a frozen inference graph"""
graph = tf.Graph()
with graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(graph_file, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
return graph | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
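The utilities above can be sanity-checked without a model by feeding them synthetic detections. Here is a vectorized sketch of the same filtering and rescaling logic, using the `(ymin, xmin, ymax, xmax)` box order the detector returns:

```python
import numpy as np

scores = np.array([0.95, 0.40, 0.88])
boxes = np.array([[0.1, 0.2, 0.5, 0.8],   # normalized (ymin, xmin, ymax, xmax)
                  [0.0, 0.0, 1.0, 1.0],
                  [0.3, 0.3, 0.9, 0.9]])

keep = scores >= 0.8                       # same rule as filter_boxes
height, width = 600, 1000
coords = boxes[keep] * np.array([height, width, height, width])
print(coords[0])  # [ 60. 200. 300. 800.]
```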
Below we load the graph and extract the relevant tensors using [`get_tensor_by_name`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). These tensors reflect the inputs and outputs of the graph, or at least the ones we care about for detecting objects. | detection_graph = load_graph(SSD_GRAPH_FILE)
# detection_graph = load_graph(RFCN_GRAPH_FILE)
# detection_graph = load_graph(FASTER_RCNN_GRAPH_FILE)
# The input placeholder for the image.
# `get_tensor_by_name` returns the Tensor with the associated name in the Graph.
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents the level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
# The classification of the object (integer id).
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0') | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
Run detection and classification on a sample image. | # Load a sample image.
image = Image.open('./assets/sample1.jpg')
image_np = np.expand_dims(np.asarray(image, dtype=np.uint8), 0)
with tf.Session(graph=detection_graph) as sess:
# Actual detection.
(boxes, scores, classes) = sess.run([detection_boxes, detection_scores, detection_classes],
feed_dict={image_tensor: image_np})
# Remove unnecessary dimensions
boxes = np.squeeze(boxes)
scores = np.squeeze(scores)
classes = np.squeeze(classes)
confidence_cutoff = 0.8
# Filter boxes with a confidence score less than `confidence_cutoff`
boxes, scores, classes = filter_boxes(confidence_cutoff, boxes, scores, classes)
# The current box coordinates are normalized to a range between 0 and 1.
    # This converts the coordinates to their actual location on the image.
width, height = image.size
box_coords = to_image_coords(boxes, height, width)
    # Each class will be represented by a differently colored box
draw_boxes(image, box_coords, classes)
plt.figure(figsize=(12, 8))
plt.imshow(image) | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
Timing DetectionThe model zoo comes with a variety of models, each with its benefits and costs. Below you'll time some of these models. The general tradeoff is sacrificing model accuracy for seconds per frame (SPF). | def time_detection(sess, img_height, img_width, runs=10):
image_tensor = sess.graph.get_tensor_by_name('image_tensor:0')
detection_boxes = sess.graph.get_tensor_by_name('detection_boxes:0')
detection_scores = sess.graph.get_tensor_by_name('detection_scores:0')
detection_classes = sess.graph.get_tensor_by_name('detection_classes:0')
# warmup
gen_image = np.uint8(np.random.randn(1, img_height, img_width, 3))
sess.run([detection_boxes, detection_scores, detection_classes], feed_dict={image_tensor: gen_image})
times = np.zeros(runs)
for i in range(runs):
t0 = time.time()
        sess.run([detection_boxes, detection_scores, detection_classes], feed_dict={image_tensor: gen_image})  # time the generated image, not the earlier global `image_np`
t1 = time.time()
times[i] = (t1 - t0) * 1000
return times
with tf.Session(graph=detection_graph) as sess:
times = time_detection(sess, 600, 1000, runs=10)
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
plt.title("Object Detection Timings")
plt.ylabel("Time (ms)")
# Create the boxplot
plt.style.use('fivethirtyeight')
bp = ax.boxplot(times) | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
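When comparing models from the zoo it helps to reduce the raw timings to a few summary numbers; the median is more robust than the mean to warm-up spikes like the outlier a boxplot reveals. A small sketch with synthetic timings (the values below are made up for illustration):

```python
import numpy as np

times_ms = np.array([31.0, 29.5, 30.2, 45.1, 30.8])  # synthetic per-frame times in ms
median_ms = np.median(times_ms)
fps = 1000.0 / median_ms
print(median_ms, round(fps, 1))  # 30.8 32.5
```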
Exercise 4 - Model TradeoffsDownload a few models from the [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) and compare the timings. Detection on a VideoFinally run your pipeline on [this short video](https://s3-us-west-1.amazonaws.com/udacity-selfdrivingcar/advanced_deep_learning/driving.mp4). | # Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
HTML("""
<video width="960" height="600" controls>
<source src="{0}" type="video/mp4">
</video>
""".format('driving.mp4')) | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
Exercise 5 - Object Detection on a VideoRun an object detection pipeline on the above clip. | clip = VideoFileClip('driving.mp4')
# TODO: Complete this function.
# The input is a NumPy array.
# The output should also be a NumPy array.
def pipeline(img):
pass | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
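Independent of the TensorFlow specifics, the pipeline contract is simple: a NumPy image goes in, an annotated NumPy image comes out. Below is a shape-preserving skeleton with the `sess.run` call replaced by a stub; the stub and its fake box are assumptions for illustration only, not the sample solution:

```python
import numpy as np

def run_detector_stub(img):
    # stand-in for sess.run([...], feed_dict=...): one fake high-confidence box
    return np.array([[0.1, 0.1, 0.6, 0.6]]), np.array([0.9]), np.array([1.0])

def pipeline(img):
    boxes, scores, classes = run_detector_stub(img)
    keep = scores >= 0.8
    h, w = img.shape[:2]
    coords = boxes[keep] * np.array([h, w, h, w])
    # a real pipeline would draw `coords` onto the frame here
    return img

frame = np.zeros((600, 1000, 3), dtype=np.uint8)
print(pipeline(frame).shape)  # (600, 1000, 3)
```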
**[Sample solution](./exercise-solutions/e5.py)** | with tf.Session(graph=detection_graph) as sess:
image_tensor = sess.graph.get_tensor_by_name('image_tensor:0')
detection_boxes = sess.graph.get_tensor_by_name('detection_boxes:0')
detection_scores = sess.graph.get_tensor_by_name('detection_scores:0')
detection_classes = sess.graph.get_tensor_by_name('detection_classes:0')
new_clip = clip.fl_image(pipeline)
# write to file
new_clip.write_videofile('result.mp4')
HTML("""
<video width="960" height="600" controls>
<source src="{0}" type="video/mp4">
</video>
""".format('result.mp4')) | _____no_output_____ | MIT | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab |
[](https://neutronimaging.pages.ornl.gov/tutorial/notebooks/shifting_time_offset/) Select Your IPTS | from __code.select_files_and_folders import SelectFiles, SelectFolder
from __code.shifting_time_offset import ShiftTimeOffset
from __code import system
system.System.select_working_dir()
from __code.__all import custom_style
custom_style.style() | _____no_output_____ | BSD-3-Clause | notebooks/converted_notebooks/shifting_time_offset.ipynb | mabrahamdevops/python_notebooks |
Select Folder | o_shift = ShiftTimeOffset()
o_select = SelectFolder(system=system, is_input_folder=True, next_function=o_shift.display_counts_vs_time) | _____no_output_____ | BSD-3-Clause | notebooks/converted_notebooks/shifting_time_offset.ipynb | mabrahamdevops/python_notebooks |
Repeat on other folders? | o_other_folders = SelectFolder(working_dir=o_shift.working_dir,
is_input_folder=True,
multiple_flags=True,
next_function=o_shift.selected_other_folders) | _____no_output_____ | BSD-3-Clause | notebooks/converted_notebooks/shifting_time_offset.ipynb | mabrahamdevops/python_notebooks |
Output Images | o_shift.offset_images() | _____no_output_____ | BSD-3-Clause | notebooks/converted_notebooks/shifting_time_offset.ipynb | mabrahamdevops/python_notebooks |
Spline WidgetA spline widget can be enabled and disabled by the:func:`pyvista.WidgetHelper.add_spline_widget` and:func:`pyvista.WidgetHelper.clear_spline_widgets` methods respectively.This widget allows users to interactively create a poly line (spline) througha scene and use that spline.A common task with splines is to slice a volumetric dataset using an irregularpath. To do this, we have added a convenient helper method which leverages the:func:`pyvista.DataSetFilters.slice_along_line` filter named:func:`pyvista.WidgetHelper.add_mesh_slice_spline`. | import pyvista as pv
import numpy as np
mesh = pv.Wavelet()
# initial spline to seed the example
points = np.array([[-8.64208925, -7.34294559, -9.13803458],
[-8.25601497, -2.54814702, 0.93860914],
[-0.30179377, -3.21555997, -4.19999019],
[ 3.24099167, 2.05814768, 3.39041509],
[ 4.39935227, 4.18804542, 8.96391132]])
p = pv.Plotter()
p.add_mesh(mesh.outline(), color='black')
p.add_mesh_slice_spline(mesh, initial_points=points, n_handles=5)
p.camera_position = [(30, -42, 30),
(0.0, 0.0, 0.0),
(-0.09, 0.53, 0.84)]
p.show() | _____no_output_____ | MIT | locale/examples/03-widgets/spline-widget.ipynb | tkoyama010/pyvista-doc-translations |
Using Neural Network Formulations in OMLTIn this notebook we show how OMLT can be used to build different optimization formulations of neural networks within Pyomo. It specifically demonstrates the following examples:1.) A neural network with smooth sigmoid activation functions represented using full-space and reduced-space formulations 2.) A neural network with non-smooth ReLU activation functions represented using complementarity and mixed integer formulations 3.) A neural network with mixed ReLU and sigmoid activation functions represented using complementarity (for ReLU) and full-space (for sigmoid) formulations After building the OMLT formulations, we minimize each representation of the function and compare the results. Library SetupThis notebook assumes you have a working Tensorflow environment in addition to the necessary Python packages described here. We use Keras to train the neural networks of interest for our example, which requires the Python Tensorflow package. The neural networks are then formulated in Pyomo using OMLT, which therefore requires working Pyomo and OMLT installations.The required Python libraries used in this notebook are as follows: - `pandas`: used for data import and management - `matplotlib`: used for plotting the results in this example- `tensorflow`: the machine learning framework we use to train our neural network- `pyomo`: the algebraic modeling language for Python; it is used to define the optimization model passed to the solver- `omlt`: the package this notebook demonstrates. OMLT can formulate machine learning models (such as neural networks) within Pyomo | #Start by importing the following libraries
#data manipulation and plotting
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rc('font', size=24)
plt.rc('axes', titlesize=24)
#tensorflow objects
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam
#pyomo for optimization
import pyomo.environ as pyo
#omlt for interfacing our neural network with pyomo
from omlt import OmltBlock
from omlt.neuralnet import NetworkDefinition, NeuralNetworkFormulation, ReducedSpaceNeuralNetworkFormulation
from omlt.neuralnet.activations import ComplementarityReLUActivation
from omlt.io import keras_reader
import omlt | _____no_output_____ | BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
Import the Data We begin by training neural networks that learn from data given the following imported dataframe. In practice, this data could represent the output of a simulation, real sensor measurements, or some other external data source. The data contains a single input `x` and a single output `y`, with 10,000 samples in total. | df = pd.read_csv("../data/sin_quadratic.csv",index_col=[0]); | _____no_output_____ | BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
The data we use for training is plotted below (on the left figure). We also scale the training data to a mean of zero with unit standard deviation. The scaled inputs and outputs are added to the dataframe and plotted next to the original data values (on the right). | #retrieve input 'x' and output 'y' from the dataframe
x = df["x"]
y = df["y"]
#calculate mean and standard deviation, add scaled 'x' and scaled 'y' to the dataframe
mean_data = df.mean(axis=0)
std_data = df.std(axis=0)
df["x_scaled"] = (df['x'] - mean_data['x']) / std_data['x']
df["y_scaled"] = (df['y'] - mean_data['y']) / std_data['y']
#create plots for unscaled and scaled data
f, (ax1, ax2) = plt.subplots(1, 2,figsize = (16,8))
ax1.plot(x, y)
ax1.set_xlabel("x")
ax1.set_ylabel("y");
ax1.set_title("Training Data")
ax2.plot(df["x_scaled"], df["y_scaled"])
ax2.set_xlabel("x_scaled")
ax2.set_ylabel("y_scaled");
ax2.set_title("Scaled Training Data")
plt.tight_layout() | _____no_output_____ | BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
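The standardization above is invertible, which matters later when solutions found in scaled space need to be mapped back to original units via $y = \hat{y}\,\sigma_y + \mu_y$. A minimal round-trip check on synthetic data (using pandas' default sample standard deviation):

```python
import pandas as pd

toy = pd.DataFrame({"x": [0.0, 1.0, 2.0, 3.0]})
mu, sigma = toy["x"].mean(), toy["x"].std()
scaled = (toy["x"] - mu) / sigma
recovered = scaled * sigma + mu
print((recovered - toy["x"]).abs().max())  # tiny (floating-point) error
```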
Train the Neural NetworksAfter importing the dataset we use Tensorflow (with Keras) to train three neural network models. Each neural network contains 2 hidden layers with 100 nodes per layer and a single output layer. 1.) The first network (`nn1`) uses sigmoid activation functions for both layers.2.) The second network (`nn2`) uses ReLU activations.3.) The last network (`nn3`) mixes ReLU and sigmoid activation functions. The first layer is sigmoid, the second layer is ReLU. We use the Adam optimizer and train the first two neural networks for 50 epochs. We train `nn3` for 150 epochs since we observe difficulty obtaining a good fit with the mixed network. | #sigmoid neural network
nn1 = Sequential(name='sin_wave_sigmoid')
nn1.add(Input(1))
nn1.add(Dense(100, activation='sigmoid'))
nn1.add(Dense(100, activation='sigmoid'))
nn1.add(Dense(1))
nn1.compile(optimizer=Adam(), loss='mse')
#relu neural network
nn2 = Sequential(name='sin_wave_relu')
nn2.add(Input(1))
nn2.add(Dense(100, activation='relu'))
nn2.add(Dense(100, activation='relu'))
nn2.add(Dense(1))
nn2.compile(optimizer=Adam(), loss='mse')
#mixed neural network
nn3 = Sequential(name='sin_wave_mixed')
nn3.add(Input(1))
nn3.add(Dense(100, activation='sigmoid'))
nn3.add(Dense(100, activation='relu'))
nn3.add(Dense(1))
nn3.compile(optimizer=Adam(), loss='mse')
#train all three neural networks
history1 = nn1.fit(x=df['x_scaled'], y=df['y_scaled'],verbose=1, epochs=50)
history2 = nn2.fit(x=df['x_scaled'], y=df['y_scaled'],verbose=1, epochs=50)
history3 = nn3.fit(x=df['x_scaled'], y=df['y_scaled'],verbose=1, epochs=150) | Epoch 1/50
313/313 [==============================] - 1s 2ms/step - loss: 1.0197
Epoch 2/50
313/313 [==============================] - 1s 2ms/step - loss: 0.9949
Epoch 3/50
313/313 [==============================] - 1s 2ms/step - loss: 0.9749
Epoch 4/50
313/313 [==============================] - 1s 2ms/step - loss: 0.7148
Epoch 5/50
313/313 [==============================] - 1s 3ms/step - loss: 0.3070
Epoch 6/50
313/313 [==============================] - 1s 2ms/step - loss: 0.2495
Epoch 7/50
313/313 [==============================] - 1s 2ms/step - loss: 0.2226
Epoch 8/50
313/313 [==============================] - 1s 3ms/step - loss: 0.2064
Epoch 9/50
313/313 [==============================] - 1s 2ms/step - loss: 0.1886
Epoch 10/50
313/313 [==============================] - 1s 2ms/step - loss: 0.1675
Epoch 11/50
313/313 [==============================] - 1s 2ms/step - loss: 0.1411
Epoch 12/50
313/313 [==============================] - 1s 2ms/step - loss: 0.1205
Epoch 13/50
313/313 [==============================] - 1s 3ms/step - loss: 0.1049
Epoch 14/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0952
Epoch 15/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0891
Epoch 16/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0846
Epoch 17/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0819
Epoch 18/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0780
Epoch 19/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0742
Epoch 20/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0669
Epoch 21/50
313/313 [==============================] - 1s 4ms/step - loss: 0.0592
Epoch 22/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0508
Epoch 23/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0423
Epoch 24/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0328
Epoch 25/50
313/313 [==============================] - 1s 4ms/step - loss: 0.0244
Epoch 26/50
313/313 [==============================] - 1s 4ms/step - loss: 0.0160
Epoch 27/50
313/313 [==============================] - 1s 4ms/step - loss: 0.0098
Epoch 28/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0058
Epoch 29/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0036
Epoch 30/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0024
Epoch 31/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0019
Epoch 32/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0016
Epoch 33/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0015
Epoch 34/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0013
Epoch 35/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0013
Epoch 36/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0014
Epoch 37/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0012
Epoch 38/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0011
Epoch 39/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0011
Epoch 40/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0011
Epoch 41/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0010
Epoch 42/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0010
Epoch 43/50
313/313 [==============================] - 1s 4ms/step - loss: 9.4469e-04
Epoch 44/50
313/313 [==============================] - 1s 3ms/step - loss: 9.1601e-04
Epoch 45/50
313/313 [==============================] - 1s 3ms/step - loss: 9.2864e-04
Epoch 46/50
313/313 [==============================] - 1s 3ms/step - loss: 9.2708e-04
Epoch 47/50
313/313 [==============================] - 1s 2ms/step - loss: 9.0207e-04
Epoch 48/50
313/313 [==============================] - 1s 2ms/step - loss: 8.6175e-04
Epoch 49/50
313/313 [==============================] - 1s 2ms/step - loss: 8.6889e-04
Epoch 50/50
313/313 [==============================] - 1s 2ms/step - loss: 8.4783e-04
Epoch 1/50
313/313 [==============================] - 1s 2ms/step - loss: 0.3035
Epoch 2/50
313/313 [==============================] - 1s 3ms/step - loss: 0.1054
Epoch 3/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0889
Epoch 4/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0765
Epoch 5/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0719
Epoch 6/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0698
Epoch 7/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0689
Epoch 8/50
313/313 [==============================] - 1s 2ms/step - loss: 0.0667
Epoch 9/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0680
Epoch 10/50
313/313 [==============================] - 1s 3ms/step - loss: 0.0670
...
Epoch 50/50
313/313 [==============================] - 2s 5ms/step - loss: 0.0010
Epoch 1/150
313/313 [==============================] - 1s 3ms/step - loss: 0.9082
...
Epoch 97/150
313/313 [==============================] - 1s 3ms/step - loss: 0.0023
(intermediate per-epoch training output omitted)
| BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
Check the predictionsBefore we formulate our trained neural networks in OMLT, we check that they adequately represent the data. While we would normally use a quantitative accuracy measure, a visual plot of the fits suffices here. | #note: we calculate the unscaled output for each neural network to check the predictions
#nn1
y_predict_scaled_sigmoid = nn1.predict(x=df['x_scaled'])
y_predict_sigmoid = y_predict_scaled_sigmoid*(std_data['y']) + mean_data['y']
#nn2
y_predict_scaled_relu = nn2.predict(x=df['x_scaled'])
y_predict_relu = y_predict_scaled_relu*(std_data['y']) + mean_data['y']
#nn3
y_predict_scaled_mixed = nn3.predict(x=df['x_scaled'])
y_predict_mixed = y_predict_scaled_mixed*(std_data['y']) + mean_data['y']
#create a single plot with the original data and each neural network's predictions
fig,ax = plt.subplots(1,figsize = (8,8))
ax.plot(x,y,linewidth = 3.0,label = "data", alpha = 0.5)
ax.plot(x,y_predict_relu,linewidth = 3.0,linestyle="dotted",label = "relu")
ax.plot(x,y_predict_sigmoid,linewidth = 3.0,linestyle="dotted",label = "sigmoid")
ax.plot(x,y_predict_mixed,linewidth = 3.0,linestyle="dotted",label = "mixed")
plt.xlabel("x")
plt.ylabel("y")
plt.legend(); | _____no_output_____ | BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
Formulating Neural Networks with OMLTWe now show how OMLT can formulate neural networks within Pyomo. We specifically show how to specify and build different neural network optimization formulations and how to connect them with a broader Pyomo model. In these examples we use Pyomo solvers to find the input that minimizes each neural network output.OMLT can formulate what we call full-space and reduced-space neural network representations using the `NeuralNetworkFormulation` object (for full-space) and `ReducedSpaceNeuralNetworkFormulation` object (for reduced-space). The reduced-space representation can be represented more compactly than the full-space within an optimization setting (i.e. it produces less variables and constraints), but we will see that full-space representation is necessary to represent non-smooth activation formulations (e.g. ReLU with binary variables). Reduced Space (supports smooth activations) The reduced-space representation (`ReducedSpaceNeuralNetworkFormulation`) provided by OMLT hides intermediate neural network variables and activation functions from the underlying optimizer and represents the neural network using one constraint as following:$\hat{y} = N(x)$Here, $\hat{y}$ is a vector of outputs from the neural network, $x$ is a vector of inputs, and $N(\cdot)$ represents the encoded neural network function that internally uses weights, biases, and activation functions to map $x \rightarrow \hat{y}$. From an implementation standpoint, OMLT builds the reduced-space formulation by encoding the sequential layer logic and activation functions as Pyomo `Expression` objects that depend only on the input variables. Full Space (supports smooth and non-smooth activations) The full space formulation (`NeuralNetworkFormulation`) creates intermediate variables associated with the neural network nodes and activation functions and exposes them to the optimizer. 
This is represented by the following set of equations where $x$ and $\hat{y}$ are again the neural network input and output vectors, and we introduce $\hat{z}_{\ell}$ and $z_{\ell}$ to represent pre-activation and post-activation vectors for each layer $\ell$. We further use the notation $\hat z_{\ell,i}$ to denote node $i$ in layer $\ell$ where $N_\ell$ is the number of nodes in layer $\ell$ and $N_L$ is the number of layers in the neural network. As such, the first equation maps the input to the first layer values $z_0$, the second equation represents the pre-activation values obtained from the weights, biases, and outputs of the previous layer, the third equation applies the activation function, and the last equation maps the final layer to the output. Note that the reduced-space formulation effectively captures these equations using a single constraint.$\begin{align*}& x = z_0 &\\& \hat z_{\ell,i} = \sum_{j{=}1}^{N_{\ell-1}} w_{ij} z_{\ell-1,j} + b_i & \forall i \in \{1,...,N_\ell \}, \quad \ell \in \{1,...N_L\} \\& z_{\ell,i} = \sigma(\hat z_{\ell,i}) & \forall i \in \{1,...,N_\ell \}, \quad \ell \in \{1,...N_L\} \\& \hat{y} = z_{N_L} &\end{align*}$ Full Space ReLU with Binary VariablesThe full space formulation supports non-smooth ReLU activation functions (i.e. the function $z_i = max(0,\hat{z}_i)$) by using binary indicator variables. When using `NeuralNetworkFormulation` with a neural network that contains ReLU activations, OMLT will formulate the below set of variables and constraints for each node in a ReLU layer. Here, $q_{\ell,i}$ is a binary indicator variable that determines whether the output from node $i$ on layer $\ell$ is $0$ or whether it is $\hat{z}_{\ell,i}$. $M_{\ell,i}^U$ and $M_{\ell,i}^L$ are 'BigM' constants used to enforce the ReLU logic. 
Values for 'BigM' are often taken to be arbitrarily large numbers, but OMLT will automatically determine values by propagating the bounds on the input variables.$\begin{align*}& z_{\ell,i} \ge \hat{z}_{\ell,i} & \forall i \in \{1,...,N_\ell \}, \quad \ell \in \{1,...N_L\}\\& z_{\ell,i} \ge 0 & \forall i \in \{1,...,N_\ell \}, \quad \ell \in \{1,...N_L\}\\& z_{\ell,i} \le M_{\ell,i}^L q_{\ell,i} & \forall i \in \{1,...,N_\ell \}, \quad \ell \in \{1,...N_L\} \\& z_{\ell,i} \le \hat{z}_{\ell,i} - M_{\ell,i}^U(1-q_{\ell,i}) & \forall i \in \{1,...,N_\ell \}, \quad \ell \in \{1,...N_L\}\end{align*} $ Full Space ReLU with Complementarity ConstraintsReLU activation functions can also be represented using the following complementarity condition:$\begin{align*}0 \le (z_{\ell,i} - \hat{z}_{\ell,i}) \perp z_{\ell,i} \ge 0 & \quad \forall i \in \{1,...,N_\ell \}, \quad \ell \in \{1,...N_L\}\end{align*}$This condition means that both of the expressions must be satisfied, where exactly one expression must be satisfied with equality. Hence, we must have that $z_{\ell,i} \ge \hat{z}_{\ell,i}$ and $z_{\ell,i} \ge 0$ with either $z_{\ell,i} = \hat{z}_{\ell,i}$, or $z_{\ell,i} = 0$.OMLT uses a `ComplementarityReLUActivation` object to specify that ReLU activation functions should be formulated using complementarity conditions. Within the formulation code, it uses `pyomo.mpec` to transform this complementarity condition into nonlinear constraints which facilitates using smooth optimization solvers (such as Ipopt) to optimize over ReLU activation functions. Solving Optimization Problems with Neural Networks using OMLTWe now show how to use the above neural network formulations in OMLT for our trained neural networks: `nn1`, `nn2`, and `nn3`. For each formulation we solve the simple optimization problem below using Pyomo where we find the input $x$ that minimizes the output $\hat y$ of the neural network. $\begin{align*} & \min_x \ \hat{y}\\& s.t. 
\hat{y} = N(x) \end{align*}$For each neural network we trained, we instantiate a Pyomo `ConcreteModel` and create variables that represent the neural network input $x$ and output $\hat y$. We also create an objective function that seeks to minimize the output $\hat y$.Each example uses the same general workflow:- Use the `keras_reader` to import the neural network into an OMLT `NetworkDefinition` object.- Create a Pyomo model with variables `x` and `y` where we intend to minimize `y`.- Create an `OmltBlock`.- Create a formulation object. Note that we use `ReducedSpaceNeuralNetworkFormulation` for the reduced-space and `NeuralNetworkFormulation` for full-space and ReLU. - Build the formulation object on the `OmltBlock`.- Add constraints connecting `x` to the neural network input and `y` to the neural network output.- Solve with an optimization solver (this example uses ipopt).- Query the solution.We also print model size and solution time following each cell where we optimize the Pyomo model. Setup scaling and input boundsWe assume that our Pyomo model operates in the unscaled space with respect to our neural network inputs and outputs. We additionally assume input bounds to our neural networks are given by the limits of our training data. To handle this, OMLT can be given scaling information (in the form of an OMLT scaling object) and input bounds (in the form of a dictionary where indices correspond to neural network indices and values are 2-length tuples of lower and upper bounds). This maintains the space of the optimization problem and scaling is handled by OMLT underneath. The scaling object and input bounds are passed to keras reader method `load_keras_sequential` when importing the associated neural networks. | #create an omlt scaling object
scaler = omlt.scaling.OffsetScaling(offset_inputs=[mean_data['x']],
factor_inputs=[std_data['x']],
offset_outputs=[mean_data['y']],
factor_outputs=[std_data['y']])
#create the input bounds. note that the key `0` corresponds to input `0` and that we also scale the input bounds
input_bounds={0:((min(df['x']) - mean_data['x'])/std_data['x'],
(max(df['x']) - mean_data['x'])/std_data['x'])};
print(scaler)
print("Scaled input bounds: ",input_bounds) | <omlt.scaling.OffsetScaling object at 0x7fdd940bc850>
Scaled input bounds: {0: (-1.731791015101997, 1.731791015101997)}
| BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
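Under the hood, `OffsetScaling` applies a standard offset/factor (standardization) transform in both directions; the following minimal numpy sketch of that relation uses placeholder values standing in for `mean_data['x']` and `std_data['x']`:

```python
import numpy as np

# Hypothetical statistics standing in for mean_data['x'] and std_data['x']
mean_x, std_x = 2.5, 1.4

x = np.array([0.0, 1.0, 5.0])        # unscaled inputs
x_scaled = (x - mean_x) / std_x      # scaled = (unscaled - offset) / factor
x_back = x_scaled * std_x + mean_x   # unscaled = scaled * factor + offset

assert np.allclose(x_back, x)        # the round trip recovers the original values
```

The scaled input bounds printed above come from pushing the training-data limits through the same forward transform.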
Neural Network 1: Sigmoid Activations with Full-Space and Reduced-Space FormulationsThe first neural network contains sigmoid activation functions which we formulate with full-space and reduced-space representations and solve with Ipopt. Reduced Space ModelWe begin with the reduced-space formulation and build the Pyomo model according to the above workflow. Note that the reduced-space model only contains 6 variables (`x` and `y` created on the Pyomo model, and the `OmltBlock` scaled and unscaled input and output which get created internally). The full-space formulation (shown next) will contain many more. | #create a network definition
net_sigmoid = keras_reader.load_keras_sequential(nn1,scaler,input_bounds)
#create a pyomo model with variables x and y
model1_reduced = pyo.ConcreteModel()
model1_reduced.x = pyo.Var(initialize = 0)
model1_reduced.y = pyo.Var(initialize = 0)
model1_reduced.obj = pyo.Objective(expr=(model1_reduced.y))
#create an OmltBlock
model1_reduced.nn = OmltBlock()
#use the reduced-space formulation
formulation1_reduced = ReducedSpaceNeuralNetworkFormulation(net_sigmoid)
model1_reduced.nn.build_formulation(formulation1_reduced)
#connect pyomo variables to the neural network
@model1_reduced.Constraint()
def connect_inputs(mdl):
return mdl.x == mdl.nn.inputs[0]
@model1_reduced.Constraint()
def connect_outputs(mdl):
return mdl.y == mdl.nn.outputs[0]
#solve the model and query the solution
status_1_reduced = pyo.SolverFactory('ipopt').solve(model1_reduced, tee=True)
solution_1_reduced = (pyo.value(model1_reduced.x),pyo.value(model1_reduced.y))
#print out model size and solution values
print("Reduced Space Solution:")
print("# of variables: ",model1_reduced.nvariables())
print("# of constraints: ",model1_reduced.nconstraints())
print("x = ", solution_1_reduced[0])
print("y = ", solution_1_reduced[1])
print("Solve Time: ", status_1_reduced['Solver'][0]['Time']) | Reduced Space Solution:
# of variables: 6
# of constraints: 5
x = -1.4257385602216635
y = 1.3405352390223917
Solve Time: 0.16946029663085938
| BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
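The "single constraint" idea can be made concrete with a toy network (the weights below are invented for illustration and are not taken from `nn1`): the whole map from input to output is written as one nested expression, which is essentially how the reduced-space formulation presents $\hat{y} = N(x)$ to the optimizer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented weights/biases for a tiny 1-2-1 sigmoid network (illustration only)
W1, b1 = np.array([[1.0], [-0.5]]), np.array([0.1, 0.2])
W2, b2 = np.array([[0.7, -1.2]]), np.array([0.05])

def N(x):
    # The entire network as one expression of x -- no intermediate variables exposed
    return (W2 @ sigmoid(W1 @ np.array([x]) + b1) + b2)[0]

y_hat = N(0.3)  # the reduced space adds only the single constraint y = N(x)
```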
Full Space ModelFor the full-space representation we use `NeuralNetworkFormulation` instead of `ReducedSpaceNeuralNetworkFormulation`. The key difference is that this formulation creates additional variables and constraints to represent each node and activation function in the neural network.Note that when we print this model there are over 400 variables and constraints each, owing to the number of neural network nodes. The solution consequently takes longer with more iterations (this effect is more pronounced for larger models). The full-space also finds a different local minimum, but this was by no means guaranteed to happen. | net_sigmoid = keras_reader.load_keras_sequential(nn1,scaler,input_bounds)
model1_full = pyo.ConcreteModel()
model1_full.x = pyo.Var(initialize = 0)
model1_full.y = pyo.Var(initialize = 0)
model1_full.obj = pyo.Objective(expr=(model1_full.y))
model1_full.nn = OmltBlock()
formulation2_full = NeuralNetworkFormulation(net_sigmoid)
model1_full.nn.build_formulation(formulation2_full)
@model1_full.Constraint()
def connect_inputs(mdl):
return mdl.x == mdl.nn.inputs[0]
@model1_full.Constraint()
def connect_outputs(mdl):
return mdl.y == mdl.nn.outputs[0]
status_1_full = pyo.SolverFactory('ipopt').solve(model1_full, tee=True)
solution_1_full = (pyo.value(model1_full.x),pyo.value(model1_full.y))
#print out model size and solution values
print("Full Space Solution:")
print("# of variables: ",model1_full.nvariables())
print("# of constraints: ",model1_full.nconstraints())
print("x = ", solution_1_full[0])
print("y = ", solution_1_full[1])
print("Solve Time: ", status_1_full['Solver'][0]['Time']) | Full Space Solution:
# of variables: 409
# of constraints: 408
x = -0.27928922891858343
y = -0.8853885901469711
Solve Time: 0.3363785743713379
| BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
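A toy 1-2-1 network (invented weights, not `nn1`'s) written in full-space style shows where the extra variables come from: every pre-activation $\hat z_\ell$ and post-activation $z_\ell$ appears explicitly, mirroring the additional variables and constraints OMLT creates on the `OmltBlock`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented weights/biases for a tiny 1-2-1 sigmoid network (illustration only)
W1, b1 = np.array([[1.0], [-0.5]]), np.array([0.1, 0.2])
W2, b2 = np.array([[0.7, -1.2]]), np.array([0.05])

x = np.array([0.3])
z0 = x                    # input layer:     x = z_0
zhat1 = W1 @ z0 + b1      # pre-activation of the hidden layer
z1 = sigmoid(zhat1)       # post-activation of the hidden layer
zhat2 = W2 @ z1 + b2      # linear output layer
y_hat = zhat2[0]          # output:          y = z_{N_L}
# z0, zhat1, z1 and zhat2 would all be solver variables in the full space
```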
Neural Network 2: ReLU Neural Network using Complementarity Constraints and Binary VariablesThe second neural network contains ReLU activation functions which we represent using complementarity constraints and binary variables. ReLU Complementarity ConstraintsTo represent ReLU using complementarity constraints we use the `ComplementarityReLUActivation` object which we pass as a keyword argument to a `NeuralNetworkFormulation`. This overrides the default ReLU behavior which uses binary variables (shown in the next model). Importantly, the complementarity formulation allows us to solve the model using a continuous solver (in this case using Ipopt). | net_relu = keras_reader.load_keras_sequential(nn2,scaler,input_bounds)
model2_comp = pyo.ConcreteModel()
model2_comp.x = pyo.Var(initialize = 0)
model2_comp.y = pyo.Var(initialize = 0)
model2_comp.obj = pyo.Objective(expr=(model2_comp.y))
model2_comp.nn = OmltBlock()
formulation2_comp = NeuralNetworkFormulation(net_relu,activation_constraints={
"relu": ComplementarityReLUActivation()})
model2_comp.nn.build_formulation(formulation2_comp)
@model2_comp.Constraint()
def connect_inputs(mdl):
return mdl.x == mdl.nn.inputs[0]
@model2_comp.Constraint()
def connect_outputs(mdl):
return mdl.y == mdl.nn.outputs[0]
status_2_comp = pyo.SolverFactory('ipopt').solve(model2_comp, tee=True)
solution_2_comp = (pyo.value(model2_comp.x),pyo.value(model2_comp.y))
#print out model size and solution values
print("ReLU Complementarity Solution:")
print("# of variables: ",model2_comp.nvariables())
print("# of constraints: ",model2_comp.nconstraints())
print("x = ", solution_2_comp[0])
print("y = ", solution_2_comp[1])
print("Solve Time: ", status_2_comp['Solver'][0]['Time']) | ReLU Complementarity Solution:
# of variables: 609
# of constraints: 808
x = -0.2970834231188838
y = -0.8747695437296246
Solve Time: 0.3795132637023926
| BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
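The complementarity condition is an exact characterization of $z = \max(0, \hat z)$, which a short standalone Python check (no solver involved) confirms:

```python
def satisfies_relu_complementarity(z, zhat, tol=1e-12):
    # z >= zhat, z >= 0, and (z - zhat) * z == 0: one inequality holds with equality
    return z >= zhat - tol and z >= -tol and abs((z - zhat) * z) <= tol

# The ReLU output satisfies the conditions for any pre-activation value
for zhat in [-2.0, -0.3, 0.0, 0.7, 3.1]:
    assert satisfies_relu_complementarity(max(0.0, zhat), zhat)

# Any point other than max(0, zhat) violates at least one condition
assert not satisfies_relu_complementarity(1.0, -2.0)   # z > 0 but z != zhat
assert not satisfies_relu_complementarity(0.0, 0.5)    # z = 0 but z < zhat
```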
ReLU with Binary Variables and BigM ConstraintsFor the binary variable formulation of ReLU we use the default activation function settings. These are applied automatically if a `NetworkDefinition` contains ReLU activation functions. Note that we solve the optimization problem with Cbc which can handle binary decisions. While the solution takes considerably longer than the continuous complementarity formulation, it is guaranteed to find the global minimum. | net_relu = keras_reader.load_keras_sequential(nn2,scaler,input_bounds)
model2_bigm = pyo.ConcreteModel()
model2_bigm.x = pyo.Var(initialize = 0)
model2_bigm.y = pyo.Var(initialize = 0)
model2_bigm.obj = pyo.Objective(expr=(model2_bigm.y))
model2_bigm.nn = OmltBlock()
formulation2_bigm = NeuralNetworkFormulation(net_relu)
model2_bigm.nn.build_formulation(formulation2_bigm)
@model2_bigm.Constraint()
def connect_inputs(mdl):
return mdl.x == mdl.nn.inputs[0]
@model2_bigm.Constraint()
def connect_outputs(mdl):
return mdl.y == mdl.nn.outputs[0]
status_2_bigm = pyo.SolverFactory('cbc').solve(model2_bigm, tee=True)
solution_2_bigm = (pyo.value(model2_bigm.x),pyo.value(model2_bigm.y))
#print out model size and solution values
print("ReLU BigM Solution:")
print("# of variables: ",model2_bigm.nvariables())
print("# of constraints: ",model2_bigm.nconstraints())
print("x = ", solution_2_bigm[0])
print("y = ", solution_2_bigm[1])
print("Solve Time: ", status_2_bigm['Solver'][0]['Time']) | ReLU BigM Solution:
# of variables: 609
# of constraints: 1008
x = -0.29708481
y = -0.87476544
Solve Time: 82.65653038024902
| BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
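The big-M constraints can be sanity-checked by brute force for a single node: with valid lower/upper bounds on $\hat z$ used as the big-M constants, the only value of $z$ on a fine grid that is feasible for some $q \in \{0,1\}$ is $\max(0, \hat z)$. This is a standalone illustration with made-up bounds; OMLT derives the actual constants by propagating the input bounds.

```python
import numpy as np

LB, UB = -2.0, 3.0  # assumed valid bounds on zhat, used as the big-M constants

def feasible(z, zhat, q, tol=1e-9):
    # The four big-M constraints for one ReLU node with binary q
    return (z >= zhat - tol and z >= -tol
            and z <= UB * q + tol
            and z <= zhat - LB * (1 - q) + tol)

for zhat in np.linspace(LB, UB, 11):
    feas = [z for z in np.linspace(-1.0, UB, 401)
            for q in (0, 1) if feasible(z, zhat, q)]
    assert len(feas) >= 1
    assert np.allclose(feas, max(0.0, zhat), atol=1e-2)  # only the ReLU value survives
```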
Neural Network 3: Mixed ReLU and Sigmoid Activation FunctionsThe last neural network contains both ReLU and sigmoid activation functions. Such networks can be represented by using the complementarity formulation for the ReLU nodes and combining it with the full-space formulation for the sigmoid functions. | net_mixed = keras_reader.load_keras_sequential(nn3,scaler,input_bounds)
model3_mixed = pyo.ConcreteModel()
model3_mixed.x = pyo.Var(initialize = 0)
model3_mixed.y = pyo.Var(initialize = 0)
model3_mixed.obj = pyo.Objective(expr=(model3_mixed.y))
model3_mixed.nn = OmltBlock()
formulation3_mixed = NeuralNetworkFormulation(net_mixed,activation_constraints={
"relu": ComplementarityReLUActivation()})
model3_mixed.nn.build_formulation(formulation3_mixed)
@model3_mixed.Constraint()
def connect_inputs(mdl):
return mdl.x == mdl.nn.inputs[0]
@model3_mixed.Constraint()
def connect_outputs(mdl):
return mdl.y == mdl.nn.outputs[0]
status_3_mixed = pyo.SolverFactory('ipopt').solve(model3_mixed, tee=True)
solution_3_mixed = (pyo.value(model3_mixed.x),pyo.value(model3_mixed.y))
#print out model size and solution values
print("Mixed NN Solution:")
print("# of variables: ",model3_mixed.nvariables())
print("# of constraints: ",model3_mixed.nconstraints())
print("x = ", solution_3_mixed[0])
print("y = ", solution_3_mixed[1])
print("Solve Time: ", status_3_mixed['Solver'][0]['Time']) | Mixed NN Solution:
# of variables: 509
# of constraints: 608
x = -0.33286905796510236
y = -0.9116201725726657
Solve Time: 0.6036348342895508
| BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
Final Plots and DiscussionLastly, we plot the results of each optimization problem. The main takeaways from this notebook are as follows:- A broad set of dense neural network architectures can be represented in Pyomo using OMLT. This notebook used the Keras reader to import sequential Keras models but OMLT also supports using ONNX models (see `import_network.ipynb`). OMLT additionally supports convolutional neural networks (see `mnist_example_cnn.ipynb`).- The reduced-space formulation provides a computationally tractable means to represent neural networks that contain smooth activation functions and can be used with continuous optimizers to obtain local solutions.- The full-space formulation permits representing ReLU activation functions using either complementarity conditions or the 'BigM' approach with binary variables (as well as partition-based approaches not shown in this notebook).- The full-space formulation further allows one to optimize over neural networks that contain mixed activation functions by formulating the ReLU logic as complementarity conditions.- Using binary variables to represent ReLU can attain global solutions (if the rest of the problem is convex), whereas the complementarity formulation provides local solutions but tends to be more scalable. | #create a plot with 3 subplots
fig,axs = plt.subplots(1,3,figsize = (24,8))
#nn1 - sigmoid
axs[0].plot(x,y_predict_sigmoid,linewidth = 3.0,linestyle="dotted",color = "orange")
axs[0].set_title("sigmoid")
axs[0].scatter([solution_1_reduced[0]],[solution_1_reduced[1]],color = "black",s = 300, label="reduced space")
axs[0].scatter([solution_1_full[0]],[solution_1_full[1]],color = "blue",s = 300, label="full space")
axs[0].legend()
#nn2 - relu
axs[1].plot(x,y_predict_relu,linewidth = 3.0,linestyle="dotted",color = "green")
axs[1].set_title("relu")
axs[1].scatter([solution_2_comp[0]],[solution_2_comp[1]],color = "black",s = 300, label="complementarity")
axs[1].scatter([solution_2_bigm[0]],[solution_2_bigm[1]],color = "blue",s = 300, label="bigm")
axs[1].legend()
#nn3 - mixed
axs[2].plot(x,y_predict_mixed,linewidth = 3.0,linestyle="dotted", color = "red")
axs[2].set_title("mixed")
axs[2].scatter([solution_3_mixed[0]],[solution_3_mixed[1]],color = "black",s = 300); | _____no_output_____ | BSD-3-Clause | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT |
transmissibility-based TPA: FRF based In this example a numerical model is used to demonstrate FRF-based TPA. | import pyFBS
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import LogNorm
%matplotlib inline | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
Example datasets Load the required predefined datasets: | pyFBS.download_lab_testbench()
xlsx_pos = r"./lab_testbench/Measurements/TPA_synt.xlsx"
stl_A = r"./lab_testbench/STL/A.stl"
stl_B = r"./lab_testbench/STL/B.stl"
stl_AB = r"./lab_testbench/STL/AB.stl"
df_acc_AB = pd.read_excel(xlsx_pos, sheet_name='Sensors_AB')
df_chn_AB = pd.read_excel(xlsx_pos, sheet_name='Channels_AB')
df_imp_AB = pd.read_excel(xlsx_pos, sheet_name='Impacts_AB')
df_vp = pd.read_excel(xlsx_pos, sheet_name='VP_Channels')
df_vpref = pd.read_excel(xlsx_pos, sheet_name='VP_RefChannels') | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
Numerical model Load the corresponding .full and .rst files from the example datasets: | full_file_AB = r"./lab_testbench/FEM/AB.full"
ress_file_AB = r"./lab_testbench/FEM/AB.rst" | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
Create an MK model of the assembly AB: | MK_AB = pyFBS.MK_model(ress_file_AB, full_file_AB, no_modes=100, recalculate=False) | C:\Users\tomaz.bregar\Anaconda3\lib\site-packages\pyvista\core\pointset.py:610: UserWarning: VTK 9 no longer accepts an offset array
warnings.warn('VTK 9 no longer accepts an offset array')
| MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
The locations and directions of responses and excitations often do not match the numerical model exactly, so we need to find the nodes closest to these points. Only the locations are updated; the directions remain the same. | df_chn_AB_up = MK_AB.update_locations_df(df_chn_AB)
df_imp_AB_up = MK_AB.update_locations_df(df_imp_AB) | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
3D view Open the 3D viewer in the background. With the 3D viewer, the subplot capabilities of PyVista can be used. | view3D = pyFBS.view3D(show_origin=False, show_axes=False, title="TPA")
Add the STL file of structure AB to the plot and show the corresponding accelerometers, channels and impacts. | view3D.plot.add_text("AB", position='upper_left', font_size=10, color="k", font="times", name="AB_structure")
view3D.add_stl(stl_AB, name="AB_structure", color="#8FB1CC", opacity=.1)
view3D.plot.add_mesh(MK_AB.mesh, scalars=np.zeros(MK_AB.mesh.points.shape[0]), show_scalar_bar=False, name="mesh_AB", cmap="coolwarm", show_edges=True)
view3D.show_chn(df_chn_AB_up, color="green", overwrite=True)
view3D.show_imp(df_imp_AB_up, color="red", overwrite=True);
view3D.show_acc(df_acc_AB, overwrite=True)
view3D.show_vp(df_vp, color="blue", overwrite=True)
view3D.label_imp(df_imp_AB_up)
view3D.label_acc(df_acc_AB) | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
FRF synthetization Perform the FRF synthetization based on the updated locations: | MK_AB.FRF_synth(df_chn_AB_up, df_imp_AB_up, f_start=0, modal_damping=0.003, frf_type="accelerance")
First, structural admittance $\boldsymbol{\text{Y}}_{31}^{\text{AB}}$ is obtained. | imp_loc = 10
Y31_AB = MK_AB.FRF[:, 9:12, imp_loc:imp_loc+1]
Y31_AB.shape | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
Then, structural admittance $\boldsymbol{\text{Y}}_{41}^{\text{AB}}$ is obtained. | Y41_AB = MK_AB.FRF[:, :9, imp_loc:imp_loc+1]
Y41_AB.shape | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
Application of the FRF-based TPA Calculation of the transmissibility matrix $\boldsymbol{\text{T}}_{34, f_1}^{\text{AB}}$: | T34 = Y31_AB @ np.linalg.pinv(Y41_AB)
T34.shape | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
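A minimal standalone check of this construction (plain NumPy, independent of the pyFBS data above): with a single source column, the pseudo-inverse factorization is exact, so the transmissibility matrix reproduces the target admittance.

```python
import numpy as np

# Hypothetical small matrices mirroring the shapes above:
# 9 indicator responses (Y41) and 3 target responses (Y31), one source.
rng = np.random.default_rng(0)
Y41 = rng.standard_normal((9, 1))
Y31 = rng.standard_normal((3, 1))

# T34 = Y31 @ pinv(Y41); with one source, pinv(Y41) @ Y41 is the 1x1 identity,
# so T34 @ Y41 recovers Y31 exactly.
T34 = Y31 @ np.linalg.pinv(Y41)
assert T34.shape == (3, 9)
assert np.allclose(T34 @ Y41, Y31)
```

With more than one source, the recovery is exact only if Y31 lies in the row space of Y41, which is the usual rank condition for transmissibility-based TPA.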
Define operational displacements $\boldsymbol{\text{u}}_4$: | u4 = MK_AB.FRF[:, :9, imp_loc:imp_loc+1]
u4.shape | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
Calculating the response $\boldsymbol{\text{u}}_3^{\text{TPA}}$. | u3 = T34 @ u4
u3.shape | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
On-board validation: comparison of the predicted $\boldsymbol{\text{u}}_{3}^{\text{TPA}}$ and the operational $\boldsymbol{\text{u}}_{3}^{\text{MK}}$: | plt.figure(figsize=(10, 5))
u3_MK = MK_AB.FRF[:, 9:12, imp_loc:imp_loc+1]
sel = 0
plt.subplot(211)
plt.semilogy(np.abs(u3_MK[:,sel,0]), label='MK');
plt.semilogy(np.abs(u3[:,sel,0]), '--', label='TPA');
plt.ylim(10**-8, 10**4);
plt.xlim(0, 2000)
plt.legend(loc=0);
plt.subplot(413)
plt.plot(np.angle(u3_MK[:,sel,0]));
plt.plot(np.angle(u3[:,sel,0]), '--');
plt.xlim(0, 2000); | _____no_output_____ | MIT | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring |
Which is better: text embedding or graph embedding? | # define this question
data[(data['kg_id'] == data['GT_kg_id']) & (data['kg_id'] != '')] | _____no_output_____ | MIT | exact-match-centroid-pipeline/embedding_eval.ipynb | nicklein/table-linker-pipelines |
By cell linking task. Count/compute- Number of tasks- Number and fraction of tasks with known ground truth- Number and fraction of tasks with ground truth in the candidate set- Number and fraction of singleton candidate sets- Number and fraction of singleton candidate sets containing ground truth- Top-1 accuracy, Top-5 accuracy and NDCG using retrieval_score, text-embedding-score and graph-embedding-score. In our case, with binary relevance and a single ground-truth entity per task, the ideal DCG is 1, so NDCG equals DCG.- Average Top-1, Top-5 and NDCG metrics | row_idx, col_idx = 2, 0
relevant_df = data[(data['column'] == col_idx) & (data['row'] == row_idx) & (data['kg_id'] != '')]
num_tasks = len(relevant_df)
num_tasks
num_tasks_known_gt = len(relevant_df[relevant_df['GT_kg_id'] != ''])
num_tasks_known_gt
is_gt_in_candidate = len(relevant_df[relevant_df['GT_kg_id'] == relevant_df['kg_id']])
is_gt_in_candidate
is_candidate_set_singleton = len(relevant_df) == 1
is_candidate_set_singleton
is_top_one_accurate = False
top_one_row = relevant_df.iloc[0]
if top_one_row['kg_id'] == top_one_row['GT_kg_id']:
is_top_one_accurate = True
is_top_one_accurate
is_top_five_accurate = False
top_five_rows = relevant_df.iloc[0:5]
for i, row in top_five_rows.iterrows():
if row['kg_id'] == row['GT_kg_id']:
is_top_five_accurate = True
is_top_five_accurate
is_top_ten_accurate = False
top_ten_rows = relevant_df.iloc[0:10]
for i, row in top_ten_rows.iterrows():
if row['kg_id'] == row['GT_kg_id']:
is_top_ten_accurate = True
is_top_ten_accurate
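The repeated top-1/top-5/top-10 checks above can be folded into one helper. A sketch, assuming the DataFrame is already sorted by the ranking score and uses the same 'kg_id'/'GT_kg_id' columns as the evaluation files:

```python
import pandas as pd

def top_k_accurate(relevant_df: pd.DataFrame, k: int) -> bool:
    """True if the ground-truth entity appears among the top-k candidates."""
    top_k = relevant_df.iloc[:k]
    return bool((top_k['kg_id'] == top_k['GT_kg_id']).any())

# Tiny demo: the ground truth Q2 is ranked second.
demo = pd.DataFrame({'kg_id': ['Q1', 'Q2', 'Q3'],
                     'GT_kg_id': ['Q2', 'Q2', 'Q2']})
assert not top_k_accurate(demo, 1)
assert top_k_accurate(demo, 5)
```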
# parse eval file
def parse_eval_file_stats(file_name=None, eval_data=None):
if file_name is not None and eval_data is None:
eval_data = pd.read_csv(file_name)
eval_data = eval_data.fillna('')
parsed_eval_data = {}
for ei, erow in eval_data.iterrows():
if 'table_id' not in erow:
table_id = file_name.split('/')[-1].split('.csv')[0]
else:
table_id = erow['table_id']
row_idx, col_idx = erow['row'], erow['column']
if (table_id, row_idx, col_idx) in parsed_eval_data:
continue
relevant_df = eval_data[(eval_data['column'] == col_idx) & (eval_data['row'] == row_idx) & (eval_data['kg_id'] != '')]
if len(relevant_df) == 0:
parsed_eval_data[(table_id, row_idx, col_idx)] = {
'table_id': table_id,
'GT_kg_id': erow['GT_kg_id'],
'row': row_idx,
'column': col_idx,
'num_candidate': 0,
'num_candidate_known_gt': 0,
'is_gt_in_candidate': False,
'is_candidate_set_singleton': False,
'is_top_one_accurate': False,
'is_top_five_accurate': False
}
continue
row_col_stats = {}
row_col_stats['table_id'] = table_id
row_col_stats['GT_kg_id'] = erow['GT_kg_id']
row_col_stats['row'] = erow['row']
row_col_stats['column'] = erow['column']
row_col_stats['num_candidate'] = len(relevant_df)
row_col_stats['num_candidate_known_gt'] = len(relevant_df[relevant_df['GT_kg_id'] != ''])
row_col_stats['is_gt_in_candidate'] = len(relevant_df[relevant_df['GT_kg_id'] == relevant_df['kg_id']]) > 0
row_col_stats['is_candidate_set_singleton'] = len(relevant_df) == 1
is_top_one_accurate = False
top_one_row = relevant_df.iloc[0]
if top_one_row['kg_id'] == top_one_row['GT_kg_id']:
is_top_one_accurate = True
row_col_stats['is_top_one_accurate'] = is_top_one_accurate
is_top_five_accurate = False
top_five_rows = relevant_df.iloc[0:5]
for i, row in top_five_rows.iterrows():
if row['kg_id'] == row['GT_kg_id']:
is_top_five_accurate = True
row_col_stats['is_top_five_accurate'] = is_top_five_accurate
parsed_eval_data[(table_id, row_idx, col_idx)] = row_col_stats
return parsed_eval_data
e_data = parse_eval_file_stats(file_name='/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/84575189_0_6365692015941409487.csv')
len(e_data), e_data[("84575189_0_6365692015941409487", 0, 2)]
e_data = parse_eval_file_stats(eval_data=all_data)
len(e_data), e_data[("84575189_0_6365692015941409487", 0, 2)]
import json
with open('./eval_all.json', 'w') as f:
json.dump(list(e_data.values()), f, indent=4)
import json
with open('./eval_14067031_0_559833072073397908.json', 'w') as f:
json.dump(list(e_data.values()), f, indent=4)
len([k for k in e_data if e_data[k]['is_gt_in_candidate']])
import os
eval_file_names = []
for (dirpath, dirnames, filenames) in os.walk('/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/'):
for fn in filenames:
if "csv" not in fn:
continue
abs_fn = dirpath + fn
assert os.path.isfile(abs_fn)
if os.path.getsize(abs_fn) == 0:
continue
eval_file_names.append(abs_fn)
len(eval_file_names)
eval_file_names
# merge all eval files in one df
def merge_df(file_names: list):
df_list = []
for fn in file_names:
fid = fn.split('/')[-1].split('.csv')[0]
df = pd.read_csv(fn)
df['table_id'] = fid
# df = df.fillna('')
df_list.append(df)
return pd.concat(df_list)
all_data = merge_df(eval_file_names)
all_data
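merge_df tags every row with its source file before concatenating, so later groupbys can distinguish tables; a sketch of the same tag-then-concat pattern on small in-memory frames (hypothetical table ids):

```python
import pandas as pd

frames = {'t1': pd.DataFrame({'kg_id': ['Q1']}),
          't2': pd.DataFrame({'kg_id': ['Q2', 'Q3']})}
parts = []
for tid, df in frames.items():
    df = df.copy()
    df['table_id'] = tid       # remember which file each row came from
    parts.append(df)
merged = pd.concat(parts, ignore_index=True)
assert list(merged['table_id']) == ['t1', 't2', 't2']
```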
all_data[all_data['table_id'] == '14067031_0_559833072073397908']
# filter out empty tasks: NaN in kg_id means no candidates were retrieved
no_nan_all_data = all_data[pd.notna(all_data['kg_id'])]
no_nan_all_data
all_data[pd.isna(all_data['kg_id'])]
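A tiny illustration of the pd.notna filter used above: tasks whose kg_id is NaN (no candidates were retrieved) are dropped before any ranking metric is computed.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'row': [0, 1, 2],
                   'kg_id': ['Q1', np.nan, 'Q3']})
kept = df[pd.notna(df['kg_id'])]  # row 1 has no candidate and is dropped
assert list(kept['kg_id']) == ['Q1', 'Q3']
```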
# parse eval file
from pandas.core.common import SettingWithCopyError
import numpy as np
import sklearn.metrics
pd.options.mode.chained_assignment = 'raise'
def parse_eval_files_stats(eval_data):
res = {}
candidate_eval_data = eval_data.groupby(['table_id', 'row', 'column'])['table_id'].count().reset_index(name="count")
res['num_tasks'] = len(eval_data.groupby(['table_id', 'row', 'column']))
res['num_tasks_with_gt'] = len(eval_data[pd.notna(eval_data['GT_kg_id'])].groupby(['table_id', 'row', 'column']))
res['num_tasks_with_gt_in_candidate'] = len(eval_data[eval_data['evaluation_label'] == 1].groupby(['table_id', 'row', 'column']))
res['num_tasks_with_singleton_candidate'] = len(candidate_eval_data[candidate_eval_data['count'] == 1].groupby(['table_id', 'row', 'column']))
singleton_eval_data = candidate_eval_data[candidate_eval_data['count'] == 1]
num_tasks_with_singleton_candidate_with_gt = 0
for i, row in singleton_eval_data.iterrows():
table_id, row_idx, col_idx = row['table_id'], row['row'], row['column']
c_e_data = eval_data[(eval_data['table_id'] == table_id) & (eval_data['row'] == row_idx) & (eval_data['column'] == col_idx)]
assert len(c_e_data) == 1
if c_e_data.iloc[0]['evaluation_label'] == 1:
num_tasks_with_singleton_candidate_with_gt += 1
res['num_tasks_with_singleton_candidate_with_gt'] = num_tasks_with_singleton_candidate_with_gt
num_tasks_with_retrieval_top_one_accurate = []
num_tasks_with_retrieval_top_five_accurate = []
num_tasks_with_text_top_one_accurate = []
num_tasks_with_text_top_five_accurate = []
num_tasks_with_graph_top_one_accurate = []
num_tasks_with_graph_top_five_accurate = []
ndcg_score_r_list = []
ndcg_score_t_list = []
ndcg_score_g_list = []
has_gt_list = []
has_gt_in_candidate = []
# candidate_eval_data = candidate_eval_data[:1]
for i, row in candidate_eval_data.iterrows():
table_id, row_idx, col_idx = row['table_id'], row['row'], row['column']
# print(f"working on {table_id}: {row_idx}, {col_idx}")
c_e_data = eval_data[(eval_data['table_id'] == table_id) & (eval_data['row'] == row_idx) & (eval_data['column'] == col_idx)]
assert len(c_e_data) > 0
if np.nan not in set(c_e_data['GT_kg_id']):
has_gt_list.append(1)
else:
has_gt_list.append(0)
if 1 in set(c_e_data['evaluation_label']):
has_gt_in_candidate.append(1)
else:
has_gt_in_candidate.append(0)
# handle retrieval score
s_data = c_e_data.sort_values(by=['retrieval_score'], ascending=False)
if s_data.iloc[0]['evaluation_label'] == 1:
num_tasks_with_retrieval_top_one_accurate.append(1)
else:
num_tasks_with_retrieval_top_one_accurate.append(0)
if 1 in set(s_data.iloc[0:5]['evaluation_label']):
num_tasks_with_retrieval_top_five_accurate.append(1)
else:
num_tasks_with_retrieval_top_five_accurate.append(0)
# handle text-embedding-score
s_data = c_e_data.sort_values(by=['text-embedding-score'], ascending=False)
if s_data.iloc[0]['evaluation_label'] == 1:
num_tasks_with_text_top_one_accurate.append(1)
else:
num_tasks_with_text_top_one_accurate.append(0)
if 1 in set(s_data.iloc[0:5]['evaluation_label']):
num_tasks_with_text_top_five_accurate.append(1)
else:
num_tasks_with_text_top_five_accurate.append(0)
# handle graph-embedding-score
s_data = c_e_data.sort_values(by=['graph-embedding-score'], ascending=False)
if s_data.iloc[0]['evaluation_label'] == 1:
num_tasks_with_graph_top_one_accurate.append(1)
else:
num_tasks_with_graph_top_one_accurate.append(0)
if 1 in set(s_data.iloc[0:5]['evaluation_label']):
num_tasks_with_graph_top_five_accurate.append(1)
else:
num_tasks_with_graph_top_five_accurate.append(0)
cf_e_data = c_e_data.copy()
cf_e_data['evaluation_label'] = cf_e_data['evaluation_label'].replace(-1, 0)
cf_e_data['text-embedding-score'] = cf_e_data['text-embedding-score'].replace(np.nan, 0)
cf_e_data['graph-embedding-score'] = cf_e_data['graph-embedding-score'].replace(np.nan, 0)
try:
ndcg_score_r_list.append(
sklearn.metrics.ndcg_score(
np.array([list(cf_e_data['evaluation_label'])]),
np.array([list(cf_e_data['retrieval_score'])])
)
)
except:
if len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] == 1:
ndcg_score_r_list.append(1.0)
elif len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] != 1:
ndcg_score_r_list.append(0.0)
else:
print("why am i here")
try:
ndcg_score_t_list.append(
sklearn.metrics.ndcg_score(
np.array([list(cf_e_data['evaluation_label'])]),
np.array([list(cf_e_data['text-embedding-score'])])
)
)
except:
if len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] == 1:
ndcg_score_t_list.append(1.0)
elif len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] != 1:
ndcg_score_t_list.append(0.0)
else:
print("text", cf_e_data['evaluation_label'], cf_e_data['text-embedding-score'] )
print("why am i here")
try:
ndcg_score_g_list.append(
sklearn.metrics.ndcg_score(
np.array([list(cf_e_data['evaluation_label'])]),
np.array([list(cf_e_data['graph-embedding-score'])])
)
)
except:
if len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] == 1:
ndcg_score_g_list.append(1.0)
elif len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] != 1:
ndcg_score_g_list.append(0.0)
else:
print("graph", cf_e_data['evaluation_label'], cf_e_data['graph-embedding-score'])
print("why am i here")
candidate_eval_data['r_ndcg'] = ndcg_score_r_list
candidate_eval_data['t_ndcg'] = ndcg_score_t_list
candidate_eval_data['g_ndcg'] = ndcg_score_g_list
candidate_eval_data['retrieval_top_one_accurate'] = num_tasks_with_retrieval_top_one_accurate
candidate_eval_data['retrieval_top_five_accurate'] = num_tasks_with_retrieval_top_five_accurate
candidate_eval_data['text_top_one_accurate'] = num_tasks_with_text_top_one_accurate
candidate_eval_data['text_top_five_accurate'] = num_tasks_with_text_top_five_accurate
candidate_eval_data['graph_top_one_accurate'] = num_tasks_with_graph_top_one_accurate
candidate_eval_data['graph_top_five_accurate'] = num_tasks_with_graph_top_five_accurate
candidate_eval_data['has_gt'] = has_gt_list
candidate_eval_data['has_gt_in_candidate'] = has_gt_in_candidate
res['num_tasks_with_retrieval_top_one_accurate'] = sum(num_tasks_with_retrieval_top_one_accurate)
res['num_tasks_with_retrieval_top_five_accurate'] = sum(num_tasks_with_retrieval_top_five_accurate)
res['num_tasks_with_text_top_one_accurate'] = sum(num_tasks_with_text_top_one_accurate)
res['num_tasks_with_text_top_five_accurate'] = sum(num_tasks_with_text_top_five_accurate)
res['num_tasks_with_graph_top_one_accurate'] = sum(num_tasks_with_graph_top_one_accurate)
res['num_tasks_with_graph_top_five_accurate'] = sum(num_tasks_with_graph_top_five_accurate)
return res, candidate_eval_data
# no_nan_all_data[no_nan_all_data['table_id'] == "84575189_0_6365692015941409487"]
res, candidate_eval_data = parse_eval_files_stats(no_nan_all_data[no_nan_all_data['table_id'] == "84575189_0_6365692015941409487"])
res
res, candidate_eval_data = parse_eval_files_stats(no_nan_all_data)
print(res)
display(candidate_eval_data)
candidate_eval_data['has_gt'].sum(), candidate_eval_data['has_gt_in_candidate'].sum()
candidate_eval_data.to_csv('./candidate_eval_no_empty.csv', index=False)
# Conclusion of exact-match on all tasks with ground truth (no filtering)
print(f"number of tasks: {res['num_tasks']}")
print(f"number of tasks with ground truth: {res['num_tasks_with_gt']}")
print(f"number of tasks with ground truth in candidate set: {res['num_tasks_with_gt_in_candidate']}, which is {res['num_tasks_with_gt_in_candidate']/res['num_tasks_with_gt'] * 100}%")
print(f"number of tasks has singleton candidate set: {res['num_tasks_with_singleton_candidate']}, which is {res['num_tasks_with_singleton_candidate']/res['num_tasks_with_gt'] * 100}%")
print(f"number of tasks has singleton candidate set which is ground truth: {res['num_tasks_with_singleton_candidate_with_gt']}, which is {res['num_tasks_with_singleton_candidate_with_gt']/res['num_tasks_with_gt'] * 100}%")
print()
print(f"number of tasks with top-1 accuracy in terms of retrieval score: {res['num_tasks_with_retrieval_top_one_accurate']}, which is {res['num_tasks_with_retrieval_top_one_accurate']/res['num_tasks_with_gt'] * 100}%")
print(f"number of tasks with top-5 accuracy in terms of retrieval score: {res['num_tasks_with_retrieval_top_five_accurate']}, which is {res['num_tasks_with_retrieval_top_five_accurate']/res['num_tasks_with_gt'] * 100}%")
print(f"number of tasks with top-1 accuracy in terms of text embedding score: {res['num_tasks_with_text_top_one_accurate']}, which is {res['num_tasks_with_text_top_one_accurate']/res['num_tasks_with_gt'] * 100}%")
print(f"number of tasks with top-5 accuracy in terms of text embedding score: {res['num_tasks_with_text_top_five_accurate']}, which is {res['num_tasks_with_text_top_five_accurate']/res['num_tasks_with_gt'] * 100}%")
print(f"number of tasks with top-1 accuracy in terms of graph embedding score: {res['num_tasks_with_graph_top_one_accurate']}, which is {res['num_tasks_with_graph_top_one_accurate']/res['num_tasks_with_gt'] * 100}%")
print(f"number of tasks with top-5 accuracy in terms of graph embedding score: {res['num_tasks_with_graph_top_five_accurate']}, which is {res['num_tasks_with_graph_top_five_accurate']/res['num_tasks_with_gt'] * 100}%")
print()
candidate_eval_data_with_gt = candidate_eval_data[candidate_eval_data['has_gt'] == 1]
print(f"average ndcg score ranked by retrieval score: {candidate_eval_data_with_gt['r_ndcg'].mean()}")
print(f"average ndcg score ranked by text-embedding-score: {candidate_eval_data_with_gt['t_ndcg'].mean()}")
print(f"average ndcg score ranked by graph-embedding-score: {candidate_eval_data_with_gt['g_ndcg'].mean()}")
# Conclusion of exact-match on filtered tasks: candidate set is non singleton and has ground truth
f_candidate_eval_data = candidate_eval_data[(candidate_eval_data['has_gt'] == 1) & (candidate_eval_data['count'] > 1)]
f_candidate_eval_data
num_tasks = len(f_candidate_eval_data)
df_has_gt_in_candidate = f_candidate_eval_data[f_candidate_eval_data['has_gt_in_candidate'] == 1]
df_singleton_candidate = f_candidate_eval_data[f_candidate_eval_data['count'] == 1]
df_singleton_candidate_has_gt = f_candidate_eval_data[(f_candidate_eval_data['count'] == 1) & (f_candidate_eval_data['has_gt_in_candidate'] == 1)]
df_retrieval_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_one_accurate'] == 1]
df_retrieval_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_five_accurate'] == 1]
df_text_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_one_accurate'] == 1]
df_text_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_five_accurate'] == 1]
df_graph_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_one_accurate'] == 1]
df_graph_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_five_accurate'] == 1]
print(f"number of tasks with ground truth: {num_tasks}")
print(f"number of tasks with ground truth in candidate set: {len(df_has_gt_in_candidate)}, which is {len(df_has_gt_in_candidate)/num_tasks * 100}%")
print(f"number of tasks has singleton candidate set: {len(df_singleton_candidate)}, which is {len(df_singleton_candidate)/num_tasks * 100}%")
print(f"number of tasks has singleton candidate set which is ground truth: {len(df_singleton_candidate_has_gt)}, which is {len(df_singleton_candidate_has_gt)/num_tasks * 100}%")
print()
print(f"number of tasks with top-1 accuracy in terms of retrieval score: {len(df_retrieval_top_one_accurate)}, which is {len(df_retrieval_top_one_accurate)/num_tasks * 100}%")
print(f"number of tasks with top-5 accuracy in terms of retrieval score: {len(df_retrieval_top_five_accurate)}, which is {len(df_retrieval_top_five_accurate)/num_tasks * 100}%")
print(f"number of tasks with top-1 accuracy in terms of text embedding score: {len(df_text_top_one_accurate)}, which is {len(df_text_top_one_accurate)/num_tasks * 100}%")
print(f"number of tasks with top-5 accuracy in terms of text embedding score: {len(df_text_top_five_accurate)}, which is {len(df_text_top_five_accurate)/num_tasks * 100}%")
print(f"number of tasks with top-1 accuracy in terms of graph embedding score: {len(df_graph_top_one_accurate)}, which is {len(df_graph_top_one_accurate)/num_tasks * 100}%")
print(f"number of tasks with top-5 accuracy in terms of graph embedding score: {len(df_graph_top_five_accurate)}, which is {len(df_graph_top_five_accurate)/num_tasks * 100}%")
print()
print(f"average ndcg score ranked by retrieval score: {df_has_gt_in_candidate['r_ndcg'].mean()}")
print(f"average ndcg score ranked by text-embedding-score: {df_has_gt_in_candidate['t_ndcg'].mean()}")
print(f"average ndcg score ranked by graph-embedding-score: {df_has_gt_in_candidate['g_ndcg'].mean()}")
test_data = all_data[(all_data['table_id'] == "14067031_0_559833072073397908") & (all_data['row'] == 3) & (all_data['column'] == 0)]
test_data
sklearn.metrics.ndcg_score(np.array([list(all_data[:5]['evaluation_label'])]), np.array([list(all_data[:5]['retrieval_score'])]))
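The try/except blocks in parse_eval_files_stats exist because sklearn's ndcg_score is undefined for a single document and raises. A hedged sketch of a wrapper that applies the same fallback (1.0 when the lone candidate is the ground truth, 0.0 otherwise) up front instead of catching the exception:

```python
import numpy as np
from sklearn.metrics import ndcg_score

def safe_ndcg(labels, scores):
    """NDCG with the singleton-candidate fallback used above."""
    labels = np.asarray(labels, dtype=float)
    if labels.size == 1:
        return float(labels[0] == 1)  # lone GT candidate -> 1.0, else 0.0
    return ndcg_score(np.array([labels]), np.array([scores]))

# GT ranked first over three candidates -> perfect NDCG of 1.0.
assert safe_ndcg([1, 0, 0], [0.9, 0.5, 0.1]) == 1.0
assert safe_ndcg([1], [0.7]) == 1.0
assert safe_ndcg([0], [0.7]) == 0.0
```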
# Some ground truth is empty??? why???
all_data[all_data['GT_kg_id'] == ''] | _____no_output_____ | MIT | exact-match-centroid-pipeline/embedding_eval.ipynb | nicklein/table-linker-pipelines |
Graphs | import matplotlib.pyplot as plt
import sys
candidate_eval_data = pd.read_csv('./candidate_eval_no_empty.csv', index_col=False)
candidate_eval_data
# Line plot of top-1, top-5 and NDCG versus size of candidate set
x_candidate_set_size = list(pd.unique(candidate_eval_data['count']))
x_candidate_set_size.sort()
y_r_top_one = []
y_r_top_five = []
y_t_top_one = []
y_t_top_five = []
y_g_top_one = []
y_g_top_five = []
y_avg_r_ndcg = []
y_avg_t_ndcg = []
y_avg_g_ndcg = []
for c in x_candidate_set_size:
dff = candidate_eval_data[candidate_eval_data['count'] == c]
y_r_top_one.append(len(dff[dff['retrieval_top_one_accurate'] == 1])/len(dff) * 100)
y_r_top_five.append(len(dff[dff['retrieval_top_five_accurate'] == 1])/len(dff) * 100)
y_t_top_one.append(len(dff[dff['text_top_one_accurate'] == 1])/len(dff) * 100)
y_t_top_five.append(len(dff[dff['text_top_five_accurate'] == 1])/len(dff) * 100)
y_g_top_one.append(len(dff[dff['graph_top_one_accurate'] == 1])/len(dff) * 100)
y_g_top_five.append(len(dff[dff['graph_top_five_accurate'] == 1])/len(dff) * 100)
y_avg_r_ndcg.append(dff['r_ndcg'].mean())
y_avg_t_ndcg.append(dff['t_ndcg'].mean())
y_avg_g_ndcg.append(dff['g_ndcg'].mean())
len(y_r_top_one), len(y_g_top_one), len(y_t_top_one), len(y_avg_r_ndcg)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.set_ylabel('percent')
ax.set_xlabel('candidate set size')
ax.plot(x_candidate_set_size, y_r_top_one, 'ro', label='retrieval_top_one_accurate')
ax.plot(x_candidate_set_size, y_r_top_five, 'bo', label='retrieval_top_five_accurate')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show()
fig, ax = plt.subplots()
ax.set_ylabel('percent')
ax.set_xlabel('candidate set size')
ax.plot(x_candidate_set_size, y_t_top_one, 'ro', label='text_top_one_accurate')
ax.plot(x_candidate_set_size, y_t_top_five, 'bo', label='text_top_five_accurate')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show()
fig, ax = plt.subplots()
ax.set_ylabel('percent')
ax.set_xlabel('candidate set size')
ax.plot(x_candidate_set_size, y_g_top_one, 'ro', label='graph_top_one_accurate')
ax.plot(x_candidate_set_size, y_g_top_five, 'bo', label='graph_top_five_accurate')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show()
fig, ax = plt.subplots()
ax.set_ylabel('average ndcg')
ax.set_xlabel('candidate set size')
ax.plot(x_candidate_set_size, y_avg_r_ndcg, 'ro', label='average ndcg score ranked by retrieval score')
ax.plot(x_candidate_set_size, y_avg_t_ndcg, 'bo', label='average ndcg score ranked by text-embedding-score')
ax.plot(x_candidate_set_size, y_avg_g_ndcg, 'go', label='average ndcg score ranked by graph-embedding-score')
fig.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show() | /Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:8: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
| MIT | exact-match-centroid-pipeline/embedding_eval.ipynb | nicklein/table-linker-pipelines |
02/16 stats on each eval file | import pandas as pd
candidate_eval_data = pd.read_csv('./candidate_eval.csv', index_col=False)
candidate_eval_data
# candidate_eval_data = candidate_eval_data.drop(['Unnamed: 0'], axis=1)
# candidate_eval_data
import os
eval_file_names = []
eval_file_ids = []
for (dirpath, dirnames, filenames) in os.walk('/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/'):
for fn in filenames:
if "csv" not in fn:
continue
abs_fn = dirpath + fn
assert os.path.isfile(abs_fn)
if os.path.getsize(abs_fn) == 0:
continue
eval_file_names.append(abs_fn)
eval_file_ids.append(fn.split('.csv')[0])
len(eval_file_names), len(eval_file_ids)
eval_file_ids
f_candidate_eval_data = candidate_eval_data[candidate_eval_data['table_id'] == '52299421_0_4473286348258170200']
f_candidate_eval_data
def compute_eval_file_stats(f_candidate_eval_data):
res = {}
num_tasks = len(f_candidate_eval_data)
df_has_gt = f_candidate_eval_data[f_candidate_eval_data['has_gt'] == 1]
df_has_gt_in_candidate = f_candidate_eval_data[f_candidate_eval_data['has_gt_in_candidate'] == 1]
df_singleton_candidate = f_candidate_eval_data[f_candidate_eval_data['count'] == 1]
df_singleton_candidate_has_gt = f_candidate_eval_data[(f_candidate_eval_data['count'] == 1) & (f_candidate_eval_data['has_gt_in_candidate'] == 1)]
df_retrieval_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_one_accurate'] == 1]
df_retrieval_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_five_accurate'] == 1]
df_text_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_one_accurate'] == 1]
df_text_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_five_accurate'] == 1]
df_graph_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_one_accurate'] == 1]
df_graph_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_five_accurate'] == 1]
res['table_id'] = f_candidate_eval_data['table_id'].iloc[0]
res['num_tasks'] = num_tasks
res['num_tasks_with_gt'] = len(df_has_gt)
res['num_tasks_with_gt_in_candidate'] = len(df_has_gt_in_candidate) / len(df_has_gt) * 100
res['num_tasks_with_singleton_candidate'] = len(df_singleton_candidate) / len(df_has_gt) * 100
res['num_tasks_with_singleton_candidate_with_gt'] = len(df_singleton_candidate_has_gt) / len(df_has_gt) * 100
res['num_tasks_with_retrieval_top_one_accurate'] = len(df_retrieval_top_one_accurate) / len(df_has_gt) * 100
res['num_tasks_with_retrieval_top_five_accurate'] = len(df_retrieval_top_five_accurate) / len(df_has_gt) * 100
res['num_tasks_with_text_top_one_accurate'] = len(df_text_top_one_accurate) / len(df_has_gt) * 100
res['num_tasks_with_text_top_five_accurate'] = len(df_text_top_five_accurate) / len(df_has_gt) * 100
res['num_tasks_with_graph_top_one_accurate'] = len(df_graph_top_one_accurate) / len(df_has_gt) * 100
res['num_tasks_with_graph_top_five_accurate'] = len(df_graph_top_five_accurate) / len(df_has_gt) * 100
res['average_ndcg_retrieval'] = df_has_gt['r_ndcg'].mean()
res['average_ndcg_text'] = df_has_gt['t_ndcg'].mean()
res['average_ndcg_graph'] = df_has_gt['g_ndcg'].mean()
return res
def compute_eval_file_stats_count(f_candidate_eval_data):
res = {}
num_tasks = len(f_candidate_eval_data)
df_has_gt = f_candidate_eval_data[f_candidate_eval_data['has_gt'] == 1]
df_has_gt_in_candidate = f_candidate_eval_data[f_candidate_eval_data['has_gt_in_candidate'] == 1]
df_singleton_candidate = f_candidate_eval_data[f_candidate_eval_data['count'] == 1]
df_singleton_candidate_has_gt = f_candidate_eval_data[(f_candidate_eval_data['count'] == 1) & (f_candidate_eval_data['has_gt_in_candidate'] == 1)]
df_retrieval_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_one_accurate'] == 1]
df_retrieval_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_five_accurate'] == 1]
df_text_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_one_accurate'] == 1]
df_text_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_five_accurate'] == 1]
df_graph_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_one_accurate'] == 1]
df_graph_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_five_accurate'] == 1]
res['table_id'] = f_candidate_eval_data['table_id'].iloc[0]
res['num_tasks'] = num_tasks
res['num_tasks_with_gt'] = len(df_has_gt)
res['num_tasks_with_gt_in_candidate'] = len(df_has_gt_in_candidate)
res['num_tasks_with_singleton_candidate'] = len(df_singleton_candidate)
res['num_tasks_with_singleton_candidate_with_gt'] = len(df_singleton_candidate_has_gt)
res['num_tasks_with_retrieval_top_one_accurate'] = len(df_retrieval_top_one_accurate) / len(df_has_gt) * 100
res['num_tasks_with_retrieval_top_five_accurate'] = len(df_retrieval_top_five_accurate) / len(df_has_gt) * 100
res['num_tasks_with_text_top_one_accurate'] = len(df_text_top_one_accurate) / len(df_has_gt) * 100
res['num_tasks_with_text_top_five_accurate'] = len(df_text_top_five_accurate) / len(df_has_gt) * 100
res['num_tasks_with_graph_top_one_accurate'] = len(df_graph_top_one_accurate) / len(df_has_gt) * 100
res['num_tasks_with_graph_top_five_accurate'] = len(df_graph_top_five_accurate) / len(df_has_gt) * 100
res['average_ndcg_retrieval'] = df_has_gt['r_ndcg'].mean()
res['average_ndcg_text'] = df_has_gt['t_ndcg'].mean()
res['average_ndcg_graph'] = df_has_gt['g_ndcg'].mean()
return res
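compute_eval_file_stats and compute_eval_file_stats_count are nearly identical, differing mainly in whether counts are normalized to percentages; a sketch of one parameterized helper covering a subset of the fields (column names as in candidate_eval_data above):

```python
import pandas as pd

def candidate_set_stats(df: pd.DataFrame, normalize: bool = True) -> dict:
    """Counts of tasks with GT in the candidate set and singleton sets,
    optionally as percentages of the tasks that have a ground truth."""
    has_gt = df[df['has_gt'] == 1]
    stats = {
        'num_tasks': len(df),
        'num_tasks_with_gt': len(has_gt),
        'num_tasks_with_gt_in_candidate': len(df[df['has_gt_in_candidate'] == 1]),
        'num_tasks_with_singleton_candidate': len(df[df['count'] == 1]),
    }
    if normalize and len(has_gt):
        for key in ('num_tasks_with_gt_in_candidate',
                    'num_tasks_with_singleton_candidate'):
            stats[key] = stats[key] / len(has_gt) * 100
    return stats

demo = pd.DataFrame({'has_gt': [1, 1, 0],
                     'has_gt_in_candidate': [1, 0, 0],
                     'count': [1, 3, 2]})
assert candidate_set_stats(demo, normalize=False)['num_tasks_with_gt_in_candidate'] == 1
assert candidate_set_stats(demo)['num_tasks_with_singleton_candidate'] == 50.0
```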
res = compute_eval_file_stats(f_candidate_eval_data)
print(f"table id is {res['table_id']}")
print(f"number of tasks: {res['num_tasks']}")
print(f"number of tasks with ground truth: {res['num_tasks_with_gt']}")
print(f"number of tasks with ground truth in candidate set: {res['num_tasks_with_gt_in_candidate']}")
print(f"number of tasks has singleton candidate set: {res['num_tasks_with_singleton_candidate']}")
print(f"number of tasks has singleton candidate set which is ground truth: {res['num_tasks_with_singleton_candidate_with_gt']}")
print()
print(f"number of tasks with top-1 accuracy in terms of retrieval score: {res['num_tasks_with_retrieval_top_one_accurate']}")
print(f"number of tasks with top-5 accuracy in terms of retrieval score: {res['num_tasks_with_retrieval_top_five_accurate']}")
print(f"number of tasks with top-1 accuracy in terms of text embedding score: {res['num_tasks_with_text_top_one_accurate']}")
print(f"number of tasks with top-5 accuracy in terms of text embedding score: {res['num_tasks_with_text_top_five_accurate']}")
print(f"number of tasks with top-1 accuracy in terms of graph embedding score: {res['num_tasks_with_graph_top_one_accurate']}")
print(f"number of tasks with top-5 accuracy in terms of graph embedding score: {res['num_tasks_with_graph_top_five_accurate']}")
print()
print(f"average ndcg score ranked by retrieval score: {res['average_ndcg_retrieval']}")
print(f"average ndcg score ranked by text-embedding-score: {res['average_ndcg_text']}")
print(f"average ndcg score ranked by graph-embedding-score: {res['average_ndcg_graph']}")
all_tables = {}
for tid in eval_file_ids:
f_candidate_eval_data = candidate_eval_data[candidate_eval_data['table_id'] == tid]
all_tables[tid] = compute_eval_file_stats(f_candidate_eval_data)
all_tables
all_tables = {}
for tid in eval_file_ids:
f_candidate_eval_data = candidate_eval_data[candidate_eval_data['table_id'] == tid]
all_tables[tid] = compute_eval_file_stats_count(f_candidate_eval_data)
all_tables
eval_file_ids
# visualize the nine dev eval file stats (empty eval files were skipped above)
# Recompute all tables if needed
x_eval_fid = [
'movies',
'players I',
'video games',
'magazines',
'companies',
'country I',
'players II',
'pope',
'country II'
]
x_eval_fidx = range(len(x_eval_fid))
y_num_tasks_with_gt_in_candidate = []
y_num_tasks_with_singleton_candidate = []
y_num_tasks_with_singleton_candidate_with_gt = []
y_num_tasks_with_retrieval_top_one_accurate = []
y_num_tasks_with_retrieval_top_five_accurate = []
y_num_tasks_with_text_top_one_accurate = []
y_num_tasks_with_text_top_five_accurate = []
y_num_tasks_with_graph_top_one_accurate = []
y_num_tasks_with_graph_top_five_accurate = []
y_average_ndcg_retrieval = []
y_average_ndcg_text = []
y_average_ndcg_graph = []
for idx in range(len(x_eval_fid)):
table_id = eval_file_ids[idx]
y_num_tasks_with_gt_in_candidate.append(all_tables[table_id]['num_tasks_with_gt_in_candidate'])
y_num_tasks_with_singleton_candidate.append(all_tables[table_id]['num_tasks_with_singleton_candidate'])
y_num_tasks_with_singleton_candidate_with_gt.append(all_tables[table_id]['num_tasks_with_singleton_candidate_with_gt'])
y_num_tasks_with_retrieval_top_one_accurate.append(all_tables[table_id]['num_tasks_with_retrieval_top_one_accurate'])
y_num_tasks_with_retrieval_top_five_accurate.append(all_tables[table_id]['num_tasks_with_retrieval_top_five_accurate'])
y_num_tasks_with_text_top_one_accurate.append(all_tables[table_id]['num_tasks_with_text_top_one_accurate'])
y_num_tasks_with_text_top_five_accurate.append(all_tables[table_id]['num_tasks_with_text_top_five_accurate'])
y_num_tasks_with_graph_top_one_accurate.append(all_tables[table_id]['num_tasks_with_graph_top_one_accurate'])
y_num_tasks_with_graph_top_five_accurate.append(all_tables[table_id]['num_tasks_with_graph_top_five_accurate'])
y_average_ndcg_retrieval.append(all_tables[table_id]['average_ndcg_retrieval'])
y_average_ndcg_text.append(all_tables[table_id]['average_ndcg_text'])
y_average_ndcg_graph.append(all_tables[table_id]['average_ndcg_graph'])
y_num_tasks_with_text_top_five_accurate
import statistics
def compute_list_stats(l):
return min(l), max(l), statistics.median(l), statistics.mean(l), statistics.stdev(l)
print('% tasks_with_gt_in_candidate : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_num_tasks_with_gt_in_candidate)))
print('% tasks_with_singleton_candidate : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_num_tasks_with_singleton_candidate)))
print('% tasks_with_singleton_candidate_with_gt : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_num_tasks_with_singleton_candidate_with_gt)))
print('% tasks_with_retrieval_top_one_accurate : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_num_tasks_with_retrieval_top_one_accurate)))
print('% tasks_with_retrieval_top_five_accurate : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_num_tasks_with_retrieval_top_five_accurate)))
print('% tasks_with_text_top_one_accurate : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_num_tasks_with_text_top_one_accurate)))
print('% tasks_with_text_top_five_accurate : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_num_tasks_with_text_top_five_accurate)))
print('% tasks_with_graph_top_one_accurate : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_num_tasks_with_graph_top_one_accurate)))
print('% tasks_with_graph_top_five_accurate : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_num_tasks_with_graph_top_five_accurate)))
print('average_ndcg_retrieval : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_average_ndcg_retrieval)))
print('average_ndcg_text : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_average_ndcg_text)))
print('average_ndcg_graph : \n min is {},\n max is {},\n median is {},\n mean is {},\n std is {}'.format(*compute_list_stats(y_average_ndcg_graph)))
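The `compute_list_stats` helper above can be sanity-checked on a list with known statistics. This is an illustrative sketch, independent of the evaluation data:

```python
import statistics

def compute_list_stats(l):
    # Same return shape as the helper above: (min, max, median, mean, stdev)
    return min(l), max(l), statistics.median(l), statistics.mean(l), statistics.stdev(l)

lo, hi, med, avg, sd = compute_list_stats([2, 4, 4, 4, 5, 5, 7, 9])
print(lo, hi, med, avg)  # 2 9 4.5 5
```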
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 10))
ax.set_ylabel('average ndcg')
ax.set_xlabel('table content')
ax.plot(x_eval_fid, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')
ax.plot(x_eval_fid, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')
ax.plot(x_eval_fid, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show()
fig, ax = plt.subplots(figsize=(10, 10))
ax.set_ylabel('percent')
ax.set_xlabel('table content')
ax.plot(x_eval_fid, y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')
ax.plot(x_eval_fid, y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')
ax.plot(x_eval_fid, y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')
ax.plot(x_eval_fid, y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')
ax.plot(x_eval_fid, y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')
ax.plot(x_eval_fid, y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show()
# fig, ax = plt.subplots(figsize=(10, 10))
# ax.set_ylabel('percent')
# ax.set_xlabel('table_id idx')
# ax.plot(x_eval_fid, y_num_tasks_with_retrieval_top_five_accurate, 'rx', label='ranked by retrieval score top-5 accuracy')
# ax.plot(x_eval_fid, y_num_tasks_with_text_top_five_accurate, 'bx', label='ranked by text embedding score top-5 accuracy')
# ax.plot(x_eval_fid, y_num_tasks_with_graph_top_five_accurate, 'gx', label='ranked by graph embedding score top-5 accuracy')
# ax.legend(bbox_to_anchor=(1,1), loc="upper left")
# fig.show()
fig, ax = plt.subplots(figsize=(10, 10))
ax.set_ylabel('percent')
ax.set_xlabel('table content')
ax.plot(x_eval_fid, y_num_tasks_with_singleton_candidate, 'rx', label='percent of tasks with singleton candidate set')
ax.plot(x_eval_fid, y_num_tasks_with_singleton_candidate_with_gt, 'bx', label='percent of tasks with ground truth in singleton candidate set')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show() | UserWarning: Matplotlib is currently using a non-GUI backend, so cannot show the figure.
| MIT | exact-match-centroid-pipeline/embedding_eval.ipynb | nicklein/table-linker-pipelines |
02/17 More plots | candidate_eval_data[candidate_eval_data['count'] == 1]
[all_tables[tid]['num_tasks_with_singleton_candidate'] for tid in all_tables]
# x_axis percetage of singleton
x_pos = [all_tables[tid]['num_tasks_with_singleton_candidate'] for tid in all_tables]
x_posgt = [all_tables[tid]['num_tasks_with_singleton_candidate_with_gt'] for tid in all_tables]
len(x_pos), len(x_posgt)
fig, ax = plt.subplots()
ax.set_ylabel('average ndcg')
# ax.set_xlabel('percentage of singleton candidate set')
ax.set_xlabel('number of singleton candidate set')
ax.plot(x_pos, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')
ax.plot(x_pos, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')
ax.plot(x_pos, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show()
fig, ax = plt.subplots()
ax.set_ylabel('percent')
# ax.set_xlabel('percentage of singleton candidate set')
ax.set_xlabel('number of singleton candidate set')
ax.plot(x_pos, y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')
ax.plot(x_pos, y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')
ax.plot(x_pos, y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')
ax.plot(x_pos, y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')
ax.plot(x_pos, y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')
ax.plot(x_pos, y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show()
fig, ax = plt.subplots()
ax.set_ylabel('percent')
# ax.set_xlabel('percentage of singleton candidate set with ground truth')
ax.set_xlabel('number of singleton candidate set with ground truth')
ax.plot(x_posgt, y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')
ax.plot(x_posgt, y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')
ax.plot(x_posgt, y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')
ax.plot(x_posgt, y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')
ax.plot(x_posgt, y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')
ax.plot(x_posgt, y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show()
fig, ax = plt.subplots()
ax.set_ylabel('average ndcg')
ax.set_xlabel('number of singleton candidate set with ground truth')
# ax.set_xlabel('percentage of singleton candidate set with ground truth')
ax.plot(x_posgt, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')
ax.plot(x_posgt, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')
ax.plot(x_posgt, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')
ax.legend(bbox_to_anchor=(1,1), loc="upper left")
fig.show() | UserWarning: Matplotlib is currently using a non-GUI backend, so cannot show the figure.
| MIT | exact-match-centroid-pipeline/embedding_eval.ipynb | nicklein/table-linker-pipelines |
02/19 More experiments: wrong singleton | import pandas as pd
candidate_eval_data = pd.read_csv('./candidate_eval_no_empty.csv', index_col=False)
candidate_eval_data
# Substitute all singleton candidate sets as correct, to see how "good" the algorithm can be
subbed_candidate_eval_data = candidate_eval_data.copy()
for i, row in subbed_candidate_eval_data.iterrows():
if row['count'] == 1:
subbed_candidate_eval_data.loc[i, 'retrieval_top_one_accurate'] = 1
subbed_candidate_eval_data.loc[i, 'retrieval_top_five_accurate'] = 1
subbed_candidate_eval_data.loc[i, 'text_top_one_accurate'] = 1
subbed_candidate_eval_data.loc[i, 'text_top_five_accurate'] = 1
subbed_candidate_eval_data.loc[i, 'graph_top_one_accurate'] = 1
subbed_candidate_eval_data.loc[i, 'graph_top_five_accurate'] = 1
subbed_candidate_eval_data.loc[i, 'has_gt'] = 1
subbed_candidate_eval_data.loc[i, 'has_gt_in_candidate'] = 1
subbed_candidate_eval_data.loc[i, 'r_ndcg'] = 1
subbed_candidate_eval_data.loc[i, 't_ndcg'] = 1
subbed_candidate_eval_data.loc[i, 'g_ndcg'] = 1
subbed_candidate_eval_data
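The substitution above treats every singleton candidate set as if it were linked correctly, so the resulting metrics are an optimistic upper bound. A minimal pure-Python sketch of the same idea, on toy records with hypothetical field names mirroring the real frame:

```python
# Toy task records: 'count' is the candidate-set size, flags are 0/1.
tasks = [
    {'count': 1, 'retrieval_top_one_accurate': 0, 'has_gt_in_candidate': 0},
    {'count': 3, 'retrieval_top_one_accurate': 1, 'has_gt_in_candidate': 1},
    {'count': 1, 'retrieval_top_one_accurate': 1, 'has_gt_in_candidate': 1},
]

def substitute_singletons(tasks):
    """Optimistically mark every singleton candidate set as resolved correctly."""
    subbed = [dict(t) for t in tasks]          # copy, leave the originals intact
    for t in subbed:
        if t['count'] == 1:
            t['retrieval_top_one_accurate'] = 1
            t['has_gt_in_candidate'] = 1
    return subbed

subbed = substitute_singletons(tasks)
acc_before = sum(t['retrieval_top_one_accurate'] for t in tasks) / len(tasks)
acc_after = sum(t['retrieval_top_one_accurate'] for t in subbed) / len(subbed)
print(acc_before, acc_after)
```

The gap between the two accuracies is exactly the contribution of wrong singletons.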
dropped_candidate_eval_data = candidate_eval_data.copy()[(candidate_eval_data['count'] != 1) | (candidate_eval_data['has_gt_in_candidate'] == 1)]
dropped_candidate_eval_data
candidate_eval_data[candidate_eval_data['count'] == 1]
subbed_candidate_eval_data[subbed_candidate_eval_data['count'] == 1]
dropped_candidate_eval_data[dropped_candidate_eval_data['count'] == 1]
# compute the same metrics
import os
eval_file_names = []
eval_file_ids = []
for (dirpath, dirnames, filenames) in os.walk('/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/'):
for fn in filenames:
if "csv" not in fn:
continue
        abs_fn = os.path.join(dirpath, fn)
assert os.path.isfile(abs_fn)
if os.path.getsize(abs_fn) == 0:
continue
eval_file_names.append(abs_fn)
eval_file_ids.append(fn.split('.csv')[0])
len(eval_file_names), len(eval_file_ids)
subbed_all_tables = {}
for tid in eval_file_ids:
f_candidate_eval_data = subbed_candidate_eval_data[subbed_candidate_eval_data['table_id'] == tid]
subbed_all_tables[tid] = compute_eval_file_stats(f_candidate_eval_data)
subbed_all_tables
dropped_all_tables = {}
for tid in eval_file_ids:
f_candidate_eval_data = dropped_candidate_eval_data[dropped_candidate_eval_data['table_id'] == tid]
dropped_all_tables[tid] = compute_eval_file_stats(f_candidate_eval_data)
dropped_all_tables
# visualize the nine dev eval file stats (empty eval files were skipped above)
# Same process as before
x_eval_fid = [
'movies',
'players I',
'video games',
'magazines',
'companies',
'country I',
'players II',
'pope',
'country II'
]
x_eval_fidx = range(len(x_eval_fid))
r_y_num_tasks_with_gt_in_candidate = []
r_y_num_tasks_with_singleton_candidate = []
r_y_num_tasks_with_singleton_candidate_with_gt = []
r_y_num_tasks_with_retrieval_top_one_accurate = []
r_y_num_tasks_with_retrieval_top_five_accurate = []
r_y_num_tasks_with_text_top_one_accurate = []
r_y_num_tasks_with_text_top_five_accurate = []
r_y_num_tasks_with_graph_top_one_accurate = []
r_y_num_tasks_with_graph_top_five_accurate = []
r_y_average_ndcg_retrieval = []
r_y_average_ndcg_text = []
r_y_average_ndcg_graph = []
for idx in range(len(x_eval_fid)):
table_id = eval_file_ids[idx]
r_y_num_tasks_with_gt_in_candidate.append(subbed_all_tables[table_id]['num_tasks_with_gt_in_candidate'])
r_y_num_tasks_with_singleton_candidate.append(subbed_all_tables[table_id]['num_tasks_with_singleton_candidate'])
r_y_num_tasks_with_singleton_candidate_with_gt.append(subbed_all_tables[table_id]['num_tasks_with_singleton_candidate_with_gt'])
r_y_num_tasks_with_retrieval_top_one_accurate.append(subbed_all_tables[table_id]['num_tasks_with_retrieval_top_one_accurate'])
r_y_num_tasks_with_retrieval_top_five_accurate.append(subbed_all_tables[table_id]['num_tasks_with_retrieval_top_five_accurate'])
r_y_num_tasks_with_text_top_one_accurate.append(subbed_all_tables[table_id]['num_tasks_with_text_top_one_accurate'])
r_y_num_tasks_with_text_top_five_accurate.append(subbed_all_tables[table_id]['num_tasks_with_text_top_five_accurate'])
r_y_num_tasks_with_graph_top_one_accurate.append(subbed_all_tables[table_id]['num_tasks_with_graph_top_one_accurate'])
r_y_num_tasks_with_graph_top_five_accurate.append(subbed_all_tables[table_id]['num_tasks_with_graph_top_five_accurate'])
r_y_average_ndcg_retrieval.append(subbed_all_tables[table_id]['average_ndcg_retrieval'])
r_y_average_ndcg_text.append(subbed_all_tables[table_id]['average_ndcg_text'])
r_y_average_ndcg_graph.append(subbed_all_tables[table_id]['average_ndcg_graph'])
r_y_average_ndcg_retrieval, y_average_ndcg_retrieval
x_eval_fid = [
'movies',
'players I',
'video games',
'magazines',
'companies',
'country I',
'players II',
'pope',
'country II'
]
x_eval_fidx = range(len(x_eval_fid))
d_y_num_tasks_with_gt_in_candidate = []
d_y_num_tasks_with_singleton_candidate = []
d_y_num_tasks_with_singleton_candidate_with_gt = []
d_y_num_tasks_with_retrieval_top_one_accurate = []
d_y_num_tasks_with_retrieval_top_five_accurate = []
d_y_num_tasks_with_text_top_one_accurate = []
d_y_num_tasks_with_text_top_five_accurate = []
d_y_num_tasks_with_graph_top_one_accurate = []
d_y_num_tasks_with_graph_top_five_accurate = []
d_y_average_ndcg_retrieval = []
d_y_average_ndcg_text = []
d_y_average_ndcg_graph = []
for idx in range(len(x_eval_fid)):
table_id = eval_file_ids[idx]
d_y_num_tasks_with_gt_in_candidate.append(dropped_all_tables[table_id]['num_tasks_with_gt_in_candidate'])
d_y_num_tasks_with_singleton_candidate.append(dropped_all_tables[table_id]['num_tasks_with_singleton_candidate'])
d_y_num_tasks_with_singleton_candidate_with_gt.append(dropped_all_tables[table_id]['num_tasks_with_singleton_candidate_with_gt'])
d_y_num_tasks_with_retrieval_top_one_accurate.append(dropped_all_tables[table_id]['num_tasks_with_retrieval_top_one_accurate'])
d_y_num_tasks_with_retrieval_top_five_accurate.append(dropped_all_tables[table_id]['num_tasks_with_retrieval_top_five_accurate'])
d_y_num_tasks_with_text_top_one_accurate.append(dropped_all_tables[table_id]['num_tasks_with_text_top_one_accurate'])
d_y_num_tasks_with_text_top_five_accurate.append(dropped_all_tables[table_id]['num_tasks_with_text_top_five_accurate'])
d_y_num_tasks_with_graph_top_one_accurate.append(dropped_all_tables[table_id]['num_tasks_with_graph_top_one_accurate'])
d_y_num_tasks_with_graph_top_five_accurate.append(dropped_all_tables[table_id]['num_tasks_with_graph_top_five_accurate'])
d_y_average_ndcg_retrieval.append(dropped_all_tables[table_id]['average_ndcg_retrieval'])
d_y_average_ndcg_text.append(dropped_all_tables[table_id]['average_ndcg_text'])
d_y_average_ndcg_graph.append(dropped_all_tables[table_id]['average_ndcg_graph'])
d_y_average_ndcg_retrieval, y_average_ndcg_retrieval
# import matplotlib.pyplot as plt
# fig, ax = plt.subplots(figsize=(10, 10))
# ax.set_ylabel('average ndcg')
# ax.set_xlabel('table content')
# ax.plot(x_eval_fid, r_y_average_ndcg_retrieval, 'ro', label='R: average ndcg score ranked by retrieval score')
# ax.plot(x_eval_fid, r_y_average_ndcg_text, 'bo', label='R: average ndcg score ranked by text embedding score')
# ax.plot(x_eval_fid, r_y_average_ndcg_graph, 'go', label='R: average ndcg score ranked by graph embedding score')
# ax.plot(x_eval_fid, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')
# ax.plot(x_eval_fid, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')
# ax.plot(x_eval_fid, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')
# ax.legend(bbox_to_anchor=(1,1), loc="upper left")
# fig.show()
# fig, ax = plt.subplots(figsize=(10, 10))
# ax.set_ylabel('R: percent')
# ax.set_xlabel('table content')
# ax.plot(x_eval_fid, r_y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')
# ax.plot(x_eval_fid, r_y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')
# ax.plot(x_eval_fid, r_y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')
# ax.plot(x_eval_fid, r_y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')
# ax.plot(x_eval_fid, r_y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')
# ax.plot(x_eval_fid, r_y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')
# ax.legend(bbox_to_anchor=(1,1), loc="upper left")
# fig.show()
# p_min, p_max, p_median, p_mean, p_std = compute_list_stats(y_num_tasks_with_text_top_five_accurate)
# r_min, r_max, r_median, r_mean, r_std = compute_list_stats(r_y_num_tasks_with_text_top_five_accurate)
# r_min - p_min, r_max - p_max, r_median - p_median, r_mean - p_mean, r_std - p_std
# Plot dropped wrong singleton
# import matplotlib.pyplot as plt
# fig, ax = plt.subplots(figsize=(10, 10))
# ax.set_ylabel('average ndcg')
# ax.set_xlabel('table content')
# ax.plot(x_eval_fid, d_y_average_ndcg_retrieval, 'ro', label='D: average ndcg score ranked by retrieval score')
# ax.plot(x_eval_fid, d_y_average_ndcg_text, 'bo', label='D: average ndcg score ranked by text embedding score')
# ax.plot(x_eval_fid, d_y_average_ndcg_graph, 'go', label='D: average ndcg score ranked by graph embedding score')
# ax.plot(x_eval_fid, r_y_average_ndcg_retrieval, 'r+', label='R: average ndcg score ranked by retrieval score')
# ax.plot(x_eval_fid, r_y_average_ndcg_text, 'b+', label='R: average ndcg score ranked by text embedding score')
# ax.plot(x_eval_fid, r_y_average_ndcg_graph, 'g+', label='R: average ndcg score ranked by graph embedding score')
# ax.plot(x_eval_fid, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')
# ax.plot(x_eval_fid, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')
# ax.plot(x_eval_fid, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')
# ax.legend(bbox_to_anchor=(1,1), loc="upper left")
# fig.show()
# fig, ax = plt.subplots(figsize=(10, 10))
# ax.set_ylabel('D: percent')
# ax.set_xlabel('table content')
# ax.plot(x_eval_fid, d_y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')
# ax.plot(x_eval_fid, d_y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')
# ax.plot(x_eval_fid, d_y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')
# ax.plot(x_eval_fid, d_y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')
# ax.plot(x_eval_fid, d_y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')
# ax.plot(x_eval_fid, d_y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')
# ax.legend(bbox_to_anchor=(1,1), loc="upper left")
# fig.show()
dropped_all_tables
# construct difference table
diff_ndcg_df = pd.DataFrame(columns=['table_content', 'r_ndcg', 'R: r_ndcg', 'D: r_ndcg', 't_ndcg', 'R: t_ndcg', 'D: t_ndcg', 'g_ndcg', 'R: g_ndcg', 'D: g_ndcg'])
for idx in range(len(x_eval_fid)):
table_id = eval_file_ids[idx]
diff_ndcg_df.loc[table_id] = [
x_eval_fid[idx],
y_average_ndcg_retrieval[idx],
r_y_average_ndcg_retrieval[idx],
d_y_average_ndcg_retrieval[idx],
y_average_ndcg_text[idx],
r_y_average_ndcg_text[idx],
d_y_average_ndcg_text[idx],
y_average_ndcg_graph[idx],
r_y_average_ndcg_graph[idx],
d_y_average_ndcg_graph[idx]
]
diff_ndcg_df
diff_accuracy_df = pd.DataFrame(columns=[
'table_content', 'top1-retr', 'R: top1-retr', 'D: top1-retr',
'top1-text', 'R: top1-text', 'D: top1-text',
'top1-graph', 'R: top1-graph', 'D: top1-graph'
])
for idx in range(len(x_eval_fid)):
table_id = eval_file_ids[idx]
diff_accuracy_df.loc[table_id] = [
x_eval_fid[idx],
y_num_tasks_with_retrieval_top_one_accurate[idx],
r_y_num_tasks_with_retrieval_top_one_accurate[idx],
d_y_num_tasks_with_retrieval_top_one_accurate[idx],
y_num_tasks_with_text_top_one_accurate[idx],
r_y_num_tasks_with_text_top_one_accurate[idx],
d_y_num_tasks_with_text_top_one_accurate[idx],
y_num_tasks_with_graph_top_one_accurate[idx],
r_y_num_tasks_with_graph_top_one_accurate[idx],
d_y_num_tasks_with_graph_top_one_accurate[idx]
]
diff_accuracy_df
diff_accuracy_f_df = pd.DataFrame(columns=[
'table_content',
'top5-retr', 'R: top5-retr', 'D: top5-retr',
'top5-text', 'R: top5-text', 'D: top5-text',
'top5-graph', 'R: top5-graph', 'D: top5-graph'
])
for idx in range(len(x_eval_fid)):
table_id = eval_file_ids[idx]
diff_accuracy_f_df.loc[table_id] = [
x_eval_fid[idx],
y_num_tasks_with_retrieval_top_five_accurate[idx],
r_y_num_tasks_with_retrieval_top_five_accurate[idx],
d_y_num_tasks_with_retrieval_top_five_accurate[idx],
y_num_tasks_with_text_top_five_accurate[idx],
r_y_num_tasks_with_text_top_five_accurate[idx],
d_y_num_tasks_with_text_top_five_accurate[idx],
y_num_tasks_with_graph_top_five_accurate[idx],
r_y_num_tasks_with_graph_top_five_accurate[idx],
d_y_num_tasks_with_graph_top_five_accurate[idx]
]
diff_accuracy_f_df
# distribution of wrong singleton
wrong_singleton_df = candidate_eval_data[(candidate_eval_data['count'] == 1) & (candidate_eval_data['has_gt_in_candidate'] != 1)]
wrong_singleton_df
# get candidate from eval file + get label from ground truth file
wrong_files = list(pd.unique(wrong_singleton_df['table_id']))
wrong_tasks_df = pd.DataFrame(columns=['table_id', 'row', 'column', 'GT_kg_label', 'GT_kg_id', 'candidates'])
for fid in wrong_files:
f_data = pd.read_csv(f'/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/{fid}.csv')
f_wrong_tasks = wrong_singleton_df[wrong_singleton_df['table_id'] == fid]
for i, row in f_wrong_tasks.iterrows():
candidates_df = f_data[(f_data['row'] == row['row']) & (f_data['column'] == row['column'])]
candidates_df = candidates_df.fillna("")
# print(row)
# display(candidates_df)
assert row['count'] == len(candidates_df)
c_list = list(pd.unique(candidates_df['kg_id']))
GT_kg_id = candidates_df['GT_kg_id'].iloc[0]
GT_kg_label = candidates_df['GT_kg_label'].iloc[0]
# print(row['row'], row['column'], GT_kg_label, GT_kg_id)
# print(c_list)
wrong_tasks_df = wrong_tasks_df.append({
'table_id': fid,
'row': row['row'],
'column': row['column'],
'GT_kg_label': GT_kg_label,
'GT_kg_id': GT_kg_id,
'candidates': " ".join(c_list)
}, ignore_index=True)
wrong_tasks_df
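As an aside, calling `DataFrame.append` inside a loop (as above) copies the frame on every iteration and is deprecated in recent pandas; the usual pattern is to collect plain dicts and construct the frame once. A sketch of that pattern with hypothetical toy values standing in for the gathered fields:

```python
# Hypothetical per-task tuples standing in for the values gathered above.
found = [('t1', 0, 1, 'Q42', 'Q1 Q2'), ('t2', 3, 0, 'Q7', '')]

rows = []
for fid, row_idx, col_idx, gt_id, cands in found:
    rows.append({
        'table_id': fid,
        'row': row_idx,
        'column': col_idx,
        'GT_kg_id': gt_id,
        'candidates': cands,
    })
# wrong_tasks_df = pd.DataFrame(rows)   # one construction instead of N appends
print(len(rows))
```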
pd.unique(wrong_tasks_df['candidates'])
wrong_tasks_df[wrong_tasks_df['candidates'] > '']
data[242:245] | _____no_output_____ | MIT | exact-match-centroid-pipeline/embedding_eval.ipynb | nicklein/table-linker-pipelines |
Homework 5: Problems
Due Wednesday 28 October, before class
PHYS 440/540, Fall 2020
https://github.com/gtrichards/PHYS_440_540/

Problems 1&2

Complete Chapters 1 and 2 in the *unsupervised learning* course in Data Camp. The last video (and the two following code examples) in Chapter 2 are off topic, but we'll discuss those next week, so this will be a good intro. The rest is highly relevant to this week's material. These are worth 1000 and 900 points, respectively. I'll be grading on the number of points earned instead of completion (as I have been), so try to avoid using the hints unless you really need them.

Problem 3

Fill in the blanks below. This exercise will take you through an example of everything that we did this week. Please copy the relevant import statements (below) to the cells where they are used (so that they can be run out of order). If a question is calling for a word-based answer, I'm not looking for more than ~1 sentence.

--- | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.metrics.cluster import homogeneity_score
from sklearn.datasets import make_blobs
from sklearn.neighbors import KernelDensity
from astroML.density_estimation import KNeighborsDensity
from sklearn.model_selection import GridSearchCV
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans
from sklearn.cluster import DBSCAN | _____no_output_____ | MIT | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 |
Set up the data set. We will do both density estimation and clustering on it. | from sklearn.datasets import make_blobs
#Make five blobs with 2 features and 1000 samples
N=1000
X,y = make_blobs(n_samples=N, centers=5, n_features=2, random_state=25)
plt.figure(figsize=(10,10))
plt.scatter(X[:, 0], X[:, 1], s=100, c=y) | _____no_output_____ | MIT | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 |
Start with kernel density estimation, including a grid search to find the best bandwidth | bwrange = np.linspace(____,____,____) # Test 30 bandwidths from 0.1 to 1.0 ####
K = ____ # 5-fold cross validation ####
grid = GridSearchCV(KernelDensity(), {'bandwidth': ____}, cv=K) ####
grid.fit(X) #Fit the histogram data that we started the lecture with.
h_opt = ____.best_params_['bandwidth'] ####
print(h_opt)
kde = KernelDensity(kernel='gaussian', bandwidth=h_opt)
kde.fit(X) #fit the model to the data
u = v = np.linspace(-15,15,100)
Xgrid = np.vstack(list(map(np.ravel, np.meshgrid(u, v)))).T
dens = np.exp(kde.score_samples(Xgrid)) #evaluate the model on the grid
plt.scatter(____[:,0],____[:,1], c=dens, cmap="Purples", edgecolor="None") ####
plt.colorbar() | _____no_output_____ | MIT | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 |
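For intuition, the same bandwidth selection can be written out by hand: a 1-D Gaussian KDE scored by its leave-one-out log-likelihood, with the best bandwidth taken from a grid. This is a numpy-only illustrative sketch on synthetic data, not the answer to the blanks above:

```python
import numpy as np

rng = np.random.default_rng(0)
x_toy = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])

def loo_log_likelihood(x, h):
    """Leave-one-out log-likelihood of a 1-D Gaussian KDE with bandwidth h."""
    n = len(x)
    d = x[:, None] - x[None, :]                       # pairwise differences
    K = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)                          # drop each point's own kernel
    dens = K.sum(axis=1) / (n - 1)
    return np.log(dens).sum()

bandwidths = np.linspace(0.1, 2.0, 20)
scores = [loo_log_likelihood(x_toy, h) for h in bandwidths]
h_opt = bandwidths[int(np.argmax(scores))]
print(h_opt)
```

Too small a bandwidth overfits individual points, too large a one smears the two modes together; the leave-one-out score penalizes both, which is the same trade-off `GridSearchCV` resolves above.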
--- Now try a nearest neighbors approach to estimating the density. What value of $k$ do you need to make the plot look similar to the one above? | # Compute density with Bayesian nearest neighbors
k=____ ####
nbrs = KNeighborsDensity('bayesian',n_neighbors=____) ####
nbrs.____(X) ####
dens_nbrs = nbrs.eval(Xgrid) / N
plt.scatter(Xgrid[:,0],Xgrid[:,1], c=dens_nbrs, cmap="Purples", edgecolor="None")
plt.colorbar() | _____no_output_____ | MIT | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 |
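The k in the nearest-neighbors estimator plays the role the bandwidth plays in KDE. A numpy-only 1-D sketch of the underlying formula (density ≈ k / (N · 2·d_k), with d_k the distance to the k-th nearest sample); this is independent of the astroML call above:

```python
import numpy as np

rng = np.random.default_rng(1)
x_toy = rng.normal(0, 1, 500)        # dense near 0, sparse in the tails

def knn_density_1d(sample, grid, k):
    """1-D kNN density estimate: k / (N * 2 * d_k), d_k = k-th NN distance."""
    d = np.abs(grid[:, None] - sample[None, :])
    d_k = np.sort(d, axis=1)[:, k - 1]               # distance to the k-th neighbor
    return k / (len(sample) * 2.0 * d_k)

dens = knn_density_1d(x_toy, np.array([0.0, 3.0]), k=20)
print(dens)
```

Larger k averages over a wider neighborhood, smoothing the estimate just as a larger bandwidth does in KDE.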
--- Now do a Gaussian mixture model. Do a grid search for between 1 and 10 components. | #Kludge to fix the bug with draw_ellipse in astroML v1.0
from matplotlib.patches import Ellipse
def draw_ellipse(mu, C, scales=[1, 2, 3], ax=None, **kwargs):
if ax is None:
ax = plt.gca()
# find principal components and rotation angle of ellipse
sigma_x2 = C[0, 0]
sigma_y2 = C[1, 1]
sigma_xy = C[0, 1]
alpha = 0.5 * np.arctan2(2 * sigma_xy,
(sigma_x2 - sigma_y2))
tmp1 = 0.5 * (sigma_x2 + sigma_y2)
tmp2 = np.sqrt(0.25 * (sigma_x2 - sigma_y2) ** 2 + sigma_xy ** 2)
sigma1 = np.sqrt(tmp1 + tmp2)
sigma2 = np.sqrt(tmp1 - tmp2)
for scale in scales:
        ax.add_patch(Ellipse((mu[0], mu[1]),
                             2 * scale * sigma1, 2 * scale * sigma2,
                             angle=alpha * 180. / np.pi,
                             **kwargs))
ncomps = np.arange(____,____,____) # Test 10 component counts from 1 to 10 ####
K = 5 # 5-fold cross validation
grid = ____(GaussianMixture(), {'n_components': ncomps}, cv=____) ####
grid.fit(X) #Fit the histogram data that we started the lecture with.
ncomp_opt = grid.____['n_components'] ####
print(ncomp_opt)
gmm = ____(n_components=ncomp_opt) ####
gmm.fit(X)
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
ax.scatter(X[:,0],X[:,1])
ax.scatter(gmm.means_[:,0], gmm.means_[:,1], marker='s', c='red', s=80)
for mu, C, w in zip(gmm.means_, gmm.covariances_, gmm.weights_):
draw_ellipse(mu, 1*C, scales=[2], ax=ax, fc='none', ec='k') #2 sigma ellipses for each component | _____no_output_____ | MIT | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 |
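Component-count selection can also be done with an information criterion instead of cross-validation. The sketch below, a plain 1-D EM fit plus BIC on synthetic data (all details are my own simplifications, independent of the cell above), illustrates why a 2-component mixture tends to win on clearly bimodal data:

```python
import numpy as np

rng = np.random.default_rng(2)
x_toy = np.concatenate([rng.normal(-5, 1, 300), rng.normal(5, 1, 300)])

def em_gmm_1d(x, k, n_iter=200):
    """Plain EM for a 1-D Gaussian mixture with k components."""
    n = len(x)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))    # spread-out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / (p.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0) + 1e-12
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6  # floor the variance
    loglik = np.log(p.sum(axis=1) + 1e-300).sum()
    return mu, var, w, loglik

def bic(loglik, n_params, n):
    return n_params * np.log(n) - 2.0 * loglik

scores = {}
for k in (1, 2, 3):
    _, _, _, ll = em_gmm_1d(x_toy, k)
    scores[k] = bic(ll, n_params=3 * k - 1, n=len(x_toy))  # k-1 weights, k means, k variances
best_k = min(scores, key=scores.get)
print(best_k, scores)
```

BIC trades likelihood against parameter count, so extra components must earn their penalty; this is why repeated runs of the grid search above can still disagree near the boundary.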
Do you get the same answer (the same number of components) each time you run it? --- Now try Kmeans. Here we will scale the data. | kmeans = KMeans(n_clusters=5)
scaler = StandardScaler()
X_scaled = ____.____(X) ####
kmeans.fit(X_scaled)
centers=kmeans.____ #location of the clusters ####
labels=kmeans.predict(____) #labels for each of the points ####
centers_unscaled = scaler.____(centers) ####
fig,ax = plt.subplots(1,2,figsize=(16, 8))
ax[0].scatter(X[:,0],X[:,1],c=labels)
ax[0].scatter(centers_unscaled[:,0], centers_unscaled[:,1], marker='s', c='red', s=80)
ax[0].set_title("Predictions")
ax[1].scatter(X[:, 0], X[:, 1], c=y)
ax[1].set_title("Truth") | _____no_output_____ | MIT | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 |
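Under the hood, `KMeans` is just Lloyd's algorithm: assign each point to its nearest center, recompute the centers as cluster means, repeat. A numpy-only sketch including the standardize / cluster / inverse-transform round trip used above (the farthest-first initialization here is a deterministic simplification of k-means++):

```python
import numpy as np

rng = np.random.default_rng(3)
X_toy = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in (-4.0, 0.0, 4.0)])

def farthest_first_init(X, k, seed=0):
    """Deterministic spread-out initialization (greedy farthest point)."""
    r = np.random.default_rng(seed)
    centers = [X[r.integers(len(X))]]
    for _ in range(k - 1):
        d = ((X[:, None, :] - np.asarray(centers)[None]) ** 2).sum(axis=2).min(axis=1)
        centers.append(X[d.argmax()])
    return np.array(centers, dtype=float)

def kmeans_lloyd(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: assign to the nearest center, recompute means."""
    centers = farthest_first_init(X, k, seed)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels

# Standardize, cluster, then map the centers back to the original units,
# mirroring the scaler / inverse_transform step above.
m, s = X_toy.mean(axis=0), X_toy.std(axis=0)
centers_scaled, labels = kmeans_lloyd((X_toy - m) / s, k=3)
centers_unscaled = centers_scaled * s + m
print(np.sort(centers_unscaled[:, 0]))
```

Scaling matters because Lloyd's distance computation weights every feature equally; without it, a feature with a large numeric range dominates the assignments.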
Let's evaluate how well we did in two other ways: a matrix and a score. | df = pd.DataFrame({'predictions': labels, 'truth': y})
ct = pd.crosstab(df['predictions'], df['truth'])
print(ct)
from sklearn.metrics.cluster import homogeneity_score
score = homogeneity_score(df['truth'], df['predictions'])
print(score) | _____no_output_____ | MIT | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 |
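Homogeneity can be computed from first principles as 1 − H(C|K)/H(C): the fraction of class entropy removed once the cluster assignments are known. A small numpy sketch of that formula (the definition scikit-learn implements):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def conditional_entropy(classes, clusters):
    """H(C|K): class entropy inside each cluster, weighted by cluster size."""
    classes, clusters = np.asarray(classes), np.asarray(clusters)
    h = 0.0
    for k in np.unique(clusters):
        members = classes[clusters == k]
        h += len(members) / len(classes) * entropy(members)
    return h

def homogeneity(classes, clusters):
    h_c = entropy(classes)
    return 1.0 if h_c == 0 else 1.0 - conditional_entropy(classes, clusters) / h_c

truth = [0, 0, 1, 1]
print(homogeneity(truth, [0, 0, 1, 1]), homogeneity(truth, [0, 0, 0, 0]))
```

Note that over-splitting is not punished: a clustering of all singletons is perfectly homogeneous, which is why homogeneity is usually paired with completeness.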
What is the score for 3 clusters? --- Finally, let's use DBSCAN. Note that outliers are flagged as `labels_=-1`, so there is one more class than you might think. Full credit if you can get a score of 0.6 or above. Extra credit (0.1 of 5 points) for a score of 0.85 or above. | def plot_dbscan(dbscan, X, size, show_xlabels=True, show_ylabels=True):
core_mask = np.zeros_like(dbscan.labels_, dtype=bool)
core_mask[dbscan.core_sample_indices_] = True
anomalies_mask = dbscan.labels_ == -1
non_core_mask = ~(core_mask | anomalies_mask)
cores = dbscan.components_
anomalies = X[anomalies_mask]
non_cores = X[non_core_mask]
plt.scatter(cores[:, 0], cores[:, 1],
c=dbscan.labels_[core_mask], marker='o', s=size, cmap="Paired")
plt.scatter(cores[:, 0], cores[:, 1], marker='*', s=20, c=dbscan.labels_[core_mask])
plt.scatter(anomalies[:, 0], anomalies[:, 1],
c="r", marker="x", s=100)
plt.scatter(non_cores[:, 0], non_cores[:, 1], c=dbscan.labels_[non_core_mask], marker=".")
if show_xlabels:
plt.xlabel("$x_1$", fontsize=14)
else:
plt.tick_params(labelbottom=False)
if show_ylabels:
plt.ylabel("$x_2$", fontsize=14, rotation=0)
else:
plt.tick_params(labelleft=False)
plt.title("eps={:.2f}, min_samples={}".format(dbscan.eps, dbscan.min_samples), fontsize=14)
dbscan = DBSCAN(eps=0.15, min_samples=7)
dbscan.fit(X_scaled)
plt.figure(figsize=(10, 10))
plot_dbscan(dbscan, X_scaled, size=100)
unique_labels = np.unique(dbscan.labels_)
print(len(unique_labels)) # number of distinct labels; -1 (outliers) counts as one, so clusters found = this - 1
df2 = pd.DataFrame({'predictions': dbscan.labels_, 'truth': y})
ct2 = pd.crosstab(df2['predictions'], df2['truth'])
print(ct2)
from sklearn.metrics.cluster import homogeneity_score
score2 = homogeneity_score(df2['truth'], df2['predictions'])
print(score2) | _____no_output_____ | MIT | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 |
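One common heuristic for choosing `eps` (not required by the assignment, just one option) is the k-distance curve: sort every point's distance to its `min_samples`-th nearest point and look for the knee of the curve. A minimal sketch on stand-in data:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X_demo = rng.normal(size=(200, 2))  # stand-in for the scaled feature matrix

min_samples = 7
nn = NearestNeighbors(n_neighbors=min_samples).fit(X_demo)
dists, _ = nn.kneighbors(X_demo)
# column -1: distance to the farthest of the 7 nearest points (self included)
k_dist = np.sort(dists[:, -1])      # sorted k-distance curve; plot and find knee
print(k_dist[-1] > k_dist[0])
```

In practice you would plot `k_dist` and set `eps` near the elbow, where the curve bends sharply upward.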
Dataset Source: the Boston House Price dataset. Columns:
* CRIM per capita crime rate by town
* ZN proportion of residential land zoned for lots over 25,000 sq.ft.
* INDUS proportion of non-retail business acres per town
* CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
* NOX nitric oxides concentration (parts per 10 million)
* RM average number of rooms per dwelling
* AGE proportion of owner-occupied units built prior to 1940
* DIS weighted distances to five Boston employment centres
* RAD index of accessibility to radial highways
* TAX full-value property-tax rate per $10,000
* PTRATIO pupil-teacher ratio by town
* B 1000(Bk - 0.63)^2, where Bk is the proportion of blacks by town
* LSTAT percentage lower status of the population
* MEDV median value of owner-occupied homes in $1000s

Load Modules | import numpy as np # linear algebra python library
import pandas as pd # data structure for tabular data.
import matplotlib.pyplot as plt # visualization library
%matplotlib inline | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Loading data | filename = "housing.csv"
boston_data = pd.read_csv(filename, sep=r'\s+', header=None)  # whitespace-delimited; delim_whitespace=True is deprecated in recent pandas
header = ["CRIM","ZN","INDUS","CHAS","NOX","RM",
"AGE","DIS","RAD","TAX","PTRATIO","B","LSTAT","MEDV"]
boston_data.columns = header
# display the first 10 rows of dataframe.
boston_data.head(10) | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
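The whitespace-delimited read above can be demonstrated without the file, using a small in-memory stand-in for `housing.csv` (values below are made up): `sep=r'\s+'` splits on any run of whitespace, which is what fixed-width files like this one need.

```python
import io
import pandas as pd

# Small in-memory stand-in for the whitespace-delimited housing.csv
raw = "0.02  18.0  2.3\n0.03   0.0  7.1\n"
df_demo = pd.read_csv(io.StringIO(raw), sep=r'\s+', header=None)
df_demo.columns = ["CRIM", "ZN", "INDUS"]  # first three Boston columns
print(df_demo.shape)  # (2, 3)
```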
Inspecting variable types | boston_data.dtypes | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
In many datasets, integer variables are cast as float. So, after inspecting the data type of a variable, even if you get float as output, go ahead and check the unique values to make sure that the variable is discrete and not continuous. Inspecting all variables. Inspecting distinct values of `RAD` (index of accessibility to radial highways). | boston_data['RAD'].unique()
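The "check the unique values" advice can be turned into a quick automated scan. The sketch below uses a toy frame (column names and the `<= 2` threshold are illustrative assumptions, not the notebook's actual rule):

```python
import pandas as pd

# Toy frame mimicking the issue: 'chas' is discrete but stored as float
df_demo = pd.DataFrame({'crim': [0.03, 0.27, 0.02, 0.08],
                        'chas': [0.0, 1.0, 0.0, 1.0]})

# Flag columns with very few distinct values as likely-discrete
discrete_cols = [c for c in df_demo.columns if df_demo[c].nunique() <= 2]
print(discrete_cols)  # ['chas']
```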
inspecting distinct values of `CHAS` Charles River dummy variable (= 1 if tract bounds river; 0 otherwise). | boston_data['CHAS'].unique() | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
inspecting the first 20 distinct values of all continuous variables as follows:
* CRIM per capita crime rate by town
* ZN proportion of residential land zoned for lots over 25,000 sq.ft.
* INDUS proportion of non-retail business acres per town
* NOX nitric oxides concentration (parts per 10 million)
* RM average number of rooms per dwelling
* AGE proportion of owner-occupied units built prior to 1940
* DIS weighted distances to five Boston employment centres
* TAX full-value property-tax rate per $10,000
* PTRATIO pupil-teacher ratio by town
* B 1000(Bk - 0.63)^2, where Bk is the proportion of blacks by town
* LSTAT percentage lower status of the population
* MEDV median value of owner-occupied homes in $1000s

CRIM per capita crime rate by town. | boston_data['CRIM'].unique()[0:20]
ZN proportion of residential land zoned for lots over 25,000 sq.ft. | boston_data['ZN'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
INDUS proportion of non-retail business acres per town | boston_data['INDUS'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
NOX nitric oxides concentration (parts per 10 million) | boston_data['NOX'].unique()[0:20] | _____no_output_____ | Apache-2.0 | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.