markdown | code | path | repo_name | license
|---|---|---|---|---|
Solution
<a id='sd_numb_of_heads'></a>
Example 2.12: Standard deviation of the number of Heads in five coin flips
The expected value of an RV is its long-run average, while the standard deviation of an RV measures how far, on average, individual values of the RV fall from the expected value. The standard deviation of an RV can be approximated from simulated values with .sd(). Continuing Example 2.1, the following code estimates the standard deviation of the number of Heads in five coin flips. | P = BoxModel([1, 0], size=5)
X = RV(P, sum)
sims = X.sim(10000)
sims.sd() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Inspecting the plot in Example 2.3, we see there are many simulated values of 2 and 3, which are 0.5 units away from the expected value of 2.5. There are relatively fewer values of 0 and 5, which are 2.5 units away from the expected value of 2.5. Roughly, the simulated values are on average 1.1 units away from the expected value.
Variance is the square of the standard deviation and can be approximated with .var(). | sims.var() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
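As a quick sanity check (not part of the tutorial), the number of Heads in five fair flips is Binomial(5, 0.5), so the exact standard deviation is available in closed form and matches the "roughly 1.1" seen in the simulation:

```python
import math

# exact moments of Binomial(n=5, p=0.5): the number of Heads in five fair flips
n, p = 5, 0.5
exact_var = n * p * (1 - p)      # variance = n p (1 - p) = 1.25
exact_sd = math.sqrt(exact_var)  # standard deviation, approximately 1.118
print(exact_sd, exact_var)
```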
<a id='sd_sum_of_dice'></a>
Exercise 2.13: Standard deviation of the sum of two dice rolls
Continuing Exercise 2.2, approximate the standard deviation of the sum of two six-sided dice rolls. (Bonus: interpret the value.) | ### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Solution
<a id='dist_of_normal'></a>
Example 2.14: Continuous random variables
The RVs we have seen so far have been discrete. A discrete random variable can take at most countably many distinct values. For example, the number of Heads in five coin flips can only take values 0, 1, 2, 3, 4, 5.
A continuous random variable can take any value in some interval of real numbers. For example, if X represents the height of a randomly selected U.S. adult male then X is a continuous random variable. Many continuous random variables are assumed to have a Normal distribution. The following simulates values of the RV X assuming it has a Normal distribution with mean 69.1 inches and standard deviation 2.9 inches. | X = RV(Normal(mean=69.1, sd=2.9))
sims = X.sim(10000) | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
The same simulation tools are available for both discrete and continuous RVs. Calling .plot() for a continuous RV produces a histogram which displays frequencies of simulated values falling in interval "bins". | sims.plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
The number of bins can be set using the bins= option in .plot() | X.sim(10000).plot(bins=60) | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
It is not recommended to use .tabulate() with continuous RVs as almost all simulated values will only occur once.
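For continuous RVs, a histogram or summary statistics replace the table. A rough sketch of the interval binning a histogram performs, using plain NumPy (an assumption for illustration; Symbulate's .plot() internals may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
sims = rng.normal(69.1, 2.9, size=10000)     # stand-in for X.sim(10000)
counts, edges = np.histogram(sims, bins=30)  # bin the simulated values into 30 intervals
# counts[i] is the number of simulated values falling in [edges[i], edges[i+1])
print(counts.sum())
```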
<a id='sim_unif'></a>
Exercise 2.15: Simulating from a (continuous) uniform distribution
The continuous analog of a BoxModel is a Uniform distribution which produces "equally likely" values in an interval with endpoints a and b. (What would you expect the plot of such a distribution to look like?)
Let X be a random variable which has a Uniform distribution on the interval [0, 1]. Define an appropriate RV and use simulation to display its approximate distribution. (Note that the underlying probability space is unspecified.) | ### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Solution
<a id='sqrt_ex'></a>
Example 2.16: Transformations of random variables
In Example 2.9 we defined a new random variable Y = 5 - X (the number of Tails) by transforming the RV X (the number of Heads). A transformation of an RV is also an RV. If X is an RV, define a new random variable Y = g(X) using X.apply(g). The resulting Y behaves like any other RV.
Note that for arithmetic operations and many common math functions (such as exp, log, sin) you can simply call g(X) rather than X.apply(g).
Continuing Example 2.1, let $X$ represent the number of Heads in five coin flips and define the random variable $Y = \sqrt{X}$. The plot below approximates the distribution of $Y$; note that the possible values of $Y$ are 0, 1, $\sqrt{2}$, $\sqrt{3}$, 2, and $\sqrt{5}$. | P = BoxModel([1, 0], size=5)
X = RV(P, sum)
Y = X.apply(sqrt)
Y.sim(10000).plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
The following code uses a g(X) definition rather than X.apply(g). | P = BoxModel([1, 0], size=5)
X = RV(P, sum)
Y = sqrt(X)
Y.sim(10000).plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
<a id='dif_normal'></a>
Exercise 2.17: Function of an RV that has a Uniform distribution
In Exercise 2.15 we encountered uniform distributions. Let $U$ be a random variable which has a Uniform distribution on the interval [0, 1]. Use simulation to display the approximate distribution of the random variable $Y = -\log(U)$. | ### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Solution
<a id='Numb_distinct'></a>
Example 2.18: Number of switches between Heads and Tails in coin flips
RVs can be defined or transformed through user-defined functions. As an example, let Y be the number of times a sequence of five coin flips switches between Heads and Tails (not counting the first toss). For example, for the outcome (0, 1, 0, 0, 1), a switch occurs on the second, third, and fifth flips, so Y = 3. We define the random variable Y by first defining a function that takes as input a list of values and returns as output the number of times a switch from the previous value occurs in the sequence. (Defining functions is one area where some familiarity with Python is helpful.) | def number_switches(x):
    count = 0
    for i in range(1, len(x)):
        if x[i] != x[i-1]:
            count += 1
    return count
number_switches((1, 1, 1, 0, 0, 1, 0, 1, 1, 1)) | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
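An equivalent, more idiomatic version of the same function (a sketch, not the tutorial's code) pairs each value with its predecessor using zip and counts the mismatches:

```python
def number_switches_zip(x):
    # compare each value with the previous one; each True counts as 1
    return sum(a != b for a, b in zip(x, x[1:]))

print(number_switches_zip((1, 1, 1, 0, 0, 1, 0, 1, 1, 1)))  # 4, same as number_switches
```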
Now we can use the number_switches function to define the RV Y on the probability space corresponding to five flips of a fair coin. | P = BoxModel([1, 0], size=5)
Y = RV(P, number_switches)
outcome = (0, 1, 0, 0, 1)
Y(outcome) | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
An RV defined or transformed through a user-defined function behaves like any other RV. | Y.sim(10000).plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
<a id='Numb_alterations'></a>
Exercise 2.19: Number of distinct faces rolled in 6 rolls
Let X count the number of distinct faces rolled in 6 rolls of a fair six-sided die. For example, if the result of the rolls is (3, 3, 3, 3, 3, 3) then X = 1; if (6, 4, 5, 4, 6, 6) then X=3; etc. Use the number_distinct_values function defined below to define the RV X on an appropriate probability space. Then simulate values of X and plot its approximate distribution. (The number_distinct_values function takes as an input a list of values and returns as an output the number of distinct values in the list. We have used the Python functions set and len.) | def number_distinct_values(x):
return len(set(x))
number_distinct_values((1, 1, 4))
### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Solution
Additional Exercises
<a id='ev_max_of_dice'></a>
Exercise 2.20: Max of two dice rolls
1) Approximate the distribution of the max of two six-sided dice rolls. | ### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
2) Approximate the probability that the max of two six-sided dice rolls is greater than or equal to 5. | ### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
3) Approximate the mean and standard deviation of the max of two six-sided dice rolls. | ### Type your commands in this cell and then run using SHIFT-ENTER.
### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Hint
Solution
<a id='var_transformed_unif'></a>
Exercise 2.21: Transforming a random variable
Let $X$ have a Uniform distribution on the interval [0, 3] and let $Y = 2\cos(X)$.
1) Approximate the distribution of $Y$. | ### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
2) Approximate the probability that $Y$ is less than 1. | ### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
3) Approximate the mean and standard deviation of $Y$. | ### Type your commands in this cell and then run using SHIFT-ENTER.
### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Hint
Solution
<a id='log_normal'></a>
Exercise 2.22: Function of a random variable.
Let $X$ be a random variable which has a Normal(0,1) distribution. Let $Y = e^X$.
1) Use simulation to display the approximate distribution of $Y$. | ### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
2) Approximate the probability that $Y$ is greater than 2. | ### Type your commands in this cell and then run using SHIFT-ENTER. | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Hint
Solution
<a id='hints'></a>
Hints for Additional Exercises
<a id='hint_ev_max_of_dice'></a>
Exercise 2.20: Hint
In Exercise 2.2 we simulated the sum of two six-sided dice rolls. Define an RV using the max function to return the larger of the two rolls. In Example 2.5 we estimated the probability of a random variable taking a value. In Example 2.10 we applied the .mean() function to estimate the long-run average. In Example 2.12 we estimated the standard deviation.
Back
<a id='hint_var_transformed_unif'></a>
Exercise 2.21: Hint
Example 2.9 introduces transformations. In Exercise 2.15 we simulated an RV that had a Uniform distribution. In Example 2.5 we estimated probabilities for an RV. In Example 2.10 we applied the .mean() function to estimate the long-run average. In Example 2.12 we estimated the standard deviation.
Back
<a id='hint_log_normal'></a>
Exercise 2.22: Hint
In Example 2.14 we simulated an RV with a Normal distribution. In Example 2.9 we defined a random variable as a function of another random variable. In Example 2.5 we estimated the probability of a random variable taking a value. In Example 2.10 we applied the .mean() function to estimate the long-run average. In Example 2.12 we estimated the standard deviation.
Back
Solutions to Exercises
<a id='sol_sum_of_two_dice'></a>
Exercise 2.2: Solution | P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
X.sim(10000) | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Back
<a id='sol_dist_of_sum_of_two_dice'></a>
Exercise 2.4: Solution | P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
sims = X.sim(10000)
sims.tabulate(normalize=True)
sims.plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Back
<a id='sol_prob_of_10_two_dice'></a>
Exercise 2.6: Solution | P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
sims = X.sim(10000)
sims.count_geq(10) / 10000 | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Back
<a id='sol_expected_discrete_unif_dice'></a>
Exercise 2.8: Solution | X = RV(DiscreteUniform(a=1, b=6))
X.sim(10000).plot(normalize=True) | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Back
<a id='sol_expected_value_sum_of_dice'></a>
Exercise 2.11: Solution | P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
X.sim(10000).mean() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Over many pairs of rolls of fair six-sided dice, we expect that on average the sum of the two rolls will be about 7.
Back
<a id='sol_sd_sum_of_dice'></a>
Exercise 2.13: Solution | P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, sum)
X.sim(10000).sd() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Over many pairs of rolls of fair six-sided dice, the values of the sum are on average roughly 2.4 units away from the expected value of 7.
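The simulated value can be checked against the exact standard deviation, computed by enumerating all 36 equally likely outcomes of two dice rolls (a quick check, not part of the tutorial):

```python
import itertools
import math

# all 36 equally likely (roll1, roll2) pairs
sums = [a + b for a, b in itertools.product(range(1, 7), repeat=2)]
mean = sum(sums) / 36
sd = math.sqrt(sum((s - mean) ** 2 for s in sums) / 36)
print(mean, sd)  # 7.0 and about 2.415, consistent with the simulation
```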
Back
<a id='sol_sim_unif'></a>
Exercise 2.15: Solution | X = RV(Uniform(a=0, b=1))
X.sim(10000).plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Back
<a id='sol_dif_normal'></a>
Exercise 2.17: Solution | U = RV(Uniform(a=0, b=1))
Y = -log(U)
Y.sim(10000).plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Note that the RV has an Exponential(1) distribution.
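This can also be verified numerically outside Symbulate (a sketch using NumPy): if U is Uniform(0, 1), then Y = -log(U) should have mean 1 and standard deviation 1, the moments of an Exponential(1) distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
u = rng.uniform(size=100_000)  # Uniform(0, 1) draws
y = -np.log(u)                 # transformed values
print(y.mean(), y.std())       # both should be close to 1
```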
Back
<a id='sol_Numb_alterations'></a>
Exercise 2.19: Solution | def number_distinct_values(x):
return len(set(x))
P = BoxModel([1,2,3,4,5,6], size=6)
X = RV(P, number_distinct_values)
X.sim(10000).plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Back
<a id='sol_ev_max_of_dice'></a>
Exercise 2.20: Solution
1) Approximate the distribution of the max of two six-sided dice rolls. | P = BoxModel([1, 2, 3, 4, 5, 6], size=2)
X = RV(P, max)
sims = X.sim(10000)
sims.plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
2) Approximate the probability that the max of two six-sided dice rolls is greater than or equal to 5. | sims.count_geq(5)/10000 | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
3) Approximate the mean and standard deviation of the max of two six-sided dice rolls. | sims.mean()
sims.sd() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Back
<a id='sol_var_transformed_unif'></a>
Exercise 2.21: Solution
1) Approximate the distribution of $Y$. | X = RV(Uniform(0, 3))
Y = 2 * cos(X)
sims = Y.sim(10000)
sims.plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Alternatively, | X = RV(Uniform(0, 3))
Y = 2 * X.apply(cos)
sims = Y.sim(10000)
sims.plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
2) Approximate the probability that $Y$ is less than 1. | sims.count_lt(1)/10000 | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
3) Approximate the mean and standard deviation of Y. | sims.mean()
sims.sd() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
Back
<a id='sol_log_normal'></a>
Exercise 2.22: Solution
1) Use simulation to display the approximate distribution of Y. | X = RV(Normal(0, 1))
Y = exp(X)
sims = Y.sim(10000)
sims.plot() | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
2) Approximate the probability that $Y$ is greater than 2. | sims.count_gt(2)/10000 | tutorial/gs_rv.ipynb | dlsun/symbulate | mit |
The Adult Data Set is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.
My motivation for reproducing this piece of work was to learn how to use odds ratios in Bayesian regression. | data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt',
'education-categorical', 'educ',
'marital-status', 'occupation',
'relationship', 'race', 'sex',
'capital-gain', 'capital-loss',
'hours', 'native-country',
'income'])
data | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
Scrubbing and cleaning
We need to remove any null entries in Income.
And we also want to restrict this study to the United States. | data = data[~pd.isnull(data['income'])]
data = data[data['native-country']==" United-States"]
income = 1 * (data['income'] == " >50K")
age2 = np.square(data['age'])
data = data[['age', 'educ', 'hours']]
data['age2'] = age2
data['income'] = income
income.value_counts() | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
Exploring the data
Let us get a feel for the parameters.
* We see that age has a long-tailed distribution. Certainly not Gaussian!
* We don't see much of a correlation between many of the features, with the exception of Age and Age2.
* Hours worked has some interesting behaviour. How would one describe this distribution? |
g = seaborn.pairplot(data)
# Compute the correlation matrix
corr = data.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = seaborn.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax) | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
We do not see many strong correlations here; the highest is 0.30 according to this plot. We see a weak correlation between hours and income
(which is logical), and a slightly stronger correlation between education and income (which is the kind of question we are answering).
The model
We will use a simple model, which assumes that the probability of making more than $50K
is a function of age, years of education and hours worked per week. We will use PyMC3
to do inference.
In Bayesian statistics, we treat everything as a random variable, and we want to know the posterior probability distribution of the parameters
(in this case, the regression coefficients).
The posterior is given by Bayes' rule: $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$
Because the denominator, $p(D) = \int p(D | \theta) p(\theta) \, d\theta$, is a notoriously difficult integral, we would prefer to skip computing it. Fortunately, if we draw samples from the parameter space with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity.
What this means in practice is that we only need to worry about the numerator.
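The idea of sampling with probability proportional to the unnormalized posterior can be sketched with a toy Metropolis algorithm (an illustration only; PyMC3's NUTS sampler is far more sophisticated). Note we only ever evaluate the numerator, never $p(D)$; here the "posterior" is a standard normal for simplicity:

```python
import math
import random

random.seed(1)

def log_post(theta):
    # unnormalized log-posterior: standard normal, so the evidence p(D) is never needed
    return -0.5 * theta ** 2

theta, samples = 0.0, []
for _ in range(20000):
    prop = theta + random.gauss(0, 1)  # symmetric random-walk proposal
    # accept with probability min(1, post(prop) / post(theta))
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                   # accept the proposal
    samples.append(theta)              # otherwise keep the current value

est_mean = sum(samples) / len(samples)
print(est_mean)  # close to 0, the true posterior mean
```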
Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(\theta) = N(0, 10^{12}I)$. This is a very vague prior that will let the data speak for themselves.
The likelihood is the product of n Bernoulli trials, $\prod_{i=1}^{n} p_{i}^{y_{i}} (1 - p_{i})^{1-y_{i}}$,
where $p_i = \frac{1}{1 + e^{-z_i}}$,
$z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_{2}(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$, and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise.
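For concreteness, the logistic link above can be computed directly (a sketch; the coefficient values below are made up purely to illustrate the formula, not estimates from the model):

```python
import math

def sigmoid(z):
    # p = 1 / (1 + e^{-z}), mapping the linear predictor to a probability
    return 1 / (1 + math.exp(-z))

# hypothetical coefficients and one individual's covariates, for illustration only
beta = {"Intercept": -10.0, "age": 0.3, "age2": -0.003, "educ": 0.3, "hours": 0.05}
x = {"age": 45, "educ": 16, "hours": 50}
z = (beta["Intercept"] + beta["age"] * x["age"] + beta["age2"] * x["age"] ** 2
     + beta["educ"] * x["educ"] + beta["hours"] * x["hours"])
p = sigmoid(z)
print(p)  # P(income > $50K) under these made-up coefficients
```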
With the math out of the way, we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo in which parameters are tuned automatically. Notice that we get to borrow the syntax for specifying GLMs from R, which is very convenient. I use a convenience function from above to plot the trace information from the first 1000 samples. | with pm.Model() as logistic_model:
pm.glm.glm('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial())
trace_logistic_model = pm.sample(2000, pm.NUTS(), progressbar=True)
plot_traces(trace_logistic_model, retain=1000) | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
Some results
One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.
I'll use seaborn to look at the distribution of some of these factors. | plt.figure(figsize=(9,7))
trace = trace_logistic_model[1000:]
seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391")
plt.xlabel("beta_age")
plt.ylabel("beta_educ")
plt.show() | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
So how do age and education affect the probability of making more than \$50K? To answer this question, we can show how the probability of making more than \$50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). | # Linear model with hours == 50 and educ == 12
lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*12 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 16
lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*16 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 19
lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*19 +
samples['hours']*50))) | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
Each curve shows how the probability of earning more than \$50K changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than \$50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread-out portions as places where we have somewhat higher uncertainty about our coefficient values. | # Plot the posterior predictive distributions of P(income > $50K) vs. age
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15)
import matplotlib.lines as mlines
blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education')
green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors')
red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School')
plt.legend(handles=[blue_line, green_line, red_line], loc='lower right')
plt.ylabel("P(Income > $50K)")
plt.xlabel("Age")
plt.show()
b = trace['educ']
plt.hist(np.exp(b), bins=20, density=True)
plt.xlabel("Odds Ratio")
plt.show() | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credible intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! | lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)
print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(3*lb),np.exp(3*ub)))  # the factor of 3 appears to give the odds ratio for 3 additional years of education | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
Model selection
The Deviance Information Criterion (DIC) is a fairly unsophisticated method for comparing the deviance of the likelihood across the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher-order terms have on this model.
One question that immediately arises is: what effect does age have on the model, and why should it be age^2 rather than age? We'll use the DIC to answer this question. | models_lin, traces_lin = run_models(data, 4)
dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])
dfdic.index.name = 'model'
for nm in dfdic.index:
dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm],models_lin[nm])
dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='dic')
g = seaborn.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic, kind='bar', size=6) | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
There isn't a lot of difference between these models in terms of DIC. So our choice in the model above is fine, and there isn't much to be gained by going up to age^3, for example.
Next we look at WAIC, which is another model selection technique. | dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])
dfdic.index.name = 'model'
for nm in dfdic.index:
dfdic.loc[nm, 'lin'] = pm.stats.waic(traces_lin[nm],models_lin[nm])
dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic')
g = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfdic, kind='bar', size=6) | pymc3/examples/GLM-logistic.ipynb | superbobry/pymc3 | apache-2.0 |
A few rules of Python
Python is a bit touchy and formal; there are a few rules to respect:
1) Indentation is essential: badly indented code does not run.
Indentation tells the interpreter where the boundaries between blocks of instructions lie, a bit like periods in a text.
If the lines are not properly aligned, the interpreter no longer knows which block a line belongs to.
2) We start counting at 0. It may seem odd, but that's how it is. The first element of a list is the 0th.
3) Punctuation marks are important:
- For a list: []
- For a dictionary: {}
- For a tuple: ()
- To separate elements: ,
- To comment out a piece of code: #
- To continue a line within a block of instructions: \
- Upper and lower case matter
- On the other hand, using ' or " makes no difference; the opening and closing quotes just have to match.
- To document a function or a class: """ documentation """
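As a quick illustration of rule 1 (an example added here, not from the original notebook), indentation is what tells Python which block each instruction belongs to:

```python
odds = []
for n in [1, 2, 3, 4, 5]:
    if n % 2 == 1:      # indented once more: inside the loop body
        odds.append(n)  # inside the if block
print(odds)             # back at top level: runs once, after the loop
```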
Python outputs: the operation, print and return
When Python performs operations, you must tell it what to do with them:
- should it just perform the operation,
- display the result of the operation,
- or create an object with the result of the operation?
Note: in the Notebook environment, the last element of a cell is automatically displayed (printed), whether or not we ask for it. This is not the case in a classic editor such as Spyder. | # computing: in the case of an operation, for example a sum
2+3 # Python computes the result but displays nothing in the output
# print: displaying
print(2+3) # Python computes, and we just ask it to display the result
# the result appears below the code
# print inside a function
def addition_v1(a,b) :
    print(a+b)
resultat_print = addition_v1(2,0)
print(type(resultat_print))
# in the output, the result is displayed, because the function's output is a print
# we also ask for the type of the result: a print returns no type; it is neither a number
# nor a string, so the result of a print is not a usable format | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
The result of the addition is displayed.
The function addition_v1 performs a print.
However, the object created has no type: it is not a number, it is only a display.
To create an object with the result of the function, you must use return | # return inside a function
def addition_v2(a,b) :
    return a+b
resultat_return = addition_v2(2,5)
print(type(resultat_return))
## here we do get a result of type "int", an integer | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
The result of addition_v2 is not displayed, unlike with addition_v1.
However, the function addition_v2 produces an object of type int, an integer.
Basic types: variables, lists, dictionaries ...
Python lets us manipulate several basic types.
We distinguish two kinds of variables: immutable ones, which cannot be modified, and mutable ones.
Variables - immutable types
Immutable variables cannot be modified:
None: this type is a programming convention meaning that the value has not been computed
bool: a boolean
int: an integer
float: a real number
str: a string
tuple: a vector
tuple : un vecteur | i = 3 # entier = type numérique (type int)
r = 3.3 # réel = type numérique (type float)
s = "exemple" # chaîne de caractères = type str
n = None # None signifie que la variable existe mais qu'elle ne contient rien
# elle est souvent utilisée pour signifier qu'il n'y a pas de résultat
a = (1,2) # tuple
print(i,r,s,n,a) | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
If we try to change the first element of the string s, we are in for some trouble.
For example, if we wanted to capitalize "exemple", we might be tempted to set the first element of the string s to a capital "E".
But Python will not let us do it: it tells us that "string" objects cannot be modified | s[0] = "E" # raises an exception | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
All we can do with an immutable variable is rebind its name to another value: the object itself cannot be modified.
To convince ourselves, let us use the id() function, which assigns an identifier to every object. | print(s)
id(s)
s = "autre_mot"
id(s) | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
We can clearly see that s has changed identifier: it may keep the same name, but it is no longer the same object.
Variables - mutable types: lists and dictionaries
Fortunately, there are mutable variables, such as lists and dictionaries.
Lists - written between [ ]
Lists are very useful, especially when you want to write loops.
To access the elements of a list, we give their position in the list: the 1st is 0, the 2nd is 1, ... | ma_liste = [1,2,3,4]
print("The length of my list is", len(ma_liste))
print("The first element of my list is:", ma_liste[0])
print("The last element of my list is:", ma_liste[3])
print("The last element of my list is:", ma_liste[-1]) | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
Dictionaries - written between { }
A dictionary associates a key with another element, called a value: a number, a name, a list, another dictionary, etc.
Format of a dictionary: {key : value}
Dictionary with int values
We can, for example, associate a number with a name | mon_dictionnaire_notes = { 'Nicolas' : 18 , 'Pimprenelle' : 15}
# a dictionary that associates a number with each name
# Nicolas is associated with 18
print(mon_dictionnaire_notes) | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
Dictionary whose values are lists
The values of a dictionary do not all have to keep the same form for every key.
In the example, the value for the key "Nicolas" is a list, while the one for "Philou" is a list of lists | mon_dictionnaire_loisirs = \
{ 'Nicolas' : ['Rugby','Pastis','Belote'] ,
'Pimprenelle' : ['Gin Rami','Tisane','Tara Jarmon','Barcelone','Mickey Mouse'],
'Philou' : [['Maths','Jeux'],['Guillaume','Jeanne','Thimothée','Adrien']]} | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
To access an element of the dictionary, we use the key rather than the position, as was the case with lists | print(mon_dictionnaire_loisirs['Nicolas']) # prints a list
print(mon_dictionnaire_loisirs['Philou']) # prints a list of lists | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
If we only want the first list of Philou's hobbies, we ask for the first element of the list | print(mon_dictionnaire_loisirs['Philou'][0]) # then only the first list is printed | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
We can also have values that are ints and lists | mon_dictionnaire_patchwork_good = \
{ 'Nicolas' : ['Rugby','Pastis','Belote'] ,
'Pimprenelle' : 18 } | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
Key points
Code indentation matters (4 spaces, not a tab)
A list is written between [] and its elements are accessed by position
A dictionary, key x value, is written between {} and an element is accessed by its key
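A minimal sketch recapping these three points (the names and values here are illustrative, not part of the exercises):

```python
# A list: elements accessed by 0-based position
fruits = ["apple", "pear", "plum"]
print(fruits[0])        # first element: "apple"

# A dictionary: elements accessed by key
ages = {"Nicolas": 18, "Pimprenelle": 15}
print(ages["Nicolas"])  # value for the key "Nicolas": 18

# Indentation (4 spaces) delimits the body of a block
for fruit in fruits:
    print(fruit)
```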
Practical questions:
What is the position of 7 in the following list? | liste_nombres = [1,2,7,5,3] | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
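One possible answer, as a sketch: the .index() method returns the 0-based position of the first occurrence of a value.

```python
liste_nombres = [1, 2, 7, 5, 3]
# .index() returns the 0-based position of the first occurrence of 7
print(liste_nombres.index(7))  # 2
```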
How many keys does this dictionary have? | dictionnaire_evangile = {"Marc" : "Lion", "Matthieu" : ["Ange","Homme ailé"] ,
"Jean" : "Aigle" , "Luc" : "Taureau"} | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
What should we write to obtain "Ange" as a result from dictionnaire_evangile?
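One way to answer both questions, as a sketch: the value stored under the key "Matthieu" is a list, so we index into it.

```python
dictionnaire_evangile = {"Marc": "Lion", "Matthieu": ["Ange", "Homme ailé"],
                         "Jean": "Aigle", "Luc": "Taureau"}
# number of keys in the dictionary
print(len(dictionnaire_evangile))            # 4
# "Ange" is the first element of the list stored under "Matthieu"
print(dictionnaire_evangile["Matthieu"][0])  # Ange
```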
Objects: methods and attributes
Now that we have seen which objects exist in Python, let us see how to use them.
A small detour to understand things well: what is an object?
An object has two things: attributes and methods
Attributes describe its internal structure: its size, its shape (which we will not discuss here)
Methods are "actions" that apply to the object
First examples of methods
With the elements defined in part 1 (lists, dictionaries) we can call methods that are directly tied to these objects.
Methods are, in a way, Python's actions.
A method for lists
To add an item to a list, we use the .append() method | ma_liste = ["Nicolas","Michel","Bernard"]
ma_liste.append("Philippe")
print(ma_liste) | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
A method for dictionaries
To get all the keys of a dictionary, we call the .keys() method | mon_dictionnaire = {"Marc" : "Lion", "Matthieu" : ["Ange","Homme ailé"] ,
"Jean" : "Aigle" , "Luc" : "Taureau"}
print(mon_dictionnaire.keys()) | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
Finding the methods of an object
To find out which methods an object has, you can:
- type help(mon_objet) or mon_objet? in the iPython console
- type mon_objet. followed by the Tab key in the iPython console or in the notebook. iPython supports completion, which means you can bring up the list of available methods
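Outside of iPython, the built-in dir() function gives a similar overview (a minimal sketch):

```python
ma_liste = [1, 2, 3]
# dir() lists all attributes and methods of an object;
# here we keep only the regular methods (no double underscores)
methodes = [m for m in dir(ma_liste) if not m.startswith("__")]
print(methodes)  # e.g. ['append', 'clear', 'copy', 'count', 'extend', 'index', ...]
```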
Common list operations and methods
Creating a list
To create an object of the list class, it is enough to declare it. Here we assign a list to x | x = [4, 5] # creation of a list made of two integers
x = ["un", 1, "deux", 2] # creation of a list made of 2 strings
# and two integers; the order in which they are written matters
x = [3] # creation of a one-element list, without a comma,
x = [ ] # creates an empty list
x = list () # creates an empty list | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
A first test on lists
To test whether an element is present in a list, we write it as follows: | # Example
x = "Marcel"
l = ["Marcel","Edith","Maurice","Jean"]
print(x in l)
# True if x is one of the elements of l | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
To concatenate two lists:
We use the + symbol | t = ["Antoine","David"]
print(l + t) # concatenation of l and t | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
Finding certain elements of a list
To look up elements in a list, we use their position in the list. | l[1] # gives the element in 2nd position of the list
l[1:3] # gives the elements from the 2nd position of the list up to the 4th, excluded | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
A few list functions | longueur = len(l) # number of elements of l
minimum = min(l) # smallest element of l, here in alphabetical order
maximum = max(l) # largest element of l, here in alphabetical order
print(longueur,minimum,maximum)
del l[0 : 2] # removes the elements between position 0 and 2, excluded
print(l) | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
List methods
They can be found in the list's help. We distinguish methods from special methods: visually, special methods are the ones preceded and followed by two underscore characters; the others are regular methods. | help(l) | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
Key points and questions
To remember:
Every Python object has attributes and methods
You can create classes with attributes and methods
List and dictionary methods are the most commonly used:
list.count()
list.sort()
list.append()
dict.keys()
dict.items()
dict.values()
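A short sketch of these methods in action (the values are illustrative):

```python
l = [3, 1, 2, 1]
print(l.count(1))   # 2: number of occurrences of 1
l.sort()            # sorts the list in place
print(l)            # [1, 1, 2, 3]
l.append(4)         # adds 4 at the end
print(l)            # [1, 1, 2, 3, 4]

d = {"a": 1, "b": 2}
print(list(d.keys()))    # ['a', 'b']
print(list(d.values()))  # [1, 2]
print(list(d.items()))   # [('a', 1), ('b', 2)]
```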
Practical questions:
Define the list from 1 to 10, then perform the following actions:
– sort and print the list
– add the element 11 to the list and print the list
– reverse and print the list
– print the index of the element 7
– remove the element 9 and print the list
– print the sub-list from the 2nd to the 3rd element;
– print the sub-list from the beginning to the 2nd element;
– print the sub-list from the 3rd element to the end of the list;
Build the dictionary of the first 6 months of the year with the respective number of days as values.
- Return the list of months.
- Return the list of days.
- Add the key for the month of July.
From lists and dictionaries to pandas
Suppose the variable 'data' is a list containing our data.
An observation corresponds to a dictionary containing the name, type, atmosphere and rating of a restaurant.
It is easy to transform this list into a dataframe thanks to the 'DataFrame' function. | import pandas
data = [{"nom": "Little Pub", "type" : "Bar", "ambiance": 9, "note": 7},
{"nom": "Le Corse", "type" : "Sandwicherie", "ambiance": 2, "note": 8},
{"nom": "Café Caumartin", "type" : "Bar", "ambiance": 1}]
df = pandas.DataFrame(data)
print(data)
df | _doc/notebooks/td2a_eco/td2_eco_rappels_1a.ipynb | sdpython/ensae_teaching_cs | mit |
File Reading
Line Plots
plt.plot plots lines and/or markers:
* plot(x, y): plot x and y using the default line style and color
* plot(x, y, 'bo'): plot x and y using blue circle markers
* plot(y): plot y using x as the index array 0..N-1
* plot(y, 'r+'): similar, but with red plusses
Run %pdoc plt.plot for more details | x = np.arange(-np.pi,np.pi,0.01) # Create an array of x values from -pi to pi with 0.01 interval
y = np.sin(x) # Apply sin function on all x
plt.plot(x,y)
plt.plot(y) | notebooks/Jan2018/python-matplotlib.ipynb | ryan-leung/PHYS4650_Python_Tutorial | bsd-3-clause |
Scatter Plots
plt.plot can also plot markers. | x = np.arange(0,10,1) # x = 1,2,3,4,5...
y = x*x # Squared x
plt.plot(x,y,'bo') # plot x and y using blue circle markers
plt.plot(x,y,'r+') # plot x and y using red plusses | notebooks/Jan2018/python-matplotlib.ipynb | ryan-leung/PHYS4650_Python_Tutorial | bsd-3-clause |
Plot properties
Add x-axis and y-axis | x = np.arange(-np.pi,np.pi,0.001)
plt.plot(x,np.sin(x))
plt.title('y = sin(x)') # title
plt.xlabel('x (radians)') # x-axis label
plt.ylabel('y') # y-axis label
# To plot the axis label in LaTex, we can run
from matplotlib import rc
## For sans-serif font:
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
rc('text', usetex=True)
## for Palatino and other serif fonts use:
#rc('font',**{'family':'serif','serif':['Palatino']})
plt.plot(x,np.sin(x))
plt.title(r'T = sin($\theta$)') # title, the `r` in front of the string means raw string
plt.xlabel(r'$\theta$ (radians)') # x-axis label, LaTex synatx should be encoded with $$
plt.ylabel('T') # y-axis label | notebooks/Jan2018/python-matplotlib.ipynb | ryan-leung/PHYS4650_Python_Tutorial | bsd-3-clause |
Multiple plots | x1 = np.linspace(0.0, 5.0)
x2 = np.linspace(0.0, 2.0)
y1 = np.cos(2 * np.pi * x1) * np.exp(-x1)
y2 = np.cos(2 * np.pi * x2)
plt.subplot(2, 1, 1)
plt.plot(x1, y1, '.-')
plt.title('Plot 2 graph at the same time')
plt.ylabel('Amplitude (Damped)')
plt.subplot(2, 1, 2)
plt.plot(x2, y2, '.-')
plt.xlabel('time (s)')
plt.ylabel('Amplitude (Undamped)') | notebooks/Jan2018/python-matplotlib.ipynb | ryan-leung/PHYS4650_Python_Tutorial | bsd-3-clause |
Save figure | plt.plot(x,np.sin(x))
plt.savefig('plot.pdf')
plt.savefig('plot.png')
# To load image into this Jupyter notebook
from IPython.display import Image
Image("plot.png") | notebooks/Jan2018/python-matplotlib.ipynb | ryan-leung/PHYS4650_Python_Tutorial | bsd-3-clause |
Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab) | import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
vocab = set(text)
vocab_to_int = {word: i for word, i in zip(vocab, range(len(vocab)))}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". | def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
token_dict = { '.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semi_Colon||',
'!': '||Exclamation_Mark||',
'?': '||Question_Mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'--': '||Dash||',
'\n': '||Return||'
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
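A quick sketch of how such a token dictionary can be applied to a script line (the sample text is hypothetical, and only a subset of the tokens is shown; the project's actual preprocessing pipeline may differ):

```python
token_dict = {'.': '||Period||', ',': '||Comma||', '!': '||Exclamation_Mark||'}
text = "homer_simpson: Hi, Moe!"
# surround each token with spaces so punctuation becomes its own "word"
for symbol, token in token_dict.items():
    text = text.replace(symbol, ' {} '.format(token))
print(text.split())
# ['homer_simpson:', 'Hi', '||Comma||', 'Moe', '||Exclamation_Mark||']
```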
Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate) | def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
inputs = tf.placeholder(tf.int32, shape=[None, None], name='input')
labels = tf.placeholder(tf.int32, shape=[None, None], name='labels')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return (inputs, labels, learning_rate)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState) | def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
layers = 1
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5)
cell = tf.contrib.rnn.MultiRNNCell([drop]*layers)
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
return (cell, initial_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence. | def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Build RNN
You created an RNN cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState) | def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return (outputs, final_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState) | def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(inputs=outputs,
num_outputs=vocab_size,
activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
return (logits, final_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive. | def test(int_text, batch_size, seq_length):
batches = []
n_batches = len(int_text)//(batch_size * seq_length)
x = np.array(int_text[:n_batches * batch_size * seq_length])
y = np.array(int_text[1:n_batches * batch_size * seq_length + 1])
x = x.reshape((batch_size, -1))
y = y.reshape((batch_size, -1))
x = np.split(x, n_batches, axis=1)
y = np.split(y, n_batches, axis=1)
batches = np.array(list(zip(x, y)))
return batches
print(test([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15], 2, 3))
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
n_batches = len(int_text)//(batch_size * seq_length)
x = np.array(int_text[:n_batches * batch_size * seq_length])
y = np.array(int_text[1:n_batches * batch_size * seq_length + 1])
x = x.reshape((batch_size, -1))
y = y.reshape((batch_size, -1))
x = np.split(x, n_batches, axis=1)
y = np.split(y, n_batches, axis=1)
batches = np.array(list(zip(x, y)))
return batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress. | # Number of Epochs
num_epochs = 40
# Batch Size
batch_size = 500
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save' | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) | def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
return (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Choose Word
Implement the pick_word() function to select the next word using probabilities. | np.random.choice(5, 1)[0]
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
i = np.random.choice(len(probabilities), 1, p=probabilities)[0]
word = int_to_vocab[i]
return word
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word) | tv-script-generation/dlnd_tv_script_generation.ipynb | brandoncgay/deep-learning | mit |
Dictcc
Download the dictionary from http://www.dict.cc/?s=about%3Awordlist
Print out the first 20 lines of the dictionary | !head -n 20 de-en.txt | build_wordlist.ipynb | sdaros/placeword | unlicense |
Use pandas library to import csv file | import pandas as pd
dictcc_df = pd.read_csv("de-en.txt",
sep='\t',
skiprows=8,
header=None,
names=["GermanWord","Word","WordType"]) | build_wordlist.ipynb | sdaros/placeword | unlicense |
Preview a few entries of the wordlist | dictcc_df[90:100] | build_wordlist.ipynb | sdaros/placeword | unlicense |
We only need the "Word" and "WordType" columns | dictcc_df = dictcc_df[["Word", "WordType"]][:].copy() | build_wordlist.ipynb | sdaros/placeword | unlicense |
Convert WordType Column to a pandas.Categorical | word_types = dictcc_df["WordType"].astype('category')
dictcc_df["WordType"] = word_types
# show data types of each column in the dataframe
dictcc_df.dtypes | build_wordlist.ipynb | sdaros/placeword | unlicense |
List the current distribution of word types in dictcc dataframe | # nltk TaggedCorpusParses requires uppercase WordType
dictcc_df["WordType"] = dictcc_df["WordType"].str.upper()
dictcc_df["WordType"].value_counts().head() | build_wordlist.ipynb | sdaros/placeword | unlicense |
Add dictcc corpus to our wordlists array | wordlists.append(dictcc_df) | build_wordlist.ipynb | sdaros/placeword | unlicense |
Moby
Download the corpus from http://icon.shef.ac.uk/Moby/mpos.html
Perform some basic cleanup on the wordlist | # the readme file in `nltk/corpora/moby/mpos` gives some information on how to parse the file
result = []
# replace all DOS line endings '\r' with newlines then change encoding to UTF8
moby_words = !cat nltk/corpora/moby/mpos/mobyposi.i | iconv --from-code=ISO88591 --to-code=UTF8 | tr -s '\r' '\n' | tr -s '×' '/'
result.extend(moby_words)
moby_df = pd.DataFrame(data = result, columns = ['Word'])
moby_df.tail(10) | build_wordlist.ipynb | sdaros/placeword | unlicense |
sort out the nouns, verbs and adjectives | # Matches nouns
nouns = moby_df[moby_df["Word"].str.contains('/[Np]$')].copy()
nouns["WordType"] = "NOUN"
# Matches verbs
verbs = moby_df[moby_df["Word"].str.contains('/[Vti]$')].copy()
verbs["WordType"] = "VERB"
# Magtches adjectives
adjectives = moby_df[moby_df["Word"].str.contains('/A$')].copy()
adjectives["WordType"] = "ADJ" | build_wordlist.ipynb | sdaros/placeword | unlicense |
remove the trailing stuff and concatenate the nouns, verbs and adjectives | nouns["Word"] = nouns["Word"].str.replace(r'/N$','')
verbs["Word"] = verbs["Word"].str.replace(r'/[Vti]$','')
adjectives["Word"] = adjectives["Word"].str.replace(r'/A$','')
# Merge nouns, verbs and adjectives into one dataframe
moby_df = pd.concat([nouns,verbs,adjectives]) | build_wordlist.ipynb | sdaros/placeword | unlicense |
Add moby corpus to wordlists array | wordlists.append(moby_df) | build_wordlist.ipynb | sdaros/placeword | unlicense |
Combine all wordlists | wordlist = pd.concat(wordlists) | build_wordlist.ipynb | sdaros/placeword | unlicense |