# Vvkmnn/books | ThinkBayes/06_Decision_Analysis.ipynb | gpl-3.0
from price import *
import matplotlib.pyplot as plt
player1, player2 = MakePlayers(path='../code')
MakePrice1(player1, player2)
plt.legend();
"""
Explanation: Decision Analysis
The Price is Right problem
On November 1, 2007, contestants named Letia and Nathaniel appeared on
The Price is Right, an American game show. They competed
in a game called The Showcase, where the objective is to
guess the price of a showcase of prizes. The contestant who comes
closest to the actual price of the showcase, without going over, wins
the prizes.
Nathaniel went first. His showcase included a dishwasher, a wine
cabinet, a laptop computer, and a car. He bid \$26,000.
Letia’s showcase included a pinball machine, a video arcade game, a pool
table, and a cruise of the Bahamas. She bid \$21,500.
The actual price of Nathaniel’s showcase was \$25,347. His bid was too
high, so he lost.
The actual price of Letia’s showcase was \$21,578. She was only off by
\$78, so she won her showcase and, because her bid was off by less than
\$250, she also won Nathaniel’s showcase.
For a Bayesian thinker, this scenario suggests several questions:
Before seeing the prizes, what prior beliefs should the contestant
have about the price of the showcase?
After seeing the prizes, how should the contestant update those
beliefs?
Based on the posterior distribution, what should the contestant bid?
The third question demonstrates a common use of Bayesian analysis:
decision analysis. Given a posterior distribution, we can choose the bid
that maximizes the contestant’s expected return.
This problem is inspired by an example in Cameron Davidson-Pilon’s book,
Bayesian Methods for Hackers. The code I wrote for this
chapter is available from http://thinkbayes.com/price.py; it reads
data files you can download from
http://thinkbayes.com/showcases.2011.csv and
http://thinkbayes.com/showcases.2012.csv. For more information see
Section [download].
The prior
To choose a prior distribution of prices, we can take advantage of data
from previous episodes. Fortunately, fans of the show keep detailed
records. When I corresponded with Mr. Davidson-Pilon about his book, he
sent me data collected by Steve Gee at http://tpirsummaries.8m.com. It
includes the price of each showcase from the 2011 and 2012 seasons and
the bids offered by the contestants.
End of explanation
"""
class Pdf(object):

    def Density(self, x):
        raise UnimplementedMethodException()

    def MakePmf(self, xs):
        pmf = Pmf()
        for x in xs:
            pmf.Set(x, self.Density(x))
        pmf.Normalize()
        return pmf
"""
Explanation: This shows the distribution of prices for these
showcases. The most common value for both showcases is around \$28,000,
but the first showcase has a second mode near \$50,000, and the second
showcase is occasionally worth more than \$70,000.
These distributions are based on actual data, but they have been
smoothed by Gaussian kernel density estimation (KDE). Before we go on, I
want to take a detour to talk about probability density functions and
KDE.
Probability density functions
So far we have been working with probability mass functions, or PMFs. A
PMF is a map from each possible value to its probability. In my
implementation, a Pmf object provides a method named Prob
that takes a value and returns a probability, also known as a
probability mass.
A probability density function, or PDF, is the
continuous version of a PMF, where the possible values make up a
continuous range rather than a discrete set.
In mathematical notation, PDFs are usually written as functions; for
example, here is the PDF of a Gaussian distribution with mean 0 and
standard deviation 1:
$$f(x) = \frac{1}{\sqrt{2 \pi}} \exp(-x^2/2)$$
For a given value of $x$, this function computes a probability density. A
density is similar to a probability mass in the sense that a higher
density indicates that a value is more likely.
But a density is not a probability. A density can be 0 or any positive
value; it is not bounded, like a probability, between 0 and 1.
If you integrate a density over a continuous range, the result is a
probability. But for the applications in this book we seldom have to do
that.
Instead we primarily use probability densities as part of a likelihood
function. We will see an example soon.
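As a quick numerical aside (a sketch, not code from this chapter), scipy can confirm that integrating a density yields a probability:

```python
# Integrating the standard Gaussian density from -1 to 1 recovers
# the familiar 68% probability.
from scipy.integrate import quad
from scipy.stats import norm

area, _ = quad(norm.pdf, -1, 1)  # integrate the density over [-1, 1]
print(round(area, 3))  # 0.683
```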
Representing PDFs
To represent PDFs in Python, thinkbayes.py provides a class
named Pdf. Pdf is an abstract type, which means that it defines the interface a Pdf is
supposed to have, but does not provide a complete implementation. The
Pdf interface includes two methods, Density
and MakePmf:
End of explanation
"""
import scipy.stats

class GaussianPdf(Pdf):

    def __init__(self, mu, sigma):
        self.mu = mu
        self.sigma = sigma

    def Density(self, x):
        return scipy.stats.norm.pdf(x, self.mu, self.sigma)
"""
Explanation: Density takes a value, x, and returns the
corresponding density. MakePmf makes a discrete
approximation to the PDF.
Pdf provides an implementation of MakePmf, but
not Density, which has to be provided by a child class.
A concrete type is a child class that extends an
abstract type and provides an implementation of the missing methods. For
example, GaussianPdf extends Pdf and provides
Density:
End of explanation
"""
import scipy.stats

class EstimatedPdf(Pdf):

    def __init__(self, sample):
        self.kde = scipy.stats.gaussian_kde(sample)

    def Density(self, x):
        return self.kde.evaluate(x)
"""
Explanation: __init__ takes mu and sigma, which are the
mean and standard deviation of the distribution, and stores them as
attributes.
Density uses a function from scipy.stats to
evaluate the Gaussian PDF. The function is called norm.pdf
because the Gaussian distribution is also called the “normal”
distribution.
The Gaussian PDF is defined by a simple mathematical function, so it is
easy to evaluate. And it is useful because many quantities in the real
world have distributions that are approximately Gaussian.
But with real data, there is no guarantee that the distribution is
Gaussian or any other simple mathematical function. In that case we can
use a sample to estimate the PDF of the whole population.
For example, in The Price Is Right data, we have 313
prices for the first showcase. We can think of these values as a sample
from the population of all possible showcase prices.
This sample includes the following values (in order):
$$28800, 28868, 28941, 28957, 28958$$
In the sample, no values appear
between 28801 and 28867, but there is no reason to think that these
values are impossible. Based on our background information, we expect
all values in this range to be equally likely. In other words, we expect
the PDF to be fairly smooth.
Kernel density estimation (KDE) is an algorithm that takes a sample and
finds an appropriately smooth PDF that fits the data. You can read
details at http://en.wikipedia.org/wiki/Kernel_density_estimation.
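Here is a minimal sketch (not the chapter's code) of scipy's KDE applied to the five sample prices quoted above; note that the estimate assigns nonzero density to values that never appear in the sample:

```python
import scipy.stats

sample = [28800, 28868, 28941, 28957, 28958]
kde = scipy.stats.gaussian_kde(sample)

# 28830 is between two observations, yet its estimated density is positive
print(kde.evaluate([28830])[0] > 0)  # True
```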
scipy provides an implementation of KDE and
thinkbayes provides a class called
EstimatedPdf that uses it:
End of explanation
"""
import numpy
import thinkbayes
import thinkplot

data = ReadData(path='../code')
cols = zip(*data)
price1, price2, bid1, bid2, diff1, diff2 = cols
pdf = thinkbayes.EstimatedPdf(price1)
low, high = 0, 75000
n = 101
xs = numpy.linspace(low, high, n)
pmf = pdf.MakePmf(xs)
thinkplot.Pmfs([pmf])
"""
Explanation: __init__ takes a sample and computes a kernel density estimate. The
result is a gaussian_kde object that provides an evaluate
method.
Density takes a value, calls gaussian_kde.evaluate, and
returns the resulting density.
Finally, here’s an outline of the code I used to generate
Figure [fig.price1]:
End of explanation
"""
MakePrice2(player1, player2)
"""
Explanation: pdf is a Pdf object, estimated by KDE.
pmf is a Pmf object that approximates the Pdf by evaluating
the density at a sequence of equally spaced values.
linspace stands for “linear space.” It takes a range,
low and high, and the number of points,
n, and returns a new numpy array with
n elements equally spaced between low and
high, including both.
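For instance (a small sketch, separate from the chapter's code):

```python
import numpy

# the same range and resolution used for the price distributions
xs = numpy.linspace(0, 75000, 101)
print(xs[0], xs[-1], len(xs))  # 0.0 75000.0 101
```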
And now back to The Price is Right.
Modeling the contestants
The PDFs in Figure [fig.price1] estimate the distribution of possible
prices. If you were a contestant on the show, you could use this
distribution to quantify your prior belief about the price of each
showcase (before you see the prizes).
To update these priors, we have to answer these questions:
What data should we consider and how should we quantify it?
Can we compute a likelihood function; that is, for each hypothetical
value of price, can we compute the conditional
likelihood of the data?
To answer these questions, I am going to model the contestant as a
price-guessing instrument with known error characteristics. In other
words, when the contestant sees the prizes, he or she guesses the price
of each prize—ideally without taking into consideration the fact that
the prize is part of a showcase—and adds up the prices. Let’s call this
total guess.
Under this model, the question we have to answer is, “If the actual
price is price, what is the likelihood that the
contestant’s estimate would be guess?”
Or if we define

```python
error = price - guess
```
then we could ask, “What is the likelihood that the contestant’s
estimate is off by error?”
To answer this question, we can use the historical data again.
Figure [fig.price2] shows the cumulative distribution of
diff, the difference between the contestant’s bid and the
actual price of the showcase.
The definition of diff is

```python
diff = price - bid
```
When diff is negative, the bid is too high. As an aside, we
can use this distribution to compute the probability that the
contestants overbid: the first contestant overbids 25% of the time; the
second contestant overbids 29% of the time.
We can also see that the bids are biased; that is, they are more likely
to be too low than too high. And that makes sense, given the rules of
the game.
Finally, we can use this distribution to estimate the reliability of the
contestants’ guesses. This step is a little tricky because we don’t
actually know the contestant’s guesses; we only know what they bid.
So we’ll have to make some assumptions. Specifically, I assume that the
distribution of error is Gaussian with mean 0 and the same
variance as diff.
The Player class implements this model:
```python
class Player(object):

    def __init__(self, prices, bids, diffs):
        self.pdf_price = thinkbayes.EstimatedPdf(prices)
        self.cdf_diff = thinkbayes.MakeCdfFromList(diffs)

        mu = 0
        sigma = numpy.std(diffs)
        self.pdf_error = thinkbayes.GaussianPdf(mu, sigma)
```
prices is a sequence of showcase prices, bids
is a sequence of bids, and diffs is a sequence of diffs,
where again diff = price - bid.
pdf_price is the smoothed PDF of prices, estimated by KDE. cdf_diff
is the cumulative distribution of diff, which we saw in
Figure [fig.price2]. And pdf_error is the PDF that characterizes the
distribution of errors, where error = price - guess.
End of explanation
"""
class Price(thinkbayes.Suite):

    def __init__(self, pmf, player):
        thinkbayes.Suite.__init__(self, pmf)
        self.player = player

    def Likelihood(self, data, hypo):
        price = hypo
        guess = data
        error = price - guess
        like = self.player.ErrorDensity(error)
        return like
"""
Explanation: Again, we use the variance of diff to estimate the variance
of error. This estimate is not perfect because contestants’
bids are sometimes strategic; for example, if Player 2 thinks that
Player 1 has overbid, Player 2 might make a very low bid. In that case
diff does not reflect error. If this happens a
lot, the observed variance in diff might overestimate the
variance in error. Nevertheless, I think it is a reasonable
modeling decision.
As an alternative, someone preparing to appear on the show could
estimate their own distribution of error by watching
previous shows and recording their guesses and the actual prices.
Likelihood
Now we are ready to write the likelihood function. As usual, I define a
new class that extends thinkbayes.Suite:
End of explanation
"""
class GainCalculator(object):

    def __init__(self, player, opponent):
        self.player = player
        self.opponent = opponent

    def ExpectedGains(self, low=0, high=75000, n=101):
        bids = numpy.linspace(low, high, n)
        gains = [self.ExpectedGain(bid) for bid in bids]
        return bids, gains

    def ExpectedGain(self, bid):
        suite = self.player.posterior
        total = 0
        for price, prob in sorted(suite.Items()):
            gain = self.Gain(bid, price)
            total += prob * gain
        return total

    def Gain(self, bid, price):
        # if you overbid, you get nothing
        if bid > price:
            return 0

        # otherwise compute the probability of winning
        diff = price - bid
        prob = self.ProbWin(diff)

        # if you are within 250 dollars, you win both showcases
        if diff <= 250:
            return 2 * price * prob
        else:
            return price * prob

    def ProbWin(self, diff):
        prob = (self.opponent.ProbOverbid() +
                self.opponent.ProbWorseThan(diff))
        return prob
"""
Explanation: pmf represents the prior distribution and
player is a Player object as described in the previous
section. In Likelihood hypo is the hypothetical price of the showcase.
data is the contestant’s best guess at the price.
error is the difference, and like is the
likelihood of the data, given the hypothesis.
ErrorDensity is defined in Player:
```python
class Player:

    def ErrorDensity(self, error):
        return self.pdf_error.Density(error)
```
ErrorDensity works by evaluating pdf_error at the given
value of error. The result is a probability density, so it
is not really a probability. But remember that Likelihood
doesn’t need to compute a probability; it only has to compute something
proportional to a probability. As long as the constant of
proportionality is the same for all likelihoods, it gets canceled out
when we normalize the posterior distribution.
And therefore, a probability density is a perfectly good likelihood.
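Here is a small sketch, with made-up numbers, of why the constant of proportionality cancels:

```python
import numpy as np

prior = np.array([0.4, 0.4, 0.2])
likes = np.array([0.2, 0.5, 0.3])   # could be densities, not probabilities

post1 = prior * likes
post1 /= post1.sum()                # normalize

post2 = prior * (10 * likes)        # same likelihoods, scaled by 10
post2 /= post2.sum()                # normalize

print(np.allclose(post1, post2))  # True
```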
Update
Player provides a method that takes the contestant’s guess
and computes the posterior distribution:
```python
class Player:

    def MakeBeliefs(self, guess):
        pmf = self.PmfPrice()
        self.prior = Price(pmf, self)
        self.posterior = self.prior.Copy()
        self.posterior.Update(guess)
```
PmfPrice generates a discrete approximation to the PDF of
price, which we use to construct the prior.
PmfPrice uses MakePmf, which evaluates
pdf_price at a sequence of values:
```python
class Player:

    n = 101
    price_xs = numpy.linspace(0, 75000, n)

    def PmfPrice(self):
        return self.pdf_price.MakePmf(self.price_xs)
```
To construct the posterior, we make a copy of the prior and then invoke
Update, which invokes Likelihood for each
hypothesis, multiplies the priors by the likelihoods, and renormalizes.
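The update mechanics can be sketched with plain dictionaries (an illustration with made-up numbers, not the thinkbayes implementation):

```python
# hypothetical prices with prior probabilities and likelihoods
prior = {20000: 0.5, 25000: 0.3, 30000: 0.2}
likes = {20000: 0.8, 25000: 0.5, 30000: 0.1}

# multiply priors by likelihoods, then renormalize
post = {h: p * likes[h] for h, p in prior.items()}
norm = sum(post.values())
post = {h: p / norm for h, p in post.items()}

print(round(sum(post.values()), 6))  # 1.0
```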
So let’s get back to the original scenario. Suppose you are Player 1 and
when you see your showcase, your best guess is that the total price of
the prizes is \$20,000.
Figure [fig.price3] shows prior and posterior beliefs about the actual
price. The posterior is shifted to the left because your guess is on the
low end of the prior range.
On one level, this result makes sense. The most likely value in the
prior is \$27,750, your best guess is \$20,000, and the mean of the
posterior is somewhere in between: \$25,096.
On another level, you might find this result bizarre, because it
suggests that if you think the price is \$20,000, then
you should believe the price is \$24,000.
To resolve this apparent paradox, remember that you are combining two
sources of information, historical data about past showcases and guesses
about the prizes you see.
We are treating the historical data as the prior and updating it based
on your guesses, but we could equivalently use your guess as a prior and
update it based on historical data.
If you think of it that way, maybe it is less surprising that the most
likely value in the posterior is not your original guess.
Optimal bidding
Now that we have a posterior distribution, we can use it to compute the
optimal bid, which I define as the bid that maximizes expected return
(see http://en.wikipedia.org/wiki/Expected_return).
I’m going to present the methods in this section top-down, which means I
will show you how they are used before I show you how they work. If you
see an unfamiliar method, don’t worry; the definition will be along
shortly.
To compute optimal bids, I wrote a class called
GainCalculator:
End of explanation
"""
player1.MakeBeliefs(20000)
player2.MakeBeliefs(40000)
calc1 = GainCalculator(player1, player2)
calc2 = GainCalculator(player2, player1)
bids, gains = calc1.ExpectedGains()
thinkplot.Plot(bids, gains, label='Player 1')
print('Player 1 optimal bid', max(zip(gains, bids)))
bids, gains = calc2.ExpectedGains()
thinkplot.Plot(bids, gains, label='Player 2')
plt.legend();
"""
Explanation: player and opponent are Player
objects.
GainCalculator provides ExpectedGains, which
computes a sequence of bids and the expected gain for each bid:
low and high specify the range of possible
bids; n is the number of bids to try.
ExpectedGains calls ExpectedGain, which
computes expected gain for a given bid:
ExpectedGain loops through the values in the posterior and
computes the gain for each bid, given the actual prices of the showcase.
It weights each gain with the corresponding probability and returns the
total.
ExpectedGain invokes Gain, which takes a bid
and an actual price and returns the expected gain.
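The payoff rules in Gain can be illustrated standalone with the bids from the opening story (a sketch; prob_win is a made-up placeholder, not a computed probability):

```python
def gain(bid, price, prob_win):
    if bid > price:
        return 0                                     # overbid: you get nothing
    diff = price - bid
    payoff = 2 * price if diff <= 250 else price     # within $250: both showcases
    return payoff * prob_win

print(gain(26000, 25347, 0.5))  # Nathaniel overbid -> 0
print(gain(21500, 21578, 0.5))  # Letia within $250 -> 2 * 21578 * 0.5
```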
End of explanation
"""
# km-Poonacha/python4phd | Session 1/ipython/Lesson 1 - Data and Types-Worksheet.ipynb | gpl-3.0
i_list = [1, 2, 3, 4]          # a list of integers (example values)
c_list = ['a', 'b', 'c', 'd']  # a list of characters (example values)
print('The first element is: ', c_list[0])
"""
Explanation: Lesson 1: Data and Types
In this lesson we learn about the basic data types and data structures and play with them a little.
They define the format by which you input data to a program, modify it, and output it to the console
Data types: integer, float, string and boolean
Data structures: Lists, dictionaries, tuples etc
Global vs. local variable
Let's start by initializing an integer, a string, a list and a dictionary
Data Types
Let us create an integer and string variable and see its output.
Some Simple Data Operations
Most integer/floating point operators that you use in stata or other languages work in python as well - eg (+,-,*,/).
Let us write a code to find the quotient and remainder of a number given a divisor. The operator '//' can be used to find the quotient and '%' can be used to find the remainder.
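For example (a sketch with arbitrary numbers):

```python
# dividing 17 by 5
number, divisor = 17, 5
quotient = number // divisor   # integer division
remainder = number % divisor   # what is left over
print(quotient, remainder)  # 3 2
```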
Data Structures
Data structures combine the basic datatypes (integer, float, string, boolean) to create more complex types. There are several types of data structures but the most commonly used ones are "lists" and "dictionaries"
Lists
We start with a list. A list is a sequence of integer / character data types.
Let us now create two lists: one with only integer data, "i_list", and the other with only character data, "c_list".
The elements in the lists are numbered upwards from 0. Hence, we can access each element in the list using its index number. The following code prints the first element of c_list.
End of explanation
"""
i_list = [1, 2, 3, 4]
# Append the number 5 at the end
i_list.append(5)
print(i_list)
# Insert the 1.5 between 1 and 2
i_list.insert(1, 1.5)
print(i_list)
# Delete the number 1.5
i_list.remove(1.5)
print(i_list)
# Delete the numbers between position 0 and 2
del i_list[0:2]
print(i_list)
# Delete the contents of the entire list
del i_list[:]
print(i_list)
"""
Explanation: List Operations
Lists are mutable i.e. we can append, insert and delete elements from the list.
Syntax - append(data to be entered), insert(index, data), del(index)
End of explanation
"""
# Enter Code
"""
Explanation: String
Strings are used to input and read sentences and words. There are several operations that can be done on a string:
1. Concatenating - the "+" operator combines two strings.
2. split('separator') - splits a string into a list of smaller strings, depending on the specified separator. If no separator is specified, it assumes the separator is a space or a tab.
3. join('separator') - conversely, combines a list of strings into a single string using the separator.
The input('input message') function is used to enter data from the console.
Let us write a code to enter a sentence and split it into a list of words.
Exercise 1:
Write a small code to enter a date in "dd/mm/yyyy" format. Use the split() command to split the entered date into day, month and year. Hint: you need to specify the separator as '/'.
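One possible solution (a sketch; the date is hard-coded here instead of read with input() so the cell runs without interaction):

```python
# in the exercise, this would be: date = input('Enter a date (dd/mm/yyyy): ')
date = '25/12/2020'
day, month, year = date.split('/')   # split on the '/' separator
print(day, month, year)  # 25 12 2020
```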
End of explanation
"""
# jerkos/cobrapy | documentation_builder/building_model.ipynb | lgpl-2.1
from cobra import Model, Reaction, Metabolite
# Best practise: SBML compliant IDs
cobra_model = Model('example_cobra_model')
reaction = Reaction('3OAS140')
reaction.name = '3 oxoacyl acyl carrier protein synthase n C140 '
reaction.subsystem = 'Cell Envelope Biosynthesis'
reaction.lower_bound = 0. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.objective_coefficient = 0. # this is the default
"""
Explanation: Building a Model
This simple example demonstrates how to create a model, create a reaction, and then add the reaction to the model.
We'll use the '3OAS140' reaction from the STM_1.0 model:
1.0 malACP[c] + 1.0 h[c] + 1.0 ddcaACP[c] $\rightarrow$ 1.0 co2[c] + 1.0 ACP[c] + 1.0 3omrsACP[c]
First, create the model and reaction.
End of explanation
"""
ACP_c = Metabolite('ACP_c',
formula='C11H21N2O7PRS',
name='acyl-carrier-protein',
compartment='c')
omrsACP_c = Metabolite('3omrsACP_c',
formula='C25H45N2O9PRS',
name='3-Oxotetradecanoyl-acyl-carrier-protein',
compartment='c')
co2_c = Metabolite('co2_c',
formula='CO2',
name='CO2',
compartment='c')
malACP_c = Metabolite('malACP_c',
formula='C14H22N2O10PRS',
name='Malonyl-acyl-carrier-protein',
compartment='c')
h_c = Metabolite('h_c',
formula='H',
name='H',
compartment='c')
ddcaACP_c = Metabolite('ddcaACP_c',
formula='C23H43N2O8PRS',
name='Dodecanoyl-ACP-n-C120ACP',
compartment='c')
"""
Explanation: We need to create metabolites as well. If we were using an existing model, we could use get_by_id to get the appropriate Metabolite objects instead.
End of explanation
"""
reaction.add_metabolites({malACP_c: -1.0,
h_c: -1.0,
ddcaACP_c: -1.0,
co2_c: 1.0,
ACP_c: 1.0,
omrsACP_c: 1.0})
reaction.reaction # This gives a string representation of the reaction
"""
Explanation: Adding metabolites to a reaction requires using a dictionary of the metabolites and their stoichiometric coefficients. A group of metabolites can be added all at once, or they can be added one at a time.
End of explanation
"""
reaction.gene_reaction_rule = '( STM2378 or STM1197 )'
reaction.genes
"""
Explanation: The gene_reaction_rule is a boolean representation of the gene requirements for this reaction to be active as described in Schellenberger et al 2011 Nature Protocols 6(9):1290-307. We will assign the gene reaction rule string, which will automatically create the corresponding gene objects.
End of explanation
"""
print('%i reactions in initial model' % len(cobra_model.reactions))
print('%i metabolites in initial model' % len(cobra_model.metabolites))
print('%i genes in initial model' % len(cobra_model.genes))
"""
Explanation: At this point in time, the model is still empty
End of explanation
"""
cobra_model.add_reaction(reaction)
# Now there are things in the model
print('%i reaction in model' % len(cobra_model.reactions))
print('%i metabolites in model' % len(cobra_model.metabolites))
print('%i genes in model' % len(cobra_model.genes))
"""
Explanation: We will add the reaction to the model, which will also add all associated metabolites and genes
End of explanation
"""
# Iterate through the objects in the model
print("Reactions")
print("---------")
for x in cobra_model.reactions:
    print("%s : %s" % (x.id, x.reaction))

print("Metabolites")
print("-----------")
for x in cobra_model.metabolites:
    print('%s : %s' % (x.id, x.formula))

print("Genes")
print("-----")
for x in cobra_model.genes:
    reactions_list_str = "{" + ", ".join(i.id for i in x.reactions) + "}"
    print("%s is associated with reactions: %s" % (x.id, reactions_list_str))
"""
Explanation: We can iterate through the model objects to observe the contents
End of explanation
"""
# JoseGuzman/myIPythonNotebooks | Stochastic_systems/Conditional Probability.ipynb | gpl-2.0
%pylab inline
# conf is a dictionary with the recording configurations
conf = {
    'pairs': 495.,
    'triplets': 96.,
    'quadruples': 135.,
    'quintuples': 120.,
    'sextuples': 118.,
    'septuples': 66.,
    'octuples': 72.
}
# syn is a dictionary with the number of connections found
syn = {
    'pairs': 4.,
    'triplets': 6.,
    'quadruples': 18.,
    'quintuples': 27.,
    'sextuples': 39.,
    'septuples': 25.,
    'octuples': 27.
}
# Remember, the number of recording configurations
# is NOT the same as the number of connections tested
nconf = np.sum(list(conf.values()))
nsyn = np.sum(list(syn.values()))
print('Total recording configurations = %4d'%nconf)
print('Total connections = %4d'%nsyn)
"""
Explanation: <H1> Conditional Probability </H1>
<P>It is the probability of an event given that another event has occurred.
The probability of an event $C$ given $R$ is defined as:</P>
$$P(C|R) = \frac{P(C \cap R)}{P(R)}$$
<P>where $P(C \cap R)$ is the probability of both $C$ and $R$ occurring.</P>
End of explanation
"""
PCR = syn['pairs'] / conf['pairs']
print("P(connection | pairs): ", PCR)
"""
Explanation: <H2>P(C|R): probability of connection given pair configuration</H2>
<P>We will compute P(C|R) directly; C is "connection" and R is "recording configuration". For example, to compute the probability of getting a connection in a pair configuration we simply calculate how many connections were found in the total number of pair configurations tried.</P>
End of explanation
"""
PR = conf['pairs'] / nconf
print("P(pairs): ", PR)
"""
Explanation: <H2>P(R): probability of pair configuration</H2>
<P>P(R) is simply the probability of the 'pair' configuration in this data set.</P>
End of explanation
"""
PC = nsyn / nconf
print("P(connection): ", PC)
"""
Explanation: <H2>P(C): probability of connection </H2>
<P> P(C) is the overall probability of finding a connection in a recording type, regardless of the recording configuration:</P>
End of explanation
"""
PCUR = syn['pairs'] / nconf
print("P(connection and pair): ", PCUR)
print("P(C)*P(R): ", PC * PR)
"""
Explanation: If the number of connections (C) and the recording configuration (R) were independent, then we would expect P(C|R) to be about the same as P(C), but they're not: P(C) is 0.132, and P(C|R) is 0.008. This tells us that connections (C) and recording configurations (R) are dependent.
$P(C \cap R)$ is different from P(C|R). $P(C \cap R)$ is the probability of both recording in a pair configuration
and getting a connection, out of the total population, not just the population of pair recordings.
"""
print(PCUR / PR)  # this is P(C|R)
"""
Explanation: $P(C \cap R)$ and P(C)P(R) are pretty different because R and C are actually dependent on each other.
We can also check that $P(C|R) = \frac{P(C \cap R)}{P(R)}$ and sure enough, it is:
End of explanation
"""
print( "P(connection): %2.4f"%PC)
print( "==================================")
prob = list()
#for config in conf.keys():
myconfig = ['pairs', 'triplets', 'quadruples',
'quintuples', 'sextuples', 'septuples', 'octuples']
for config in myconfig:
    PCR = syn[config] / conf[config]
    print("P(connection | %-10s): %2.4f" % (config, PCR))
    prob.append(PCR)
print( "==================================")
plt.plot(range(2,9),prob, 'ko-')
plt.axhline(0.1325, color='#AA0000', linestyle='dashed')
plt.xlim(1, 9), plt.ylim(-0.05,.55)
plt.xlabel('Configuration'), plt.ylabel('P(connection)');
"""
Explanation: <H2>Conditional probability of every recording configuration</H2>
<P>
Let's compute the conditional probability for every recording configuration, to evaluate which ones are above the average probability of connection in a configuration and see if we obtain an optimum for recording configuration.</P>
End of explanation
"""
# nlooije/pythreejs | examples/Examples.ipynb | bsd-3-clause
ball = Mesh(geometry=SphereGeometry(radius=1), material=LambertMaterial(color='red'), position=[2,1,0])
scene = Scene(children=[ball, AmbientLight(color=0x777777), make_text('Hello World!', height=.6)])
c = PerspectiveCamera(position=[0,5,5], up=[0,0,1], children=[DirectionalLight(color='white',
position=[3,5,1],
intensity=0.5)])
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c)])
display(renderer)
ball.geometry.radius=0.5
import time, math
ball.material.color = 0x4400dd
for i in range(1, 150, 2):
    ball.geometry.radius = i/100.
    ball.material.color += 0x000300
    ball.position = [math.cos(i/10.), math.sin(i/50.), i/100.]
    time.sleep(.05)
"""
Explanation: Simple sphere and text
End of explanation
"""
nx,ny=(20,20)
xmax=1
x = np.linspace(-xmax,xmax,nx)
y = np.linspace(-xmax,xmax,ny)
xx, yy = np.meshgrid(x,y)
z = xx**2-yy**2
#z[6,1] = float('nan')
surf_g = SurfaceGeometry(z=list(z[::-1].flat),
width=2*xmax,
height=2*xmax,
width_segments=nx-1,
height_segments=ny-1)
surf = Mesh(geometry=surf_g, material=LambertMaterial(map=height_texture(z[::-1], 'YlGnBu_r')))
surfgrid = SurfaceGrid(geometry=surf_g, material=LineBasicMaterial(color='black'))
hover_point = Mesh(geometry=SphereGeometry(radius=0.05), material=LambertMaterial(color='hotpink'))
scene = Scene(children=[surf, surfgrid, hover_point, AmbientLight(color=0x777777)])
c = PerspectiveCamera(position=[0,3,3], up=[0,0,1],
children=[DirectionalLight(color='white', position=[3,5,1], intensity=0.6)])
click_picker = Picker(root=surf, event='dblclick')
hover_picker = Picker(root=surf, event='mousemove')
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c), click_picker, hover_picker])
def f(name, value):
    print("Clicked on %s" % value)
    point = Mesh(geometry=SphereGeometry(radius=0.05),
                 material=LambertMaterial(color='red'),
                 position=value)
    scene.children = list(scene.children) + [point]

click_picker.on_trait_change(f, 'point')
link((hover_point, 'position'), (hover_picker, 'point'))
h = HTML()

def g(name, value):
    h.value = "Green point at (%.3f, %.3f, %.3f)" % tuple(value)

g(None, hover_point.position)
hover_picker.on_trait_change(g, 'point')
display(h)
display(renderer)
# when we change the z values of the geometry, we need to also change the height map
surf_g.z = list((-z[::-1]).flat)
surf.material.map = height_texture(-z[::-1])
"""
Explanation: Clickable Surface
End of explanation
"""
import numpy as np
from scipy import ndimage
import matplotlib
import matplotlib.pyplot as plt
from skimage import img_as_ubyte
jet = matplotlib.cm.get_cmap('jet')
np.random.seed(1)  # start random number generator
n = 5              # starting points
size = 32          # size of image
im = np.zeros((size, size))  # create zero image
points = size*np.random.random((2, n**2))  # locations of seed values
im[points[0].astype(np.int), points[1].astype(np.int)] = size  # seed high values
im = ndimage.gaussian_filter(im, sigma=size/(4.*n))  # smooth high values into surrounding areas
im *= 1/np.max(im)  # rescale to be in the range [0,1]
rgba_im = img_as_ubyte(jet(im)) # convert the values to rgba image using the jet colormap
rgba_list = list(rgba_im.flat) # make a flat list
t = DataTexture(data=rgba_list, format='RGBAFormat', width=size, height=size)
geometry = SphereGeometry()#TorusKnotGeometry(radius=2, radialSegments=200)
material = LambertMaterial(map=t)
myobject = Mesh(geometry=geometry, material=material)
c = PerspectiveCamera(position=[0,3,3], fov=40, children=[DirectionalLight(color=0xffffff, position=[3,5,1], intensity=0.5)])
scene = Scene(children=[myobject, AmbientLight(color=0x777777)])
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c)])
display(renderer)
"""
Explanation: Design our own texture
End of explanation
"""
# On windows, linewidth of the material has no effect
size = 4
linesgeom = PlainGeometry(vertices=[[0,0,0],[size,0,0],[0,0,0],[0,size,0],[0,0,0],[0,0,size]],
colors = ['red', 'red', 'green', 'green', 'white', 'orange'])
lines = Line(geometry=linesgeom,
material=LineBasicMaterial( linewidth=5, vertexColors='VertexColors'),
type='LinePieces')
scene = Scene(children=[lines, DirectionalLight(color=0xccaabb, position=[0,10,0]),AmbientLight(color=0xcccccc)])
c = PerspectiveCamera(position=[0,10,10])
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c)])
display(renderer)
"""
Explanation: Lines
End of explanation
"""
geometry = SphereGeometry(radius=4)
t = ImageTexture(imageuri="")
material = LambertMaterial(color='white', map=t)
sphere = Mesh(geometry=geometry, material=material)
point = Mesh(geometry=SphereGeometry(radius=.1),
material=LambertMaterial(color='red'))
c = PerspectiveCamera(position=[0,10,10], fov=40, children=[DirectionalLight(color='white',
position=[3,5,1],
intensity=0.5)])
scene = Scene(children=[sphere, point, AmbientLight(color=0x777777)])
p=Picker(event='mousemove', root=sphere)
renderer = Renderer(camera=c, scene = scene, controls=[OrbitControls(controlling=c), p])
coords = Text()
display(coords)
display(renderer)
#dlink((p,'point'), (point, 'position'), (coords, 'value'))
#
#camera=WebCamera()
#display(camera)
#display(Link(widgets=[[camera, 'imageurl'], [t, 'imageuri']]))
"""
Explanation: Camera
End of explanation
"""
f = """
function f(origu,origv) {
// scale u and v to the ranges I want: [0, 2*pi]
var u = 2*Math.PI*origu;
var v = 2*Math.PI*origv;
var x = Math.sin(u);
var y = Math.cos(v);
var z = Math.cos(u+v);
return new THREE.Vector3(x,y,z)
}
"""
surf_g = ParametricGeometry(func=f);
surf = Mesh(geometry=surf_g,material=LambertMaterial(color='green', side ='FrontSide'))
surf2 = Mesh(geometry=surf_g,material=LambertMaterial(color='yellow', side ='BackSide'))
scene = Scene(children=[surf, surf2, AmbientLight(color=0x777777)])
c = PerspectiveCamera(position=[5,5,3], up=[0,0,1],children=[DirectionalLight(color='white', position=[3,5,1], intensity=0.6)])
renderer = Renderer(camera=c,scene = scene,controls=[OrbitControls(controlling=c)])
display(renderer)
"""
Explanation: Parametric Functions
To use the ParametricGeometry class, you need to specify a javascript function as a string. The function should take two parameters that vary between 0 and 1, and return a new THREE.Vector3(x,y,z).
If you want to build the surface in Python, you'll need to explicitly construct the vertices and faces and build a basic geometry from the vertices and faces.
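If you'd like to see what that explicit construction looks like, here is a minimal NumPy sketch (independent of pythreejs; the grid resolution `n` and the two-triangles-per-cell triangulation are illustrative choices, not from this notebook) that samples the same parametric surface and builds vertex and face arrays:

```python
import numpy as np

def parametric_mesh(n=32):
    # Sample the surface on an n x n grid of (u, v) values in [0, 2*pi].
    u, v = np.meshgrid(np.linspace(0, 2 * np.pi, n),
                       np.linspace(0, 2 * np.pi, n), indexing='ij')
    # Same (x, y, z) mapping as the javascript function above.
    verts = np.stack([np.sin(u), np.cos(v), np.cos(u + v)], axis=-1).reshape(-1, 3)
    # Two triangles per grid cell, indexing into the flattened vertex array.
    faces = []
    for i in range(n - 1):
        for j in range(n - 1):
            a = i * n + j
            faces.append([a, a + 1, a + n])
            faces.append([a + 1, a + n + 1, a + n])
    return verts, np.array(faces)

verts, faces = parametric_mesh()
```

The resulting `verts` and `faces` arrays are what you would feed into a basic geometry object.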
End of explanation
"""
# Source: ocean-color-ac-challenge/evaluate-pearson, evaluation-participant-a.ipynb (Apache-2.0)
w_412 = 0.56
ocean-color-ac-challenge/evaluate-pearson | evaluation-participant-a.ipynb | apache-2.0 | w_412 = 0.56
w_443 = 0.73
w_490 = 0.71
w_510 = 0.36
w_560 = 0.01
"""
Explanation: E-CEO Challenge #3 Evaluation
Weights
Define the weight of each wavelength
End of explanation
"""
run_id = '0000000-150625115710650-oozie-oozi-W'
run_meta = 'http://sb-10-16-10-55.dev.terradue.int:50075/streamFile/ciop/run/participant-a/0000000-150625115710650-oozie-oozi-W/results.metalink?'
participant = 'participant-a'
"""
Explanation: Run
Provide the run information:
* run id
* run metalink containing the 3 by 3 kernel extractions
* participant
End of explanation
"""
import glob
import pandas as pd
from scipy.stats.stats import pearsonr
import numpy
import math
"""
Explanation: Define all imports in a single cell
End of explanation
"""
!curl $run_meta | aria2c -d $participant -M -
path = participant # use your path
allFiles = glob.glob(path + "/*.txt")
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
df = pd.read_csv(file_,index_col=None, header=0)
list_.append(df)
frame = pd.concat(list_)
len(frame.index)
"""
Explanation: Manage run results
Download the results and aggregate them in a single Pandas dataframe
End of explanation
"""
insitu_path = './insitu/AAOT.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "AAOT"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx = frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_aaot_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")
frame_xxx = frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_aaot_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")
frame_xxx = frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_aaot_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @490")
r_aaot_510 = 0
print("0 observations for band @510")
frame_xxx = frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_aaot_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")
insitu_path = './insitu/BOUSS.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "BOUS"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx = frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_bous_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")
frame_xxx = frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_bous_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")
frame_xxx = frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_bous_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @490")
frame_xxx = frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()
r_bous_510 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @510")
frame_xxx = frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_bous_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")
insitu_path = './insitu/MOBY.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "MOBY"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx = frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_moby_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")
frame_xxx = frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_moby_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")
frame_xxx = frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_moby_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @490")
frame_xxx = frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()
r_moby_510 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @510")
frame_xxx = frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_moby_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")
[r_aaot_412, r_aaot_443, r_aaot_490, r_aaot_510, r_aaot_560]
[r_bous_412, r_bous_443, r_bous_490, r_bous_510, r_bous_560]
[r_moby_412, r_moby_443, r_moby_490, r_moby_510, r_moby_560]
r_final = (numpy.mean([r_bous_412, r_moby_412, r_aaot_412]) * w_412 \
+ numpy.mean([r_bous_443, r_moby_443, r_aaot_443]) * w_443 \
+ numpy.mean([r_bous_490, r_moby_490, r_aaot_490]) * w_490 \
+ numpy.mean([r_bous_510, r_moby_510, r_aaot_510]) * w_510 \
+ numpy.mean([r_bous_560, r_moby_560, r_aaot_560]) * w_560) \
/ (w_412 + w_443 + w_490 + w_510 + w_560)
r_final
"""
Explanation: Number of points extracted from MERIS level 2 products
Calculate Pearson
For all three sites, AAOT, BOUSSOLE and MOBY, calculate the Pearson factor for each band.
Note AAOT does not have measurements for band @510
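The per-site cells that follow are intentionally explicit but repetitive; as a sketch (using `np.corrcoef` in place of scipy's `pearsonr`, which computes the same coefficient, and assuming the column names used in this notebook), the per-band computation and the final weighted score could be factored out like this:

```python
import numpy as np
import pandas as pd

def band_pearson(frame_full, sat_col, insitu_col):
    # Pearson r between one satellite band and the matching in-situ band;
    # 0 when the band is absent or has no valid observations (e.g. AAOT @510).
    if sat_col not in frame_full.columns or insitu_col not in frame_full.columns:
        return 0.0
    pair = frame_full[[sat_col, insitu_col]].dropna()
    if len(pair) < 2:
        return 0.0
    return float(np.corrcoef(pair.iloc[:, 0], pair.iloc[:, 1])[0, 1])

def weighted_score(band_rs, weights):
    # Weighted mean of the per-band coefficients, as in the r_final cell below.
    return sum(r * w for r, w in zip(band_rs, weights)) / sum(weights)
```

This keeps the evaluation logic in one place so a missing band or a renamed column only has to be handled once.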
AAOT site
End of explanation
"""
# Source: deep-learning-indaba/practicals2017, practical3.ipynb (MIT)
# Import TensorFlow and some other libraries we'll be using.
deep-learning-indaba/practicals2017 | practical3.ipynb | mit | # Import TensorFlow and some other libraries we'll be using.
import datetime
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Import Matplotlib and set some defaults
from matplotlib import pyplot as plt
plt.ioff()
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Download the MNIST dataset onto the local machine.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
"""
Explanation: DL Indaba Practical 3
Convolutional Neural Networks
Developed by Stephan Gouws, Avishkar Bhoopchand & Ulrich Paquet.
Introduction
In this practical we will cover the basics of convolutional neural networks, or "ConvNets". ConvNets were invented in the late 1980s/early 1990s, and have had tremendous success especially with vision (although they have also been used to great success in speech processing pipelines, and more recently, for machine translation).
We will work to build our mathematical and algorithmic intuition around the "convolution" operation. Then we will construct a deep feedforward convolutional model with which we can classify MNIST digits with over 99% accuracy (our best model yet!).
Learning objectives
Understand:
* what a convolutional layer is & how it's different from a fully-connected layer (including the assumptions and trade-offs that are being made),
* how and when to use convolutional layers (relate it to the assumptions the model makes),
* how backpropagation works through convolutional layers.
What is expected of you:
Read through the explanations and make sure you understand how to implement the convolutional forwards pass.
Do the same for the backwards phase.
Train a small model on MNIST.
At this point, flag a tutor and they will give you access to a GPU instance. Now use the hyperparameters provided to train a state-of-the-art ConvNet model on MNIST.
End of explanation
"""
## IMPLEMENT-ME: ...
# Conv layer forward pass
def convolutional_forward(X, W, b, filter_size, depth, stride, padding):
# X has size [batch_size, input_width, input_height, input_depth]
# W has shape [filter_size, filter_size, input_depth, depth]
# b has shape [depth]
batch_size, input_width, input_height, input_depth = X.shape
# Check that the weights are of the expected shape
assert W.shape == (filter_size, filter_size, input_depth, depth)
# QUESTION: Calculate the width and height of the output
# output_width = ...
# output_height = ...
#
# ANSWER:
    output_width = (input_width - filter_size + 2*padding) // stride + 1
    output_height = (input_height - filter_size + 2*padding) // stride + 1
####
# Apply padding to the width and height dimensions of the input
X_padded = np.pad(X, ((0,0), (padding, padding), (padding, padding), (0,0)), 'constant')
# Allocate the output Tensor
out = np.zeros((batch_size, output_width, output_height, depth))
# NOTE: There is a more efficient way of doing a convolution, but this most
# clearly illustrates the idea.
for i in range(output_width): # Loop over the output width dimension
for j in range(output_height): # Loop over the output height dimension
# Select the current block in the input that the filter will be applied to
block_width_start = i * stride
block_width_end = block_width_start + filter_size
block_height_start = j * stride
block_height_end = block_height_start + filter_size
block = X_padded[:, block_width_start:block_width_end, block_height_start:block_height_end, :]
for d in range(depth): # Loop over the filters in the layer (output depth dimension)
filter_weights = W[:, :, :, d]
# QUESTION: Apply the filter to the block over all inputs in the batch
# out[:, w, h, f] = ...
# HINT: Have a look at numpy's sum function and pay attention to the axis parameter
# ANSWER:
out[:, i, j, d] = np.sum(block * filter_weights, axis=(1,2,3)) + b[d]
###
return out
"""
Explanation: ConvNet Architectures
When modelling an image using a regular feed-forward network, we quickly find that the number of model parameters grows extremely quickly with the input size. For example, our 2-layer MNIST feed-forward model from the previous practical already had over 600 000 parameters!
QUESTION: How many parameters would a feed-forward network require if it had 2 hidden layers with 512 and 256 neurons respectively, an output size of 10 and an input image of shape [32, 32, 3]?
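To get a feel for how quickly fully connected layers grow, here is a tiny counting helper. The 784 -> 512 -> 10 architecture below is just an illustrative example, not the network from the question, so working through the question yourself is still worthwhile:

```python
def dense_params(n_in, n_out):
    # A fully connected layer has one weight per (input, output) pair
    # plus one bias per output unit.
    return n_in * n_out + n_out

# Illustrative: a flattened 28x28 input -> 512 hidden units -> 10 classes.
total = dense_params(28 * 28, 512) + dense_params(512, 10)
print(total)  # 407050
```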
ConvNets address this model parameter issue by exploiting structure in the inputs to the network (in particular, by making the assumption that the input is a 3D volume, which applies to images for example). The two key differences between a ConvNet and a Feed-forward network are:
* ConvNets have neurons that are arranged in 3 dimensions: width, height, depth (depth here means the depth of an activation volume, not the depth of a deep neural network!)
* The neurons in each layer are only connected to a small region of the layer before it.
QUESTION: Unfortunately there is no such thing as a free lunch. What do you think is the trade-off a ConvNet makes for the reduction in memory required by fewer parameters?
Generally a ConvNet architecture is made up of different types of layers, the most common being convolutional layers, pooling layers and fully connected layers that we encountered in the last practical.
ConvNet architectures were key to the tremendous success of deep learning in machine vision. In particular, the first deep learning model to win the ImageNet competition in 2012 was called AlexNet (after Alex Krizhevsky, one of its inventors). It had 5 convolutional layers followed by 3 fully connected layers. Later winners included GoogLeNet and ResNet which also used batch normalisation, a technique we will see in this practical. If you're curious, have a look at this link for a great summary of different ConvNet architectures.
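At its core, batch normalisation simply standardises each feature over the batch and then applies a learnable scale and shift. A minimal NumPy sketch of the training-time behaviour (the running-average statistics used at test time are omitted here):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: [batch_size, features]; gamma, beta: learnable [features] parameters.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # standardise over the batch
    return gamma * x_hat + beta              # learnable scale and shift

x = np.random.randn(64, 8) * 3.0 + 2.0
out = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
```

With `gamma=1` and `beta=0` the output features have (approximately) zero mean and unit variance over the batch.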
We will start by implementing the forward and backward passes of these layers in Numpy to get a good sense for how they work. Afterwards, we will implement a full ConvNet classifier in TensorFlow that we will apply to the MNIST dataset. This model should give us the best test accuracy we've seen so far!
Convolutional Layers
A convolutional layer maps an input volume* to an output volume through a set of learnable filters, which make up the parameters of the layer. Every filter is small spatially (along width and height), but extends through the full depth of the input volume. (Eg: A filter in the first layer of a ConvNet might have size [5, 5, 3]). During the forward pass, we convolve ("slide") each filter across the width and height of the input volume and compute dot products between the entries of the filter and the input at any position. As we slide the filter over the width and height of the input volume we will produce a 2-dimensional activation map that gives the responses of that filter at every spatial position. Each convolutional layer will have a set of filters, and each of them will produce a separate 2-dimensional activation map. We will stack these activation maps along the depth dimension to produce the output volume.
The following diagram and animation illustrates these ideas, make sure you understand them!
* An input volume refers to a 3 dimensional input. For example, a colour image is often represented as a 3 dimensional tensor of shape [width, height, channels] where channels refers to the colour values. A common colour encoding is RGB which has a value between 0 and 256 for each of the red, green and blue channels.
What size is the output volume?
The size of the output volume is controlled by the hyperparameters of the convolutional layer:
* Filter Size (F) defines the width and height of the filters in the layer. Note that filters always have the same depth as the inputs to the layer.
* Depth (D) of the layer defines the number of filters in the layer.
* Stride (S) defines the number of pixels by which we move the filter when "sliding" it along the input volume. Typically this value would be 1, but values of 2 and 3 are also sometimes used.
* Padding (P) refers to the number of 0 pixels we add to the input volume along the width and height dimensions. This parameter is useful in that it gives us more control over the desired size of the output volume and in fact is often used to ensure that the output volume has the same width and height as the input volume.
If the width of the input volume is $w$, the width of the output volume will be $(w−F+2P)/S+1$. (QUESTION: Why?). Similarly for the height ($h$).
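As a quick numeric sanity check of this formula (the input and filter sizes here are illustrative, not tied to a particular network):

```python
def conv_output_size(w, f, p, s):
    # (W - F + 2P) / S + 1; integer division since sizes are whole pixels.
    return (w - f + 2 * p) // s + 1

# A 5x5 filter with stride 1 and padding 2 preserves a 28-pixel input,
# while the same filter with no padding shrinks it to 24 pixels.
print(conv_output_size(28, 5, 2, 1), conv_output_size(28, 5, 0, 1))  # 28 24
```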
QUESTION: What is the final 3D shape of the output volume?
Implementing the forward pass
The parameters of a convolutional layer, with padded input $X^{pad}$, are stored in a weight tensor, $W$ of shape $[F, F, I, D]$ and bias vector $b$ of shape $[D]$ where I is the depth of $X$.
For each filter $d \in [0,D)$ in our convolutional layer, the value of the output volume ($O$) at position $(i, j, d)$ is given by:
\begin{align}
O_{ij}^d = b_{d} + \sum_{a=0}^{F-1} \sum_{b=0}^{F-1} \sum_{c=0}^{I-1} W_{a, b, c, d} X^{pad}_{i+a, j+b, c} && (1)
\end{align}
Don't be put off by all the notation, it's actually quite simple, see if you can tie this formula to the explanation of the convolutional layer and diagrams you saw earlier.
QUESTION: The formula above assumed a stride size of 1 for simplicity. Can you modify the formula to work with an arbitrary stride?
Now let's implement the forward pass of a convolutional layer in Numpy:
End of explanation
"""
### Hyperparameters
batch_size = 2
input_width = 4
input_height = 4
input_depth = 3
filter_size = 4
output_depth = 3
stride = 2
padding = 1
###
# Create a helper function that calculates the relative error between two arrays
def relative_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Define the shapes of the input and weights
input_shape = (batch_size, input_width, input_height, input_depth)
w_shape = (filter_size, filter_size, input_depth, output_depth)
# Create the dummy input
X = np.linspace(-0.1, 0.5, num=np.prod(input_shape)).reshape(input_shape)
# Create the weights and biases
W = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=output_depth)
# Get the output of the convolutional layer
out = convolutional_forward(X, W, b, filter_size, output_depth, stride, padding)
correct_out = np.array(
[[[[8.72013250e-02, 2.37300699e-01, 3.87400074e-01],
[1.34245123e-01, 2.86133235e-01, 4.38021347e-01]],
[[8.21928598e-02, 2.39447184e-01, 3.96701509e-01],
[4.47552448e-04, 1.59490615e-01, 3.18533677e-01]]],
[[[1.11179021e+00, 1.29050939e+00, 1.46922856e+00],
[9.01255797e-01, 1.08176371e+00, 1.26227162e+00]],
[[7.64688995e-02, 2.62343025e-01, 4.48217151e-01],
[-2.62854619e-01, -7.51917556e-02, 1.12471108e-01]]]])
# Compare your output to the "correct" ones
# The difference should be around 2e-8 (or lower)
print('Testing convolutional_forward')
diff = relative_error(out, correct_out)
if diff <= 2e-8:
    print('PASSED')
else:
    print('The difference of %s is too high, try again' % diff)
"""
Explanation: Let's test our layer on some dummy data:
End of explanation
"""
## IMPLEMENT-ME: ...
def convolutional_backward(dout, X, W, b, filter_size, depth, stride, padding):
batch_size, input_width, input_height, input_depth = X.shape
# Apply padding to the width and height dimensions of the input
X_padded = np.pad(X, ((0,0), (padding, padding), (padding, padding), (0,0)), 'constant')
# Calculate the width and height of the forward pass output
    output_width = (input_width - filter_size + 2*padding) // stride + 1
    output_height = (input_height - filter_size + 2*padding) // stride + 1
# Allocate output arrays
# QUESTION: What is the shape of dx? dw? db?
# ANSWER: ...
dx_padded = np.zeros_like(X_padded)
dw = np.zeros_like(W)
db = np.zeros_like(b)
# QUESTION: Calculate db, the derivative of the final loss with respect to the bias term
# HINT: Have a look at the axis parameter of the np.sum function.
db = np.sum(dout, axis = (0, 1, 2))
for i in range(output_width):
for j in range(output_height):
# Select the current block in the input that the filter will be applied to
block_width_start = i*stride
block_width_end = block_width_start+filter_size
block_height_start = j*stride
block_height_end = block_height_start + filter_size
block = X_padded[:, block_width_start:block_width_end, block_height_start:block_height_end, :]
for d in range(depth):
# QUESTION: Calculate dw[:,:,:,f], the derivative of the loss with respect to the weight parameters of the f'th filter.
# HINT: You can do this in a loop if you prefer, or use np.sum and "None" indexing to get your result to the correct
# shape to assign to dw[:,:,:,f], see (https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis)
dw[:,:,:,d] += np.sum(block*(dout[:,i,j,d])[:,None,None,None], axis=0)
dx_padded[:,block_width_start:block_width_end, block_height_start:block_height_end, :] += np.einsum('ij,klmj->iklm', dout[:,i,j,:], W)
# Now we remove the padding to arrive at dx
dx = dx_padded[:,padding:-padding, padding:-padding, :]
return dx, dw, db
"""
Explanation: The derivative of a convolutional layer
Assume we have some final loss function L and by following the steps of backpropagation, have computed the derivative of this loss up to the output of our convolutional layer ($\frac{\partial L}{\partial O}$ or dout in the code below). In order to update the parameters of our layer, we require the derivative of L with respect to the weights and biases of the convolutional layer ($\frac{\partial L}{\partial W}$ and $\frac{\partial L}{\partial b}$). We also require the derivative with respect to the inputs of the layer ($\frac{\partial L}{\partial X}$) in order to propagate the error back to the preceding layers. Unfortunately calculating these derivatives can be a little fiddly due to having to keep track of multiple indices. The calculus is very basic though!
We start with the easiest one, $\frac{\partial L}{\partial b}$:
\begin{align}
\frac{\partial L}{\partial b} &= \frac{\partial L}{\partial O} \frac{\partial O}{\partial b} && \vartriangleright \text{(Chain Rule)} \\
&= \frac{\partial L}{\partial O} \mathbf{1} && \vartriangleright (\frac{\partial O}{\partial b} = 1 \text{ from equation } (1))
\end{align}
Now we tackle $\frac{\partial L}{\partial W}$:
\begin{align}
\frac{\partial L}{\partial W} &= \frac{\partial L}{\partial O} \frac{\partial O}{\partial W} && \vartriangleright \text{(Chain Rule)}
\end{align}
Let's calculate this derivative with respect to a single point $W_{abcd}$ in our weight tensor ($O_w$ and $O_h$ are the output width and height respectively):
\begin{align}
\frac{\partial L}{\partial W_{abcd}} &= \sum_{i=0}^{O_w-1} \sum_{j=0}^{O_h-1} \frac{\partial L}{\partial O_{ij}^d} \frac{\partial O_{ij}^d}{\partial W_{abcd}}
\end{align}
QUESTION: Why do we sum over the outputs here? HINT: Think about how many times a particular weight gets used.
Now, looking at equation $(1)$, we can easily calculate $\frac{\partial O_{ij}^d}{\partial W_{abcd}}$ as:
\begin{align}
\frac{\partial O_{ij}^d}{\partial W_{abcd}} &= X^{pad}_{i+a, j+b, c}
\end{align}
Which gives a final result of:
\begin{align}
\frac{\partial L}{\partial W_{abcd}} &= \sum_{i=0}^{O_w-1} \sum_{j=0}^{O_h-1} \frac{\partial L}{\partial O_{ij}^d} X^{pad}_{i+a, j+b, c}
\end{align}
Finally, we need $\frac{\partial L}{\partial X}$, the derivative of the loss with respect to the input of the layer. This is sometimes also called a "delta". Remember, that before doing the convolution, we applied padding to the input $X$ to get $X^{pad}$. It's easier to calculate the derivative with respect to $X^{pad}$, which appears in our convolution equation, and then remove the padding later on to arrive at the delta. Unfortunately we need to introduce some more indexing for the individual components of $X^{pad}$:
\begin{align}
\frac{\partial L}{\partial X^{pad}_{mnc}} &= \sum_{i=0}^{O_w-1} \sum_{j=0}^{O_h-1} \sum_{d=0}^{D-1} W_{m-i, n-j, c, d} \frac{\partial L}{\partial O_{ij}^d}
\end{align}
Where do the indices $m-i$ and $n-j$ come from? Notice in equation $(1)$ that the padded input $X^{pad}_{i+a, j+b, c}$ is multiplied by the weight $W_{abcd}$. Now, when we index $X^{pad}$ with $m$ and $n$, setting $m=i+a$ and $n=j+b$ gives us $a=m-i$ and $b=n-j$ for the indices of $W$!
Phew! Spend a few minutes to understand these equations, particularly where the indices come from. Ask a tutor if you get stuck!
Note: Did you notice that the delta, $\frac{\partial L}{\partial X^{pad}_{mnc}}$, looks suspiciously like the convolutional forward equation with the inputs $X^{pad}$ replaced by $\frac{\partial L}{\partial O_{ij}^d}$ and different indexing into the weights? In fact the delta is exactly that: the forward convolution applied to the incoming derivative, with the filters flipped along the width and height axes.
Now let's implement this in Numpy:
End of explanation
"""
def eval_numerical_gradient_array(f, x, df, h=1e-5):
"""
Evaluate a numeric gradient for a function that accepts a numpy
array and returns a numpy array.
"""
# QUESTION: Can you describe intuitively what this function is doing?
grad = np.zeros_like(x)
it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
while not it.finished:
ix = it.multi_index
oldval = x[ix]
x[ix] = oldval + h
pos = f(x).copy()
x[ix] = oldval - h
neg = f(x).copy()
x[ix] = oldval
grad[ix] = np.sum((pos - neg) * df) / (2 * h)
it.iternext()
return grad
np.random.seed(231)
# Normally, backpropagation will have calculated a derivative of the final loss with respect to
# the output of our layer. Since we're testing our layer in isolation here, we'll just pretend
# and use a random value
dout = np.random.randn(2, 2, 2, 3)
dx_num = eval_numerical_gradient_array(lambda x: convolutional_forward(x, W, b, filter_size, output_depth, stride, padding), X, dout)
dw_num = eval_numerical_gradient_array(lambda w: convolutional_forward(X, w, b, filter_size, output_depth, stride, padding), W, dout)
db_num = eval_numerical_gradient_array(lambda b: convolutional_forward(X, W, b, filter_size, output_depth, stride, padding), b, dout)
out = convolutional_forward(X, W, b, filter_size, output_depth, stride, padding)
dx, dw, db = convolutional_backward(dout, X, W, b, filter_size, output_depth, stride, padding)
# Your errors should be around 1e-8'
print('Testing conv_backward_naive function')
dx_diff = relative_error(dx, dx_num)
if dx_diff < 1e-8:
    print('dx check: PASSED')
else:
    print('The difference of %s on dx is too high, try again!' % dx_diff)
dw_diff = relative_error(dw, dw_num)
if dw_diff < 1e-8:
    print('dw check: PASSED')
else:
    print('The difference of %s on dw is too high, try again!' % dw_diff)
db_diff = relative_error(db, db_num)
if db_diff < 1e-8:
    print('db check: PASSED')
else:
    print('The difference of %s on db is too high, try again!' % db_diff)
"""
Explanation: Finally, we test the backward pass using numerical gradient checking. This compares the gradients generated by our backward function with a numerical approximation obtained by treating our forward function as a "black box". This gradient checking is a very important testing tool when building your own neural network components or back-propagation system!
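The idea behind the numerical check is just central differences: perturb one input by plus or minus a small `h` and see how the output moves. In one dimension, with a function whose derivative we know exactly:

```python
def numerical_derivative(f, x, h=1e-5):
    # Central-difference approximation of df/dx at x.
    return (f(x + h) - f(x - h)) / (2 * h)

approx = numerical_derivative(lambda x: x ** 2, 3.0)
print(approx)  # close to 6.0, since d/dx x^2 = 2x
```

`eval_numerical_gradient_array` above does exactly this, once per entry of the input array.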
End of explanation
"""
def max_pool_forward(X, pool_size, stride):
batch_size, input_width, input_height, input_depth = X.shape
# Calculate the output dimensions
    output_width = (input_width - pool_size) // stride + 1
    output_height = (input_height - pool_size) // stride + 1
# Allocate the output array
out = np.zeros((batch_size, output_width, output_height, input_depth))
# Select the current block in the input that the filter will be applied to
for w in range(output_width):
for h in range(output_height):
block_width_start = w*stride
block_width_end = block_width_start+pool_size
block_height_start = h*stride
block_height_end = block_height_start + pool_size
block = X[:, block_width_start:block_width_end, block_height_start:block_height_end, :]
## IMPLEMENT-ME CANDIDATE
out[:,w,h,:] = np.max(block, axis=(1,2))
return out
"""
Explanation: (Max) Pooling Layers
The purpose of a pooling layer is to is to reduce the spatial size of the representation and therefore control the number of parameters in the network. A pooling layer has no trainable parameters itself. It applies some 2D aggegation operation (usually a MAX, but others like average may also be used) to regions of the input volume. This is done independently for each depth dimension of the input. For example, a 2x2 max pooling operation with a stride of 2, downsamples every depth slice of the input by 2 along both the width and height.
The output volume of a pooling layer always has the same depth as the input volume. The width and height are calculated as follows:
$(W−F)/S+1$ where $W$ is the width/height of the input, $F$ is the pool size and $S$ is the stride.
Implementing the forward pass
We again implement this in Numpy:
End of explanation
"""
### Hyperparameters
batch_size = 2
input_width = 4
input_height = 4
input_depth = 3
pool_size = 2
stride = 2
###
input_shape = (batch_size, input_width, input_height, input_depth)
X = np.linspace(-0.3, 0.4, num=np.prod(input_shape)).reshape(input_shape)
out = max_pool_forward(X, pool_size, stride)
correct_out = np.array([
[[[-0.18947368, -0.18210526, -0.17473684],
[-0.14526316, -0.13789474, -0.13052632]],
[[-0.01263158, -0.00526316, 0.00210526],
[0.03157895, 0.03894737, 0.04631579]]],
[[[0.16421053, 0.17157895, 0.17894737],
[0.20842105, 0.21578947, 0.22315789]],
[[0.34105263, 0.34842105, 0.35578947],
[0.38526316, 0.39263158, 0.4]]]])
# Compare the output. The difference should be less than 1e-6.
print('Testing max_pool_forward function:')
diff = relative_error(out, correct_out)
if diff < 1e-6:
    print('PASSED')
else:
    print('The difference of %s is too high, try again!' % diff)
"""
Explanation: Now we can test the max_pool_forward function.
End of explanation
"""
def max_pool_backward(dout, X, max_pool_output, pool_size, stride):
batch_size, input_width, input_height, input_depth = X.shape
# Calculate the output dimensions
    output_width = (input_width - pool_size) // stride + 1
    output_height = (input_height - pool_size) // stride + 1
# QUESTION: What is the size of dx, the derivative with respect to x?
# Allocate an array to hold the derivative
dx = np.zeros_like(X)
for w in range(output_width):
for h in range(output_height):
# Which block in the input did the value at the forward pass output come from?
block_width_start = w*stride
block_width_end = block_width_start+pool_size
block_height_start = h*stride
block_height_end = block_height_start + pool_size
block = X[:, block_width_start:block_width_end, block_height_start:block_height_end, :]
# What was the maximum value
max_val = max_pool_output[:, w, h, :]
# Which values in the input block resulted in the output?
responsible_values = block == max_val[:, None, None, :]
# Add the contribution of the current block to the gradient
dx[:,block_width_start:block_width_end,block_height_start:block_height_end, :] += responsible_values * (dout[:,w,h,:])[:,None,None,:]
return dx
"""
Explanation: The derivative of a max-pool layer
The max-pooling layer has no learnable parameters of its own, so the only derivative of concern is that of the output of the layer with respect to the input for the purpose of backpropagating the error through the layer. This is easy to calculate as it only requires that we recalculate (or remember) which value in each block was the maximum. Since each output depends only on one value in some FxF block of the input, the gradients of the max-pool layer will be sparse.
Let's implement the backward pass in Numpy:
End of explanation
"""
# Define a hypothetical derivative of the loss function with respect to the output of the max-pooling layer.
dout = np.random.randn(batch_size, pool_size, pool_size, input_depth)
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward(x, pool_size, stride), X, dout)
out = max_pool_forward(X, pool_size, stride)
dx = max_pool_backward(dout, X, out, pool_size, stride)
# Your error should be less than 1e-12
print('Testing max_pool_backward function:')
diff = relative_error(dx, dx_num)
if diff < 1e-12:
    print('PASSED')
else:
    print('The diff of %s is too large, try again!' % diff)
"""
Explanation: And we again use numerical gradient checking to ensure that the backward function is correct:
End of explanation
"""
class BaseSoftmaxClassifier(object):
def __init__(self, input_size, output_size):
# Define the input placeholders. The "None" dimension means that the
# placeholder can take any number of images as the batch size.
self.x = tf.placeholder(tf.float32, [None, input_size])
self.y = tf.placeholder(tf.float32, [None, output_size])
# We add an additional input placeholder for Dropout regularisation
self.keep_prob = tf.placeholder(tf.float32, name="keep_prob")
# And one for batch norm regularisation
self.is_training = tf.placeholder(tf.bool, name="is_training")
self.input_size = input_size
self.output_size = output_size
# You should override these in your build_model() function.
self.logits = None
self.predictions = None
self.loss = None
self.build_model()
def get_logits(self):
return self.logits
def build_model(self):
# OVERRIDE THIS FOR YOUR PARTICULAR MODEL.
raise NotImplementedError("Subclasses should implement this function!")
def compute_loss(self):
"""All models share the same softmax cross-entropy loss."""
assert self.logits is not None # Ensure that logits has been created!
data_loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=self.logits, labels=self.y))
return data_loss
def accuracy(self):
# Calculate accuracy.
assert self.predictions is not None # Ensure that pred has been created!
correct_prediction = tf.equal(tf.argmax(self.predictions, 1), tf.argmax(self.y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
return accuracy
"""
Explanation: Optimisation - an exercise for later
Our implementations of convolutional and max-pool layers were based on loops, which are easy to understand, but are slow and inefficient compared to a vectorised implementation exploiting matrix multiplications. The vectorised form is how these layers are actually implemented in practice and are also required to make efficient use of GPUs in frameworks that support it, like TensorFlow. As an exercise, once you fully understand how the layers work, try to rewrite the code such that the convolution and max-pool operations are each implemented in a single matrix multiplication.
(HINT: Matlab has a function called "im2col" that rearranges blocks of an image into columns, you will need to achieve something similar using Numpy!)
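To make the hint concrete, here is a minimal numpy sketch of the im2col idea (`im2col_conv` is our own illustrative function, assuming valid padding, NHWC layout and stride 1 by default; the patch-gathering loop can itself be vectorised further with stride tricks):

```python
import numpy as np

def im2col_conv(X, W, stride=1):
    """Convolution as one matrix multiplication.
    X: input of shape (batch, height, width, in_depth)
    W: filters of shape (f, f, in_depth, out_depth)
    """
    n, h, w, d = X.shape
    f, _, _, out_depth = W.shape
    out_h = (h - f) // stride + 1
    out_w = (w - f) // stride + 1
    # im2col: flatten every f x f x d input block into one row
    cols = np.empty((n, out_h, out_w, f * f * d))
    for i in range(out_h):
        for j in range(out_w):
            patch = X[:, i*stride:i*stride+f, j*stride:j*stride+f, :]
            cols[:, i, j, :] = patch.reshape(n, -1)
    # a single matrix multiplication computes every filter response at once
    out = cols.reshape(-1, f * f * d) @ W.reshape(-1, out_depth)
    return out.reshape(n, out_h, out_w, out_depth)
```

Each input block becomes one row of `cols`, so the whole convolution reduces to the single matrix multiplication at the end.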
Building a 2-layer ConvNet in TensorFlow
Now that we understand the convolutional and max pool layers, let's switch back to TensorFlow and build a 2-layer ConvNet classifier that we can apply to MNIST. We reuse essentially the same classifier framework we used in Practical 2 as well as the training and plotting functions, but we have added support for 2 new forms of regularisation, dropout and batch normalisation. These are explained in more detail later.
End of explanation
"""
def train_tf_model(tf_model,
session, # The active session.
num_epochs, # Max epochs/iterations to train for.
batch_size=100, # Number of examples per batch.
keep_prob=1.0, # (1. - dropout) probability, none by default.
optimizer_fn=None, # TODO(sgouws): more correct to call this optimizer_obj
report_every=1, # Report training results every nr of epochs.
eval_every=1, # Evaluate on validation data every nr of epochs.
stop_early=True, # Use early stopping or not.
verbose=True):
# Get the (symbolic) model input, output, loss and accuracy.
x, y = tf_model.x, tf_model.y
loss = tf_model.loss
accuracy = tf_model.accuracy()
# Compute the gradient of the loss with respect to the model parameters
# and create an op that will perform one parameter update using the specific
# optimizer's update rule in the direction of the gradients.
if optimizer_fn is None:
optimizer_fn = tf.train.AdamOptimizer(1e-4)
# For batch normalisation: Ensure that the mean and variance tracking
# variables get updated at each training step
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
optimizer_step = optimizer_fn.minimize(loss)
# Get the op which, when executed, will initialize the variables.
init = tf.global_variables_initializer()
# Actually initialize the variables (run the op).
session.run(init)
# Save the training loss and accuracies on training and validation data.
train_costs = []
train_accs = []
val_costs = []
val_accs = []
mnist_train_data = mnist.train
prev_c_eval = 1000000
# Main training cycle.
for epoch in range(num_epochs):
avg_cost = 0.
avg_acc = 0.
total_batch = int(mnist.train.num_examples / batch_size)
# Loop over all batches.
for i in range(total_batch):
batch_x, batch_y = mnist_train_data.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value),
# and compute the accuracy of the model.
feed_dict = {x: batch_x, y: batch_y, tf_model.keep_prob: keep_prob,
tf_model.is_training: True}
_, c, a = session.run(
[optimizer_step, loss, accuracy], feed_dict=feed_dict)
# Compute average loss/accuracy
avg_cost += c / total_batch
avg_acc += a / total_batch
train_costs.append((epoch, avg_cost))
train_accs.append((epoch, avg_acc))
# Display logs per epoch step
if epoch % report_every == 0 and verbose:
print("Epoch:", '%04d' % (epoch+1), "Training cost=",
      "{:.9f}".format(avg_cost))
if epoch % eval_every == 0:
val_x, val_y = mnist.validation.images, mnist.validation.labels
feed_dict = {x : val_x, y : val_y, tf_model.keep_prob: 1.0,
tf_model.is_training: False}
c_eval, a_eval = session.run([loss, accuracy], feed_dict=feed_dict)
if verbose:
print("Epoch:", '%04d' % (epoch+1), "Validation acc=",
      "{:.9f}".format(a_eval))
if c_eval >= prev_c_eval and stop_early:
print("Validation loss stopped improving, stopping training early after %d epochs!" % (epoch + 1))
break
prev_c_eval = c_eval
val_costs.append((epoch, c_eval))
val_accs.append((epoch, a_eval))
print("Optimization Finished!")
return train_costs, train_accs, val_costs, val_accs
# Helper functions to plot training progress.
def my_plot(list_of_tuples):
"""Take a list of (epoch, value) and split these into lists of
epoch-only and value-only. Pass these to plot to make sure we
line up the values at the correct time-steps.
"""
plt.plot(*zip(*list_of_tuples))
def plot_multi(values_lst, labels_lst, y_label, x_label='epoch'):
# Plot multiple curves.
assert len(values_lst) == len(labels_lst)
plt.subplot(2, 1, 2)
for v in values_lst:
my_plot(v)
plt.legend(labels_lst, loc='upper left')
plt.xlabel(x_label)
plt.ylabel(y_label)
plt.show()
"""
Explanation: Let's also bring in the training and plotting routines we developed in Prac 2:
End of explanation
"""
def _convolutional_layer(inputs, filter_size, output_depth):
"""Build a convolutional layer with `output_depth` square
filters, each of size `filter_size` x `filter_size`."""
input_features = inputs.shape[3]
weights = tf.get_variable(
"conv_weights",
[filter_size, filter_size, input_features, output_depth],
dtype=tf.float32,
initializer=tf.truncated_normal_initializer(stddev=0.1))
## IMPLEMENT-ME CANDIDATE
conv = tf.nn.conv2d(inputs, weights, strides=[1, 1, 1, 1], padding='SAME')
return conv
def _dense_linear_layer(inputs, layer_name, input_size, output_size, weights_initializer):
"""
Builds a layer that takes a batch of inputs of size `input_size` and returns
a batch of outputs of size `output_size`.
Args:
inputs: A `Tensor` of shape [batch_size, input_size].
layer_name: A string representing the name of the layer.
input_size: The size of the inputs
output_size: The size of the outputs
Returns:
out, weights: tuple of layer outputs and weights.
"""
# Name scopes allow us to logically group together related variables.
# Setting reuse=False avoids accidental reuse of variables between different runs.
with tf.variable_scope(layer_name, reuse=False):
# Create the weights for the layer
layer_weights = tf.get_variable("weights",
shape=[input_size, output_size],
dtype=tf.float32,
initializer=weights_initializer)
# Create the biases for the layer
layer_bias = tf.get_variable("biases",
shape=[output_size],
dtype=tf.float32,
initializer=tf.constant_initializer(0.1))
outputs = tf.matmul(inputs, layer_weights) + layer_bias
return outputs
"""
Explanation: Now define some helper functions to build a convolutional layer and a linear layer (this is mostly the same as the previous practical, but we use slightly different weight and bias initializations which seem to work better with ConvNets on MNIST). In terms of regularisation, we use dropout rather than the L2 regularisation from the previous practical.
Dropout
Dropout is a neural-network regularisation technique that is applied during model training. At each training step, a proportion (1 - keep_prob) of the neurons in the network are "dropped out" (their inputs and outputs are set to 0, effectively ignoring their contribution) while the remaining keep_prob fraction are "let through" (Nit: they're actually rescaled by 1/keep_prob to ensure that the variance of the pre-activations at the next layer remains unchanged). This can be interpreted as there being $2^n$ different network architectures (where n is the number of neurons) while only one is trained at each training step. At test time, we use the full network, where each neuron's contribution is weighted by keep_prob. This is effectively the average over all the possible networks, and dropout can therefore also be thought of as an ensemble technique.
In our ConvNet architecture, the majority of neurons occur in the fully connected layer between the convolutional layers and the output. It is therefore this fully connected layer that we are most concerned about overfitting and this is where we apply dropout.
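As a sketch, the training-time behaviour described above can be written in a few lines of numpy (this is the "inverted dropout" variant, where the rescaling by 1/keep_prob happens during training; the function name is ours, not TensorFlow's):

```python
import numpy as np

def dropout_forward(x, keep_prob, train=True):
    """Inverted dropout: kept units are scaled by 1/keep_prob at train
    time so that no rescaling is needed at test time."""
    if not train:
        return x  # at test time the full network is used unchanged
    mask = (np.random.rand(*x.shape) < keep_prob) / keep_prob
    return x * mask
```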
End of explanation
"""
class ConvNetClassifier(BaseSoftmaxClassifier):
def __init__(self,
input_size, # The size of the input
output_size, # The size of the output
filter_sizes=[], # The number of filters to use per convolutional layer
output_depths=[], # The number of features to output per convolutional layer
hidden_linear_size=512, # The size of the hidden linear layer
use_batch_norm=False, # Flag indicating whether or not to use batch normalisation
linear_weights_initializer=tf.truncated_normal_initializer(stddev=0.1)):
assert len(filter_sizes) == len(output_depths)
self.filter_sizes = filter_sizes
self.output_depths = output_depths
self.linear_weights_initializer = linear_weights_initializer
self.use_batch_norm = use_batch_norm
self.hidden_linear_size = hidden_linear_size
super(ConvNetClassifier, self).__init__(input_size, output_size)
def build_model(self):
# Architecture: INPUT - {CONV - RELU - POOL}*N - FC
# Reshape the input to [batch_size, width, height, input_depth]
conv_input = tf.reshape(self.x, [-1, 28, 28, 1])
prev_inputs = conv_input
# Create the CONV-RELU-POOL layers:
for layer_number, (layer_filter_size, layer_features) in enumerate(
zip(self.filter_sizes, self.output_depths)):
with tf.variable_scope("layer_{}".format(layer_number), reuse=False):
# Create the convolution:
conv = _convolutional_layer(prev_inputs, layer_filter_size, layer_features)
# Apply batch normalisation, if required
if self.use_batch_norm:
conv = tf.contrib.layers.batch_norm(conv, center=True, scale=True,
is_training=self.is_training)
# Apply the RELU activation with a bias
bias = tf.get_variable("bias", [layer_features], dtype=tf.float32, initializer=tf.constant_initializer(0.1))
relu = tf.nn.relu(conv + bias)
# Apply max-pooling using patch-sizes of 2x2
pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# QUESTION: What is the shape of the pool tensor?
# ANSWER: ...
prev_inputs = pool
# QUESTION: What is the shape of prev_inputs now?
# We need to flatten the last (non-batch) dimensions of the convolutional
# output in order to pass it to a fully-connected layer:
flattened = tf.contrib.layers.flatten(prev_inputs)
# Create the fully-connected (linear) layer that maps the flattened inputs
# to `hidden_linear_size` hidden outputs
flat_size = flattened.shape[1]
fully_connected = _dense_linear_layer(
flattened, "fully_connected", flat_size, self.hidden_linear_size, self.linear_weights_initializer)
# Apply batch normalisation, if required
if self.use_batch_norm:
fully_connected = tf.contrib.layers.batch_norm(
fully_connected, center=True, scale=True, is_training=self.is_training)
fc_relu = tf.nn.relu(fully_connected)
fc_drop = tf.nn.dropout(fc_relu, self.keep_prob)
# Now we map the `hidden_linear_size` outputs to the `output_size` logits, one for each possible digit class
logits = _dense_linear_layer(
fc_drop, "logits", self.hidden_linear_size, self.output_size, self.linear_weights_initializer)
self.logits = logits
self.predictions = tf.nn.softmax(self.logits)
self.loss = self.compute_loss()
"""
Explanation: Now build the ConvNetClassifier. We make the number of convolutional layers and the filter sizes parameters so that you can easily experiment with different variations.
End of explanation
"""
def build_train_eval_and_plot(build_params, train_params, verbose=True):
tf.reset_default_graph()
m = ConvNetClassifier(**build_params)
with tf.Session() as sess:
# Train model on the MNIST dataset.
train_losses, train_accs, val_losses, val_accs = train_tf_model(
m,
sess,
verbose=verbose,
**train_params)
# Now evaluate it on the test set:
accuracy_op = m.accuracy() # Get the symbolic accuracy operation
# Calculate the accuracy using the test images and labels.
accuracy = accuracy_op.eval({m.x: mnist.test.images,
m.y: mnist.test.labels,
m.keep_prob: 1.0,
m.is_training: False})
if verbose:
print("Accuracy on test set:", accuracy)
# Plot losses and accuracies.
plot_multi([train_losses, val_losses], ['train', 'val'], 'loss', 'epoch')
plot_multi([train_accs, val_accs], ['train', 'val'], 'accuracy', 'epoch')
ret = {'train_losses': train_losses, 'train_accs' : train_accs,
'val_losses' : val_losses, 'val_accs' : val_accs,
'test_acc' : accuracy}
# Evaluate the final convolutional weights
conv_variables = [v for v in tf.trainable_variables() if "conv_weights" in v.name]
conv_weights = sess.run(conv_variables)
return m, ret, conv_weights
"""
Explanation: Finally, a function that wraps up the training and evaluation of the model (the same as in Prac 2):
End of explanation
"""
%%time
# Create a small ConvNet classifier with a single CONV-RELU-POOL layer
# (5x5 filters, 4 output features) and a hidden linear layer of size 128.
model_params = {
'input_size': 784,
'output_size': 10,
'filter_sizes': [5],
'output_depths': [4],
'hidden_linear_size': 128,
'use_batch_norm': False
}
training_params = {
'keep_prob': 0.5,
'num_epochs': 5,
'batch_size': 50,
'stop_early': False,
}
trained_model, training_results, conv_weights = build_train_eval_and_plot(
model_params,
training_params,
verbose=True
)
"""
Explanation: Now train and evaluate the ConvNet model on MNIST.
NOTE: Hopefully you answered the question in the first section about the tradeoffs a ConvNet makes with "extra computation"! Unfortunately the VMs we're using are pretty low-powered, so we will train a very small ConvNet just to check that it works. Once you've got this small ConvNet working, chat to a tutor to get access to a machine with a GPU and run with the following configuration to get to the promised 99%+ accuracy on MNIST!
```
model_params = {
'input_size': 784,
'output_size': 10,
'filter_sizes': [5, 5],
'output_depths': [32, 64],
'hidden_linear_size': 1024,
'use_batch_norm': False
}
training_params = {
'keep_prob': 0.5,
'num_epochs': 20,
'batch_size': 50,
'stop_early': False,
}
```
End of explanation
"""
weights = conv_weights[0]
_, _, _, out_depth = weights.shape
grid_size = int(out_depth**0.5)
fig = plt.figure()
i = 1
for r in range(grid_size):
for c in range(grid_size):
ax = fig.add_subplot(grid_size, grid_size, i)
ax.imshow(weights[:, :, 0, r*grid_size+c], cmap="Greys")
i += 1
plt.show()
"""
Explanation: The ConvNet classifier takes quite a long time to train, but gives a very respectable test accuracy of over 99%!
What has the network learned?
Remember that a filter in a convolutional layer is used to multiply blocks in the input volume. Let's plot the weights of the first layer of the trained model. Darker pixels indicate that the particular filter reacts more strongly to those regions of the input blocks. Notice how each filter has learned to react differently to different patterns in the input. It's tricky to see in our tiny filters, but those in lower layers of ConvNets, particularly when applied to natural images, often function as simple Gabor filters or edge detectors, while filters in higher layers often react to more abstract shapes and concepts.
End of explanation
"""
%%time
# Create a ConvNet classifier with 2 CONV-RELU-POOL layers, with filter sizes of
# 5 and 5 and 32 and 64 output features.
model_params = {
'input_size': 784,
'output_size': 10,
'filter_sizes': [5, 5],
'output_depths': [32, 64],
'hidden_linear_size': 1024,
'use_batch_norm': False,
'linear_weights_initializer': tf.random_normal_initializer()
}
training_params = {
'keep_prob': 0.5,
'num_epochs': 5,
'batch_size': 50,
'stop_early': False,
}
trained_model, training_results, conv_weights = build_train_eval_and_plot(
model_params,
training_params,
verbose=True
)
"""
Explanation: Aside: The Effect of Random Initialization - RUN THIS ON A GPU INSTANCE ONLY!
Initialization of model parameters matters! Here is a ConvNet with a different, but seemingly sensible, initialization of the weights in the linear layer. Running this gives significantly worse results. Judging by the accuracy plot, it's possible that training this model long enough would get it to a similar level as before, but it would take much longer. This shows that initialization of model parameters is an important consideration, especially as models become more complex. In practice, there are a number of different initialization schemes to consider. In particular, Xavier initialization tends to work well with ConvNets and is worth considering. We won't go into any details in this practical though.
End of explanation
"""
%%time
## UNFORTUNATELY THIS WILL ALSO NOT WORK ON THE VM's, YOU'LL NEED TO GET A GPU INSTANCE TO RUN THIS!
# Create a ConvNet classifier with 2 CONV-RELU-POOL layers, with filter sizes of
# 5 and 5 and 32 and 64 output features.
model_params = {
'input_size': 784,
'output_size': 10,
'filter_sizes': [5, 5],
'output_depths': [32, 64],
'use_batch_norm': True,
}
training_params = {
'keep_prob': 1.0, # Switch off dropout
'num_epochs': 15,
'batch_size': 50,
'stop_early': False,
}
trained_model, training_results, conv_weights = build_train_eval_and_plot(
model_params,
training_params,
verbose=True
)
# QUESTION: Try experimenting with different archictures and hyperparameters and
# see how well you can classify MNIST digits!
"""
Explanation: Batch Normalisation
Batch normalisation (batch norm) is a more recent (2015) and arguably more powerful regularisation technique than dropout. It is based on the observation that machine learning models often perform better and train faster when their inputs are normalised to have 0 mean and unit variance. In multi-layered deep neural networks, the output of one layer becomes the input to the next. The insight behind batch norm is that each of these layer inputs can also be normalised. Batch norm has been shown to have numerous benefits, including:
* Networks tend to train faster
* Allows higher learning rates to be used (further improving training speed).
* Reduced sensitivity to weight initialisation.
* Makes certain activation functions feasible in deep networks. (When inputs have very large (absolute) expected values, certain activation functions become saturated. For example, the output of sigmoid is always close to 1 for large inputs. Relu activations can also "die out" when the expected value of the input is a large negative value (why?). This results in wasted computation as these neurons become uninformative. Normalising the inputs to have 0 mean keeps these activation functions in the "sensible" parts of their domains.)
How does it work?
To normalise some inputs X, ideally we would like to set
$\hat X = \frac{X - E[X]}{\sqrt{VAR[X]}}$
but this requires knowledge of the population mean and variance statistics, which we don't know, at least during training. We therefore use the sample mean and sample variance of each batch encountered during training as unbiased estimates of these statistics. During testing, we use statistics gathered throughout training as better estimates of the population statistics. In addition to this, we would like the model to have some flexibility over the extent to which batch norm is applied, and this flexibility should be learned! In order to do this, we introduce two new trainable parameters, $\gamma$ and $\beta$ for each layer that batch norm is applied to. Suppose we have a batch of inputs to a layer, $B={x_1,...,x_m}$, we normalise these as follows:
$\mu_B = \frac{1}{m} \sum_{i=1}^{m} x_i$ (Batch mean)
${\sigma_B}^2 = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_B)^2$ (Batch variance)
$\hat x_i= \frac{x_i - \mu_B}{\sqrt{{\sigma_B}^2}}$ (Normalised)
$y_i = \gamma \hat x_i + \beta$ (Scale and shift)
At test time, we normalise using the mean and variance computed over the entire training set:
$E[x] = E_B[\mu_B]$
$VAR[x] = \frac{m}{m-1}E_B[{\sigma_B}^2]$
$\hat x = \frac{x - E[x]}{\sqrt{VAR[x]}}$
$y = \gamma \hat x + \beta$
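The training-time equations above translate almost line for line into numpy (an illustrative sketch; note that real implementations add a small epsilon inside the square root for numerical stability, which the formulas above omit):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch norm over a batch x of shape (m, features)."""
    mu = x.mean(axis=0)                    # batch mean
    var = x.var(axis=0)                    # batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalise
    return gamma * x_hat + beta            # scale and shift
```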
Implementation Details
Tracking the mean and variance over the training set can become a little fiddly. Many implementations also use a moving average of the batch mean and variance as estimates of the population mean and variance for use during testing. Luckily, TensorFlow provides batch norm out of the box in the form of the tf.contrib.layers.batch_norm function.
Since the behaviour of batch norm changes during training and testing, we need to pass a placeholder input to the function that indicates which phase we are in. Furthermore, the batch norm function uses variable updates to track the moving average mean and variance. These values are not used during training and so TensorFlow's graph execution logic will not naturally run these updates when you run a training step. In order to get around this, the batch_norm function adds these update ops to a graph collection that we can access in our training function. The following code, which you will see in the train_tf_model function retrieves these ops and then adds a control dependency to the optimiser step. This effectively tells TensorFlow that the update_ops must be run before the optimizer_step can be run, ensuring that the estimates are updated whenever we do a training step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
optimizer_step = optimizer_fn.minimize(loss)
Further choices to consider when using batch norm are where to apply it (some apply it immediately before each activation function, some after the activation function), whether to apply it to all layers, and whether or not to share the gamma and beta parameters over all layers or have separate values for each layer.
Have a look at the ConvNetClassifier class above to see what choices were made, try changing these and see what results you get! (See the TensorFlow documentation for a list of even more parameters you can experiment with)
Now, finally, let's switch batch norm on and see how our ConvNetClassifier performs. (Note: we shouldn't expect it to necessarily perform better than dropout as we are already close to the limits of how well we can classify MNIST with our relatively small ConvNet!)
End of explanation
"""
sdpython/ensae_teaching_cs | _doc/notebooks/td2a/td2a_cenonce_session_2A.ipynb | mit
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 2A.data - Matrix Computation, Optimisation
Numpy arrays are the first thing to consider when trying to speed up an algorithm. Matrices appear in most algorithms, and numpy optimises the operations on them. This notebook is a quick overview.
End of explanation
"""
import numpy as np
"""
Explanation: Numpy arrays
The standard import convention for numpy is the following:
End of explanation
"""
l = [1, 42, 18 ]
a = np.array(l)
print(a)
print(a.dtype)
print(a.ndim)
print(a.shape)
print(a.size)
a
"""
Explanation: Creating an array: datatypes and dimensions
We start from a Python list of integers and create a numpy array from it.
This array has attributes giving its data type, its number of dimensions, and so on.
End of explanation
"""
b = np.array(l, dtype=float)
print(b)
print(b.dtype)
l[0] = 1.0
bb = np.array(l)
print(bb)
print(bb.dtype)
"""
Explanation: You can specify the dtype explicitly when creating the array. Otherwise, Numpy selects a dtype automatically.
Numpy adds a large number of dtypes to Python's own. Go take a look at the list.
End of explanation
"""
a[0] = 2.5
a
"""
Explanation: Assigning a float into an int array casts the float to int; it does not change the array's dtype.
End of explanation
"""
aa = a.astype(float)
aa[0] = 2.5
aa
"""
Explanation: You can force a cast to another type with astype:
End of explanation
"""
c = np.array([range(5), range(5,10), range(5)])
print(c)
print("ndim:{}".format(c.ndim))
print("shape:{}".format(c.shape))
print(c.transpose()) #same as c.T
print("shape transposed:{}".format(c.T.shape))
print(c.flatten())
print("ndim flattened:{}".format(c.flatten().ndim))
"""
Explanation: From a list of lists, we get a two-dimensional array.
We can transpose it, or flatten it into a 1d array.
End of explanation
"""
print(c)
"""
Explanation: Indexing, Slicing, Fancy indexing
End of explanation
"""
print(c[1,3])
print(c[1,:3])
print(c[:,4])
"""
Explanation: Indexing multidimensional arrays works with tuples.
The ':' syntax selects all the elements along a dimension.
End of explanation
"""
print(c[1], c[1].shape)
print(c[1][:3])
"""
Explanation: If you index a 2d array with a single index rather than a pair, you get back a 1d array.
End of explanation
"""
ar = np.arange(1,10) # arange is the equivalent of range but returns a numpy array
print('ar = ',ar)
idx = np.array([1, 4, 3, 2, 1, 7, 3])
print('idx = ',idx)
print("ar[idx] =", ar[idx])
print('######')
idx_bool = np.ones(ar.shape, dtype=bool)
idx_bool[idx] = False
print('idx_bool = ', idx_bool)
print('ar[idx_bool] = ', ar[idx_bool])
print('######', 'What happens in each of the following cases?', '######')
try:
print('ar[np.array([True, True, False, True])] = ', ar[np.array([True, True, False, True])])
except Exception as e:
# the expression ar[[True, True, False, True]] raises an error since numpy 1.13
print("Error", e)
"""
Explanation: You can also index with an array (or a Python list) of booleans or integers (a mask). This is called fancy indexing. An integer mask selects the elements to extract via the list of their indices; repeating an element's index repeats that element in the extracted array.
End of explanation
"""
list_python = list(range(10))
list_python[[True, True, False, True]] # raises an exception
list_python[[2, 3, 2, 7]] # raises an exception
"""
Explanation: Why is it called fancy indexing? Try indexing Python lists the same way...
End of explanation
"""
d = np.arange(1, 6, 0.5)
d
"""
Explanation: View vs Copy
Let's create an array $d$. Besides returning an array directly, the arange function also accepts a float step. (Try that with Python's range to see the difference.)
End of explanation
"""
e = d
e[[0,2, 4]] = - np.pi
e
d
"""
Explanation: An important point is that an array is not copied when you assign it to another name or slice it.
In those cases we work with a View on the original array (a shallow copy). Any modification of the View affects the original array.
In the following example, $e$ is a view on $d$. When we modify $e$, $d$ is modified too. (Note in passing that numpy provides some handy constants....)
End of explanation
"""
d = np.linspace(1,5.5,10) # Side question: how is this different from np.arange with a float step?
f = d.copy()
f[:4] = -np.e # this is Euler's number, not the array e ;)
print(f)
print(d)
"""
Explanation: If we do not want to modify $d$ indirectly, we must work on a copy of $d$ (a deep copy).
End of explanation
"""
print('d = ',d)
slice_of_d = d[2:5]
print('\nslice_of_d = ', slice_of_d)
slice_of_d[0] = np.nan
print('\nd = ', d)
mask = np.array([2, 3, 4])
fancy_indexed_subarray = d[mask]
print('\nfancy_indexed_subarray = ', fancy_indexed_subarray)
fancy_indexed_subarray[0] = -2
print('\nd = ', d)
"""
Explanation: This point matters because it is a classic source of silent errors: the most vicious kind, since the output will be wrong but Python will not complain...
It takes a little time to get used to, but you end up knowing naturally when you are working on a view, when you need to make an explicit copy, and so on. In any case, check your outputs and run consistency tests; it never hurts.
Remember, for instance, that slicing returns a view on the array, while fancy indexing makes a copy.
(In passing, note the NaN (= Not a Number) already introduced in session 1 on pandas, a module built on top of numpy.)
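Rather than guessing, you can check whether two arrays overlap in memory with np.shares_memory:

```python
import numpy as np

d = np.arange(10)
slice_view = d[2:5]        # slicing: a view on d
fancy_copy = d[[2, 3, 4]]  # fancy indexing: a copy
print(np.shares_memory(d, slice_view))  # True
print(np.shares_memory(d, fancy_copy))  # False
```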
End of explanation
"""
g = np.arange(12)
print(g)
g.reshape((4,3))
"""
Explanation: Shape manipulation
The reshape method changes the shape of an array. Many manipulations are possible.
We give reshape the desired shape: an integer for a 1d array of that length, or a pair for a 2d array of that shape.
End of explanation
"""
g.reshape((4,3), order='F')
"""
Explanation: By default, reshape enumerates elements in C order (also called "row first"); you can specify that you want Fortran order ("column first") instead. Those who know Matlab and R are used to the "column first" order. See the Wikipedia article.
End of explanation
"""
np.zeros_like(g)
np.ones_like(g)
"""
Explanation: You can use -1 for one dimension; it acts as a joker: numpy infers the required size! You can also directly create arrays of 0s and 1s with the same shape as another array.
End of explanation
"""
np.concatenate((g, np.zeros_like(g))) # Watch the syntax: the input type is a tuple!
gmat = g.reshape((1, len(g)))
np.concatenate((gmat, np.ones_like(gmat)), axis=0)
np.concatenate((gmat, np.ones_like(gmat)), axis=1)
np.hstack((g, g))
np.vstack((g,g))
"""
Explanation: You can also concatenate or stack different arrays horizontally/vertically.
End of explanation
"""
#Exo1a-1:
#Exo1a-2:
#Exo1B:
#Exo1C:
"""
Explanation: Exercise 1: Checkerboard and Sieve of Eratosthenes
Exercise 1-A Checkerboard: Create an 8x8 checkerboard matrix (alternating 1s and 0s) in two different ways
using slices
using the tile function
Exercise 1-B A pitfall of 2d extraction:
Define the matrix $M = \left(\begin{array}{ccccc} 1 & 5 & 9 & 13 & 17 \\ 2 & 6 & 10 & 14 & 18 \\ 3 & 7 & 11 & 15 & 19 \\ 4 & 8 & 12 & 16 & 20 \end{array}\right)$
Extract from it the matrix $\left(\begin{array}{ccc} 6 & 18 & 10 \\ 7 & 19 & 11 \\ 5 & 17 & 9 \end{array}\right)$
Exercise 1-C Sieve of Eratosthenes: We want to implement a sieve of Eratosthenes to find the prime numbers below $N=1000$.
start from a boolean array of size N+1, all set to True.
set 0 and 1 to False, since they are not prime
for each integer $k$ between 2 and $\sqrt{N}$:
if $k$ is prime: set its multiples (between $k^2$ and $N$) to False
print the list of prime numbers
End of explanation
"""
a = np.ones((3,2))
b = np.arange(6).reshape(a.shape)
print(a)
b
"""
Explanation: Manipulating and operating on arrays
There are very many routines for manipulating numpy arrays:
You will probably find the pages dedicated to stats or maths routines useful.
Element-wise operations
We declare $a$ and $b$, on which we will illustrate a few operations
End of explanation
"""
print( (a + b)**2 )
print( np.abs( 3*a - b ) )
f = lambda x: np.exp(x-1)
print( f(b) )
"""
Explanation: Arithmetic operations with scalars, or between arrays, are performed element by element.
When the dtypes differ ($a$ contains floats, $b$ contains ints), numpy adopts the "largest" type (in the sense of inclusion).
End of explanation
"""
b
1/b
"""
Explanation: Note that division by zero does not raise an error; it produces the value inf instead:
End of explanation
"""
c = np.ones(6)
c
b+c # raises an exception
c = np.arange(3).reshape((3,1))
print(b,c, sep='\n')
b+c
"""
Explanation: Broadcasting
What happens when the dimensions differ?
End of explanation
"""
a = np.zeros((3,3))
a[:,0] = -1
b = np.array(range(3))
print(a + b)
"""
Explanation: The previous operation works because numpy performs what is called broadcasting of c: since one dimension is shared, everything happens as if c were duplicated along the dimension it does not share with b. You will find a simple visual explanation here:
End of explanation
"""
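The general rule, as a small sketch: shapes are aligned from the right, and any dimension of size 1 is stretched to match the other array.

```python
import numpy as np

x = np.ones((4, 1, 3))
y = np.ones((5, 1))
# (4, 1, 3) + (5, 1): aligned from the right, the size-1 axes stretch -> (4, 5, 3)
print((x + y).shape)
```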
print(b.shape)
print(b[:,np.newaxis].shape)
print(b[np.newaxis,:].shape)
print( a + b[np.newaxis,:] )
print( a + b[:,np.newaxis] )
print(b[:,np.newaxis]+b[np.newaxis,:])
print(b + b)
"""
Explanation: However, it can sometimes be useful to specify the dimension along which to broadcast; in that case you add a dimension explicitly:
End of explanation
"""
c = np.arange(10).reshape((2,-1))  # Note: -1 is a wildcard!
print(c)
print(c.sum())
print(c.sum(axis=0))
print(np.sum(c, axis=1))
print(np.all(c[0] < c[1]))
print(c.min(), c.max())
print(c.min(axis=1))
"""
Explanation: Reductions
We speak of reductions when an operation reduces the dimensionality of the array.
There are a great many of them. They often exist both as numpy functions and as methods on numpy arrays.
We present only a few, but the principle is always the same: by default they operate over all dimensions, but the axis argument lets you specify the dimension along which to perform the reduction.
End of explanation
"""
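A small sketch of axis-wise reduction, including the keepdims option, which preserves the reduced axis so the result broadcasts cleanly against the original array:

```python
import numpy as np

c = np.arange(10).reshape((2, 5))
col_means = c.mean(axis=0)               # reduce over rows -> shape (5,)
row_sums = c.sum(axis=1, keepdims=True)  # keep the reduced axis -> shape (2, 1)
print(col_means.shape, row_sums.shape)
```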
A = np.tril(np.ones((3,3)))
A
b = np.diag([1,2, 3])
b
"""
Explanation: Linear algebra
You have a range of functions for doing linear algebra in numpy or in scipy.
They can help if you are looking for a particular matrix decomposition (LU, QR, SVD, ...), if you are interested in the eigenvalues of a matrix, and so on.
Simple examples
Let's start by building two 2-d arrays corresponding to a lower-triangular matrix and a diagonal matrix:
End of explanation
"""
print(A.dot(b))
print(A*b)
print(A.dot(A))
"""
Explanation: We saw that multiplications between arrays are performed element by element.
If you want matrix multiplication, you must use the dot function. Python 3.5 introduced a new operator @ that explicitly denotes matrix multiplication.
End of explanation
"""
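On Python 3.5+, the `@` operator gives the same result as `dot`; a quick check (rebuilding the matrices above locally):

```python
import numpy as np

A = np.tril(np.ones((3, 3)))
b = np.diag([1, 2, 3])
print(np.array_equal(A @ b, A.dot(b)))  # True: @ is matrix multiplication
```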
print(np.linalg.det(A))
inv_A = np.linalg.inv(A)
print(inv_A)
print(inv_A.dot(A))
"""
Explanation: We can compute the inverse or the determinant of $A$
End of explanation
"""
x = np.linalg.solve(A, np.diag(b))
print(np.diag(b))
print(x)
print(A.dot(x))
"""
Explanation: ... solve linear systems of the form $Ax = b$...
End of explanation
"""
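In practice, `solve` is preferred over explicitly inverting the matrix: it is both faster and numerically more stable. A toy sketch (the system is made up):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)  # better than np.linalg.inv(A) @ b
print(x)  # [2. 3.]
```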
np.linalg.eig(A)
np.linalg.eigvals(A)
"""
Explanation: ... or obtain the eigenvalues of $A$.
End of explanation
"""
m = np.matrix(' 1 2 3; 4 5 6; 7 8 9')
a = np.arange(1,10).reshape((3,3))
print(m)
print(a)
print(m[0], a[0])
print(m[0].shape, a[0].shape)
"""
Explanation: Numpy Matrix
Matrix is a subclass specialized for matrix computation. It is a 2-d numpy array that keeps its 2-d shape through operations. Think about the differences this implies... (Note that np.matrix is now deprecated in favor of regular 2-d arrays.)
Matrices can be built in the usual way from arrays or Python objects, or via a Matlab-style string (where semicolons separate the rows).
End of explanation
"""
m * m
a * a
m * a # matrix takes priority over ndarray: a is treated as a matrix here
print(m**2)
print(a**2)
"""
Explanation: Matrix also overloads the * and ** operators, replacing element-wise operations with their matrix counterparts.
Finally, a Matrix has extra attributes, notably Matrix.I, which denotes the inverse, and Matrix.A, the underlying array.
This will likely evolve, since Python 3.5 introduced the @ symbol for matrix multiplication.
End of explanation
"""
m[0,0]= -1
print("det", np.linalg.det(m), "rank",np.linalg.matrix_rank(m))
print(m.I*m)
a[0,0] = -1
print("det", np.linalg.det(a), "rank",np.linalg.matrix_rank(a))
print(a.dot(np.linalg.inv(a)))
"""
Explanation: The syntax for matrix computations is lighter
End of explanation
"""
np.random.randn(4,3)
"""
Explanation: Random number generation and statistics
The numpy.random module gives Python the ability to generate a sample of size $n$ in a single call, whereas the native Python module produces draws one at a time. numpy.random is therefore much more efficient when drawing large samples. In addition, scipy.stats provides methods for a very large number of distributions, as well as some classic statistical functions.
For example, we can obtain a 4x3 array of standard Gaussian draws (using either randn or normal):
End of explanation
"""
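Note that recent numpy versions (1.17+) recommend the Generator API over the legacy `np.random.*` functions; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(seed=0)   # reproducible Generator instance
sample = rng.standard_normal((4, 3))  # the Generator equivalent of randn(4, 3)
print(sample.shape)
```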
N = int(1e7)
from random import normalvariate
%timeit [normalvariate(0,1) for _ in range(N)]
%timeit np.random.randn(N)
"""
Explanation: To convince ourselves that numpy.random is more efficient than Python's built-in random module, we perform a large number of standard Gaussian draws, in pure Python and via numpy.
End of explanation
"""
def bowl_peak(x,y):
return x*np.exp(-x**2-y**2)+(x**2+y**2)/20
"""
Explanation: Exercise 2: random walks
Simulate (in a single call!) 10000 random walks of length 1000, starting at 0, with equiprobable +1 or -1 steps
Plot the square root of the mean of the squared positions (= cumulative sum of steps at a given time) as a function of time
What are the maximum and minimum amplitudes reached across all the random walks?
How many walks move more than 50 away from the origin?
Among those that do, what is the mean first-passage time (i.e. the first moment at which these walks exceed +/-50)?
You may need the following functions: np.abs, np.mean, np.max, np.where, np.argmax, np.any, np.cumsum, np.random.randint.
Exercise 3: recovering the random series from the random walks
The previous exercise shows how to generate a random walk from a random time series. How would you recover the initial series from the random walk?
Optimization with scipy
The scipy.optimize module provides a panel of optimization methods. Depending on the problem you want to solve, you must choose the appropriate method. I strongly recommend reading this tutorial on numerical optimization, written by Gaël Varoquaux.
Recently, all the solvers were grouped under two interfaces. You can still call each solver directly, but this is discouraged since their inputs and outputs are not normalized (and you will probably need to consult each method's help to use them):
To minimize a scalar function of one or several variables: scipy.optimize.minimize
To minimize a scalar function of a single variable only: scipy.optimize.minimize_scalar
The output is an object of type scipy.optimize.OptimizeResult.
In what follows, I develop a small example inspired by the tutorial of Matlab's optimization toolbox. Incidentally, that toolbox's documentation is rather clear and can always be useful when you need to refresh your memory on numerical optimization.
We start by defining the bowl_peak function
End of explanation
"""
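As a hint for Exercise 2, its first steps can be sketched in fully vectorized form (the names are ours; this is not the complete solution):

```python
import numpy as np

n_walks, n_steps = 10000, 1000
steps = 2 * np.random.randint(0, 2, size=(n_walks, n_steps)) - 1  # equiprobable +/-1 steps
walks = steps.cumsum(axis=1)                 # position of each walk over time
rms = np.sqrt((walks ** 2).mean(axis=0))     # RMS position as a function of time
crossed = (np.abs(walks) >= 50).any(axis=1)  # which walks ever reach +/-50
print(walks.shape, rms.shape, int(crossed.sum()))
```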
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm #colormaps
min_val = -2
max_val = 2
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in recent matplotlib
x_axis = np.linspace(min_val,max_val,100)
y_axis = np.linspace(min_val,max_val,100)
X, Y = np.meshgrid(x_axis, y_axis, copy=False, indexing='xy')
Z = bowl_peak(X,Y)
#X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_surface(X, Y, Z, rstride=5, cstride=5, alpha=0.2)
cset = ax.contour(X, Y, Z, zdir='z', offset=-0.5, cmap=cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='x', offset=min_val, cmap=cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='y', offset=max_val, cmap=cm.coolwarm)
ax.set_xlabel('X')
ax.set_xlim(min_val, max_val)
ax.set_ylabel('Y')
ax.set_ylim(min_val, max_val)
ax.set_zlabel('Z')
ax.set_zlim(-0.5, 0.5)
"""
Explanation: We then pick an example from the matplotlib gallery to plot it: contour3d_demo3. We modify the code slightly to use it with bowl_peak
End of explanation
"""
from scipy import optimize
x0 = np.array([-0.5, 0])
fun = lambda x: bowl_peak(x[0],x[1])
methods = [ 'Nelder-Mead', 'CG', 'BFGS', 'Powell', 'COBYLA', 'L-BFGS-B' ]
for m in methods:
optim_res = optimize.minimize(fun, x0, method=m)
print("---\nMethod:{}\n".format(m),optim_res, "\n")
"""
Explanation: We can see that the minimum lies near $[-\frac{1}{2}, 0]$. We will use this point to initialize the optimization.
We will test several methods and compare the outputs obtained.
End of explanation
"""
for i in range(4):
optim_res = optimize.minimize(fun, x0, method='BFGS')
print("---\nMethod:{} - Test:{}\n".format(m,i),optim_res, "\n")
"""
Explanation: All the methods that converge find a minimum of $-0.4052$ at $[-0.669, 0.000]$. Note the exit message of 'CG', which means that the gradient no longer varies enough. Personally, I don't find this exit message very clear, although the point found is indeed the optimum we were looking for. Also note the number of function evaluations (nfev) for each method, and the number of gradient evaluations (njev) for the gradient-based methods.
Note as well that if you rerun a stochastic method such as simulated annealing several times, you are not guaranteed to obtain the same solution, since it is a metaheuristic.
End of explanation
"""
for m in methods:
print("Method:{}:".format(m))
%timeit optim_res = optimize.minimize(fun, x0, method=m)
print('############')
"""
Explanation: We will now evaluate the computation time required by each method.
End of explanation
"""
def shifted_scaled_bowlpeak(x,a,b,c):
return (x[0]-a)*np.exp(-((x[0]-a)**2+(x[1]-b)**2))+((x[0]-a)**2+(x[0]-b)**2)/c
a = 2
b = 3
c = 10
optim_res = optimize.minimize(shifted_scaled_bowlpeak, x0, args=(a,b,c), method='BFGS')
print(optim_res)
print('#######')
optim_res = optimize.minimize(lambda x:shifted_scaled_bowlpeak(x,a,b,c), x0, method='BFGS')
print(optim_res)
"""
Explanation: You can also pass extra arguments to the function being optimized, for example the data when you maximize a log-likelihood. Here is an example: we consider a shifted and rescaled version of the bowl_peak function. You could also use a lambda function.
End of explanation
"""
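A toy sketch of the `args` mechanism on a made-up quadratic (not the chapter's function; `quad` and its parameter are invented for illustration):

```python
import numpy as np
from scipy import optimize

def quad(x, a):
    # minimum at (a, a); 'a' is passed through the args tuple
    return (x[0] - a) ** 2 + (x[1] - a) ** 2

res = optimize.minimize(quad, x0=np.zeros(2), args=(3.0,), method='BFGS')
print(np.round(res.x, 3))  # approximately [3. 3.]
```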
|
timstaley/voeventdb | notebooks/notes_on_scoped_session.ipynb | gpl-2.0 | # sm.query(Voevent).count() #<--Raises
"""
Explanation: A sessionmaker does not have a query property, and we don't expect it to: after all, it's for making sessions, not queries:
End of explanation
"""
regular_session = sm()
regular_session.query(Voevent).count()
"""
Explanation: So, make a session:
End of explanation
"""
scoped_session = scoped_sm()
scoped_session.query(Voevent).count()
"""
Explanation: Ok. We can do the same sort of thing with a scoped session:
End of explanation
"""
scoped_sm.query(Voevent).count()
"""
Explanation: However - shenanigans! - a sqlalchemy.orm.scoped_session (i.e. a scoped-session factory) has a .query attribute, created via the query_property method. AFAICT this is syntactic sugar, proxying to query attribute of the underlying session.
This is documented here:
http://docs.sqlalchemy.org/en/rel_1_0/orm/contextual.html?highlight=scoped_session#implicit-method-access
(Though not very prominently, considering how heavily it's used in flask-related stuff. Breadcrumbs from e.g. flask-sqlalchemy docs might have been nice.)
End of explanation
"""
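A minimal, self-contained sketch of the `query_property` mechanism (assuming SQLAlchemy 1.4+; the `Item` model and the in-memory SQLite engine are invented for illustration):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))

# query_property lets model classes proxy to the scoped session's query:
Base.query = Session.query_property()
print(Item.query.count())  # 0
```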
|
jamesfolberth/NGC_STEM_camp_AWS | notebooks/data8_notebooks/project1/project1.ipynb | bsd-3-clause | # Run this cell, but please don't change it.
import numpy as np
import math
from datascience import *
# These lines set up the plotting functionality and formatting.
import matplotlib
matplotlib.use('Agg', warn=False)
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
# These lines load the tests.
from client.api.assignment import load_assignment
tests = load_assignment('project1.ok')
"""
Explanation: Project 1 - California Water Usage
Welcome to the first project in Data 8! We will be exploring possible connections between water usage, geography, and income in California. The water data for this project was procured from the California State Water Resources Control Board and curated by the Pacific Institute. The map data includes US topography, California counties, and ZIP codes.
The dataset on income comes from the IRS (documented here). We have identified some interesting columns in the dataset, but a full description of all the columns (and a definition of the population in the dataset and some interesting anonymization procedures they used) is available here.
Administrivia
Piazza
While collaboration is encouraged on this and other assignments, sharing answers is never okay. In particular, posting code or other assignment answers publicly on Piazza (or elsewhere) is academic dishonesty. It will result in a reduced project grade at a minimum. If you wish to ask a question and include code, you must make it a private post.
Partners
You may complete the project with up to one partner. Partnerships are an exception to the rule against sharing answers. If you have a partner, one person in the partnership should submit your project on Gradescope and include the other partner in the submission. (Gradescope will prompt you to fill this in.)
Your partner must be in your lab section. You can ask your TA to pair you with someone from your lab if you’re unable to find a partner. (That will happen in lab the week the project comes out.)
Due Date and Checkpoint
Part of the project will be due early. Parts 1 and 2 of the project (out of 3) are due Tuesday, September 27th at 5PM. Unlike the final submission, this early checkpoint will be graded for completion. It will be worth approximately 10% of the total project grade. Simply submit your partially-completed notebook as a PDF, as you would submit any other notebook. (See the note above on submitting with a partner.)
The entire project (parts 1, 2, and 3) will be due Tuesday, October 4 at 5PM. (Again, see the note above on submitting with a partner.)
On to the project!
As usual, run the cell below to prepare the automatic tests. Passing the automatic tests does not guarantee full credit on any question. The tests are provided to help catch some common errors, but it is your responsibility to answer the questions correctly.
End of explanation
"""
# Run this cell, but please don't change it.
districts = Map.read_geojson('water_districts.geojson')
zips = Map.read_geojson('ca_zips.geojson.gz')
usage_raw = Table.read_table('water_usage.csv', dtype={'pwsid': str})
income_raw = Table.read_table('ca_income_by_zip.csv', dtype={'ZIP': str}).drop('STATEFIPS', 'STATE', 'agi_stub')
wd_vs_zip = Table.read_table('wd_vs_zip.csv', dtype={'PWSID': str, 'ZIP': str}).set_format(make_array(2, 3), PercentFormatter)
"""
Explanation: First, load the data. Loading may take some time.
End of explanation
"""
districts.format(width=400, height=200)
"""
Explanation: Part 1: Maps
The districts and zips data sets are Map objects. Documentation on mapping in the datascience package can be found at data8.org/datascience/maps.html. To view a map of California's water districts, run the cell below. Click on a district to see its description.
End of explanation
"""
district_table = Table.from_records(districts.features)
district_table.show(3)
"""
Explanation: A Map is a collection of regions and other features such as points and markers, each of which has a string id and various properties. You can view the features of the districts map as a table using Table.from_records.
End of explanation
"""
# Fill in the next line so the last line draws a map of those two districts.
alameda_and_east_bay = ...
Map(alameda_and_east_bay, height=300, width=300)
_ = tests.grade('q11')
"""
Explanation: To display a Map containing only two features from the district_table, call Map on an array containing those two features from the feature column.
Question 1.1. Draw a map of the Alameda County Water District (row 0) and the East Bay Municipal Utilities District (row 2).
End of explanation
"""
income_raw
"""
Explanation: Hint: If scrolling becomes slow on your computer, you can clear maps for the cells above by running Cell > All Output > Clear from the Cell menu.
Part 2: California Income
Let's look at the income_raw table, which comes from the IRS. We're going to link this information about incomes to our information about water. First, we need to investigate the income data and get it into a more usable form.
End of explanation
"""
income_by_zipcode = ...
income_by_zipcode
_ = tests.grade('q21')
"""
Explanation: Some observations:
The table contains several numerical columns and a column for the ZIP code.
For each ZIP code, there are 6 rows. Each row for a ZIP code has data from tax returns in one income bracket. (A tax return is the tax filing from one person or household. An income bracket is a group of people whose annual income is in some range, like \$25,000 to $34,999.)
According to the IRS documentation, all the numerical columns are totals -- either total numbers of returns that fall into various categories, or total amounts of money (in thousands of dollars) from returns in those categories. For example, the column 'N02650' is the number of returns that included a total income amount, and 'A02650' is the total amount of total income (in thousands of dollars) from those returns.
For the analysis we're about to do, we won't need to use the information about tax brackets. We will need to know the total income, total number of returns, and other totals from each ZIP code.
Question 2.1. Assign the name income_by_zipcode to a table with just one row per ZIP code. When you group according to ZIP code, the remaining columns should be summed. In other words, for any other column such as 'N02650', the value of 'N02650' in a row corresponding to ZIP code 90210 (for example) should be the sum of the values of 'N02650' in the 6 rows of income_raw corresponding to ZIP code 90210.
End of explanation
"""
...
...
income_by_zipcode
_ = tests.grade('q22')
"""
Explanation: Your income_by_zipcode table probably has column names like N1 sum, which looks a little weird.
Question 2.2. Relabel the columns in income_by_zipcode to match the labels in income_raw
Hint: Inspect income_raw.labels and income_by_zipcode.labels to find the differences you need to change.
Hint 2: Since there are many columns, it will be easier to relabel each of them by using a for statement. See Chapter 8 of the textbook for details.
Hint 3: You can use the replace method of a string to remove excess content. See lab02 for examples.
Hint 4: To create a new table from an existing table with one label replaced, use relabeled. To change a label in an existing table permanently, use relabel. Both methods take two arguments: the old label and the new label. You can solve this problem with either one, but relabel is simpler.
End of explanation
"""
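As a generic illustration of Hint 3's string cleanup (using a plain list here, not your table's actual labels):

```python
labels = ['N1 sum', 'A02650 sum', 'ZIP']
cleaned = [label.replace(' sum', '') for label in labels]
print(cleaned)  # ['N1', 'A02650', 'ZIP']
```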
income = Table().with_columns(
...
...
...
...
)
income.set_format('total income ($)', NumberFormatter(0)).show(5)
_ = tests.grade('q23')
"""
Explanation: Question 2.3.
Create a table called income with one row per ZIP code and the following columns.
A ZIP column with the same contents as 'ZIP' from income_by_zipcode.
A num returns column containing the total number of tax returns that include a total income amount (column 'N02650' from income_by_zipcode).
A total income ($) column containing the total income in all tax returns in thousands of dollars (column 'A02650' from income_by_zipcode).
A num farmers column containing the number of farmer returns (column 'SCHF' from income_by_zipcode).
End of explanation
"""
income = ...
_ = tests.grade('q24')
"""
Explanation: Question 2.4. All ZIP codes with less than 100 returns (or some other special conditions) are grouped together into one ZIP code with a special code. Remove the row for that ZIP code from the income table.
Hint 1: This ZIP code value has far more returns than any of the other ZIP codes. Try using group and sort to find it.
Hint 2: To remove a row in the income table using where, assign income to the smaller table using the following expression structure:
income = income.where(...)
Hint 3: Each ZIP code is represented as a string, not an int.
End of explanation
"""
# Our solution took several lines of code.
average_income = ...
average_income
_ = tests.grade('q25')
"""
Explanation: Because each ZIP code has a different number of people, computing the average income across several ZIP codes requires some care. This will come up several times in this project. Here is a simple example:
Question 2.5 Among all the tax returns that
1. include a total income amount, and
2. are filed by people living in either ZIP code 94576 (a rural area north of Napa) or in ZIP code 94704 (a moderately-dense area in South Berkeley),
what is the average total income? Express the answer in dollars as an int rounded to the nearest dollar.
End of explanation
"""
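The care required is that a count-weighted average is a sum of totals divided by a sum of counts, not a mean of per-ZIP means. With toy numbers (not the project data):

```python
returns = [120, 480]      # number of returns in each ZIP (made up)
totals = [6_000, 12_000]  # total income in each ZIP, in thousands of dollars (made up)

avg = 1000 * sum(totals) / sum(returns)  # convert back to dollars
print(round(avg))  # 30000
```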
avg_total = ...
avg_total
"""
Explanation: Question 2.6. Among all California tax returns that include a total income amount, what is the average total income? Express the answer in dollars as an int rounded to the nearest dollar.
End of explanation
"""
# Write code to make a scatter plot here.
...
"""
Explanation: Farming
Farms use water, so it's plausible that farming is an important factor in water usage. Here, we will check for a relationship between farming and income.
Among the tax returns in California for ZIP codes represented in the incomes table, is there an association between income and living in a ZIP code with a lot of farmers?
We'll try to answer the question in 3 ways.
Question 2.7. Make a scatter plot with one point for each ZIP code. Display the average income in dollars on the vertical axis and the proportion of returns that are from farmers on the horizontal axis.
End of explanation
"""
# Build and display a table with two rows:
# 1) incomes of returns in ZIP codes with a greater-than-average proportion of farmers
# 2) incomes of returns in other ZIP codes
"""
Explanation: Question 2.8. From the graph, can you say whether ZIP codes with more farmers typically have lower or higher average income than ZIP codes with few or no farmers? Can you say how much lower or higher?
Write your answer here, replacing this text.
Question 2.9. Compare the average incomes for two groups of tax returns: those in ZIP codes with a greater-than-average proportion of farmers and those in ZIP codes with a less-than-average (or average) proportion. Make sure both of these values are displayed (preferably in a table). Then, describe your findings.
Hint: Make sure your result correctly accounts for the number of tax returns in each ZIP code, as in questions 2.5 and 2.6.
End of explanation
"""
# Write code to draw a map of only the high-income ZIP codes.
# We have filled in some of it and suggested names for variables
# you might want to define.
zip_features = Table.from_records(zips.features)
high_average_zips = ...
high_zips_with_region = ...
Map(high_zips_with_region.column('feature'), width=400, height=300)
"""
Explanation: Write your answer here, replacing this text.
Question 2.10. The graph below displays two histograms: the distribution of average incomes of ZIP codes that have above-average proportions of farmers, and that of ZIP codes with below-average proportions of farmers.
<img src="https://i.imgur.com/jicA2to.png"/>
Are ZIP codes with below-average proportions of farmers more or less likely to have very low incomes? Explain how your answer is consistent with your answer to question 2.8.
Write your answer here, replacing this text.
ZIP codes cover all the land in California and do not overlap. Here's a map of all of them.
<img src="california-zip-code-map.jpg" alt="CA ZIP Codes"/>
Question 2.11. Among the ZIP codes represented in the incomes table, is there an association between high average income and some aspect of the ZIP code's location? If so, describe one aspect of the location that is clearly associated with high income.
Answer the question by drawing a map of all ZIP codes that have an average income above 100,000 dollars. Then, describe an association that you observe.
In order to create a map of certain ZIP codes, you need to:
- Construct a table containing only the ZIP codes of interest, called high_average_zips.
- Join high_average_zips with the zip_features table to find the region for each ZIP code of interest.
- Call Map(...) on the column of features (provided).
End of explanation
"""
# Run this cell to create the usage table.
usage_raw.set_format(4, NumberFormatter)
max_pop = usage_raw.select(0, 'population').group(0, max).relabeled(1, 'Population')
avg_water = usage_raw.select(0, 'res_gpcd').group(0, np.mean).relabeled(1, 'Water')
usage = max_pop.join('pwsid', avg_water).relabeled(0, 'PWSID')
usage
"""
Explanation: Write your answer here, replacing this text.
Part 3: Water Usage
We will now investigate water usage in California. The usage table contains three columns:
PWSID: The Public Water Supply Identifier of the district
Population: Estimate of average population served in 2015
Water: Average residential water use (gallons per person per day) in 2014-2015
End of explanation
"""
# We have filled in the call to districts.color(...). Set per_capita_usage
# to an appropriate table so that a map of all the water districts is
# displayed.
per_capita_usage = ...
districts.color(per_capita_usage, key_on='feature.properties.PWSID')
_ = tests.grade('q31')
"""
Explanation: Question 3.1. Draw a map of the water districts, colored by the per capita water usage in each district.
Use the districts.color(...) method to generate the map. It takes as its first argument a two-column table with one row per district that has the district PWSID as its first column. The label of the second column is used in the legend of the map, and the values are used to color each region.
End of explanation
"""
wd_vs_zip.show(5)
"""
Explanation: Question 3.2. Based on the map above, which part of California appears to use more water per person: the San Francisco area or the Los Angeles area?
Write your answer here, replacing this text.
Next, we will try to match each ZIP code with a water district. ZIP code boundaries do not always line up with water districts, and one water district often covers multiple ZIP codes, so this process is imprecise. It is even the case that some water districts overlap each other. Nonetheless, we can continue our analysis by matching each ZIP code to the water district with the largest geographic overlap.
The table wd_vs_zip describes the proportion of land in each ZIP code that is contained in each water district and vice versa. (The proportions are approximate because they do not correctly account for discontiguous districts, but they're mostly accurate.)
End of explanation
"""
def district_for_zip(zip_code):
zip_code = str(zip_code) # Ensure that the ZIP code is a string, not an integer
districts = ...
at_least_half = ...
if at_least_half:
...
else:
return 'No District'
district_for_zip(94709)
_ = tests.grade('q33')
"""
Explanation: Question 3.3. Complete the district_for_zip function that takes a ZIP code as its argument. It returns the PWSID with the largest value of ZIP in District for that zip_code, if that value is at least 50%. Otherwise, it returns the string 'No District'.
End of explanation
"""
zip_pwsids = income.apply(district_for_zip, 'ZIP')
income_with_pwsid = income.with_column('PWSID', zip_pwsids).where('PWSID', are.not_equal_to("No District"))
income_with_pwsid.set_format(2, NumberFormatter(0)).show(5)
"""
Explanation: This function can be used to associate each ZIP code in the income table with a PWSID and discard ZIP codes that do not lie (mostly) in a water district.
End of explanation
"""
district_income = ...
district_data = ...
district_data.set_format(make_array('Population', 'Water', 'Income'), NumberFormatter(0))
_ = tests.grade('q34')
"""
Explanation: Question 3.4. Create a table called district_data with one row per PWSID and the following columns:
PWSID: The ID of the district
Population: Population estimate
Water: Average residential water use (gallons per person per day) in 2014-2015
Income: Average income in dollars of all tax returns in ZIP codes that are (mostly) contained in the district according to income_with_pwsid.
Hint: First create a district_income table that sums the incomes and returns for ZIP codes in each water district.
End of explanation
"""
bay_districts = Table.read_table('bay_districts.csv')
bay_water_vs_income = ...
top_10 = ...
...
"""
Explanation: Question 3.5. The bay_districts table gives the names of all water districts in the San Francisco Bay Area. Is there an association between water usage and income among Bay Area water districts? Use the tables you have created to compare water usage between the 10 Bay Area water districts with the highest average income and the rest of the Bay Area districts, then describe the association. Do not include any districts in your analysis for which you do not have income information.
The names below are just suggestions; you may perform the analysis in any way you wish.
Note: Some Bay Area water districts may not appear in your district_data table. That's ok. Perform your analysis only on the subset of districts where you have both water usage & income information.
End of explanation
"""
# For your convenience, you can run this cell to run all the tests at once!
import os
_ = [tests.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')]
"""
Explanation: Complete this one-sentence conclusion: In the Bay Area, people in the top 10 highest-income water districts used an average of _________ more gallons of water per person per day than people in the rest of the districts.
Question 3.6. In one paragraph, summarize what you have discovered through the analyses in this project and suggest what analysis should be conducted next to better understand California water usage, income, and geography. What additional data would be helpful in performing this next analysis?
Replace this line with your conclusion paragraph.
Congratulations - you've finished Project 1 of Data 8!
To submit:
Select Run All from the Cell menu to ensure that you have executed all cells, including the test cells. Make sure that the visualizations you create are actually displayed.
Select Download as PDF via LaTeX (.pdf) from the File menu. (Sometimes that seems to fail. If it does, you can download as HTML, open the .html file in your browser, and print it to a PDF.)
Read that file! If any of your lines are too long and get cut off, we won't be able to see them, so break them up into multiple lines and download again. If maps do not appear in the output, that's ok.
Submit that downloaded file (called project1.pdf) to Gradescope.
If you cannot submit online, come to office hours for assistance. The office hours
schedule appears on data8.org/weekly.
End of explanation
"""
# Your extensions here (completely optional)
"""
Explanation: If you want, draw some more maps below.
End of explanation
"""
|
david-abel/simple_rl | examples/.ipynb_checkpoints/examples_overview-checkpoint.ipynb | apache-2.0 | # Add simple_rl to system path.
import os
import sys
parent_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
sys.path.insert(0, parent_dir)
from simple_rl.agents import QLearningAgent, RandomAgent
from simple_rl.tasks import GridWorldMDP
from simple_rl.run_experiments import run_agents_on_mdp
"""
Explanation: Simple RL
Welcome! Here we'll showcase some basic examples of typical RL programming tasks.
Example 1: Grid World
First, we'll grab our relevant imports: some agents, an MDP, and a function to facilitate running experiments and plotting:
End of explanation
"""
# Setup MDP.
mdp = GridWorldMDP(width=6, height=6, init_loc=(1,1), goal_locs=[(6,6)])
# Setup Agents.
ql_agent = QLearningAgent(actions=mdp.get_actions())
rand_agent = RandomAgent(actions=mdp.get_actions())
"""
Explanation: Next, we make an MDP and a few agents:
End of explanation
"""
# Run experiment and make plot.
run_agents_on_mdp([ql_agent, rand_agent], mdp, instances=5, episodes=100, steps=40, reset_at_terminal=True, verbose=False)
"""
Explanation: The real meat of <i>simple_rl</i> is the set of functions that run experiments. The first of these takes a list of agents and an MDP and simulates their interaction:
End of explanation
"""
from simple_rl.agents import RMaxAgent
rmax_agent = RMaxAgent(actions=mdp.get_actions(), horizon=3, s_a_threshold=1)
# Run experiment and make plot.
run_agents_on_mdp([rmax_agent, ql_agent, rand_agent], mdp, instances=5, episodes=100, steps=20, reset_at_terminal=True, verbose=False)
"""
Explanation: We can throw R-Max, introduced by [Brafman and Tennenholtz, 2002], into the mix, too:
End of explanation
"""
from simple_rl.tasks import FourRoomMDP
four_room_mdp = FourRoomMDP(9, 9, goal_locs=[(9, 9)], gamma=0.95)
# Run experiment and make plot.
four_room_mdp.visualize_value()
"""
Explanation: Each experiment we run generates an Experiment object. This facilitates recording results, making relevant files, and plotting. When the <code>run_agents...</code> function is called, a <i>results</i> dir is created containing relevant experiment data. There should be a subdirectory in <i>results</i> named after the mdp you ran experiments on -- this is where the plot, agent results, and <i>parameters.txt</i> file are stored.
All of the above code is contained in the <i>simple_example.py</i> file.
Example 2: Visuals (require pygame)
First let's make a FourRoomMDP from [Sutton, Precup, Singh 1999], which is more visually interesting than a grid world.
End of explanation
"""
from simple_rl.tasks.grid_world import GridWorldMDPClass
pblocks_mdp = GridWorldMDPClass.make_grid_world_from_file("pblocks_grid.txt", randomize=False)
pblocks_mdp.visualize_value()
"""
Explanation: <img src="val.png" alt="Val" style="width: 400px;"/>
Or we can visualize a policy:
<img src="pol.png" alt="Val Visual" style="width: 400px;"/>
Both of these are in examples/viz_example.py. If you need pygame in anaconda, give this a shot:
> conda install -c cogsci pygame
If you get an sdl font related error on Mac/Linux, try:
> brew update sdl && sdl_tf
We can also make grid worlds with a text file. For instance, we can construct the grid problem from [Barto and Pickett 2002] by making a text file:
--w-----w---w----g
--------w---------
--w-----w---w-----
--w-----w---w-----
wwwww-wwwwwwwww-ww
---w----w----w----
---w---------w----
--------w---------
wwwwwwwww---------
w-------wwwwwww-ww
--w-----w---w-----
--------w---------
--w---------w-----
--w-----w---w-----
wwwww-wwwwwwwww-ww
---w-----w---w----
---w-----w---w----
a--------w--------
Then, we make a grid world out of it:
End of explanation
"""
from simple_rl.tasks import TaxiOOMDP
from simple_rl.run_experiments import run_agents_on_mdp
from simple_rl.agents import QLearningAgent, RandomAgent
# Taxi initial state attributes..
agent = {"x":1, "y":1, "has_passenger":0}
passengers = [{"x":3, "y":2, "dest_x":2, "dest_y":3, "in_taxi":0}]
taxi_mdp = TaxiOOMDP(width=4, height=4, agent=agent, walls=[], passengers=passengers)
# Make agents.
ql_agent = QLearningAgent(actions=taxi_mdp.get_actions())
rand_agent = RandomAgent(actions=taxi_mdp.get_actions())
"""
Explanation: Which Produces:
<img src="pblocks.png" alt="Policy Blocks Grid World" style="width: 400px;"/>
Example 3: OOMDPs, Taxi
There's also a Taxi MDP, which is actually built on top of an Object Oriented MDP Abstract class from [Diuk, Cohen, Littman 2008].
End of explanation
"""
# Run experiment and make plot.
run_agents_on_mdp([ql_agent, rand_agent], taxi_mdp, instances=5, episodes=100, steps=150, reset_at_terminal=True)
"""
Explanation: Above, we specify the objects of the OOMDP and their attributes. Now, just as before, we can let some agents interact with the MDP:
End of explanation
"""
from simple_rl.run_experiments import play_markov_game
from simple_rl.agents import QLearningAgent, FixedPolicyAgent
from simple_rl.tasks import RockPaperScissorsMDP
import random
# Setup MDP, Agents.
markov_game = RockPaperScissorsMDP()
ql_agent = QLearningAgent(actions=markov_game.get_actions(), epsilon=0.2)
fixed_action = random.choice(markov_game.get_actions())
fixed_agent = FixedPolicyAgent(policy=lambda s:fixed_action)
# Run experiment and make plot.
play_markov_game([ql_agent, fixed_agent], markov_game, instances=10, episodes=1, steps=10)
"""
Explanation: More on OOMDPs in <i>examples/oomdp_example.py</i>
Example 4: Markov Games
--------
I've added a few Markov games, including rock paper scissors, grid games, and the prisoner's dilemma. Just as before, we get a <code>play_markov_game</code> function that simulates learning and makes a plot:
End of explanation
"""
from simple_rl.tasks import GymMDP
from simple_rl.agents import LinearQLearningAgent, RandomAgent
from simple_rl.run_experiments import run_agents_on_mdp
# Gym MDP.
gym_mdp = GymMDP(env_name='CartPole-v0', render=False) # If render is true, visualizes interactions.
num_feats = gym_mdp.get_num_state_feats()
# Setup agents and run.
lin_agent = LinearQLearningAgent(gym_mdp.get_actions(), num_features=num_feats, alpha=0.2, epsilon=0.4, rbf=True)
run_agents_on_mdp([lin_agent], gym_mdp, instances=3, episodes=1, steps=50)
"""
Explanation: Example 5: Gym MDP
--------
Recently I added support for making OpenAI gym MDPs. It's again only a few lines of code:
End of explanation
"""
|
arnicas/eyeo_nlp | python/Tokenizing_Stopwords_Freqs.ipynb | cc0-1.0 | import itertools
import nltk
import string
nltk.data.path
nltk.data.path.append("../nltk_data")
nltk.data.path = ['../nltk_data']
"""
Explanation: Intro to low level NLP - Tokenization, Stopwords, Frequencies, Bigrams
Lynn Cherny, arnicas@gmail
End of explanation
"""
ls ../data/books
# the "U" here is for universal newline mode, because newlines on Mac are \r\n and on Windows are \n.
with open("../data/books/Austen_Emma.txt", "U") as handle:
text = handle.read()
text[0:120]
"""
Explanation: Tokenization
Read in a file to use for practice. The directory is one level above us now, in data/books. You can add other files into the data directory if you want.
End of explanation
"""
## if you don't want the newlines in there - replace them all.
text = text.replace('\n', ' ')
text[0:120]
## Breaking it up by sentence! Can be very useful for vis :)
nltk.sent_tokenize(text)[0:10]
tokens = nltk.word_tokenize(text)
tokens[70:85] # Notice the punctuation:
# Notice the difference here:
nltk.wordpunct_tokenize(text)[70:85]
"""
Explanation: Before we go further, it might be worth saying that even the lines of a text can be interesting as a visual. Here are a couple of books where every line is a line of pixels, and we've applied a simple search in JS to show lines of dialogue in pink. (The entire analysis is done in the web file book_shape.html -- so it's a little slow to load.)
<img src="img/book_shape_dialog_emma_moby.png">
But usually you want to extract some sense of the content, which means crunching the text itself to get insights about the overall file.
End of explanation
"""
# run text2words on this book file at this location, pipe the output to the unix "head" command, showing 20 lines
!textkit text2words ../data/books/Austen_Emma.txt | head -n20
# Pipe the output through the lowercase textkit operation, before showing 20 lines again!
!textkit text2words ../data/books/Austen_Emma.txt | textkit lowercase | head -n20
"""
Explanation: There are other options for tokenization in NLTK. You can test some out here: http://text-processing.com/demo/tokenize/
Doing it in textkit at the command line:
Thanks to the work of Bocoup.com, we have a library that will do some simple text analysis at the command line, wrapping up some of the python functions I'll be showing you. The library is at https://github.com/learntextvis/textkit. Be aware it is under development! Also, some of these commands will be slower than running the code in the notebook.
When I say you can run these at the command line, what I mean is that in your terminal window you can type the command you see here after the !. The ! in the Jupyter notebook means this is a shell command.
The | is a "pipe." This means take the output from the previous command and make it the input to the next command.
End of explanation
"""
!textkit text2words ../data/books/Austen_Emma.txt | textkit filterpunc | textkit tokens2counts > ../outputdata/simple_emma_counts.csv
!ls -al ../outputdata/simple_emma_counts.csv
"""
Explanation: What if, at this point, we made a word cloud? Let's say we strip out the punctuation and just count the words. I'll do it quickly just to show you... but we'll go a bit further.
End of explanation
"""
from nltk.corpus import stopwords
english_stops = stopwords.words('english')
# Notice they are lowercase. This means we need to be sure we lowercase our text if we want to match against them.
english_stops
tokens = nltk.word_tokenize(text)
tokens[0:15]
# this is how many tokens we have:
len(tokens)
"""
Explanation: Using the html file simple_wordcloud.html and this data file, we can see something basically useless. You don't have to do this yourself, but if you want to, edit that file to point to the ../outputdata/simple_emma_counts.csv at the bottom.
<img src="img/emma_wc_nostops.png">
StopWords
"Stopwords" are words that are usually excluded because they are common connectors (or determiners, or short verbs) that are not considered to carry meaning. BEWARE hidden stopword filtering in libraries you use and always check stopword lists to see if you agree with their contents!
End of explanation
"""
# try this without .lower in the if-statement and check the size!
# We are using a python list comprehension to remove the tokens from Emma (after lowercasing them!) that are stopwords
tokens = [token.lower() for token in tokens if token.lower() not in english_stops]
len(tokens)
# now look at the first 15 words:
tokens[0:15]
"""
Explanation: We want to strip out stopwords - use a list comprehension. Notice you need to lower case the words before you check for membership!
End of explanation
"""
import string
string.punctuation
# Now remove the punctuation and see how much smaller the token list is now:
tokens = [token for token in tokens if token not in string.punctuation]
len(tokens)
# But there's some awful stuff still in here:
sorted(tokens)[0:20]
"""
Explanation: Let's get rid of punctuation too, which isn't used in most bag-of-words analyses. "Bag of words" means lists of words where the order doesn't matter. That's how most NLP tasks are done!
End of explanation
"""
[token for token in tokens if len(token) <= 2][0:20]
# Let's define a small python function that's a pretty common one for text processing.
def clean_tokens(tokens):
""" Lowercases, takes out punct and stopwords and short strings """
return [token.lower() for token in tokens if (token not in string.punctuation) and
(token.lower() not in english_stops) and len(token) > 2]
clean = clean_tokens(tokens)
clean[0:20]
len(clean)
"""
Explanation: The ugliness of some of those tokens! You have some possibilities now - add to your stopwords list the ones you want removed; or remove all very short words, which will get rid of our punctuation problem too.
End of explanation
"""
!textkit text2words ../data/books/Austen_Emma.txt | textkit filterpunc | textkit tokens2lower > ../outputdata/emma_lower.txt
!head -n5 ../outputdata/emma_lower.txt
!textkit text2words ../outputdata/emma_lower.txt | textkit filterwords | textkit filterlengths -m 3 > ../outputdata/emma_clean.txt
!head -n10 ../outputdata/emma_clean.txt
"""
Explanation: So now we've reduced our data set from 191739 to 72576, just by removing stopwords, punctuation, and short strings. If we're interested in "meaning", this is a useful removal of noise.
Using textkit at the commandline for filtering stopwords and punctuation and lowercase and short words:
(We are breaking these lines up with some intermediate output files (emma_lower.txt) because of how long these get.)
End of explanation
"""
from nltk import Text
cleantext = Text(clean)
cleantext.vocab().most_common()[0:20]
# if you want to know all the vocabulary, without counts - you can remove the [0:10], which just shows the first 10:
cleantext.vocab().keys()[0:10]
# Another way to do this is with nltk.FreqDist, which creates an object with keys that are
# the vocabulary, and values for the counts:
nltk.FreqDist(clean)['sashed']
"""
Explanation: Count Word Frequencies
The obvious thing you want to do next is count frequencies of words in texts - NLTK has you covered. (Or you can do it easily yourself using a Counter object.)
End of explanation
"""
wordpairs = cleantext.vocab().most_common()
with open("../outputdata/emma_word_counts.csv", "w") as handle:
for pair in wordpairs:
handle.write(pair[0] + "," + str(pair[1]) + "\n")
!head -n5 ../outputdata/emma_word_counts.csv
"""
Explanation: If you wanted to save the words and counts to a file to use, you can do it like this:
End of explanation
"""
!textkit text2words ../data/books/Austen_Emma.txt | textkit filterpunc | textkit tokens2lower > ../outputdata/emma_lower.txt
!textkit filterwords ../outputdata/emma_lower.txt | textkit filterlengths -m 3 > ../outputdata/emma_clean.txt
!head -n5 ../outputdata/emma_clean.txt
!textkit tokens2counts ../outputdata/emma_clean.txt > ../outputdata/emma_word_counts.csv
!head ../outputdata/emma_word_counts.csv
"""
Explanation: Using Textkit at the command line:
Let's save the output of the filtered, lowercase words into a file called cleantokens.txt:
End of explanation
"""
from nltk.collocations import *
bigram_measures = nltk.collocations.BigramAssocMeasures()
word_fd = nltk.FreqDist(clean) # all the words
bigram_fd = nltk.FreqDist(nltk.bigrams(clean))
finder = BigramCollocationFinder(word_fd, bigram_fd)
scored = finder.score_ngrams(bigram_measures.likelihood_ratio) # a good option here; BigramAssocMeasures offers others (e.g. raw_freq, pmi)
scored[0:50]
# Trigrams - using raw counts is much faster.
trigram_measures = nltk.collocations.TrigramAssocMeasures()
finder = TrigramCollocationFinder.from_words(clean,
window_size = 15)
finder.apply_freq_filter(2)
#ignored_words = nltk.corpus.stopwords.words('english') # if you hadn't removed them...
# if you want to remove extra words, like character names, you can create the ignored_words list too:
#finder.apply_word_filter(lambda w: len(w) < 3 or w.lower() in ignored_words)
finder.nbest(trigram_measures.raw_freq, 20)
finder.score_ngrams(trigram_measures.raw_freq)[0:20]
## This is very slow! Don't run unless you're serious :)
finder = TrigramCollocationFinder.from_words(clean,
window_size = 20)
finder.apply_freq_filter(2)
#ignored_words = nltk.corpus.stopwords.words('english') # if you hadn't removed them...
# if you want to remove extra words, like character names, you can create the ignored_words list too:
#finder.apply_word_filter(lambda w: len(w) < 3 or w.lower() in ignored_words)
finder.apply_word_filter(lambda w: len(w) < 3) # remove short words
finder.nbest(trigram_measures.likelihood_ratio, 10)
"""
Explanation: Now you are ready to make word clouds that are smarter than your average word cloud. Move your counts file into a place where your html can find it. Edit the file "simple_wordcloud.html" to use the name of your file, including the path!
<img src=img/edit_file_name.png>
You may still see some words in here you don't love -- names, modal verbs (would, could):
<img src="img/emma_wc_before_more_stops.png">
We can actually edit those by hand in the html/js code if you want. Look for the list of stopwords. You can change the color, too, if you want. I've added a few more stops to see how it looks now:
<img src="img/emma_wc_after_more_stops.png">
You might want to keep going.
By this point, we already know a lot about how to make texts manageable. A nice example of counting words over time in text appeared in the Washington Post, for SOTU speeches: https://www.washingtonpost.com/graphics/politics/2016-sotu/language/
There have also been a lot of studies of sentence and speech or book length. I hope that seems easy now. You could tokenize by sentence using nltk, and plot those lengths. And you could just count the words in speeches or books to plot them.
Finding Most Common Pairs of Words ("Bigrams")
Words occur in common sequences, sometimes. We call word pairs "bigrams" (and word triples "trigrams"). We refer to N-grams when we mean "sequences of some length."
End of explanation
"""
with open("../data/movie_reviews/all_pos.txt", "U") as handle:
text = handle.read()
tokens = nltk.word_tokenize(text) # tokenize them - split into words and punct
clean_posrevs = clean_tokens(tokens) # clean up stopwords and punct
clean_posrevs[0:10]
word_fd = nltk.FreqDist(clean_posrevs)
bigram_fd = nltk.FreqDist(nltk.bigrams(clean_posrevs))
finder = BigramCollocationFinder(word_fd, bigram_fd)
scored = finder.score_ngrams(bigram_measures.likelihood_ratio) # other options include raw_freq and pmi
scored[0:50]
"""
Explanation: Some more help is here: http://www.nltk.org/howto/collocations.html
What if we wanted to try non-fiction, to see if there are more interesting results?
We need to read and clean the text for another file. Let's try positive movie reviews, located in data/movie_reviews/all_pos.txt.
End of explanation
"""
!textkit text2words ../data/books/Austen_Emma.txt | textkit filterpunc | textkit tokens2lower > ../outputdata/emma_lower.txt
!textkit filterwords ../outputdata/emma_lower.txt | textkit filterlengths -m 3 | textkit words2bigrams > ../outputdata/bigrams_emma.txt
!head -n10 ../outputdata/bigrams_emma.txt
"""
Explanation: To see more details about the NLTK Text object methods, read the code/doc here: http://www.nltk.org/_modules/nltk/text.html
Bigrams in Textkit at the command line:
Create a file with all the word pairs, after making everything lowercase and removing punctuation and basic stopwords:
End of explanation
"""
!textkit tokens2counts ../outputdata/bigrams_emma.txt > ../outputdata/bigrams_emma_counts.txt
!head -n20 ../outputdata/bigrams_emma_counts.txt
"""
Explanation: Then count them to get frequencies of the pairs. This may reveal custom stopwords you want to filter out.
End of explanation
"""
!textkit filterwords --custom ../data/emma_customstops.txt ../outputdata/emma_lower.txt > ../outputdata/emma_custom_stops.txt
!textkit filterlengths -m 3 ../outputdata/emma_custom_stops.txt | textkit words2bigrams > ../outputdata/bigrams_emma.txt
!textkit tokens2counts ../outputdata/bigrams_emma.txt > ../outputdata/bigrams_emma_counts.txt
!head -n20 ../outputdata/bigrams_emma_counts.txt
"""
Explanation: Suppose you didn't want the names in there? Custom stopwords can be created in a file, one per line, and added as an argument to the filterwords command:
End of explanation
"""
text = nltk.word_tokenize("And now I present your cat with something completely different.")
tagged = nltk.pos_tag(text) # there are a few options for taggers, details in NLTK books
tagged
nltk.untag(tagged)
"""
Explanation: You could add more if you wanted.
Parts of Speech - Abbreviated POS
To do this, you need to make sure your nltk_data has the the MaxEnt Treebank POS tagger -- you can get it interactively with nltk.download() (on the models tab) - but we have it here already in the nltk_data directory.
End of explanation
"""
!textkit text2words ../data/books/Austen_Emma.txt | textkit tokens2pos | grep NNP | cut -d, -f1 > ../outputdata/emma_nouns.txt
!textkit tokens2counts ../outputdata/emma_nouns.txt > ../outputdata/emma_NNP_counts.csv
!head -n10 ../outputdata/emma_NNP_counts.csv
"""
Explanation: The Penn Treebank part of speech tags are these:
<img src="./img/TreebankPOSTags.png">
source: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
Parts of speech are used in analysis that's "deeper" than bag-of-words approaches. For instance, chunking (parsing for structure) may be used for entity identification and semantics. See http://www.nltk.org/book/ch07.html for a little more info, and the 2 Perkins NLTK books.
Note also that "real linguists" parse a sentence into a syntactic structure, which is usually a tree form.
<img src="img/sentence_tree.png">
(Source)
For instance, try out the Stanford NLP parser visually at http://corenlp.run/.
In TextKit at the command line:
This requires more Unix-foo, since Textkit doesn't have the full capability yet to do just a count of certain POS. We'll use grep to search for all the NNPs (proper names, or characters) and cut to get the first column (the word).
End of explanation
"""
!textkit text2words ../data/books/Austen_Emma.txt | textkit tokens2pos | grep VB | cut -d, -f1 > ../outputdata/emma_verbs.txt
!textkit tokens2counts ../outputdata/emma_verbs.txt > ../outputdata/emma_VB_counts.csv
!head -n20 ../outputdata/emma_VB_counts.csv
"""
Explanation: That's all proper names. Maybe not very interesting.
Let's look at the verbs now.
End of explanation
"""
!textkit text2words ../data/books/Austen_Emma.txt | textkit tokens2lower \
| textkit filterwords | textkit tokens2pos | grep VB | cut -d, -f1 > ../outputdata/emma_verbs.txt
!textkit tokens2counts ../outputdata/emma_verbs.txt > ../outputdata/emma_VB_counts.csv
!textkit text2words ../data/books/Austen_Pride.txt | textkit tokens2lower \
| textkit filterwords | textkit tokens2pos | grep VB | cut -d, -f1 > ../outputdata/pride_verbs.txt
!textkit tokens2counts ../outputdata/pride_verbs.txt > ../outputdata/pride_VB_counts.csv
"""
Explanation: Keep in mind that you can filter stopwords before you do this, too, as long as you lowercase the text first. Also note that "grep VB" will also match other forms of VB, like VBP!
Suppose you want to make a word cloud of just the verbs... without stopwords, and you want to compare two books by the same author, say Emma and Pride and Prejudice. Let's try it. (I'm using a \ to wrap the line here, so I don't need to use intermediate files for short commands.)
End of explanation
"""
# stemming removes affixes. This is the default choice for stemming although other algorithms exist.
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
stemmer.stem('believes')
# lemmatizing transforms to root words using grammar rules. It is slower. Stemming is more common.
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemmatizer.lemmatize('said', pos='v') # if you don't specify POS, you get zilch.
lemmatizer.lemmatize('cookbooks')
stemmer.stem('wicked')
lemmatizer.lemmatize("were", pos="v") # lemmatizing would allow us to collapse all forms of "be" into one token
# an apparently recommended compression recipe in Perkins Python 3 NLTK book? Not sure I agree.
stemmer.stem(lemmatizer.lemmatize('buses'))
"""
Explanation: If you load those files into wc_clouds_bars.html (at the bottom!), and use the provided extra stopwords, you'll get this:
<img src="img/two_clouds_bars.png">
Underneath the word clouds are simple bar charts, to allow you to more precisely see the top words (it's cut off at 150). This is one of the issues with word clouds: they lack precision in their display.
Another option for showing the difference more clearly is in analytic_wordlist.html.
<img src="img/analytic_wordlists.png">
Stemming / Lemmatizing
The goal is to merge data items that are the same at some "root" meaning level, and reduce the number of features in your data set. "Cats" and "Cat" might want to be treated as the same thing, from a topic or summarization perspective. You can really see this in the word clouds above...so many forms of the same word!
End of explanation
"""
def make_verbs_lemmas(filename, outputfile):
from collections import Counter
with open(filename, 'U') as handle:
emmav = handle.read()
emmaverbs = emmav.split('\n')
lemmaverbs = []
for verb in emmaverbs:
lemmaverbs.append(lemmatizer.lemmatize(verb, pos='v'))
counts = Counter(lemmaverbs)
with open(outputfile, 'w') as handle:
for key, value in counts.items():
if key:
handle.write(key + "," + str(value) + "\n")
print "wrote ", outputfile
make_verbs_lemmas("../outputdata/emma_verbs.txt", "../outputdata/emma_lemma_verbs.csv")
!head -n5 ../outputdata/emma_lemma_verbs.csv
make_verbs_lemmas("../outputdata/pride_verbs.txt", "../outputdata/pride_lemma_verbs.csv")
!head -n5 ../outputdata/pride_lemma_verbs.csv
"""
Explanation: Look at some of the clouds above. How would this be useful, do you think?
End of explanation
"""
|
cbcoutinho/gravBody2D | AnimationEmbedding.ipynb | gpl-3.0 | %pylab inline
"""
Explanation: Embedding Matplotlib Animations in IPython Notebooks
This notebook first appeared as a
blog post
on
Pythonic Perambulations.
License: BSD
(C) 2013, Jake Vanderplas.
Feel free to use, distribute, and modify with the above attribution.
<!-- PELICAN_BEGIN_SUMMARY -->
I've spent a lot of time on this blog working with matplotlib animations
(see the basic tutorial
here,
as well as my examples of animating
a quantum system,
an optical illusion,
the Lorenz system in 3D,
and recreating Super Mario).
Up until now, I have not combined the animations with IPython notebooks.
The problem is that so far the integration of IPython with matplotlib is
entirely static, while animations are by their nature dynamic. There are some
efforts in the IPython and matplotlib development communities to remedy this,
but it's still not an ideal setup.
I had an idea the other day about how one might get around this limitation
in the case of animations. By creating a function which saves an animation
and embeds the binary data into an HTML string, you can fairly easily create
automatically-embedded animations within a notebook.
<!-- PELICAN_END_SUMMARY -->
The Animation Display Function
As usual, we'll start by enabling the pylab inline mode to make the
notebook play well with matplotlib.
End of explanation
"""
from tempfile import NamedTemporaryFile
VIDEO_TAG = """<video controls>
<source src="data:video/x-m4v;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>"""
def anim_to_html(anim):
if not hasattr(anim, '_encoded_video'):
with NamedTemporaryFile(suffix='.mp4') as f:
anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264'])
video = open(f.name, "rb").read()
anim._encoded_video = video.encode("base64")
return VIDEO_TAG.format(anim._encoded_video)
"""
Explanation: Now we'll create a function that will save an animation and embed it in
an html string. Note that this will require ffmpeg or mencoder to be
installed on your system. For reasons entirely beyond my limited understanding
of video encoding details, this also requires using the libx264 encoding
for the resulting mp4 to be properly embedded into HTML5.
End of explanation
"""
from IPython.display import HTML
def display_animation(anim):
plt.close(anim._fig)
return HTML(anim_to_html(anim))
"""
Explanation: With this HTML function in place, we can use IPython's HTML display tools
to create a function which will show the video inline:
End of explanation
"""
from matplotlib import animation
# First set up the figure, the axis, and the plot element we want to animate
fig = plt.figure()
ax = plt.axes(xlim=(0, 2), ylim=(-2, 2))
line, = ax.plot([], [], lw=2)
# initialization function: plot the background of each frame
def init():
line.set_data([], [])
return line,
# animation function. This is called sequentially
def animate(i):
x = np.linspace(0, 2, 1000)
y = np.sin(2 * np.pi * (x - 0.01 * i))
line.set_data(x, y)
return line,
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=20, blit=True)
# call our new function to display the animation
display_animation(anim)
"""
Explanation: Example of Embedding an Animation
The result looks something like this -- we'll use a basic animation example
taken from my earlier
Matplotlib Animation Tutorial post:
End of explanation
"""
animation.Animation._repr_html_ = anim_to_html
"""
Explanation: Making the Embedding Automatic
We can go a step further and use IPython's display hooks to automatically
represent animation objects with the correct HTML. We'll simply set the
_repr_html_ member of the animation base class to our HTML converter
function:
End of explanation
"""
animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=20, blit=True)
"""
Explanation: Now simply creating an animation will lead to it being automatically embedded
in the notebook, without any further function calls:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/snu/cmip6/models/sandbox-1/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-1', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: SNU
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
Properties of gas exchange in ocean biogeochemistry
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particles
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
ZoranPandovski/al-go-rithms | machine_learning/tensorflow/Classification.ipynb | cc0-1.0
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import pandas as pd
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
train_path = tf.keras.utils.get_file(
"iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
"iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
train.head()
train_y = train.pop('Species')
test_y = test.pop('Species')
train.head()
def input_fn(features, labels, training=True, batch_size=256):
# Convert the inputs to a Dataset.
dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
# Shuffle and repeat if you are in training mode.
if training:
dataset = dataset.shuffle(1000).repeat()
return dataset.batch(batch_size)
my_feature_columns = []
for key in train.keys():
my_feature_columns.append(tf.feature_column.numeric_column(key=key))
print(my_feature_columns)
"""
Explanation: Dataset
This specific dataset separates flowers into 3 different classes of species.
- Setosa
- Versicolor
- Virginica
The following information is recorded for each flower.
- sepal length
- sepal width
- petal length
- petal width
End of explanation
"""
# Build a DNN with 2 hidden layers with 30 and 10 hidden nodes each.
classifier = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
# Two hidden layers of 30 and 10 nodes respectively.
hidden_units=[30, 10],
# The model must choose between 3 classes.
n_classes=3)
"""
Explanation: Building the Model
And now we are ready to choose a model. For classification tasks there is a variety of estimators/models to pick from. Some options are listed below.
- DNNClassifier (Deep Neural Network)
- LinearClassifier
We can choose either model, but the DNN seems to be the better choice, because we may not be able to find a linear correspondence in our data.
So let's build a model!
End of explanation
"""
classifier.train(
input_fn=lambda: input_fn(train, train_y, training=True),
steps=5000)
"""
Explanation: Training
End of explanation
"""
eval_result = classifier.evaluate(
input_fn=lambda: input_fn(test, test_y, training=False))
print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
"""
Explanation: Evaluation
End of explanation
"""
radhikapc/foundation-homework | homework_sql/Homework_4-Radhika_graded.ipynb | mit
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
"""
Explanation: Grade: 10 / 11
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
"""
# TA-COMMENT: You commented out the answer!
raw_data = numbers_str.split(",")
numbers = []
for i in raw_data:
numbers.append(int(i))
numbers
#max(numbers)
"""
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
"""
sorted(numbers)[11:]
"""
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
"""
# TA-COMMENT: (-1) This isn't sorted -- it doesn't match Allison's expected output.
[i for i in numbers if i % 3 == 0]
# TA-COMMENT: This would have been an acceptable answer.
[i for i in sorted(numbers) if i % 3 == 0]
"""
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
"""
from math import sqrt
[sqrt(i) for i in numbers if i < 100]
"""
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
"""
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
"""
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
"""
earth_diameter = [i['diameter'] for i in planets if i['name'] == "Earth"]
earth = int(earth_diameter[0])
[i['name'] for i in planets if i['diameter'] > 4 * earth]
"""
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
"""
# count = 0
# for i in planets:
#     count = count + i['mass']
# print(count)
sum([i['mass'] for i in planets])
"""
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
"""
[i['name'] for i in planets if "giant" in i['type']]
"""
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
"""
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
"""
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
"""
# TA-COMMENT: A better way of writing this regular expression: r"\b\w{4}\b \b\w{4}\b"
[line for line in poem_lines if re.search(r"\b\w\w\w\w\b \b\w\w\w\w\b", line)]
"""
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
"""
[line for line in poem_lines if re.search(r"\b\w{5}[^0-9a-zA-Z]?$", line)]
"""
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
"""
all_lines = " ".join(poem_lines)
"""
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
"""
re.findall(r"I (\b\w+\b)", all_lines)
"""
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
"""
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
"""
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
"""
# TA-COMMENT: Note that 'price' should contain floats, not strings!
menu = []
for item in entrees:
    menu_items = {}
    match = re.search(r"^(.*) \$(\d{1,2}\.\d{2})", item)
    menu_items['name'] = match.group(1)
    menu_items['price'] = float(match.group(2))  # store the price as a float, per the TA comment
    menu_items['vegetarian'] = bool(re.search(r"v$", item))  # "- v" suffix marks vegetarian dishes
    menu.append(menu_items)
menu
"""
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
End of explanation
"""
|
stevetjoa/stanford-mir | exercise_genre_recognition.ipynb | mit | import os
import urllib  # Python 2; on Python 3, use urllib.request.urlretrieve instead

filename_brahms = 'brahms_hungarian_dance_5.mp3'
url = "http://audio.musicinformationretrieval.com/" + filename_brahms
if not os.path.exists(filename_brahms):
    urllib.urlretrieve(url, filename=filename_brahms)
"""
Explanation: ← Back to Index
Exercise: Genre Recognition
Goals
Extract features from an audio signal.
Train a genre classifier.
Use the classifier to classify the genre in a song.
Step 1: Retrieve Audio
Download an audio file onto your local machine.
End of explanation
"""
librosa.load?
x_brahms, fs_brahms = librosa.load(filename_brahms, duration=120)
"""
Explanation: Load 120 seconds of an audio file:
End of explanation
"""
librosa.display.waveplot?
# Your code here:
"""
Explanation: Plot the time-domain waveform of the audio signal:
End of explanation
"""
IPython.display.Audio?
# Your code here:
"""
Explanation: Play the audio file:
End of explanation
"""
librosa.feature.mfcc?
n_mfcc = 12
mfcc_brahms = librosa.feature.mfcc(x_brahms, sr=fs_brahms, n_mfcc=n_mfcc).T
"""
Explanation: Step 2: Extract Features
For each segment, compute the MFCCs. Experiment with n_mfcc to select a different number of coefficients, e.g. 12.
End of explanation
"""
mfcc_brahms.shape
"""
Explanation: We transpose the result to accommodate scikit-learn which assumes that each row is one observation, and each column is one feature dimension:
End of explanation
"""
scaler = sklearn.preprocessing.StandardScaler()
mfcc_brahms_scaled = scaler.fit_transform(mfcc_brahms)
"""
Explanation: Scale the features to have zero mean and unit variance:
End of explanation
"""
mfcc_brahms_scaled.mean(axis=0)
mfcc_brahms_scaled.std(axis=0)
"""
Explanation: Verify that the scaling worked:
End of explanation
"""
filename_busta = 'busta_rhymes_hits_for_days.mp3'
url = "http://audio.musicinformationretrieval.com/" + filename_busta
urllib.urlretrieve?
# Your code here. Download the second audio file in the same manner as the first audio file above.
"""
Explanation: Step 2b: Repeat steps 1 and 2 for another audio file.
End of explanation
"""
librosa.load?
# Your code here. Load the second audio file in the same manner as the first audio file.
# x_busta, fs_busta =
"""
Explanation: Load 120 seconds of an audio file:
End of explanation
"""
IPython.display.Audio?
"""
Explanation: Listen to the second audio file.
End of explanation
"""
plt.plot?
# See http://musicinformationretrieval.com/stft.html for more details on displaying spectrograms.
librosa.feature.melspectrogram?
librosa.amplitude_to_db?
librosa.display.specshow?
"""
Explanation: Plot the time-domain waveform and spectrogram of the second audio file. In what ways does the time-domain waveform look different than the first audio file? What differences in musical attributes might this reflect? What additional insights are gained from plotting the spectrogram? Explain.
End of explanation
"""
librosa.feature.mfcc?
# Your code here:
# mfcc_busta =
mfcc_busta.shape
"""
Explanation: Extract MFCCs from the second audio file. Be sure to transpose the resulting matrix such that each row is one observation, i.e. one set of MFCCs. Also be sure that the shape and size of the resulting MFCC matrix is equivalent to that for the first audio file.
End of explanation
"""
scaler.transform?
# Your code here:
# mfcc_busta_scaled =
"""
Explanation: Scale the resulting MFCC features to have approximately zero mean and unit variance. Re-use the scaler from above.
End of explanation
"""
mfcc_busta_scaled.mean?
mfcc_busta_scaled.std?
"""
Explanation: Verify that the mean of the MFCCs for the second audio file is approximately equal to zero and the variance is approximately equal to one.
End of explanation
"""
features = numpy.vstack((mfcc_brahms_scaled, mfcc_busta_scaled))
features.shape
"""
Explanation: Step 3: Train a Classifier
Concatenate all of the scaled feature vectors into one feature table.
End of explanation
"""
labels = numpy.concatenate((numpy.zeros(len(mfcc_brahms_scaled)), numpy.ones(len(mfcc_busta_scaled))))
"""
Explanation: Construct a vector of ground-truth labels, where 0 refers to the first audio file, and 1 refers to the second audio file.
End of explanation
"""
# Support Vector Machine
model = sklearn.svm.SVC()
"""
Explanation: Create a classifer model object:
End of explanation
"""
model.fit?
# Your code here
"""
Explanation: Train the classifier:
End of explanation
"""
x_brahms_test, fs_brahms = librosa.load(filename_brahms, duration=10, offset=120)
x_busta_test, fs_busta = librosa.load(filename_busta, duration=10, offset=120)
"""
Explanation: Step 4: Run the Classifier
To test the classifier, we will extract an unused 10-second segment from each of the earlier audio files as test excerpts:
End of explanation
"""
IPython.display.Audio?
IPython.display.Audio?
"""
Explanation: Listen to both of the test audio excerpts:
End of explanation
"""
librosa.feature.mfcc?
librosa.feature.mfcc?
"""
Explanation: Compute MFCCs from both of the test audio excerpts:
End of explanation
"""
scaler.transform?
scaler.transform?
"""
Explanation: Scale the MFCCs using the previous scaler:
End of explanation
"""
numpy.vstack?
"""
Explanation: Concatenate all test features together:
End of explanation
"""
numpy.concatenate?
"""
Explanation: Concatenate all test labels together:
End of explanation
"""
model.predict?
"""
Explanation: Compute the predicted labels:
End of explanation
"""
score = model.score(test_features, test_labels)
score
"""
Explanation: Finally, compute the accuracy score of the classifier on the test data:
End of explanation
"""
# Your code here.
"""
Explanation: Currently, the classifier returns one prediction for every MFCC vector in the test audio signal. Can you modify the procedure above such that the classifier returns a single prediction for a 10-second excerpt?
End of explanation
"""
df_brahms = pandas.DataFrame(mfcc_brahms_test_scaled)
df_brahms.shape
df_brahms.head()
df_busta = pandas.DataFrame(mfcc_busta_test_scaled)
"""
Explanation: Step 5: Analysis in Pandas
Read the MFCC features from the first test audio excerpt into a data frame:
End of explanation
"""
df_brahms.corr()
df_busta.corr()
"""
Explanation: Compute the pairwise correlation of every pair of 12 MFCCs against one another for both test audio excerpts. For each audio excerpt, which pair of MFCCs are the most correlated? least correlated?
End of explanation
"""
df_brahms.plot.scatter?
"""
Explanation: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
End of explanation
"""
df_busta.plot.scatter?
"""
Explanation: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
End of explanation
"""
df_brahms[0].plot.hist()
df_busta[11].plot.hist()
"""
Explanation: Plot a histogram of all values across a single MFCC, i.e. MFCC coefficient number. Repeat for a few different MFCC numbers:
End of explanation
"""
|
arongdari/sparse-graph-prior | notebooks/SimulateSparseGraph.ipynb | mit | from operator import itemgetter
import numpy as np
from scipy.special import gamma
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from sgp import GGPrnd, BSgraphrnd, GGPgraphrnd
from sgp.GraphUtil import compute_growth_rate, degree_distribution, degree_one_nodes
%matplotlib inline
"""
Explanation: Generalised Gamma Process (GGP)
Generalised gamma process with intensity
$\lambda(w) = \frac{\alpha}{\Gamma(1-\sigma)} w^{(-1-\sigma)} \exp(-\tau w) $
End of explanation
"""
def levy_intensity(w, alpha, sigma, tau):
return alpha/gamma(1.-sigma) * w**(-1.-sigma) * np.exp(-tau*w)
"""
Explanation: Intensity plot of GGP measure
We plot the intensity function to give some intuition about GGP measure.
End of explanation
"""
alpha = 20.
sigma = 0.1
tau = 1.
intensity = levy_intensity(np.linspace(0,0.001,1000), alpha, sigma, tau)
sigma = 0.5
intensity2 = levy_intensity(np.linspace(0,0.001,1000), alpha, sigma, tau)
sigma = 0.9
intensity3 = levy_intensity(np.linspace(0,0.001,1000), alpha, sigma, tau)
#finite activity case
sigma = -1.
intensity4 = levy_intensity(np.linspace(0,10,1000), alpha, sigma, tau)
sigma = -2.5
intensity5 = levy_intensity(np.linspace(0,10,1000), alpha, sigma, tau)
sigma = -5.
intensity6 = levy_intensity(np.linspace(0,10,1000), alpha, sigma, tau)
"""
Explanation: Typically, sigma is the most important parameter of the GGP; it controls the number of clusters in mixture models.
Reference: Lijoi, A., Mena, R. H., & Prunster, I. (2007). Controlling the reinforcement in Bayesian non-parametric mixture models. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 69(4), 715–740. http://doi.org/10.1111/j.1467-9868.2007.00609.x
End of explanation
"""
w_grid = np.linspace(0, 0.001, 1000)  # must match the grid the intensities were evaluated on
plt.loglog(w_grid[1:], intensity[1:], label='$\sigma = 0.1$')
plt.loglog(w_grid[1:], intensity2[1:], label='$\sigma = 0.5$')
plt.loglog(w_grid[1:], intensity3[1:], label='$\sigma = 0.9$')
plt.legend()
plt.title('GGP intensity function, alpha=20, tau=1')
"""
Explanation: Infinite activity case
End of explanation
"""
plt.plot(np.linspace(0,10,1000)[1:], intensity4[1:], label='$\sigma = -1$')
plt.plot(np.linspace(0,10,1000)[1:], intensity5[1:], label='$\sigma = -2.5$')
plt.plot(np.linspace(0,10,1000)[1:], intensity6[1:], label='$\sigma = -5$')
plt.legend()
plt.title('GGP intensity function for finite activity case, alpha=20, tau=1')
"""
Explanation: As sigma gets closer to 1, the intensity places more weight in its tail.
Finite-activity case
End of explanation
"""
alpha = 20.
tau = 1.
sigma = 0.5
w, T = GGPrnd(alpha, sigma, tau)
thetas = np.random.random(size = w.size)*alpha
"""
Explanation: Simulating Random Measure Drawn from GGP
A random measure is drawn from GGP:
$\mu \sim \text{GGP}(\alpha, \sigma, \tau)$
$\mu = \sum_{i=1}^{\infty} w_i \delta_{\theta_i}$
It's not possible to simulate an infinite dimensional GGP. We use an adaptive thinning method proposed by Favaro and Teh.
Reference: Favaro, S., & Teh, Y. W. (2013). MCMC for Normalized Random Measure Mixture Models. Statistical Science, 28(3), 335–359. http://doi.org/10.1214/13-STS422
End of explanation
"""
print(len(w))
plt.figure(figsize=(12,3))
plt.vlines(thetas, ymin=0, ymax=w)
plt.xlabel('$\Theta$')
plt.ylabel('w')
plt.title('$\mu = \sum_i w_i \delta_{\Theta_i}$')
"""
Explanation: number of atoms:
End of explanation
"""
K = 4
alpha = 20.
tau = 1.
sigma = 0.5
# eta[i, j] is the interaction rate between blocks i and j; larger values mean denser connections
eta = np.array([[10, 500, 10, 500],[500, 10, 500, 10],[10, 500, 10, 500],[500, 10, 500, 10]])
BG, w, w_rem, alpha, sigma, tau, eta, group, icount = BSgraphrnd(alpha, sigma, tau, K, eta, K*2.)
g = BG.toarray() > 0
"""
Explanation: As the figure shows, most atoms have very small weights, while a few atoms carry large weights (> 0.5).
Simulating Block-structured Sparse Graph
We will simulate a block-structured sparse graph as suggested by Herlau, T. et al.
Reference: Herlau, T., Schmidt, M. N., & Mørup, M. (2015). Completely random measures for modelling block-structured networks, (1), 1–15. Retrieved from http://arxiv.org/abs/1507.02925
Sample a graph with 4-blocks having predefined interaction rate
eta is the matrix of interaction rates between blocks; a large value means a high interaction rate between the two blocks.
End of explanation
"""
print("Graph size = ", BG.shape)
print("Total Edge Count =", BG.sum())
"""
Explanation: Basic stat of random graph
End of explanation
"""
#g_base_idx = [i[0] for i in sorted(enumerate(group), key=itemgetter(1))]
idx = [x for x in enumerate(group)]
np.random.shuffle(idx)
g_base_idx = [i[0] for i in sorted(idx, key=itemgetter(1))]
sorted_g = g[np.ix_(g_base_idx, g_base_idx)]
"""
Explanation: Sort nodes according to block assignment
End of explanation
"""
g_node = [np.sum(group == k) for k in range(K)]
g_idx = np.cumsum(g_node)
print(g_node)
"""
Explanation: number of nodes for each block
End of explanation
"""
plt.figure(figsize=(12, 12))
ax = sns.heatmap(sorted_g, square=True, xticklabels=False, yticklabels=False)
ax.set_xticks(g_idx)
ax.set_xticklabels(['G-%d' % d for d in range(K)])
ax.set_yticks(g_idx[-1] - g_idx)
ax.set_yticklabels(['G-%d' % d for d in range(K)])
ax.hlines(g_idx[-1] - g_idx, xmin=0, xmax=g_idx[-1], linestyles='dashed')
ax.vlines(g_idx, ymin=0, ymax=g_idx[-1], linestyles='dashed')
"""
Explanation: Plot Sampled Graph
End of explanation
"""
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
ax = sns.heatmap(eta, square=True)
ax.set_xlabel('Blocks');ax.set_ylabel('Blocks')
ax.set_title('Predefined interaction rate between blocks')
plt.subplot(1, 2, 2)
ax = sns.heatmap(icount, square=True)
ax.set_xlabel('Blocks');ax.set_ylabel('Blocks')
ax.set_title('Empirical interaction rate between blocks')
"""
Explanation: In the above figure, verticies has been sorted according to their block assignments.
Interaction between blocks
End of explanation
"""
edge_degrees = np.sum(g, 0) + np.sum(g, 1) - np.diag(g)
max_degree = np.max(edge_degrees)
degree_dist = np.array([np.sum(edge_degrees==k) for k in range(1, int(max_degree+1))])
plt.loglog(range(1, len(degree_dist) + 1), degree_dist + 1)  # x starts at degree 1 to match the counts; +1 avoids log(0)
plt.title("Node degree distribution")
plt.ylabel("# of nodes")
plt.xlabel("Node degree")
plt.xlim([0, max_degree])
plt.ylim([0, np.max(degree_dist)])
"""
Explanation: The left figure shows the predefined interaction-rate parameter used to simulate the graph; the right figure shows the empirical number of edges between blocks in the sampled graph. Together they show that the sampled graph reproduces the intended interaction pattern between groups.
Node degree distribution of block graph
End of explanation
"""
alpha = 80
G, D, w, w_rem, alpha, sigma, tau = GGPgraphrnd(alpha, sigma, tau)
sg = G.toarray() > 0
"""
Explanation: Simulating Sparse Graph
We now simulate a sparse graph without any group structure, as suggested by Caron and Fox.
Reference: Caron, F., & Fox, E. B. (2015). Sparse graphs using exchangeable random measures, 1–64. Retrieved from http://arxiv.org/abs/1401.1137
End of explanation
"""
print("Graph size = ", G.shape)
print("Total Edge Count =", G.sum())
plt.figure(figsize=(10, 10))
ax = sns.heatmap(sg, square=True, xticklabels=False, yticklabels=False)
"""
Explanation: Note that G is a symmetric graph.
Basic stats of the random graph G
End of explanation
"""
edge_degrees = np.sum(sg, 0)
max_degree = np.max(edge_degrees)
degree_dist = np.array([np.sum(edge_degrees==k) for k in range(1, int(max_degree+1))])
plt.loglog(range(1, len(degree_dist) + 1), degree_dist + 1)  # x starts at degree 1 to match the counts; +1 avoids log(0)
plt.title("Node degree distribution")
plt.ylabel("# of nodes")
plt.xlabel("Node degree")
plt.xlim([0, max_degree])
plt.ylim([0, np.max(degree_dist)])
"""
Explanation: Node degree distribution of sparse graph
End of explanation
"""
s_rate = compute_growth_rate(G)
b_rate = compute_growth_rate(BG+BG.T)
xlim = min(len(s_rate), len(b_rate)) - 1
ylim = max(s_rate[xlim], b_rate[xlim])
plt.plot(s_rate, label='Sparse')
plt.plot(b_rate, label='Block-Sparse')
plt.xlim((0, xlim))
plt.ylim((0, ylim))
plt.legend(loc='upper left')
"""
Explanation: Growth rate of random sparse graph
End of explanation
"""
|
cxhernandez/msmbuilder | examples/Fs-Peptide-command-line.ipynb | lgpl-2.1 | # Work in a temporary directory
import tempfile
import os
os.chdir(tempfile.mkdtemp())
# Since this is running from an IPython notebook,
# we prefix all our commands with "!"
# When running on the command line, omit the leading "!"
! msmb -h
"""
Explanation: Modeling dynamics of FS Peptide
This example shows a typical, basic usage of the MSMBuilder command line to model dynamics of a protein system.
End of explanation
"""
! msmb FsPeptide --data_home ./
! tree
"""
Explanation: Get example data
End of explanation
"""
# Remember '\' is the line-continuation marker
# You can enter this command on one line
! msmb DihedralFeaturizer \
--out featurizer.pkl \
--transformed diheds \
--top fs_peptide/fs-peptide.pdb \
--trjs "fs_peptide/*.xtc" \
--stride 10
"""
Explanation: Featurization
The raw (x, y, z) coordinates from the simulation do not respect the translational and rotational symmetry of our problem. A Featurizer transforms cartesian coordinates into other representations. Here we use the DihedralFeaturizer to turn our data into phi and psi dihedral angles. Observe that the 264*3-dimensional space is reduced to 84 dimensions.
End of explanation
"""
! msmb RobustScaler \
-i diheds \
--transformed scaled_diheds.h5
"""
Explanation: Preprocessing
Since the range of values in our raw data can vary widely from feature to feature, we can scale values to reduce bias. Here we use the RobustScaler to center and scale our dihedral angles by their respective interquartile ranges.
End of explanation
"""
! msmb tICA -i scaled_diheds.h5 \
--out tica_model.pkl \
--transformed tica_trajs.h5 \
--n_components 4 \
--lag_time 2
"""
Explanation: Intermediate kinetic model: tICA
tICA is similar to principal component analysis (see "tICA vs. PCA" example). Note that the 84-dimensional space is reduced to 4 dimensions.
End of explanation
"""
from msmbuilder.dataset import dataset
ds = dataset('tica_trajs.h5')
%matplotlib inline
import msmexplorer as msme
import numpy as np
txx = np.concatenate(ds)
msme.plot_histogram(txx)
"""
Explanation: tICA Histogram
We can histogram our data projected onto the two slowest degrees of freedom (as found by tICA). This step is done from Python rather than the command line.
End of explanation
"""
! msmb MiniBatchKMeans -i tica_trajs.h5 \
--transformed labeled_trajs.h5 \
--out clusterer.pkl \
--n_clusters 100 \
--random_state 42
"""
Explanation: Clustering
Conformations need to be clustered into states (sometimes written as microstates). We cluster based on the tICA projections to group conformations that interconvert rapidly. Note that we transform our trajectories from the 4-dimensional tICA space into a 1-dimensional cluster index.
End of explanation
"""
! msmb MarkovStateModel -i labeled_trajs.h5 \
--out msm.pkl \
--lag_time 2
"""
Explanation: MSM
We can construct an MSM from the labeled trajectories
End of explanation
"""
from msmbuilder.utils import load
msm = load('msm.pkl')
clusterer = load('clusterer.pkl')
assignments = clusterer.partial_transform(txx)
assignments = msm.partial_transform(assignments)
from matplotlib import pyplot as plt
msme.plot_free_energy(txx, obs=(0, 1), n_samples=10000,
pi=msm.populations_[assignments],
xlabel='tIC 1', ylabel='tIC 2')
plt.scatter(clusterer.cluster_centers_[msm.state_labels_, 0],
clusterer.cluster_centers_[msm.state_labels_, 1],
s=1e4 * msm.populations_, # size by population
c=msm.left_eigenvectors_[:, 1], # color by eigenvector
cmap="coolwarm",
zorder=3
)
plt.colorbar(label='First dynamical eigenvector')
plt.tight_layout()
"""
Explanation: Plot Free Energy Landscape
Subsequent plotting and analysis should be done from Python
End of explanation
"""
|
tpin3694/tpin3694.github.io | python/test_if_an_output_is_close_to_a_value.ipynb | mit | import unittest
import sys
"""
Explanation: Title: Test If Output Is Close To A Value
Slug: test_if_an_output_is_close_to_a_value
Summary: Test if an output is close to a value in Python.
Date: 2016-01-23 12:00
Category: Python
Tags: Testing
Authors: Chris Albon
Interested in learning more? Here are some good books on unit testing in Python: Python Testing: Beginner's Guide and Python Testing Cookbook.
Preliminaries
End of explanation
"""
def add(x, y):
return x + y
"""
Explanation: Create Function To Be Tested
End of explanation
"""
# Create a test case
class TestAdd(unittest.TestCase):
# Create the unit test
def test_add_two_floats_roughly_equals_11(self):
# Test if add(4.48293848, 6.5023845) returns roughly (to 1 place) 11 (actual sum: 10.98532298)
self.assertAlmostEqual(11, add(4.48293848, 6.5023845), places=1)
"""
Explanation: Create Test
End of explanation
"""
# Run the unit test (and don't shut down the Jupyter Notebook)
unittest.main(argv=['ignored', '-v'], exit=False)
"""
Explanation: Run Test
End of explanation
"""
|
ioggstream/python-course | python-for-sysadmin/notebooks/01_file_management.ipynb | agpl-3.0 | import os
import os.path
import shutil
import errno
import glob
import sys
"""
Explanation: Path Management
Goal
Normalize paths on different platform
Create, copy and remove folders
Handle errors
Modules
End of explanation
"""
# Be python3 ready
from __future__ import unicode_literals, print_function
"""
Explanation: See also:
pathlib on Python 3.4+
End of explanation
"""
import os
import sys
basedir, hosts = "/", "etc/hosts"
# sys.platform shows the current operating system
if sys.platform.startswith('win'):
basedir = 'c:/windows/system32/drivers'
print(basedir)
# Join removes redundant "/"
hosts = os.path.join(basedir, hosts)
print(hosts)
# normpath fixes "/" orientation
# and redundant ".."
hosts = os.path.normpath(hosts)
print("Normalized path is", hosts)
# realpath resolves symlinks (on unix)
! mkdir -p /tmp/course
! ln -sf /etc/hosts /tmp/course/hosts
realfile = os.path.realpath("/tmp/course/hosts")
print(realfile)
# Exercise: given the following path
base, path = "/usr", "/bin/foo"
# Which is the expected output of result?
result = os.path.join(base, path)
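As a check on the exercise above: `join` discards everything before the last absolute component, which is easy to verify (using `posixpath` so the result is identical on every platform):

```python
import posixpath

# with a relative second component, the pieces are joined as expected
joined_rel = posixpath.join("/usr", "bin/foo")
# a later absolute component discards everything before it
joined_abs = posixpath.join("/usr", "/bin/foo")
```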
"""
Explanation: Multiplatform Path Management
1- The os.path module seems verbose
but it's the best way to manage paths. It's:
- safe
- multiplatform
2- Here we check the operating system
and prepend the right path
End of explanation
"""
# os and shutil supports basic file operations
# like recursive copy and tree creation.
from os import makedirs
makedirs("/tmp/course/foo/bar")
# while os.path can be used to test file existence
from os.path import isdir
assert isdir("/tmp/course/foo/bar")
# Check the directory content with either one of
!tree /tmp/course || find /tmp/course
# We can use exception handlers to check
# what happened.
try:
# python2 does not allow to ignore
# already existing directories
# and raises an OSError
makedirs("/tmp/course/foo/bar")
except OSError as e:
# Just use the errno module to
# check the error value
print(e)
import errno
assert e.errno == errno.EEXIST
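On Python 3.2 and later, the same intent fits in one call via the `exist_ok` flag, with no exception handler needed (sketched in a throwaway temp directory):

```python
import os
import tempfile

d = os.path.join(tempfile.mkdtemp(), "foo", "bar")
os.makedirs(d)
os.makedirs(d, exist_ok=True)  # second call is a no-op instead of raising OSError
```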
from shutil import copytree, rmtree
# Now copy recursively two directories
# and check the result
copytree("/tmp/course/foo", "/tmp/course/foo2")
assert isdir("/tmp/course/foo2/bar")
#This command should work on both unix and windows
!dir /tmp/course/foo2/
# Now remove it and check the outcome
rmtree("/tmp/course/foo")
assert not isdir("/tmp/course/foo/bar")
#This command should work on both unix and windows
!dir /tmp/course/
# Cleanup created files
rmtree("/tmp/course")
"""
Explanation: Manage trees
Python modules can:
- manage directory trees
- and basic errors
End of explanation
"""
|
marcelomiky/PythonCodes | Coursera/IDSP/Introduction to DS in Python.ipynb | mit | def add_numbers(x,y):
return x+y
a = add_numbers
a(1,2)
x = [1, 2, 4]
x.insert(2, 3) # list.insert(position, item)
x
x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
x = 'This is a string'
pos = 0
for i in range(len(x) + 1):
print(x[0:pos])
pos += 1
pos -= 2
for i in range(len(x) + 1):
print(x[0:pos])
pos -= 1
"""
Explanation:
End of explanation
"""
firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname)
secondname = 'Christopher Arthur Hansen Brooks'.split(' ')[1]
secondname
thirdname = 'Christopher Arthur Hansen Brooks'.split(' ')[2]
thirdname
contacts = {'Manuel': 'manuel@company.com', 'Bill': 'bill@ig.com'}  # avoid shadowing the built-in name `dict`
contacts['Manuel']
for name in contacts:  # iterating a dict yields its keys
    print(contacts[name])
for email in contacts.values():
    print(email)
for name in contacts.keys():
    print(name)
for name, email in contacts.items():
    print(name)
    print(email)
sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
import csv
# set floating point precision for printing to 2
%precision 2
with open('mpg.csv') as csvfile: #read the csv file
mpg = list(csv.DictReader(csvfile)) # https://docs.python.org/2/library/csv.html
# csv.DictReader(csvfile, fieldnames=None, restkey=None, restval=None, dialect='excel', *args, **kwds)
mpg[:3] # The first three dictionaries in our list.
len(mpg) # list of 234 dictionaries
mpg[0].keys() # the names of the colums
# How to find the average cty fuel economy across all cars.
# All values in the dictionaries are strings, so we need to
# convert to float.
sum(float(d['cty']) for d in mpg) / len(mpg)
# Similarly this is how to find the average hwy fuel economy across
# all cars.
sum(float(d['hwy']) for d in mpg) / len(mpg)
# Use set to return the unique values for the number of cylinders
# the cars in our dataset have.
cylinders = set(d['cyl'] for d in mpg)
cylinders
# A set is an unordered collection of items. Every element is unique
# (no duplicates) and must be immutable (which cannot be changed).
# >>> x = [1, 1, 2, 2, 2, 2, 2, 3, 3]
# >>> set(x)
# set([1, 2, 3])
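The commented doctest above can be run directly; a minimal, self-contained version:

```python
x = [1, 1, 2, 2, 2, 2, 2, 3, 3]
unique = set(x)        # duplicates collapse away
print(sorted(unique))  # sets are unordered, so sort for a stable view -> [1, 2, 3]
```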
CtyMpgByCyl = [] # empty list to start the calculations
for c in cylinders: # iterate over all the cylinder levels
summpg = 0
cyltypecount = 0
for d in mpg: # iterate over all dictionaries
if d['cyl'] == c: # if the cylinder level type matches,
summpg += float(d['cty']) # add the cty mpg
cyltypecount += 1 # increment the count
CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl
# the city fuel economy appears to be decreasing as the number of cylinders increases
# Use set to return the unique values for the class types in our dataset.
vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass
# how to find the average hwy mpg for each class of vehicle in our dataset.
HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
summpg = 0
vclasscount = 0
for d in mpg: # iterate over all dictionaries
if d['class'] == t: # if the cylinder amount type matches,
summpg += float(d['hwy']) # add the hwy mpg
vclasscount += 1 # increment the count
HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass
"""
Explanation: ^ Looks like Augusto de Campos' poems ^.^
End of explanation
"""
import datetime as dt
import time as tm
# time returns the current time in seconds since the Epoch. (January 1st, 1970)
tm.time()
# Convert the timestamp to datetime.
dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow
dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime
# timedelta is a duration expressing the difference between two dates.
delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
a = (1, 2)
type(a)
['a', 'b', 'c'] + [1, 2, 3]
type(lambda x: x+1)
[x**2 for x in range(10)]
sentence = "Python é muito legal"  # avoid shadowing the built-in str
lista = []
soma = 0
lista = sentence.split()
lista
len(lista[0])
for i in lista:
soma += len(i)
soma
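For comparison, the word-length sum above can also be written as a single generator expression (same sentence, same result):

```python
sentence = "Python é muito legal"
total = sum(len(word) for word in sentence.split())
print(total)  # -> 17
```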
"""
Explanation: The Python Programming Language: Dates and Times
End of explanation
"""
|
rfinn/LCS | notebooks/galaxies-missing-in-simard11.ipynb | gpl-3.0 | %run ~/Dropbox/pythonCode/LCSanalyzeblue.py
t = s.galfitflag & s.lirflag & s.sizeflag & ~s.agnflag & s.sbflag
galfitnogim = t & ~s.gim2dflag
sum(galfitnogim)
"""
Explanation: Galaxies that are missing from Simard+2011
Summary
* A total of 44 galaxies are not in galfit sample
* 31/44 are not in the SDSS catalog, so these would not have been targeted by Simard+2011
* this is not true because simard drew from phot catalog. Need to check all.
* 1 arcmin cutouts show: 5 have a bright star overlapping or nearby the galaxy, 2
have a close companion.
* 69538 has problem with NSA coords
* 68342 - not in DR7 for some reason
By galaxy:
NSAID 70630 (202.306824, 11.275839)
STATIONARY BAD_MOVING_FIT BINNED1 INTERP COSMIC_RAY CHILD
r = 13.87 (too bright)
NSAID 70685 (202.269455, 12.238585)
DEBLENDED_AT_EDGE STATIONARY MOVED BINNED1 DEBLENDED_AS_PSF INTERP CHILD,
affected by point source that is offset from center of galaxy. might a foreground star.
(blended)
NSAID 43712 (244.129181, 35.708172)
BINNED1 INTERP COSMIC_RAY CHILD
r = 13.66 (too bright)
NSAID 69538 (244.060699, 34.258434)
NSA has problem with coords, chose nearby pt source rather than galaxy
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 18.83 (too faint)
NSAID 18158 (219.578888, 3.639615)
PSF_FLUX_INTERP INTERP_CENTER BAD_MOVING_FIT BINNED1 NOTCHECKED SATURATED INTERP COSMIC_RAY CHILD
r = 15.31, ext_r = 0.11
(not sure)
NSAID 68283 (242.047577, 24.507439)
not in dr7?
NSAID 68286 (241.749313, 24.160772)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY NOPETRO NODEBLEND CHILD BLENDED PrimTarget
r = 17.66, ext_r = 0.2 (too faint)
NSAID 68299 (240.918945, 24.602676)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 15.4, ext_r = 0.2
(not sure) why this is not in simard
NSAID 68342 (241.297867, 24.960102)
does not come up under dr7 search. get nearby object instead
NSAID 113068 (175.995667, 20.077011)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 13.74, ext_r = 0.06 (too bright)
NSAID 72631 (230.999481, 8.508963)
MAYBE_CR BAD_MOVING_FIT MOVED BINNED1 INTERP CHILD
Type probably not 3
r = 16.98, ext_r = 0.1
NSAID 103927 (194.490204, 27.443319)
STATIONARY BINNED1 INTERP CHILD
r = 17.64, ext_r = 0.02 (maybe petro r is too faint?)
(too faint?)
NSAID 103966 (194.025421, 27.677467)
DEBLEND_DEGENERATE BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY NODEBLEND CHILD BLENDED
r = 15.13, ext_r = 0.02
(not sure) why this isn't in simard
End of explanation
"""
s.s.ISDSS[galfitnogim]
print(sum(s.s.ISDSS[galfitnogim] == -1))
"""
Explanation: Galaxies not in SDSS phot catalog
End of explanation
"""
galfitsdssnogim = galfitnogim & (s.s.ISDSS != -1)
sum(galfitsdssnogim)
s.s.NSAID[galfitsdssnogim]
"""
Explanation: Galaxies in SDSS but no B/T fit
End of explanation
"""
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table
try:
# Python 3.x
from urllib.parse import urlencode
from urllib.request import urlretrieve
except ImportError:
# Python 2.x
from urllib import urlencode
from urllib import urlretrieve
import IPython.display
r = 22.5 - 2.5*log10(s.s.NMGY[:,4])
flag = galfitnogim & (r >= 14.) & (r <= 18.)
print(sum(flag))
ra = s.s.RA[flag]
dec = s.s.DEC[flag]
ids = s.s.NSAID[flag]
coords = SkyCoord(ra*u.deg, dec*u.deg, frame='icrs')
testcoord = coords[0]
impix = 100
imsize = 1*u.arcmin
cutoutbaseurl = 'http://skyservice.pha.jhu.edu/DR12/ImgCutout/getjpeg.aspx'
for i in range(len(coords.ra)):
query_string = urlencode(dict(ra=coords[i].ra.deg,
dec=coords[i].dec.deg,
width=impix, height=impix,
scale=imsize.to(u.arcsec).value/impix))
url = cutoutbaseurl + '?' + query_string
# this downloads the image to your disk
urlretrieve(url, 'images/'+str(ids[i])+'_SDSS_cutout.jpg')
print('NSAID %i (%10.6f, %10.6f)' % (ids[i], ra[i], dec[i]))
t = IPython.display.Image('images/'+str(ids[i])+'_SDSS_cutout.jpg')
IPython.display.display(t)
for i in range(10, len(ids)):
    print('* NSAID %i (%10.6f, %10.6f)' % (ids[i], ra[i], dec[i]))
    print('http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=%.5f&dec=%.5f' % (ra[i], dec[i]))
"""
Explanation: Download SDSS Images
End of explanation
"""
for i in range(len(coords.ra)):
    query_string = urlencode(dict(ra=coords[i].ra.deg,
                                  dec=coords[i].dec.deg,
                                  width=impix, height=impix,
                                  scale=imsize.to(u.arcsec).value/impix))
    url = cutoutbaseurl + '?' + query_string
    # this downloads the image to your disk
    urlretrieve(url, 'images/'+str(ids[i])+'_SDSS_cutout.jpg')
    print(i, ids[i], coords[i].ra, coords[i].dec)
    print('NSAID %i (%10.6f, %10.6f)' % (ids[i], coords[i].ra.deg, coords[i].dec.deg))
    t = IPython.display.Image('images/'+str(ids[i])+'_SDSS_cutout.jpg')
    IPython.display.display(t)
for i in range(len(coords.ra)):
    print('NSAID %i (%10.6f, %10.6f)' % (ids[i], ra[i], dec[i]))
ids = where(galfitnogim & (s.s.ISDSS == -1))
print(ids)
"""
Explanation: NSAID 69538 (244.060699, 34.258434)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=244.06070&dec=34.25843
too faint according to DR7 catalog
NSAID 18158 (219.578888, 3.639615)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=219.57889&dec=3.63961
PSF_FLUX_INTERP INTERP_CENTER BAD_MOVING_FIT BINNED1 NOTCHECKED SATURATED INTERP COSMIC_RAY CHILD
(saturated)
NSAID 165409 (220.235657, 3.527517)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=220.23566&dec=3.52752
DEBLEND_NOPEAK INTERP_CENTER BINNED1 DEBLENDED_AS_PSF INTERP CHILD
(blended)
NSAID 68283 (242.047577, 24.507439)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=242.04758&dec=24.50744
not in DR7
NSAID 68286 (241.749313, 24.160772)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=241.74931&dec=24.16077
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY NOPETRO NODEBLEND CHILD BLENDED
NOT IN SIMARD
(too faint maybe?)
NSAID 68299 (240.918945, 24.602676)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=240.91895&dec=24.60268
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
(not sure)
NOT IN SIMARD
NSAID 68342 (241.297867, 24.960102)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=241.29787&dec=24.96010
TOO_FEW_GOOD_DETECTIONS PSF_FLUX_INTERP INTERP_CENTER BINNED1 DEBLENDED_AS_PSF INTERP NOPETRO CHILD
BAD COORDS in DR7? Image on website above does not match with galaxy.
NSAID 166124 (230.213974, 8.623065)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=230.21397&dec=8.62306
BINNED1 INTERP CHILD
NOT IN SIMARD CAT
(not sure)
NSAID 146012 (229.295181, 6.941795)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=229.29518&dec=6.94179
BINNED1 NOTCHECKED SATURATED INTERP COSMIC_RAY CHILD
(saturated)
NSAID 166042 (228.910904, 8.302397)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=228.91090&dec=8.30240
DEBLEND_DEGENERATE PSF_FLUX_INTERP BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY NODEBLEND CHILD BLENDED
NOT IN SIMARD
(not sure)
NSAID 142819 (195.169479, 28.519848)
http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=195.16948&dec=28.51985
DEBLENDED_AT_EDGE BAD_MOVING_FIT BINNED1 DEBLENDED_AS_PSF NOTCHECKED INTERP NODEBLEND CHILD BLENDED EDGE
(too faint, blended)
End of explanation
"""
lcs = fits.getdata('/Users/rfinn/research/LocalClusters/NSAmastertables/LCS_all_size.fits')
gim = fits.getdata('/Users/rfinn/research/SimardSDSS2011/table1.fits')
# stray fragment fused onto the line above (vdat and virgocat are not defined or used in this notebook):
# virgocat = SkyCoord(vdat.RA*u.degree, vdat.DEC*u.degree, frame='icrs')
from astropy.coordinates import SkyCoord
from astropy import units as u
%matplotlib inline
lcat = SkyCoord(lcs.RA*u.degree,lcs.DEC*u.degree,frame='icrs')
gcat = SkyCoord(gim._RAJ2000*u.degree,gim._DEJ2000*u.degree,frame='icrs')
index,dist2d,dist3d = lcat.match_to_catalog_sky(gcat)
plt.figure()
plt.plot
# only keep matches with matched RA and Dec w/in 1 arcsec
matchflag = dist2d.degree < 3./3600
matchedarray1=np.zeros(len(lcat),dtype=gim.dtype)
matchedarray1[matchflag] = gim[index[matchflag]]
print('percent of LCS galaxies matched = %.1f' % (sum(matchflag)*1./len(matchflag)*100.))
# get rid of names that start with __
# these cause trouble in the analysis program
t = []
for a in matchedarray1.dtype.names:
t.append(a)
for i in range(len(t)):
if t[i].startswith('__'):
t[i] = t[i][2:]
t = tuple(t)
#print t
matchedarray1.dtype.names = t
outfile = '/Users/rfinn/research/LocalClusters/NSAmastertables/LCS_all.gim2d.tab1.fits'
fits.writeto(outfile,matchedarray1,overwrite=True)
diff = (lcs.B_T_r - matchedarray1['B_T_r'])
bad_matches = (abs(diff) > .01) & matchflag
print('number of bad matches =', sum(bad_matches))
s.s.NSAID[bad_matches]
plt.figure()
plt.plot(lcs.RA[bad_matches],lcs.DEC[bad_matches],'ko')
print(lcs.CLUSTER[bad_matches])
print(sum(s.galfitflag[bad_matches]))
print(sum(diff < 0.))
outfile = '/Users/rfinn/research/LocalClusters/NSAmastertables/LCS_all.gim2d.tab1.fits'
gdat = fits.getdata(outfile)
gdat.B_T_r  # the leading '__' was stripped from column names before writing the file
"""
Explanation: NSAID 143514 (202.163284, 11.387049)
(too bright)
NSAID 163615 (202.697357, 11.200765)
(too bright)
NSAID 146832 (243.526657, 34.870651)
BINNED1 SATURATED INTERP COSMIC_RAY CHILD
(saturated)
NSAID 146875 (244.288803, 34.878895)
DEBLENDED_AT_EDGE BINNED1 NOTCHECKED INTERP CHILD EDGE
r = 13.92
(too bright)
NSAID 165409 (220.235657, 3.527517)
DEBLEND_NOPEAK INTERP_CENTER BINNED1 DEBLENDED_AS_PSF INTERP CHILD
r = 18.82
(deblended and too faint)
NSAID 166699 (241.500229, 22.641125)
BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 15.01, ext_r = 0.18
(not sure)
NSAID 146638 (241.287476, 17.729904)
STATIONARY BINNED1 SATURATED INTERP COSMIC_RAY CHILD
r = 13.57, ext_r = .13
(too bright, saturated)
NSAID 146659 (241.416641, 18.055758)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 14.52, ext_r = 0.14
(not sure)
NSAID 146664 (241.435760, 17.715572)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 14.65, ext_r = 0.13
(not sure)
not in simard
NSAID 140139 (175.954529, 19.968401)
DEBLEND_DEGENERATE PSF_FLUX_INTERP DEBLENDED_AT_EDGE BAD_MOVING_FIT MOVED BINNED1 INTERP COSMIC_RAY NODEBLEND CHILD BLENDED
r = 13.78, ext_r = 0.06
(too bright)
NSAID 140160 (176.071716, 20.223295)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 13.90, ext_r = 0.06
(too bright)
NSAID 140174 (176.204865, 19.795046)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 13.18, ext_r = 0.07
(too bright)
NSAID 140175 (176.195999, 20.125084)
STATIONARY BINNED1 INTERP COSMIC_RAY CHILD
r = 13.38, ext_r = 0.07
not in simard
NSAID 140187 (176.228104, 20.017143)
STATIONARY BINNED1 INTERP CHILD
r = 16.19, ext_r = 0.08
(not sure)
not in simard
NSAID 146094 (230.452133, 8.410197)
STATIONARY BINNED1 CHILD
r = 15.86, ext_r = 0.10
(not sure)
IN SIMARD!!!
NSAID 146121 (230.750824, 8.465475)
MOVED BINNED1 INTERP COSMIC_RAY CHILD
r = 15.80, ext_r = 0.09
NSAID 146127 (230.785812, 8.334576)
PSF_FLUX_INTERP INTERP_CENTER STATIONARY BINNED1 INTERP NOPETRO NODEBLEND CHILD BLENDED
r = 17.20, ext_r = 0.09
(blended?)
NSAID 146130 (230.800995, 8.549866)
BINNED1 INTERP COSMIC_RAY CHILD
r = 15.36, ext_r = 0.09
(not sure, maybe blended?)
NSAID 145965 (228.749756, 6.804669)
STATIONARY MOVED BINNED1 INTERP COSMIC_RAY NOPETRO CHILD
r = 16.70, ext_r = 0.10
(no petro)
NSAID 145984 (229.076614, 6.803605)
BINNED1 INTERP COSMIC_RAY CHILD
r = 15.72, ext_r = 0.1
(not sure)
NSAID 145998 (229.185364, 7.021626)
NSAID 145999 (229.187805, 7.055664)
NSAID 146012 (229.295181, 6.941795)
NSAID 146041 (229.713806, 6.435888)
NSAID 166042 (228.910904, 8.302397)
NSAID 166044 (228.936951, 6.958703)
NSAID 166083 (229.217957, 6.539137)
NSAID 142797 (195.073654, 27.955275)
NSAID 142819 (195.169479, 28.519848)
NSAID 142833 (195.214752, 28.042875)
NSAID 162838 (195.280670, 28.121592)
Oh No!
seems like galaxies that are in simard are not in my catalog :(
Going to read in my catalog
read in simard catalog
match them
and then see what's going on
End of explanation
"""
|
JoseGuzman/myIPythonNotebooks | Dynamic_systems/1st_ODE.ipynb | gpl-2.0 | def diff(p, generation):
"""
Returns the rate of change of the population size (dp/dg) as a function
of the generation, defined by the differential equation:
dp/dg = p*(k-p)/tau,
where p is the population size, g is the generation index, k is
the maximal population size (fixed to 1000) and tau is a constant that sets the
number of individuals per generation (fixed to 1e4)
p -- (int) population size, dependent variable
generation -- (int) generation index, independent variable
"""
tau = 1e4 # rate of individuals per generation
pMax = 1000 # max population size
return (p * (pMax-p) ) / tau
# define the independent variable (i.e., generations)
g = np.arange(200) # 200 generations
"""
Explanation: <H1>Solving 1st order ODEs</H1>
The logistic equation is a first-order non-linear differential equation that describes the evolution of a population as a function of the population size at a given generation. It can be written as:
${\displaystyle \tau {dp(t) \over dt} = p(t)(k-p(t))},$
where $p(t)$ is the population size at generation $t$, $k$ is the maximal size of the population, and $\tau$ is a proportionality factor
<H2>Define the equation</H2>
End of explanation
"""
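For reference, this logistic equation also has a closed-form solution, which can be used to sanity-check the numerical integration. A minimal sketch with the same constants as the notebook (pMax = 1000, tau = 1e4, initial population of 2):

```python
import numpy as np

def logistic_analytic(t, p0=2.0, pMax=1000.0, tau=1e4):
    """Closed-form solution of dp/dt = p*(pMax - p)/tau."""
    r = pMax / tau  # per-generation growth rate
    return pMax / (1.0 + (pMax - p0) / p0 * np.exp(-r * t))

g = np.arange(200)
p_exact = logistic_analytic(g)
# p_exact[0] is the initial population of 2; the curve saturates near pMax
```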
# solve the differential equation
p = odeint(diff, 2, g) # initial conditions is 2 individuals
# plot the solution
plt.plot(g,p);
plt.ylabel('Population size');
plt.xlabel('Generation');
"""
Explanation: <H2>Numerical solution to the differential equation</H2>
It requires the initial condition and the independent variable
End of explanation
"""
def diff2(p, generation, tau, pMax):
"""
Returns the rate of change of the population size (dp/dg) as a function
of the generation, defined by the differential equation:
dp/dg = p*(pMax-p)/tau,
where p is the population size, g is the generation index, pMax is
the maximal population size and tau a constant that describes the
number of individuals per generation
p -- (int) population size
generation -- (int) generation index
pMax -- (int) maximal number of individuals in a population
tau -- (float) rate of individuals per generation
"""
return (p*(pMax-p))/tau
y = odeint(diff2, 2, g, args=(1e4, 1000)) # initial conditions is 2 individuals, tau = 1e4, pMax = 1000
# plot the solution
plt.plot(g,y);
plt.ylabel('Population size');
plt.xlabel('Generation');
"""
Explanation: <H2>Introducing optional arguments to the differential equation</H2>
End of explanation
"""
|
PMEAL/OpenPNM-Examples | PaperRecreations/Wu2010_part_a.ipynb | mit | import openpnm as op
import matplotlib.pyplot as plt
import scipy as sp
import numpy as np
import openpnm.models.geometry as gm
import openpnm.topotools as tt
%matplotlib inline
"""
Explanation: Example: Regenerating Data from
R. Wu et al. / Elec Acta 54 25 (2010) 7394–7403
Import the modules
End of explanation
"""
wrk = op.Workspace()
wrk.loglevel=50
"""
Explanation: Set the workspace loglevel to not print anything
End of explanation
"""
%run shared_funcs.ipynb
"""
Explanation: As the paper requires some lengthy calculation we have split it into parts and put the function in a separate notebook to be re-used in each part. The following code runs and loads the shared functions into this kernel
End of explanation
"""
x_values, y_values = simulation(n=8)
plt.figure()
plt.plot(x_values, y_values, 'ro')
plt.title('normalized diffusivity versus saturation')
plt.xlabel('saturation')
plt.ylabel('normalized diffusivity')
plt.show()
"""
Explanation: The main function runs the simulation for a given network size 'n' and number of points for the relative diffusivity curve. Setting 'npts' to 1 will return the single phase diffusivity. the network size is doubled in the z direction for percolation but the diffusion calculation is effectively only calculated on the middle square section of length 'n'. This is achieved by copying the saturation distribution from the larger network to a smaller one.
We can inspect the source in this notebook by running a code cell with the following: simulation??
Run the simulation once for a network of size 8 x 8 x 8
End of explanation
"""
|
xpharry/Udacity-DLFoudation | tutorials/tensorboard/Anna KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
"""
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Number of sequences in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac of batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
"""
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
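A quick, self-contained sanity check of the sliding-window batching above (the function is repeated here so the snippet runs on its own, with small hypothetical arrays):

```python
import numpy as np

def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape
    n_batches = int(slice_size / num_steps)
    for b in range(n_batches):
        yield [x[:, b*num_steps:(b+1)*num_steps] for x in arrs]

x = np.arange(24).reshape(2, 12)  # batch_size=2, 12 steps total
batches = list(get_batch([x, x + 1], num_steps=4))
# three windows of shape (2, 4), each shifted by 4 steps
```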
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = tf.identity(state, name='final_state')
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN putputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
"""
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
"""
Explanation: Write out the graph for TensorBoard
End of explanation
"""
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
|
flamingbear/ipython-notebooks | notebooks/Sea Ice Min Max Extents.ipynb | mit | !mkdir -p ../data
!wget -P ../data -qN ftp://sidads.colorado.edu/pub/DATASETS/NOAA/G02135/north/daily/data/NH_seaice_extent_final.csv
!wget -P ../data -qN ftp://sidads.colorado.edu/pub/DATASETS/NOAA/G02135/north/daily/data/NH_seaice_extent_nrt.csv
!wget -P ../data -qN ftp://sidads.colorado.edu/pub/DATASETS/NOAA/G02135/south/daily/data/SH_seaice_extent_final.csv
!wget -P ../data -qN ftp://sidads.colorado.edu/pub/DATASETS/NOAA/G02135/south/daily/data/SH_seaice_extent_nrt.csv
import datetime as dt
import numpy as np
import os
import pandas as pd
from pandas import ExcelWriter
"""
Explanation: Compute summary statistics for the daily sea ice index.
From the CSV files determine the day of maximum and minimum extent for each
month and how that month's max and min ranks with all other months
The input data format is just a date and extent for each day we have data.
Year, Month, Day, Extent, Missing, Source Data
YYYY, MM, DD, 10^6 sq km, 10^6 sq km, Source data product web site: http://nsidc.org/d....
1978, 10, 26, 10.231, 0.000, ftp://sidads.colorado.edu/pub/DATASETS/nsidc0051....
1978, 10, 28, 10.420, 0.000, ftp://sidads.colorado.edu/pub/DATASETS/nsidc0051....
1978, 10, 30, 10.557, 0.000, ftp://sidads.colorado.edu/pub/DATASETS/nsidc0051....
....
Start by downloading the daily sea ice extent data from NSIDC's website.
End of explanation
"""
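The per-month minimum/maximum ranking described above can be sketched with a pandas groupby. This is an illustrative helper (monthly_extremes is not part of the original notebook) and assumes the frame has a DatetimeIndex and an 'extent' column, like the one slurp_csv builds:

```python
import pandas as pd

def monthly_extremes(df):
    # group daily extents by (year, month) and record the extreme values
    # together with the dates on which they occurred
    g = df.groupby([df.index.year, df.index.month])['extent']
    return pd.DataFrame({'min_extent': g.min(), 'max_extent': g.max(),
                         'min_day': g.idxmin(), 'max_day': g.idxmax()})
```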
def parse_the_date(year, mm, dd):
return dt.date(int(year), int(mm), int(dd))
def slurp_csv(filename):
data = pd.read_csv(filename, header = None, skiprows=2,
names=["year", "mm", "dd", "extent", "missing", "source"],
parse_dates={'date':['year', 'mm', 'dd']},
date_parser=parse_the_date, index_col='date')
data = data.drop('missing', axis=1)
return data
def read_a_hemisphere(hemisphere):
the_dir = '../data'
final_prod_filename = os.path.join(the_dir, '{hemi}H_seaice_extent_final.csv'.format(hemi=hemisphere[0:1].upper()))
nrt_prod_filename = os.path.join(the_dir, '{hemi}H_seaice_extent_nrt.csv'.format(hemi=hemisphere[0:1].upper()))
final = slurp_csv(final_prod_filename)
nrt = slurp_csv(nrt_prod_filename)
all_data = pd.concat([final, nrt])
return all_data
"""
Explanation: Code to read the CSV files.
End of explanation
"""
north = read_a_hemisphere('north')
south = read_a_hemisphere('south')
south.head()
"""
Explanation: Read CSV data
End of explanation
"""
def add_year_month_columns(df):
a = df.copy()
a = a.drop('source',1)
a = a.reset_index()
a['year'] = pd.to_datetime(a.date).dt.year
a['month'] = pd.to_datetime(a.date).dt.month
a = a.set_index('date')
return a
north = add_year_month_columns(north)
south = add_year_month_columns(south)
north.head()
south.head()
"""
Explanation: Add columns for year and month. We have to do this because when we read the CSV file
we converted the existing year/month/day columns into a Python datetime object.
We also drop the source column because we don't care where the data came from (near-real-time or final product).
End of explanation
"""
def add_rolling_mean(df, window=5, min_periods=2):
    copy = df.copy()
    # create an empty ts to align our extent data with: one row per calendar day
    ts = pd.Series(np.nan, index=pd.date_range('1978-10-25', dt.date.today().strftime('%Y-%m-%d')))
    copy.index = pd.to_datetime(copy.index)
    # align the copy (not the original df) so the datetime index is actually used
    copy = copy.align(ts, axis=0, join='right')[0]
    # pd.rolling_mean was removed from pandas; .rolling().mean() is the modern equivalent
    df['5day-Avg'] = copy['extent'].rolling(window=window, min_periods=min_periods).mean()
    return df
"""
Explanation: Add a 5-day rolling mean to the time series.
End of explanation
"""
north = add_rolling_mean(north)
south = add_rolling_mean(south)
north.head(1)
north = north.reset_index()
south = south.reset_index()
north.head(1)
"""
Explanation: Want date back in the columns
End of explanation
"""
def select_min_and_max_variable_rows_by_year_and_month(df, variable):
min_groups = df.loc[df.groupby(['year','month'])[variable].idxmin()][['date', variable, 'year', 'month']]
max_groups = df.loc[df.groupby(['year','month'])[variable].idxmax()][['date', variable, 'year', 'month']]
return {'min': min_groups, 'max': max_groups}
"""
Explanation: Use a groupby to compute the row locations that represent the minimum and
maximum extent and grab those rows into new variables. AKA: Filter out everything
but the minimum/maximum extent for each month and year
End of explanation
"""
n = select_min_and_max_variable_rows_by_year_and_month(north, 'extent')
navg = select_min_and_max_variable_rows_by_year_and_month(north, '5day-Avg')
s = select_min_and_max_variable_rows_by_year_and_month(south, 'extent')
savg = select_min_and_max_variable_rows_by_year_and_month(south, '5day-Avg')
"""
Explanation: create dictionaries of max and min values for each hemisphere and for daily and rolling-mean
End of explanation
"""
n['max'][3:5]
navg['max'][3:5]
def add_rank(df, rank_by, ascending):
df['rank'] = df.groupby('month')[rank_by].rank(ascending=ascending, method='first')
return df
"""
Explanation: Show that we have actually selected different data for the daily and the 5-day-averaged cases
End of explanation
"""
n['max'] = add_rank(n['max'], 'extent', ascending=False)
n['min'] = add_rank(n['min'], 'extent', ascending=True)
s['max'] = add_rank(s['max'], 'extent', ascending=False)
s['min'] = add_rank(s['min'], 'extent', ascending=True)
navg['max'] = add_rank(navg['max'], '5day-Avg', ascending=False)
navg['min'] = add_rank(navg['min'], '5day-Avg', ascending=True)
savg['max'] = add_rank(savg['max'], '5day-Avg', ascending=False)
savg['min'] = add_rank(savg['min'], '5day-Avg', ascending=True)
def do_annual_min_max_ranking(df, field):
min_index = df.groupby(['year'])[field].idxmin()
max_index = df.groupby(['year'])[field].idxmax()
mindata = df.loc[min_index][['date', field]]
mindata['rank'] = mindata[field].rank(ascending=True, method='first')
maxdata = df.loc[max_index][['date', field]]
maxdata['rank'] = maxdata[field].rank(ascending=False, method='first')
mindata = mindata.set_index(pd.to_datetime(mindata.date).dt.year)
maxdata = maxdata.set_index(pd.to_datetime(maxdata.date).dt.year)
maxdata = maxdata.add_prefix('max_')
mindata = mindata.add_prefix('min_')
data = pd.concat([mindata, maxdata], axis=1)
return data
"""
Explanation: Add a rank column for each month and hemisphere's max and min:
End of explanation
"""
north_annual_by_day = do_annual_min_max_ranking(north, 'extent')
north_annual_averaged = do_annual_min_max_ranking(north, '5day-Avg')
south_annual_by_day = do_annual_min_max_ranking(south, 'extent')
south_annual_averaged = do_annual_min_max_ranking(south, '5day-Avg')
south_annual_averaged.head(3)
"""
Explanation: We also want annual min/max rank data, so we revisit the north and south data
End of explanation
"""
a = navg['min'].copy()
a.columns
a.set_index(['rank', 'month']).unstack('month').head(3)
import calendar
month_names = [calendar.month_name[x] for x in range(1,13)]
def swap_column_level_and_sort(df):
df.columns = df.columns.swaplevel(1,0)
    df = df.sort_index(axis=1, level=0)  # sortlevel was removed from pandas; sort_index is the modern equivalent
return df
# set index to year and month and then broadcast month back across the columns.
# next swap and sort so that you have the data grouped under the month.
def prepare_for_csv(df):
df = df.set_index(['rank','month']).unstack('month')
df = swap_column_level_and_sort(df)
df.columns = df.columns.set_levels(month_names, level=0)
return df
def write_to_xls(df_list, writer, is_monthly=True):
for df, sheet in df_list:
if is_monthly:
df = prepare_for_csv(df)
df.to_excel(writer,'{sheet}'.format(sheet=sheet), float_format="%.3f")
writer = ExcelWriter('../output/Sea_Ice_MinMax_Statistics.xls')
monthly_dataframelist =[(navg['min'], 'Northern 5day Min'),
(navg['max'], 'Northern 5day Max'),
(savg['min'], 'Southern 5day Min'),
(savg['max'], 'Southern 5day Max'),
(n['min'], 'Northern Daily Min'),
(n['max'], 'Northern Daily Max'),
(s['min'], 'Southern Daily Min'),
(s['max'], 'Southern Daily Max')]
annual_dataframelist = [(north_annual_averaged, 'North Annual 5day-avg'),
(north_annual_by_day, 'North Annual Daily'),
(south_annual_averaged, 'South Annual 5day-avg'),
(south_annual_by_day, 'South Annual Daily')]
write_to_xls(monthly_dataframelist, writer, is_monthly=True)
write_to_xls(annual_dataframelist, writer, is_monthly=False)
writer.save()
b = prepare_for_csv(a)
b
"""
Explanation: Write out the data frames in a nice format
End of explanation
"""
!cd ../data ; rm -f NH_seaice_extent_final.csv NH_seaice_extent_nrt.csv SH_seaice_extent_final.csv SH_seaice_extent_nrt.csv
"""
Explanation: clean up your csv files
End of explanation
"""
|
dato-code/tutorials | dss-2016/recommendation_systems/book-recommender-solutions.ipynb | apache-2.0 | import os
import graphlab as gl

if os.path.exists('books/ratings'):
ratings = gl.SFrame('books/ratings')
items = gl.SFrame('books/items')
users = gl.SFrame('books/users')
else:
ratings = gl.SFrame.read_csv('books/book-ratings.csv')
ratings.save('books/ratings')
items = gl.SFrame.read_csv('books/book-data.csv')
items.save('books/items')
users = gl.SFrame.read_csv('books/user-data.csv')
users.save('books/users')
"""
Explanation: The following code snippet will parse the books data provided at the training.
End of explanation
"""
ratings.show()
"""
Explanation: Visually explore the above data using GraphLab Canvas.
End of explanation
"""
m = gl.recommender.create(ratings, user_id='name', item_id='book')
"""
Explanation: Recommendation systems
In this section we will make a model that can be used to recommend new books to users.
Creating a Model
Use gl.recommender.create() to create a model that can be used to recommend books to each user.
End of explanation
"""
m
"""
Explanation: Print a summary of the model by simply entering the name of the object.
End of explanation
"""
users = ratings.head(10000)['name'].unique()
"""
Explanation: Get all unique users from the first 10000 observations and save them as a variable called users.
End of explanation
"""
recs = m.recommend(users, k=20)
"""
Explanation: Get 20 recommendations for each user in your list of users. Save these as a new SFrame called recs.
End of explanation
"""
sims = m.get_similar_items()
"""
Explanation: Inspecting your model
Get an SFrame of the 20 most similar items for each observed item.
End of explanation
"""
items = items.groupby('book', {k: gl.aggregate.SELECT_ONE(k) for k in ['author', 'publisher', 'year']})
"""
Explanation: This dataset has multiple rows corresponding to the same book, e.g., in situations where reprintings were done by different publishers in different year.
For each unique value of 'book' in the items SFrame, select one of the of the available values for author, publisher, and year. Hint: Try using SFrame.groupby and gl.aggregate.SELECT_ONE.
End of explanation
"""
num_ratings_per_book = ratings.groupby('book', gl.aggregate.COUNT)
items = items.join(num_ratings_per_book, on='book')
"""
Explanation: Compute the number of times each book was rated, and add a column containing these counts to the items SFrame using SFrame.join.
End of explanation
"""
items.sort('Count', ascending=False)
"""
Explanation: Print the first few books, sorted by the number of times they have been rated. Do these values make sense?
End of explanation
"""
sims = sims.join(items[['book', 'Count']], on='book')
sims = sims.sort(['Count', 'book', 'rank'], ascending=False)
sims.print_rows(1000, max_row_width=150)
"""
Explanation: Now print the most similar items per item, sorted by the most common books. Hint: Join the two SFrames you created above.
End of explanation
"""
implicit = ratings[ratings['rating'] >= 4]
"""
Explanation: Experimenting with other models
Create a dataset called implicit that contains only ratings data where rating was 4 or greater.
End of explanation
"""
train, test = gl.recommender.util.random_split_by_user(implicit, user_id='name', item_id='book')
"""
Explanation: Create a train/test split of the implicit data created above. Hint: Use random_split_by_user.
End of explanation
"""
train.head(5)
"""
Explanation: Print the first 5 rows of the training set.
End of explanation
"""
m = gl.ranking_factorization_recommender.create(train, 'name', 'book', target='rating', num_factors=20)
"""
Explanation: Create a ranking_factorization_recommender model using just the training set and 20 factors.
End of explanation
"""
m.evaluate_precision_recall(test, cutoffs=[50])['precision_recall_overall']
"""
Explanation: Evaluate how well this model recommends items that were seen in the test set you created above. Hint: Check out m.evaluate_precision_recall().
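For intuition about what evaluate_precision_recall reports, a minimal per-user precision@k / recall@k (a standalone sketch, not GraphLab's implementation) looks like:

```python
def precision_recall_at_k(recommended, relevant, k):
    """Precision@k and recall@k for one user's ranked recommendation list."""
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / k, hits / len(relevant)

p, r = precision_recall_at_k(['a', 'b', 'c', 'd'], relevant=['b', 'x'], k=3)
# one of the top-3 recommendations ('b') is relevant
```

GraphLab averages these per-user numbers over all test users at each cutoff.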
End of explanation
"""
new_observation_data = gl.SFrame({'name': ['Me'], 'book': ['Animal Farm'], 'rating': [5.0]})
"""
Explanation: Create an SFrame containing only one observation, where a new user 'Me' has rated 'Animal Farm' with score 5.0.
End of explanation
"""
m.recommend(users=['Me'], new_observation_data=new_observation_data)
"""
Explanation: Use this data when querying for recommendations.
End of explanation
"""
|
QuantCrimAtLeeds/PredictCode | notebooks/sepp_2a_testbed.ipynb | artistic-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Crime prediction from Hawkes processes
Here we continue to explore the EM algorithm for Hawkes processes, but now concentrating upon:
Mohler et al. "Randomized Controlled Field Trials of Predictive Policing". Journal of the American Statistical Association (2015) DOI:10.1080/01621459.2015.1077710
End of explanation
"""
import open_cp.sources.sepp as source_sepp
process = source_sepp.SelfExcitingPointProcess(
background_sampler = source_sepp.HomogeneousPoissonSampler(rate=0.1),
trigger_sampler = source_sepp.ExponentialDecaySampler(intensity=0.5, exp_rate=10))
events = process.sample(0, 1000)
fig, ax = plt.subplots(figsize=(18,1))
ax.scatter(events, (np.random.random(len(events))-0.5) * 0.03, alpha=.5)
ax.set(xlim=[900, 1000], ylim=[-0.1,0.1])
"""
Explanation: Simulation of the process in a single cell
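The self-exciting process sampled above can also be written as a short branching (cluster) simulation. This is a standalone sketch for intuition, not the open_cp implementation; the parameter names mirror the samplers used above:

```python
import numpy as np

def sample_hawkes(mu, theta, omega, T, seed=None):
    """Branching simulation of a Hawkes process on [0, T]: background events
    arrive as a Poisson process of rate mu, and every event independently
    triggers Poisson(theta) offspring with Exp(omega)-distributed delays."""
    rng = np.random.default_rng(seed)
    queue = list(rng.uniform(0, T, rng.poisson(mu * T)))
    events = []
    while queue:
        t = queue.pop()
        events.append(t)
        for delay in rng.exponential(1.0 / omega, rng.poisson(theta)):
            if t + delay < T:
                queue.append(t + delay)
    return np.sort(events)

times = sample_hawkes(mu=0.1, theta=0.5, omega=10, T=1000, seed=42)
```

With theta = 0.5 each event triggers half an offspring on average, so the expected total count is roughly mu * T / (1 - theta), about 200 events here.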
End of explanation
"""
rates = np.random.random(size=100)
simulation = source_sepp.GridHawkesProcess(rates, 0.5, 10)
cells = simulation.sample(0, 1000)
"""
Explanation: Model fitting for cells with varying background rate
We'll create 100 cells with varying background rate, but the same $\omega, \theta$. We use our library to perform this simulation.
End of explanation
"""
for i in range(100):
times = cells[i]
cells[i] = times[times>=500] - 500
"""
Explanation: To simulate a steady state, we'll discard the first half of time in each cell.
End of explanation
"""
min(len(t) for t in cells), max(len(t) for t in cells)
import open_cp.seppexp
def optimise(cells, initial_omega=10, iterations=100, time=500):
omega = initial_omega
theta = .5
mu = np.zeros_like(cells) + 0.5
for _ in range(iterations):
omega, theta, mu = open_cp.seppexp.maximisation(cells, omega, theta, mu, time)
return omega, theta, mu
def optimise_corrected(cells, initial_omega=10, iterations=100, time=500):
omega = initial_omega
theta = .5
mu = np.zeros_like(cells) + 0.5
for _ in range(iterations):
omega, theta, mu = open_cp.seppexp.maximisation_corrected(cells, omega, theta, mu, time)
return omega, theta, mu
omega, theta, mu = optimise(cells)
omega, theta
omegac, thetac, muc = optimise_corrected(cells)
omegac, thetac
def plot(rates, mu, ax, title):
ax.plot([0,1], [0,1], color="red", linewidth=1)
ax.scatter(rates, mu)
ax.set(xlim=[0,1], ylim=[0,np.max(mu)*1.05], xlabel="$\\mu$", ylabel="predicted $\\mu$",
title=title)
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
plot(rates, mu, ax[0], "From EM algorithm")
plot(rates, muc,ax[1], "From EM algorithm with edge corrections")
"""
Explanation: The number of events in each cell varies quite a lot.
End of explanation
"""
rates = np.random.random(size=100)
simulation = source_sepp.GridHawkesProcess(rates, 0.5, .1)
cells = simulation.sample(0, 1000)
for i in range(100):
times = cells[i]
cells[i] = times[times>=500] - 500
omega, theta, mu = optimise(cells, .1, 100)
omega, theta
omegac, thetac, muc = optimise_corrected(cells, .1, 100)
omegac, thetac
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
plot(rates, mu, ax[0], "From EM algorithm")
plot(rates, muc, ax[1], "From EM algorithm with edge corrections")
"""
Explanation: Noting that our initial estimate for every $\mu$ is $0.5$, this is good convergence.
More extreme parameters
However, if we try a rather smaller value of $\omega$, then the optimisation doesn't find the real parameters, tending to systematically over-estimate the background rate $\mu$ and under-estimate the aftershock rate.
End of explanation
"""
rates = np.random.random(size=100)
simulation = source_sepp.GridHawkesProcess(rates, 0.5, 10)
cells = simulation.sample(0, 1000)
omega, theta, mu = optimise(cells, 1, 100, 1000)
omega, theta
omegac, thetac, muc = optimise_corrected(cells, 1, 100, 1000)
omegac, thetac
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
plot(rates, mu, ax[0], "From EM algorithm")
plot(rates, muc, ax[1], "From EM algorithm with edge corrections")
"""
Explanation: Sampling the whole process, not just a "steady state"
End of explanation
"""
rates = np.random.random(size=100)
simulation = source_sepp.GridHawkesProcess(rates, 0.5, 10)
cells = simulation.sample(0, 350)
omega, theta, mu = optimise(cells, 1, 100, 350)
omega, theta
omegac, thetac, muc = optimise_corrected(cells, 1, 100, 350)
omegac, thetac
fig, ax = plt.subplots(ncols=2, figsize=(16,6))
plot(rates, mu, ax[0], "From EM algorithm")
plot(rates, muc, ax[1], "From EM algorithm with edge corrections")
"""
Explanation: Taking a smaller sample
End of explanation
"""
|
gcgruen/homework | foundations-homework/07/homework-07-gruen.ipynb | mit | import pandas as pd
"""
Explanation: Part 1: Animals
1. Import pandas with the right name
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: 2. Set all graphics from matplotlib to display inline
End of explanation
"""
df = pd.read_csv("07-hw-animals.csv")
df
"""
Explanation: 3. Read the csv in (it should be UTF-8 already so you don't have to worry about encoding), save it with the proper boring name
End of explanation
"""
df.columns.values
"""
Explanation: 4. Display the names of the columns in the csv
End of explanation
"""
df.head(3)
"""
Explanation: 5. Display the first 3 animals.
End of explanation
"""
df.sort_values(by='length', ascending = False).head(3)
"""
Explanation: 6. Sort the animals to see the 3 longest animals.
End of explanation
"""
df['animal'].value_counts()
"""
Explanation: 7. What are the counts of the different values of the "animal" column?
End of explanation
"""
df['animal'] == 'dog'
df[df['animal'] == 'dog']
"""
Explanation: 8. Only select the dogs.
End of explanation
"""
df[df['length'] > 40]
"""
Explanation: 9. Display all of the animals that are greater than 40 cm.
End of explanation
"""
cm_in_inch = 0.393701
df['length_inches'] = df['length'] * cm_in_inch
df
"""
Explanation: 10. 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
End of explanation
"""
cats = df[df['animal'] == 'cat']
cats
dogs = df[df['animal'] == 'dog']
dogs
"""
Explanation: 11. Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
End of explanation
"""
cats[cats['length_inches']> 12]
#Using the normal dataframe
df[(df['animal'] == 'cat') & (df['length_inches'] > 12)]
"""
Explanation: 12. Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe
End of explanation
"""
cats['length'].describe()['mean']
"""
Explanation: 13. What's the mean length of a cat?
End of explanation
"""
dogs['length'].describe()['mean']
"""
Explanation: 14. What's the mean length of a dog?
End of explanation
"""
animals = df.groupby(['animal'])
animals['length'].mean()
"""
Explanation: 15. Use groupby to accomplish both of the above tasks at once.
End of explanation
"""
dogs['length'].hist()
"""
Explanation: 16. Make a histogram of the length of dogs.
End of explanation
"""
plt.style.use('ggplot')
dogs['length'].hist()
"""
Explanation: 17. Change your graphing style to be something else (anything else!)
End of explanation
"""
df.plot(kind='barh', x='name', y='length')
"""
Explanation: 18. Make a horizontal bar graph of the length of the animals, with their name as the label
End of explanation
"""
cats.sort_values(by='length').plot(kind='barh', x='name', y='length')
"""
Explanation: 19. Make a sorted horizontal bar graph of the cats, with the larger cats on top.
End of explanation
"""
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('richpeople.csv', encoding='latin-1')
df.head(10)
richpeople = df[df['year'] == 2014]
richpeople.columns
"""
Explanation: Part 2: Rich people
Answer your own selection out of the following questions, or any other questions you might be able to think of.
End of explanation
"""
richpeople.sort_values(by='networthusbillion', ascending=False).head(10)
"""
Explanation: 1) Who are the top 10 richest billionaires?
End of explanation
"""
richpeople.sort_values(by='networthusbillion').head(10)
"""
Explanation: 2) Who are the top 10 poorest billionaires?
End of explanation
"""
print("The average net worth of billionaires in US$ billion is", richpeople['networthusbillion'].mean())
richpeople.groupby('gender')['networthusbillion'].mean()
"""
Explanation: 3) What's the average wealth of a billionaire? Male? Female?
End of explanation
"""
richpeople['citizenship'].value_counts()
"""
Explanation: 4) What country are most billionaires from?
End of explanation
"""
richpeople['industry'].value_counts()
"""
Explanation: 4) What are the most common industries for billionaires to come from?
End of explanation
"""
print("On average billionaires are", richpeople['age'].mean(), "years old.")
selfmade = richpeople[richpeople['selfmade'] == 'self-made']
print("Selfmade billionaires are about", selfmade['age'].mean(), "years old.")
non_selfmade = richpeople[richpeople['selfmade'] != 'self-made']
print("Non-selfmade billionaires are on average", non_selfmade['age'].mean(), "years old.")
"""
Explanation: 5) How old are billionaires? How old are billionaires self made vs. non self made?
End of explanation
"""
richpeople.sort_values(by='age', ascending = True).head(3)
"""
Explanation: 6) Who are the youngest billionaires?
End of explanation
"""
richpeople.sort_values(by='age', ascending = False).head(3)
"""
Explanation: 7) Who are the oldest?
End of explanation
"""
plt.style.use('ggplot')
richpeople['age'].hist()
"""
Explanation: 8) Age distribution - maybe make a graph about it
End of explanation
"""
richpeople.plot(kind='scatter', x = 'age', y='networthusbillion', figsize=(10,10), alpha=0.3)
"""
Explanation: 9) Maybe plot their net worth vs age (scatterplot)
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.2/examples/notebooks/generated/rolling_ls.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pandas_datareader as pdr
import seaborn
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS
seaborn.set_style("darkgrid")
pd.plotting.register_matplotlib_converters()
%matplotlib inline
"""
Explanation: Rolling Regression
Rolling OLS applies OLS across a fixed window of observations and then rolls
(moves or slides) the window across the data set. The key parameter is window
which determines the number of observations used in each OLS regression. By
default, RollingOLS drops missing values in the window and so will estimate
the model using the available data points.
Estimated values are aligned so that models estimated using data points
$i+1, i+2, ... i+window$ are stored in location $i+window$.
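This alignment convention can be checked against a tiny pure-NumPy version on synthetic data (rolling_ols_params is an illustrative helper, not part of statsmodels):

```python
import numpy as np

def rolling_ols_params(y, x, window):
    """Naive rolling OLS of y on [1, x]; row i holds the estimate from
    observations i-window+1 .. i, matching RollingOLS's alignment."""
    n = len(y)
    out = np.full((n, 2), np.nan)  # columns: intercept, slope
    for i in range(window - 1, n):
        sl = slice(i - window + 1, i + 1)
        X = np.column_stack([np.ones(window), x[sl]])
        out[i] = np.linalg.lstsq(X, y[sl], rcond=None)[0]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(100)
params = rolling_ols_params(y, x, window=20)
# the first window - 1 rows are NaN, exactly as in RollingOLS
```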
Start by importing the modules that are used in this notebook.
End of explanation
"""
factors = pdr.get_data_famafrench("F-F_Research_Data_Factors", start="1-1-1926")[0]
factors.head()
industries = pdr.get_data_famafrench("10_Industry_Portfolios", start="1-1-1926")[0]
industries.head()
"""
Explanation: pandas-datareader is used to download data from
Ken French's website.
The two data sets downloaded are the 3 Fama-French factors and the 10 industry portfolios.
Data is available from 1926.
The data are monthly returns for the factors or industry portfolios.
End of explanation
"""
endog = industries.HiTec - factors.RF.values
exog = sm.add_constant(factors["Mkt-RF"])
rols = RollingOLS(endog, exog, window=60)
rres = rols.fit()
params = rres.params.copy()
params.index = np.arange(1, params.shape[0] + 1)
params.head()
params.iloc[57:62]
params.tail()
"""
Explanation: The first model estimated is a rolling version of the CAPM that regresses
the excess return of Technology sector firms on the excess return of the market.
The window is 60 months, and so results are available after the first 60 (window)
months. The first 59 (window - 1) estimates are all nan filled.
End of explanation
"""
fig = rres.plot_recursive_coefficient(variables=["Mkt-RF"], figsize=(14, 6))
"""
Explanation: We next plot the market loading along with a 95% point-wise confidence interval.
Passing variables=["Mkt-RF"] restricts the plot to the market loading, omitting the constant.
End of explanation
"""
exog_vars = ["Mkt-RF", "SMB", "HML"]
exog = sm.add_constant(factors[exog_vars])
rols = RollingOLS(endog, exog, window=60)
rres = rols.fit()
fig = rres.plot_recursive_coefficient(variables=exog_vars, figsize=(14, 18))
"""
Explanation: Next, the model is expanded to include all three factors, the excess market, the size factor
and the value factor.
End of explanation
"""
joined = pd.concat([factors, industries], axis=1)
joined["Mkt_RF"] = joined["Mkt-RF"]
mod = RollingOLS.from_formula("HiTec ~ Mkt_RF + SMB + HML", data=joined, window=60)
rres = mod.fit()
rres.params.tail()
"""
Explanation: Formulas
RollingOLS and RollingWLS both support model specification using the formula interface. The example below is equivalent to the 3-factor model estimated previously. Note that one variable is renamed to have a valid Python variable name.
End of explanation
"""
%timeit rols.fit()
%timeit rols.fit(params_only=True)
"""
Explanation: RollingWLS: Rolling Weighted Least Squares
The rolling module also provides RollingWLS which takes an optional weights input to perform rolling weighted least squares. It produces results that match WLS when applied to rolling windows of data.
Fit Options
Fit accepts other optional keywords to set the covariance estimator. Only two estimators are supported, 'nonrobust' (the classic OLS estimator) and 'HC0' which is White's heteroskedasticity robust estimator.
You can set params_only=True to only estimate the model parameters. This is substantially faster than computing the full set of values required to perform inference.
Finally, the parameter reset can be set to a positive integer to control estimation error in very long samples. RollingOLS avoids the full matrix product when rolling by only adding the most recent observation and removing the dropped observation as it rolls through the sample. Setting reset uses the full inner product every reset periods. In most applications this parameter can be omitted.
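For reference, 'HC0' is White's sandwich estimator, Var(b) = (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}, applied within each window. A minimal single-window NumPy sketch (illustrative only, not statsmodels' implementation):

```python
import numpy as np

def ols_hc0(X, y):
    """OLS coefficients and HC0 (White) standard errors for one window."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta                      # residuals
    meat = X.T @ (e[:, None] ** 2 * X)    # X' diag(e^2) X
    cov = XtX_inv @ meat @ XtX_inv        # the sandwich
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.standard_normal(500)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(500)
beta, se = ols_hc0(X, y)
```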
End of explanation
"""
res = RollingOLS(endog, exog, window=60, min_nobs=12, expanding=True).fit()
res.params.iloc[10:15]
res.nobs[10:15]
"""
Explanation: Expanding Sample
It is possible to expand the sample until sufficient observations are available for the full window length. In this example, we start once we have 12 observations available, and then increase the sample until we have 60 observations available. The first non-nan value is computed using 12 observations, the second 13, and so on. All other estimates are computed using 60 observations.
End of explanation
"""
|
geoscixyz/gpgLabs | notebooks/dcip/DC_SurveyDataInversion.ipynb | mit | cylinder_app()
"""
Explanation: 1. Understanding currents, fields, charges and potentials
Cylinder app
survey: Type of survey
A: (+) Current electrode location
B: (-) Current electrode location
M: (+) Potential electrode location
N: (-) Potential electrode location
r: radius of cylinder
xc: x location of cylinder center
zc: z location of cylinder center
$\rho_1$: Resistivity of the halfspace
$\rho_2$: Resistivity of the cylinder
Field: Field to visualize
Type: which part of the field
Scale: Linear or Log Scale visualization
End of explanation
"""
plot_layer_potentials_app()
"""
Explanation: 2. Potential differences and Apparent Resistivities
Using the widgets contained in this notebook you will develop a better understanding of what values are actually measured in a DC resistivity survey and how these measurements can be processed, plotted, inverted, and interpreted.
Computing Apparent Resistivity
In practice we cannot measure the potentials everywhere, we are limited to those locations where we place electrodes. For each source (current electrode pair) many potential differences are measured between M and N electrode pairs to characterize the overall distribution of potentials. The widget below allows you to visualize the potentials, electric fields, and current densities from a dipole source in a simple model with 2 layers. For different electrode configurations you can measure the potential differences and see the calculated apparent resistivities.
In a uniform halfspace the potential differences can be computed by summing up the potentials at each measurement point from the different current sources based on the following equations:
\begin{align}
V_M = \frac{\rho I}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} \right] \
V_N = \frac{\rho I}{2 \pi} \left[ \frac{1}{AN} - \frac{1}{NB} \right]
\end{align}
where $AM$, $MB$, $AN$, and $NB$ are the distances between the corresponding electrodes.
The potential difference $\Delta V_{MN}$ in a dipole-dipole survey can therefore be expressed as follows,
\begin{equation}
\Delta V_{MN} = V_M - V_N = \rho I \underbrace{\frac{1}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} - \frac{1}{AN} + \frac{1}{NB} \right]}_{G}
\end{equation}
and the resistivity of the halfspace $\rho$ is equal to,
$$
\rho = \frac{\Delta V_{MN}}{IG}
$$
In this equation $G$ is often referred to as the geometric factor.
In the case where we are not in a uniform halfspace the above equation is used to compute the apparent resistivity ($\rho_a$) which is the resistivity of the uniform halfspace which best reproduces the measured potential difference.
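The apparent-resistivity formula translates directly into code. The sketch below is illustrative (the function and electrode positions are not part of the notebook's library) and assumes four collinear surface electrodes at positions A, B, M, N:

```python
import numpy as np

def apparent_resistivity(A, B, M, N, dV, I=1.0):
    """Apparent resistivity rho_a = dV / (I * G) for a surface four-electrode array."""
    G = (1.0 / (2.0 * np.pi)) * (1.0 / abs(M - A) - 1.0 / abs(M - B)
                                 - 1.0 / abs(N - A) + 1.0 / abs(N - B))
    return dV / (I * G)

# sanity check: synthesize a measurement over a 100 ohm-m uniform halfspace
rho, I = 100.0, 1.0
A, B, M, N = 0.0, 10.0, 20.0, 30.0
G = (1.0 / (2.0 * np.pi)) * (1.0 / 20 - 1.0 / 10 - 1.0 / 30 + 1.0 / 20)
dV = rho * I * G
# apparent_resistivity(A, B, M, N, dV) recovers the true halfspace resistivity
```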
In the top plot the location of the A electrode is marked by the red +, the B electrode is marked by the blue -, and the M/N potential electrodes are marked by the black dots. The $V_M$ and $V_N$ potentials are printed just above and to the right of the black dots. The calculted apparent resistivity is shown in the grey box to the right. The bottom plot can show the resistivity model, the electric fields (e), potentials, or current densities (j) depending on which toggle button is selected. Some patience may be required for the plots to update after parameters have been changed.
Two layer app
A: (+) Current electrode location
B: (-) Current electrode location
M: (+) Potential electrode location
N: (-) Potential electrode location
$\rho_1$: Resistivity of the top layer
$\rho_2$: Resistivity of the bottom layer
h: thickness of the first layer
Plot: Field to visualize
Type: which part of the field
End of explanation
"""
MidpointPseudoSectionWidget()
"""
Explanation: 3. Building Pseudosections
2D profiles are often plotted as pseudo-sections by extending $45^{\circ}$ lines downwards from the A-B and M-N midpoints and plotting the corresponding $\Delta V_{MN}$, $\rho_a$, or misfit value at the intersection of these lines as shown below. For pole-dipole or dipole-pole surveys the $45^{\circ}$ line is simply extended from the location of the pole. By using this method of plotting, the long offset electrodes plot deeper than those with short offsets. This provides a rough idea of the region sampled by each data point, but the vertical axis of a pseudo-section is not a true depth.
In the widget below the red dot marks the midpoint of the current dipole or the location of the A electrode location in a pole-dipole array while the green dots mark the midpoints of the potential dipoles or M electrode locations in a dipole-pole array. The blue dots then mark the location in the pseudo-section where the lines from Tx and Rx midpoints intersect and the data is plotted. By stepping through the Tx (current electrode pairs) using the slider you can see how the pseudo section is built up.
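The 45 degree construction reduces to simple midpoint arithmetic. Here is a hypothetical helper (not the widget's actual code) for a dipole-dipole measurement:

```python
def pseudosection_point(src_midpoint, rx_midpoint):
    """Intersection of the 45-degree lines dropped from the source and
    receiver midpoints: halfway between them, at a pseudo-depth equal to
    half their separation."""
    x = 0.5 * (src_midpoint + rx_midpoint)
    pseudo_depth = 0.5 * abs(rx_midpoint - src_midpoint)
    return x, pseudo_depth

# a source dipole centred at 0 m and a receiver dipole centred at 40 m
# plot at x = 20 m, pseudo-depth 20 m
```

This is why larger source-receiver offsets plot deeper, even though the vertical axis is not a true depth.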
The figures shown below show how the points in a pseudo-section are plotted for pole-dipole, dipole-pole, and dipole-dipole arrays. The color coding of the dots match those shown in the widget.
<br />
<br />
<img style="float: center; width: 60%; height: 60%" src="https://github.com/geoscixyz/geosci-labs/blob/main/images/dc/PoleDipole.png?raw=true">
<center>Basic schematic for a uniformly spaced pole-dipole array.
<br />
<br />
<br />
<img style="float: center; width: 60%; height: 60%" src="https://github.com/geoscixyz/geosci-labs/blob/main/images/dc/DipolePole.png?raw=true">
<center>Basic schematic for a uniformly spaced dipole-pole array.
<br />
<br />
<br />
<img style="float: center; width: 60%; height: 60%" src="https://github.com/geoscixyz/geosci-labs/blob/main/images/dc/DipoleDipole.png?raw=true">
<center>Basic schematic for a uniformly spaced dipole-dipole array.
<br />
Pseudo-section app
End of explanation
"""
DC2DPseudoWidget()
"""
Explanation: DC pseudo-section app
Definition of variables:
- $\rho_1$: Resistivity of the first layer (the thickness of the first layer is 5 m)
- $\rho_2$: Resistivity of the cylinder (the resistivity of the second layer is fixed at 1000 $\Omega$m)
- xc: x location of cylinder center
- zc: z location of cylinder center
- r: radius of cylinder
- surveyType: Type of survey
End of explanation
"""
DC2DfwdWidget()
"""
Explanation: 4. Parametric Inversion
In this final widget you are able to forward model the apparent resistivity of a cylinder embedded in a two-layered earth. Pseudo-sections of the apparent resistivity can be generated using dipole-dipole, pole-dipole, or dipole-pole arrays to see how survey geometry can distort the size, shape, and location of conductive bodies in a pseudo-section. Because of the distortion and artifacts present in pseudo-sections, interpreting them directly is typically difficult and carries a real risk of misinterpretation. Inverting the data to find a model which fits the observed data and is geologically reasonable should be standard practice.
By systematically varying the model parameters and comparing the plots of observed vs. predicted apparent resistivity, a parametric inversion can be performed by hand to find the "best" fitting model. Normalized data misfits, which provide a numerical measure of the difference between the observed and predicted data, are useful for quantifying how well an inversion model fits the observed data. The manual inversion process can be difficult and time consuming even with small examples such as the one presented here. Therefore, numerical optimization algorithms are typically used to minimize the data misfit and a model objective function, which provides information about the model structure and complexity, in order to find an optimal solution.
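As a sketch of one common definition of a normalized data misfit (this particular form is an assumption; the app may normalize differently), the squared observed-minus-predicted residuals are weighted by assigned uncertainties and averaged over the data, so a value near 1 means the model fits the data to within its uncertainties:

```python
import numpy as np

# `normalized_misfit`, the 5% + floor uncertainty model, and the toy data
# are illustrative assumptions, not taken from the app.
def normalized_misfit(d_obs, d_pred, percent=0.05, floor=1e-3):
    uncertainty = percent * np.abs(d_obs) + floor
    return np.mean(((d_obs - d_pred) / uncertainty) ** 2)

d_obs = np.array([100.0, 120.0, 95.0])
print(normalized_misfit(d_obs, d_obs))         # 0.0: perfect fit
print(normalized_misfit(d_obs, 1.05 * d_obs))  # close to 1: fits within the 5% errors
```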
Parametric DC inversion app
Definition of variables:
- $\rho_1$: Resistivity of the first layer
- $\rho_2$: Resistivity of the cylinder
- xc: x location of cylinder center
- zc: z location of cylinder center
- r: radius of cylinder
- predmis: toggle which allows you to switch the bottom panel from predicted apparent resistivity to normalized data misfit
- surveyType: toggle which allows you to switch between survey types.
Known information
- resistivity of the second layer is 1000 $\Omega$m
- thickness of the first layer is known: 5m
Unknowns are: $\rho_1$, $\rho_2$, xc, zc, and r
End of explanation
"""
ledrui/week4_Ridge_Regression | .ipynb_checkpoints/Overfitting_Demo_Ridge_Lasso-checkpoint.ipynb | mit
import graphlab
import math
import random
import numpy
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: Overfitting demo
Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$:
End of explanation
"""
random.seed(98103)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
"""
Explanation: Create random values for x in interval [0,1)
End of explanation
"""
y = x.apply(lambda x: math.sin(4*x))
"""
Explanation: Compute y
End of explanation
"""
random.seed(1)
e = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
"""
Explanation: Add random Gaussian noise to y
End of explanation
"""
data = graphlab.SFrame({'X1':x,'Y':y})
data
"""
Explanation: Put data into an SFrame to manipulate later
End of explanation
"""
def plot_data(data):
plt.plot(data['X1'],data['Y'],'k.')
plt.xlabel('x')
plt.ylabel('y')
plot_data(data)
"""
Explanation: Create a function to plot the data, since we'll do it many times
End of explanation
"""
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
"""
Explanation: Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree:
End of explanation
"""
def polynomial_regression(data, deg):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,l1_penalty=0.,
validation_set=None,verbose=False)
return model
"""
Explanation: Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data":
End of explanation
"""
def plot_poly_predictions(data, model):
plot_data(data)
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Create 200 points in the x axis and compute the predicted value for each point
x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})
y_pred = model.predict(polynomial_features(x_pred,deg))
# plot predictions
plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')
plt.legend(loc='upper left')
plt.axis([0,1,-1.5,2])
"""
Explanation: Define function to plot data and predictions made, since we are going to use it many times.
End of explanation
"""
def print_coefficients(model):
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Get learned parameters as a list
w = list(model.coefficients['value'])
# Numpy has a nifty function to print out polynomials in a pretty way
# (We'll use it, but it needs the parameters in the reverse order)
print 'Learned polynomial for degree ' + str(deg) + ':'
w.reverse()
print numpy.poly1d(w)
"""
Explanation: Create a function that prints the polynomial coefficients in a pretty way :)
End of explanation
"""
model = polynomial_regression(data, deg=2)
"""
Explanation: Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above:
End of explanation
"""
print_coefficients(model)
"""
Explanation: Inspect learned parameters
End of explanation
"""
plot_poly_predictions(data,model)
"""
Explanation: Form and plot our predictions along a grid of x values:
End of explanation
"""
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
"""
Explanation: Fit a degree-4 polynomial
End of explanation
"""
model = polynomial_regression(data, deg=16)
print_coefficients(model)
"""
Explanation: Fit a degree-16 polynomial
End of explanation
"""
plot_poly_predictions(data,model)
"""
Explanation: Woah!!!! Those coefficients are crazy! On the order of 10^6.
End of explanation
"""
def polynomial_ridge_regression(data, deg, l2_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=l2_penalty,
validation_set=None,verbose=False)
return model
"""
Explanation: Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.
Ridge Regression
Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the squared 2-norm of the coefficients $\|w\|_2^2$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called "L2_penalty").
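For intuition, the ridge objective for a plain linear model (without the graphlab machinery used in this notebook) has a closed-form minimizer, $w = (X^TX + \lambda I)^{-1}X^Ty$. This is an illustrative numpy sketch; `ridge_fit` and the toy data are assumptions, not part of the original notebook:

```python
import numpy as np

# Closed-form ridge solution for a linear model with no intercept term.
def ridge_fit(X, y, l2_penalty):
    d = X.shape[1]
    return np.linalg.solve(X.T.dot(X) + l2_penalty * np.eye(d), X.T.dot(y))

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
print(ridge_fit(X, y, 0.0))   # least-squares solution [1. 2.]
print(ridge_fit(X, y, 10.0))  # same data, coefficients shrunk toward zero
```

Increasing `l2_penalty` shrinks both coefficients toward zero, which is exactly the large-coefficient penalty described above.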
Define our function to solve the ridge objective for a polynomial regression model of any degree:
End of explanation
"""
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
"""
Explanation: Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
End of explanation
"""
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
"""
Explanation: Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
End of explanation
"""
for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:
model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
print 'lambda = %.2e' % l2_penalty
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('Ridge, lambda = %.2e' % l2_penalty)
"""
Explanation: Let's look at fits for a sequence of increasing lambda values
End of explanation
"""
# LOO cross validation -- return the average MSE
def loo(data, deg, l2_penalty_values):
# Create polynomial features
    data = polynomial_features(data, deg)
    # Create as many folds for cross validation as number of data points
num_folds = len(data)
folds = graphlab.cross_validation.KFold(data,num_folds)
# for each value of l2_penalty, fit a model for each fold and compute average MSE
l2_penalty_mse = []
min_mse = None
best_l2_penalty = None
for l2_penalty in l2_penalty_values:
next_mse = 0.0
for train_set, validation_set in folds:
# train model
model = graphlab.linear_regression.create(train_set,target='Y',
l2_penalty=l2_penalty,
validation_set=None,verbose=False)
# predict on validation set
y_test_predicted = model.predict(validation_set)
# compute squared error
next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()
# save squared error in list of MSE for each l2_penalty
next_mse = next_mse/num_folds
l2_penalty_mse.append(next_mse)
if min_mse is None or next_mse < min_mse:
min_mse = next_mse
best_l2_penalty = l2_penalty
return l2_penalty_mse,best_l2_penalty
"""
Explanation: Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
End of explanation
"""
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)
"""
Explanation: Run LOO cross validation for "num" values of lambda, on a log scale
End of explanation
"""
plt.plot(l2_penalty_values,l2_penalty_mse,'k-')
plt.xlabel('$\lambda$ (L2 penalty)')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
"""
Explanation: Plot results of estimating LOO for each value of lambda
End of explanation
"""
best_l2_penalty
model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
"""
Explanation: Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
End of explanation
"""
def polynomial_lasso_regression(data, deg, l1_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,
l1_penalty=l1_penalty,
validation_set=None,
solver='fista', verbose=False,
max_iterations=3000, convergence_threshold=1e-10)
return model
"""
Explanation: Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\|w\|_1$.
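One way to see where the exact zeros come from: for a single coefficient with a unit-norm feature, the lasso coordinate update reduces to the soft-thresholding operator, which snaps small values exactly to 0, whereas the corresponding ridge update only rescales values and never reaches 0. This is an illustrative sketch under those simplifying assumptions; `soft_threshold` and `rho` (the feature/residual correlation) are not names from this notebook:

```python
# Soft-thresholding: the 1-coefficient lasso update (unit-norm feature assumed).
def soft_threshold(rho, l1_penalty):
    if rho > l1_penalty / 2.0:
        return rho - l1_penalty / 2.0
    if rho < -l1_penalty / 2.0:
        return rho + l1_penalty / 2.0
    return 0.0

print(soft_threshold(0.3, 1.0))  # 0.0: weak correlation -> coefficient dropped
print(soft_threshold(2.0, 1.0))  # 1.5: strong correlation -> shrunk but kept
```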
Define our function to solve the lasso objective for a polynomial regression model of any degree:
End of explanation
"""
for l1_penalty in [0.0001, 0.01, 0.1, 10]:
model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
print 'l1_penalty = %e' % l1_penalty
print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
"""
Explanation: Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "l1_penalty"
End of explanation
"""
nikbearbrown/Deep_Learning | NEU/Sai_Raghuram_Kothapalli_DL/Autoencoders.ipynb | mit
PATH = "/Users/raghu/Downloads/"
Image(filename = PATH + "autoencoder_schema.jpg", width=500, height=500)
"""
Explanation: Autoencoders
What are Autoencoders?
End of explanation
"""
from keras.layers import Input, Dense
from keras.models import Model
# this is the size of our encoded representations
encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
"""
Explanation: "Autoencoding" is a data compression algorithm where the compression and decompression functions are 1) data-specific, 2) lossy, and 3) learned automatically from examples rather than engineered by a human. Additionally, in almost all contexts where the term "autoencoder" is used, the compression and decompression functions are implemented with neural networks.
1) Autoencoders are data-specific, which means that they will only be able to compress data similar to what they have been trained on. This is different from, say, the MPEG-2 Audio Layer III (MP3) compression algorithm, which only holds assumptions about "sound" in general, but not about specific types of sounds. An autoencoder trained on pictures of faces would do a rather poor job of compressing pictures of trees, because the features it would learn would be face-specific.
2) Autoencoders are lossy, which means that the decompressed outputs will be degraded compared to the original inputs (similar to MP3 or JPEG compression). This differs from lossless arithmetic compression.
3) Autoencoders are learned automatically from data examples, which is a useful property: it means that it is easy to train specialized instances of the algorithm that will perform well on a specific type of input. It doesn't require any new engineering, just appropriate training data.
To build an autoencoder, you need three things: an encoding function, a decoding function, and a distance function that measures the amount of information loss between the compressed representation of your data and the decompressed representation (i.e. a "loss" function). The encoder and decoder will be chosen to be parametric functions (typically neural networks), and to be differentiable with respect to the distance function, so the parameters of the encoding/decoding functions can be optimized to minimize the reconstruction loss, using Stochastic Gradient Descent. It's simple! And you don't even need to understand any of these words to start using autoencoders in practice.
Are they good at data compression?
Usually, not really. In picture compression for instance, it is pretty difficult to train an autoencoder that does a better job than a basic algorithm like JPEG, and typically the only way it can be achieved is by restricting yourself to a very specific type of picture (e.g. one for which JPEG does not do a good job). The fact that autoencoders are data-specific makes them generally impractical for real-world data compression problems: you can only use them on data that is similar to what they were trained on, and making them more general thus requires lots of training data.
What are applications of autoencoders?
They are rarely used in practical applications. In 2012 they briefly found an application in greedy layer-wise pretraining for deep convolutional neural networks, but this quickly fell out of fashion as we started realizing that better random weight initialization schemes were sufficient for training deep networks from scratch. In 2014, batch normalization started allowing for even deeper networks, and from late 2015 we could train arbitrarily deep networks from scratch using residual learning.
Today two interesting practical applications of autoencoders are data denoising (which we feature later in this post), and dimensionality reduction for data visualization. With appropriate dimensionality and sparsity constraints, autoencoders can learn data projections that are more interesting than PCA or other basic techniques.
For 2D visualization specifically, t-SNE (pronounced "tee-snee") is probably the best algorithm around, but it typically requires relatively low-dimensional data. So a good strategy for visualizing similarity relationships in high-dimensional data is to start by using an autoencoder to compress your data into a low-dimensional space (e.g. 32 dimensional), then use t-SNE for mapping the compressed data to a 2D plane.
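That two-stage pipeline can be sketched as follows, assuming scikit-learn is available; the random array here merely stands in for the 32-dimensional codes an encoder (e.g. `encoder.predict(x_test)` from this notebook) would produce:

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for encoder output: 200 samples of 32-d codes.
codes = np.random.RandomState(0).rand(200, 32)

# Map the compressed codes down to 2-d for plotting.
codes_2d = TSNE(n_components=2, random_state=0).fit_transform(codes)
print(codes_2d.shape)  # (200, 2)
```

In practice you would scatter-plot `codes_2d`, colored by digit label, to inspect the similarity structure.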
Let's build the simplest Autoencoder
End of explanation
"""
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
"""
Explanation: Let's also create a separate encoder model:
End of explanation
"""
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
"""
Explanation: As well as the decoder model:
End of explanation
"""
autoencoder.compile(optimizer='adagrad', loss='binary_crossentropy')
"""
Explanation: Now let's train our autoencoder to reconstruct MNIST digits.
First, we'll configure our model to use a per-pixel binary crossentropy loss, and the Adagrad optimizer:
End of explanation
"""
from keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
"""
Explanation: Let's prepare our input data. We're using MNIST digits, and we're discarding the labels (since we're only interested in encoding/decoding the input images).
End of explanation
"""
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)
"""
Explanation: We will normalize all values between 0 and 1 and we will flatten the 28x28 images into vectors of size 784.
End of explanation
"""
autoencoder.fit(x_train, x_train,
epochs=100,
batch_size=32,
shuffle=True,
validation_data=(x_test, x_test))
"""
Explanation: Now let's train our autoencoder for 100 epochs:
End of explanation
"""
# encode and decode some digits
# note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
# use Matplotlib
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: After 100 epochs, the autoencoder seems to reach a stable train/test loss value of about 0.0932. We can try to visualize the reconstructed inputs and the encoded representations. We will use Matplotlib.
End of explanation
"""
from keras import regularizers
encoding_dim = 32
input_img = Input(shape=(784,))
# add a Dense layer with a L1 activity regularizer
encoded = Dense(encoding_dim, activation='relu',
activity_regularizer=regularizers.l1(10e-5))(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)
"""
Explanation: Here's what we get. The top row is the original digits, and the bottom row is the reconstructed digits. We are losing quite a bit of detail with this basic approach.
Adding a sparsity constraint on the encoded representations
In the previous example, the representations were only constrained by the size of the hidden layer (32). In such a situation, what typically happens is that the hidden layer is learning an approximation of PCA (principal component analysis). But another way to constrain the representations to be compact is to add a sparsity constraint on the activity of the hidden representations, so fewer units would "fire" at a given time. In Keras, this can be done by adding an activity_regularizer to our Dense layer:
End of explanation
"""
autoencoder.compile(optimizer='adagrad', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train,
epochs=250,
batch_size=25,
shuffle=True,
validation_data=(x_test, x_test))
"""
Explanation: Let's train this model for 250 epochs (with the added regularization the model is less likely to overfit and can be trained longer).
End of explanation
"""
# encode and decode some digits
# note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
# use Matplotlib
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: The models ends with a train loss of 0.1887 and test loss of 0.876. The difference between the two is mostly due to the regularization term being added to the loss during training (worth about 0.01).
Here's a visualization of our new results:
End of explanation
"""
input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded)
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
"""
Explanation: Deep autoencoder
We do not have to limit ourselves to a single layer as encoder or decoder, we could instead use a stack of layers, such as:
End of explanation
"""
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train,
epochs=100,
batch_size=100,
shuffle=True,
validation_data=(x_test, x_test))
"""
Explanation: Let's try this:
End of explanation
"""
justhalf/jupyter_notebooks | neural_network/CEC-test.ipynb | mit
import math
from IPython.display import Markdown, display
def printmd(string):
display(Markdown(string))
# Embedding
embedding = {}
embedding['a'] = (1.0, 1)
embedding['b'] = (-1, -1)
embedding['('] = (1, 0)
embedding[')'] = (0, 1)
# embedding['a'] = (-1, 0)
# embedding['b'] = (-0.5, 0)
# embedding['('] = (1, 1)
# embedding[')'] = (1, -1)
# Weights
w1=1.0
w2=1.0
w3=1.0
ws=1.0
memory_history = [0]
output_history = [0]
def sigmoid(x):
return 1.0/(1+math.exp(-x))
def gold(seq):
result = [0]
bracket_count = 0
for char in seq:
if char == '(':
bracket_count += 1
if char == ')':
bracket_count -= 1
result.append(sigmoid(bracket_count))
return result
def activate_memory(x1, x2):
prev_memory = memory_history[-1]
memory_history.append(ws*prev_memory + w1*x1 + w2*x2)
return memory_history[-1]
def activate_output(h):
output_history.append(sigmoid(w3*h))
return output_history[-1]
def predict(seq):
for char in seq:
activate_output(activate_memory(*embedding[char]))
result = output_history[:]
return result
def reset():
global memory_history, output_history
memory_history = [0]
output_history = [0]
def loss(gold_seq, pred_seq):
result = 0.0
per_position_loss = []
for idx, (corr, pred) in enumerate(zip(gold_seq, pred_seq)):
cur_loss = -(corr*math.log(pred) + (1-corr)*math.log(1-pred))
cur_loss -= -(corr*math.log(corr) + (1-corr)*math.log(1-corr))
result += cur_loss
per_position_loss.append(cur_loss)
return result, per_position_loss
def print_list(lst):
'''A convenience method to print a list of real numbers'''
as_str = ['{:+.3f}'.format(num) for num in lst]
print('[{}]'.format(', '.join(as_str)))
# See typical values of sigmoid
for i in range(5):
print('sigmoid({}) = {}'.format(i, sigmoid(i)))
"""
Explanation: Long Short-Term Memory (LSTM)
This page attempts to explain why LSTM was first proposed, and what are the core features together with some examples.
This is based on the paper Hochreiter and Schmidhuber. 1997. Long Short-Term Memory
Constant-Error Carrousel
The core feature of an LSTM unit as first proposed is the constant-error carrousel (CEC) which solves the vanishing gradient problem with standard RNN.
A CEC is a neural network unit which consists of a single neuron with a self-loop whose weight is fixed to 1.0, ensuring constant error flow when doing backpropagation.
<div style="text-align:center">
<img src="CEC.png"/>
<caption>Fig 1. Diagram of a single CEC unit</caption>
</div>
Now let's see an example of CEC at work. We will use a CEC to do a very simple task: recognizing whether the current character is inside a bracketed expression, with the opening bracket considered to be inside and the closing bracket considered to be outside, for simplicity. This is solvable only by a network that can store memory, since to recognize whether a character is inside a bracketed expression, we need to know that there is an opening bracket to the left of the current character which does not have a corresponding closing bracket.
The input alphabets are coming from the set: ${a, b, (, )}$ with the following 2-dimensional embedding:
$$
\begin{eqnarray}
emb(a) &=& (1, 1) \nonumber\\
emb(b) &=& (-1, -1) \nonumber\\
emb(() &=& (1, 0) \nonumber\\
emb()) &=& (0, 1) \nonumber
\end{eqnarray}
$$
For this task, we define a very simple network with two input units, one CEC unit, and one output unit with sigmoid activation ($\sigma(x) = \frac{1}{1 + e^{-x}}$), as follows:
<div style="text-align:center">
<img src="CEC-example.png"/>
<caption>Fig 2. Network used for the bracketed expression recognition</caption>
</div>
For this task, we define the loss function as the cross-entropy (CE) between the predicted and the true one:
$$
\begin{eqnarray}
\mathrm{CE}(x, y) &=& - (x\log(y) + (1-x)\log(1-y)) \nonumber\\
\mathrm{Loss}(\hat{o}_t, o_t) &=& \mathrm{CE}(\hat{o}_t, o_t) - \mathrm{CE}(\hat{o}_t, \hat{o}_t)
\end{eqnarray}
$$
with $\hat{o}_t$ and $o_t$ representing the target value (gold standard) and output value (network prediction), respectively, at time step $t$. The first term is the cross-entropy between the target value and the output value, and the second term is the entropy of the target value itself. Note that the second term is a constant, and serves just to make the minimum achievable loss 0 (a perfect output).
More specifically, we have:
$$
\begin{equation}
o_t = \sigma(w_3*s_t)
\end{equation}
$$
where $s_t$ is the output of the CEC unit (a.k.a. the memory), which depends on the previous value of the memory $s_{t-1}$, and the input $x_{t,1}$ and $x_{t,2}$ (representing the first and second dimension of the input at time step $t$):
$$
\begin{equation}
s_t = \underbrace{w_s \cdot s_{t-1}}_\text{previous value} + \underbrace{w_1 \cdot x_{t,1} + w_2 \cdot x_{t,2}}_\text{input}
\end{equation}
$$
where $w_s$ is the weight of the self-loop, which is fixed to 1.0. To make it clear why this value must be 1.0, however, the calculation below does not assume $w_s=1.0$.
End of explanation
"""
gold('a(a)a')[1:] # The first element is dummy
"""
Explanation: Now let's check the function calculating the target value. Basically we want it to output $\sigma(0)$ or $\sigma(1)$ when the output is outside or inside a bracketed expression, respectively.
End of explanation
"""
test_seq = 'ab(ab)ab'
reset()
w1 = 1.0
w2 = 1.0
w3 = 1.0
result = predict(test_seq)
correct = gold(test_seq)
print('Output: ', end='')
print_list(result[1:])
print('Target: ', end='')
print_list(correct[1:])
print('Loss : {:.3f}'.format(loss(correct[1:], result[1:])[0]))
"""
Explanation: Which is $\sigma(0), \sigma(1), \sigma(1), \sigma(0), \sigma(0)$, which is what we expect. So far so good.
<hr>
Now let's see what our network outputs
End of explanation
"""
def dLdw1(test_seq, gold_seq, pred_seq, state_seq, info):
result = 0.0
grad_str = '<div style="font-family:monaco; font-size:12px">dL/dw1 = '
for time_step in range(1, len(gold_seq)):
cur_dell = (pred_seq[time_step] - gold_seq[time_step]) * w3
cur_dell *= sum(ws**(step-1)*embedding[test_seq[step-1]][0] for step in range(1, time_step+1))
if cur_dell < 0:
color = 'red'
else:
color = 'blue'
grad_str += '{}<span style="color:{}">{:+.3f}</span>'.format(' + ' if time_step > 1 else '', color, cur_dell)
result += cur_dell
grad_str += ' = <span style="color:{}; text-decoration:underline">{:+.3f}</span></div>'.format(
'red' if result < 0 else 'blue', result)
# printmd(grad_str)
info[0] += grad_str
return result
def dLdw2(test_seq, gold_seq, pred_seq, state_seq, info):
result = 0.0
grad_str = '<div style="font-family:monaco; font-size:12px">dL/dw2 = '
for time_step in range(1, len(gold_seq)):
cur_dell = (pred_seq[time_step] - gold_seq[time_step]) * w3
cur_dell *= sum(ws**(step-1)*embedding[test_seq[step-1]][1] for step in range(1, time_step+1))
if cur_dell < 0:
color = 'red'
else:
color = 'blue'
grad_str += '{}<span style="color:{}">{:+.3f}</span>'.format(' + ' if time_step > 1 else '', color, cur_dell)
result += cur_dell
grad_str += ' = <span style="color:{}; text-decoration:underline">{:+.3f}</span></div>'.format(
'red' if result < 0 else 'blue', result)
# printmd(grad_str)
info[0] += grad_str
return result
def dLdw3(test_seq, gold_seq, pred_seq, state_seq, info):
result = 0.0
grad_str = '<div style="font-family:monaco; font-size:12px">dL/dw3 = '
for time_step in range(1, len(gold_seq)):
cur_dell = (pred_seq[time_step] - gold_seq[time_step]) * state_seq[time_step]
if cur_dell < 0:
color = 'red'
else:
color = 'blue'
grad_str += '{}<span style="color:{}">{:+.3f}</span>'.format(' + ' if time_step > 1 else '', color, cur_dell)
result += cur_dell
grad_str += ' = <span style="color:{}; text-decoration:underline">{:+.3f}</span></div>'.format(
'red' if result < 0 else 'blue', result)
# printmd(grad_str)
info[0] += grad_str
return result
"""
Explanation: We see that the loss is still non-zero, and we see that some values are incorrectly predicted.
Next we will see the gradient calculation in progress, so that we can update the weight to reduce the loss.
Calculating Gradients
To do the weight update, we need to calculate the partial derivative of the loss function with respect to each weight. We have three weight parameters $w_1, w_2$, and $w_3$, so we need to compute three different partial derivatives.
For ease of notation, we denote $\mathrm{Loss}_t = \mathrm{Loss}(\hat{o}_t, o_t)$ as the loss at time step $t$ and $\mathrm{Loss} = \sum_t \mathrm{Loss}_t$ as the total loss over one sequence.
Remember that our objective is to reduce the total loss.
$$
\begin{eqnarray}
\frac{\partial\mathrm{Loss}}{\partial w_i} & = & \sum_t\frac{\partial \mathrm{Loss}_t}{\partial w_i} \\
& = & \sum_t\frac{\partial \mathrm{Loss}_t}{\partial o_t} \cdot \frac{\partial o_t}{\partial w_i} \qquad \text{(by chain rule)}
\end{eqnarray}
$$
for $w_3$, we can already compute the gradient here, which is:
$$
\require{cancel}
\begin{eqnarray}
\frac{\partial\mathrm{Loss}}{\partial w_3} & = & \sum_t\frac{\partial \mathrm{Loss}_t}{\partial o_t} \cdot \frac{\partial o_t}{\partial w_3} \\
& = & \sum_t\underbrace{\frac{o_t - \hat{o}_t}{\cancel{o_t(1-o_t)}}}_{=\frac{\partial \mathrm{Loss}_t}{\partial o_t}} \cdot \underbrace{s_t \cdot \cancel{o_t(1-o_t)}}_{=\frac{\partial o_t}{\partial w_3}} \\
& = & \sum_t(o_t-\hat{o}_t)s_t
\end{eqnarray}
$$
for $w_1$ and $w_2$, we have:
$$
\begin{eqnarray}
\frac{\partial\mathrm{Loss}}{\partial w_i} & = & \sum_t\frac{\partial \mathrm{Loss}_t}{\partial o_t} \cdot \frac{\partial o_t}{\partial w_i} \\
& = & \sum_t \frac{o_t - \hat{o}_t}{o_t(1-o_t)} \cdot \frac{\partial o_t}{\partial s_t} \cdot \frac{\partial s_t}{\partial w_i} \\
& = & \sum_t \frac{o_t - \hat{o}_t}{\cancel{o_t(1-o_t)}} \cdot w_3\cdot \cancel{o_t(1-o_t)} \cdot \frac{\partial s_t}{\partial w_i} \\
& = & \sum_t (o_t - \hat{o}_t)w_3 \cdot \frac{\partial s_t}{\partial w_i} \\
& = & \sum_t (o_t - \hat{o}_t)w_3 \cdot \left(w_s\cdot\frac{\partial s_{t-1}}{\partial w_i} + x_{t,i}\right) \\
& = & \sum_t (o_t - \hat{o}_t)w_3 \cdot \left({w_s}^2\cdot\frac{\partial s_{t-2}}{\partial w_i} + w_s\cdot x_{t-1,i} + x_{t,i}\right) \\
& & \ldots \\
& = & \sum_t (o_t - \hat{o}_t)w_3 \cdot \left(\sum_{t'\leq t} {w_s}^{t-t'}x_{t',i}\right)
\end{eqnarray}
$$
Important Note on $w_s$!
We see that the gradient with respect to $w_1$ and $w_2$ contains the factor ${w_s}^{t-t'}$, where $t-t'$ can be as large as the input sequence length. So if $w_s \neq 1.0$, then either the gradient will vanish or blow up as the input sequence gets longer.
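A quick numerical illustration of this factor:

```python
# how w_s**(t - t') scales the gradient contribution from an input t - t' steps back
for ws in (0.9, 1.0, 1.1):
    print(ws, [ws ** k for k in (1, 10, 50)])
```

For a gap of 50 steps, $0.9^{50}$ shrinks the contribution to roughly 0.005, while $1.1^{50}$ inflates it to over 100; only $w_s = 1$ preserves it.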
End of explanation
"""
def experiment(test_seq, _w1=1.0, _w2=1.0, _w3=1.0, alpha=1e-1, max_iter=250, fixed_w3=True):
global w1, w2, w3
reset()
w1 = _w1
w2 = _w2
w3 = _w3
correct = gold(test_seq)
print('w1={:+.3f}, w2={:+.3f}, w3={:+.3f}'.format(w1, w2, w3))
for iter_num in range(max_iter):
result = predict(test_seq)
if iter_num < 15 or (iter_num % 50 == 49):
printmd('<div style="font-weight:bold">Iteration {}</div>'.format(iter_num))
print('Output: ', end='')
print_list(result[1:])
print('Target: ', end='')
print_list(correct[1:])
print('Memory: ', end='')
print_list(memory_history[1:])
total_loss, per_position_loss = loss(correct[1:], result[1:])
info = ['', iter_num]
info[0] = ('<div>Loss: <span style="font-weight:bold">{:.5f}</span>' +
'= <span style="font-family:monaco; font-size:12px">').format(total_loss)
for idx, per_pos_loss in enumerate(per_position_loss):
info[0] += '{}{:.3f}'.format(' + ' if idx > 0 else '', per_pos_loss)
info[0] += '</span></div>'
# printmd(loss_str)
w1 -= alpha * dLdw1(test_seq, correct, result, memory_history, info)
w2 -= alpha * dLdw2(test_seq, correct, result, memory_history, info)
if not fixed_w3:
w3 -= alpha * dLdw3(test_seq, correct, result, memory_history, info)
if iter_num < 15 or (iter_num % 50 == 49):
printmd(info[0])
print('w1={:+.3f}, w2={:+.3f}, w3={:+.3f}'.format(w1, w2, w3))
print()
reset()
return w1, w2, w3
embedding['a'] = (1.0, 1)
embedding['b'] = (-1, -1)
embedding['('] = (1, 0)
embedding[')'] = (0, 1)
w1, w2, w3 = experiment('ab(ab)bb', _w1=1.0, _w2=1.0, max_iter=250, alpha=1e-1, fixed_w3=True)
printmd('## Test on longer sequence')
experiment('aabba(aba)bab', _w1=w1, _w2=w2, _w3=w3, alpha=1e-2, max_iter=100)
"""
Explanation: Experiment
Now we define an experiment function that takes in the initial values of all the weights, the learning rate, and the maximum number of iterations. We also want to experiment with fixing the weight $w_3$ (i.e., keeping it unlearned).
The code below will print the total loss, the loss at each time step, the output, target, and memory at each time step, and also the gradient for each learned parameter at each time step.
End of explanation
"""
w4 = 1.0
w5 = 1.0
input_history = [0]
gate_history = [0]
def reset_gated():
global memory_history, output_history, input_history, gate_history
memory_history = [0]
output_history = [0]
input_history = [0]
gate_history = [0]
def activate_input(x1, x2):
result = (w1*x1+w2*x2)
input_history.append(result)
return result
def activate_gate(x1, x2, bilinear_gate=True):
if bilinear_gate:
result = w4 + w5*x1*x2 # Bilinear gate
else:
result = sigmoid(w4*x1+w5*x2) # The true linear gate
gate_history.append(result)
return result
def dLdw1_gated(test_seq, gold_seq, pred_seq, state_seq, input_seq, gate_seq, info, bilinear_gate=True):
result = 0.0
grad_str = '<div style="font-family:monaco; font-size:12px">dL/dw1 = '
for time_step in range(1, len(gold_seq)):
cur_dell = (pred_seq[time_step] - gold_seq[time_step]) * w3
cur_dell *= sum(embedding[test_seq[step-1]][0]*gate_seq[step] for step in range(1, time_step+1))
if cur_dell < 0:
color = 'red'
else:
color = 'blue'
grad_str += '{}<span style="color:{}">{:+.3f}</span>'.format(' + ' if time_step > 1 else '', color, cur_dell)
result += cur_dell
grad_str += ' = <span style="color:{}; text-decoration:underline">{:+.3f}</span></div>'.format(
'red' if result < 0 else 'blue', result)
# printmd(grad_str)
info[0] += grad_str
return result
def dLdw2_gated(test_seq, gold_seq, pred_seq, state_seq, input_seq, gate_seq, info, bilinear_gate=True):
result = 0.0
grad_str = '<div style="font-family:monaco; font-size:12px">dL/dw2 = '
for time_step in range(1, len(gold_seq)):
cur_dell = (pred_seq[time_step] - gold_seq[time_step]) * w3
cur_dell *= sum(embedding[test_seq[step-1]][1]*gate_seq[step] for step in range(1, time_step+1))
if cur_dell < 0:
color = 'red'
else:
color = 'blue'
grad_str += '{}<span style="color:{}">{:+.3f}</span>'.format(' + ' if time_step > 1 else '', color, cur_dell)
result += cur_dell
grad_str += ' = <span style="color:{}; text-decoration:underline">{:+.3f}</span></div>'.format(
'red' if result < 0 else 'blue', result)
# printmd(grad_str)
info[0] += grad_str
return result
def dLdw4_gated(test_seq, gold_seq, pred_seq, state_seq, input_seq, gate_seq, info, bilinear_gate=True):
result = 0.0
grad_str = '<div style="font-family:monaco; font-size:12px">dL/dw4 = '
for time_step in range(1, len(gold_seq)):
cur_dell = (pred_seq[time_step] - gold_seq[time_step]) * w3
if bilinear_gate:
cur_dell *= sum(input_seq[step] for step in range(1, time_step+1))
else:
cur_dell *= sum(embedding[test_seq[step-1]][0]*gate_seq[step]*input_seq[step]*(1-gate_seq[step])
for step in range(1,time_step+1))
if cur_dell < 0:
color = 'red'
else:
color = 'blue'
grad_str += '{}<span style="color:{}">{:+.3f}</span>'.format(' + ' if time_step > 1 else '', color, cur_dell)
result += cur_dell
grad_str += ' = <span style="color:{}; text-decoration:underline">{:+.3f}</span></div>'.format(
'red' if result < 0 else 'blue', result)
# printmd(grad_str)
info[0] += grad_str
return result
def dLdw5_gated(test_seq, gold_seq, pred_seq, state_seq, input_seq, gate_seq, info, bilinear_gate=True):
result = 0.0
grad_str = '<div style="font-family:monaco; font-size:12px">dL/dw5 = '
for time_step in range(1, len(gold_seq)):
cur_dell = (pred_seq[time_step] - gold_seq[time_step]) * w3
if bilinear_gate:
cur_dell *= sum(embedding[test_seq[step-1]][0]*embedding[test_seq[step-1]][1]*input_seq[step]
for step in range(1, time_step+1))
else:
cur_dell *= sum(embedding[test_seq[step-1]][1]*gate_seq[step]*input_seq[step]*(1-gate_seq[step])
for step in range(1,time_step+1))
if cur_dell < 0:
color = 'red'
else:
color = 'blue'
grad_str += '{}<span style="color:{}">{:+.3f}</span>'.format(' + ' if time_step > 1 else '', color, cur_dell)
result += cur_dell
grad_str += ' = <span style="color:{}; text-decoration:underline">{:+.3f}</span></div>'.format(
'red' if result < 0 else 'blue', result)
# printmd(grad_str)
info[0] += grad_str
return result
def activate_memory_gated():
memory_history.append(ws*memory_history[-1] + input_history[-1]*gate_history[-1])
return memory_history[-1]
def predict_gated(seq):
for char in seq:
activate_input(*embedding[char])
activate_gate(*embedding[char])
activate_output(activate_memory_gated())
result = output_history[:]
return result
def experiment_gated(test_seq, _w1=1.0, _w2=1.0, _w3=1.0, _w4=1.0, _w5=1.0, alpha=1e-1, max_iter=750,
bilinear_gate=True, fixed_w3=True, fixed_w4=False, fixed_w5=False):
global w1, w2, w3, w4, w5
reset_gated()
w1 = _w1
w2 = _w2
w3 = _w3
w4 = _w4
w5 = _w5
correct = gold(test_seq)
print('w1={:+.3f}, w2={:+.3f}, w3={:+.3f}, w4={:+.3f}, w5={:+.3f}'.format(w1, w2, w3, w4, w5))
for iter_num in range(max_iter):
result = predict_gated(test_seq)
if iter_num < 15 or (iter_num % 50 == 49):
printmd('<div style="font-weight:bold">Iteration {}</div>'.format(iter_num))
print('Output: ', end='')
print_list(result[1:])
print('Target: ', end='')
print_list(correct[1:])
print('Memory: ', end='')
print_list(memory_history[1:])
print('Input : ', end='')
print_list(input_history[1:])
print('Gate : ', end='')
print_list(gate_history[1:])
total_loss, per_position_loss = loss(correct[1:], result[1:])
info = ['', iter_num]
info[0] = ('<div>Loss: <span style="font-weight:bold">{:.5f}</span>' +
'= <span style="font-family:monaco">').format(total_loss)
for idx, per_pos_loss in enumerate(per_position_loss):
info[0] += '{}{:.3f}'.format(' + ' if idx > 0 else '', per_pos_loss)
info[0] += '</span></div>'
# printmd(loss_str)
w1 -= alpha * dLdw1_gated(test_seq, correct, result, memory_history, input_history, gate_history,
info, bilinear_gate)
w2 -= alpha * dLdw2_gated(test_seq, correct, result, memory_history, input_history, gate_history,
info, bilinear_gate)
if not fixed_w3:
w3 -= alpha * dLdw3(test_seq, correct, result, memory_history, info, bilinear_gate)
if not fixed_w4:
w4 -= alpha * dLdw4_gated(test_seq, correct, result, memory_history, input_history, gate_history,
info, bilinear_gate)
if not fixed_w5:
w5 -= alpha * dLdw5_gated(test_seq, correct, result, memory_history, input_history, gate_history,
info, bilinear_gate)
if iter_num < 15 or (iter_num % 50 == 49):
printmd(info[0])
print('w1={:+.3f}, w2={:+.3f}, w3={:+.3f}, w4={:+.3f}, w5={:+.3f}'.format(w1, w2, w3, w4, w5))
print()
reset_gated()
return w1, w2, w3, w4, w5
embedding['a'] = (1.0, 1)
embedding['b'] = (-1, -1)
embedding['('] = (1, 0)
embedding[')'] = (0, 1)
experiment_gated('ab(ab)bb', _w1=1.0, _w2=1.0, _w4=1.0, _w5=1.0, alpha=1e-1, max_iter=250, fixed_w3=True)
"""
Explanation: Let's Try Adding Input Gate
We saw in the previous experiment that there are conflicting updates (at one point of the sequence the gradient is positive, while at another point it is negative). The original paper explains that this is caused by the weight into the memory cell having to update the memory at one point (when we see brackets, in this case) and to retain information at another point (when we see any other characters).
A core feature of the LSTM designed to resolve this issue is its gates: an input gate and an output gate that control the flow of information through the memory cells.
In the following, we try adding an input gate, which the network should learn to activate (value = 1) only when it sees an opening or closing bracket. In essence, the input gate tells the network which inputs are relevant and which are not.
Note: Below we have two versions of the input gate: linear with a sigmoid, and bilinear with a bias. The weights $w_4$ and $w_5$ have different interpretations depending on the gate chosen. The bilinear gate was added because this input embedding does not allow the linear gate to be useful.
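As a small check of why the bilinear form can work here, we can evaluate it with the gate weights $w_4 = 1.0$, $w_5 = -1.0$ that the discussion later identifies as the true gate, on the embeddings used below:

```python
# bilinear gate w4 + w5*x1*x2 on the four embeddings defined in the next cell
embedding = {'a': (1.0, 1.0), 'b': (-1.0, -1.0), '(': (1.0, 0.0), ')': (0.0, 1.0)}
w4, w5 = 1.0, -1.0  # the "true gate" weights

gates = {ch: w4 + w5 * x1 * x2 for ch, (x1, x2) in embedding.items()}
```

Letters get a gate value of 0 (ignored) while both brackets get 1 (passed through), which is exactly the behavior we want the network to learn.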
End of explanation
"""
import random
a_1 = 1.0 + 0.2*(random.random()-0.5)
a_2 = 1.0/a_1
b_1 = -1.0 + 0.2*(random.random()-0.5)
b_2 = 1.0/b_1
embedding['a'] = (a_1, a_2)
embedding['b'] = (b_1, b_2)
embedding['('] = (1, 0)
embedding[')'] = (0, 1)
from pprint import pprint
pprint(embedding)
"""
Explanation: Discussion
We see that after adding the input gate (assuming the gate can exhibit the same properties as the true input gate, realized here by the bilinear gate), the network reaches the optimum (loss = 0.0) faster (after iteration 199) than the one without an input gate (only after iteration 249), even though the gated version has more parameters to learn (two more: $w_4$ and $w_5$) and a higher initial loss (due to the initially incorrect gate values).
We also see that the learned gate is not the true gate we want. This is because the input is already separable even without an input gate.
Noisy embedding
In the previous experiment, the learned input gate is not the true gate we want, but that is because the input embedding is ideal, i.e., it allows separation even without an input gate.
Now let's experiment with a noisy embedding, for which the true function cannot be obtained without an input gate.
End of explanation
"""
embedding['a'] = (a_1, a_2)
embedding['b'] = (b_1, b_2)
embedding['('] = (1, 0)
embedding[')'] = (0, 1)
experiment('ab(ab)bb', _w1=1.0, _w2=1.0, alpha=1e-1, max_iter=250, fixed_w3=True)
embedding['a'] = (a_1, a_2)
embedding['b'] = (b_1, b_2)
embedding['('] = (1, 0)
embedding[')'] = (0, 1)
experiment_gated('ab(ab)bb', _w1=1.0, _w2=1.0, _w4=1.0, _w5=1.0, alpha=1e-1, max_iter=250, fixed_w3=True)
"""
Explanation: Here we construct the input embedding such that the other characters carry noise that should be ignored.
Let's see how the two models perform in this case.
End of explanation
"""
# Trying nested brackets
experiment_gated('ab(aaa(bab)b)')
"""
Explanation: Now we see that the input gate is closer to the true gate: it tries to ignore irrelevant inputs by pushing their gate values closer to 0. Although in this case it is still far from the true gate (the irrelevant inputs still get positive scores), it has a clear impact on the loss, which is an order of magnitude lower. And if we run more iterations, the gate is eventually learned correctly ($w_4 = 1.0$, $w_5 = -1.0$).
Notice that in the network without an input gate, the overall gradient is zero at the end, but the gradient at each position in the sequence is not zero (and its magnitude is not small), meaning the network ends up at a non-optimal point. In the gated version, the gradient approaches zero at every position.
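The distinction can be caricatured with made-up numbers: a zero total gradient does not mean the per-position gradients are zero.

```python
# hypothetical per-position gradients for the ungated network at convergence
per_position = [1.0, -0.5, -0.5]
total = sum(per_position)
# the overall update is zero, yet every position still pushes the weight hard
```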
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/tutorials/distribute/multi_worker_with_estimator.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import tensorflow_datasets as tfds
import tensorflow as tf
import os, json
"""
Explanation: Multi-worker training with Estimator
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/multi_worker_with_estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/multi_worker_with_estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/multi_worker_with_estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/distribute/multi_worker_with_estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details.
Overview
Note: While you can use Estimators with tf.distribute API, it's recommended to use Keras with tf.distribute, see multi-worker training with Keras. Estimator training with tf.distribute.Strategy has limited support.
This tutorial demonstrates how tf.distribute.Strategy can be used for distributed multi-worker training with tf.estimator. If you write your code using tf.estimator, and you're interested in scaling beyond a single machine with high performance, this tutorial is for you.
Before getting started, please read the distribution strategy guide. The multi-GPU training tutorial is also relevant, because this tutorial uses the same model.
Setup
First, setup TensorFlow and the necessary imports.
End of explanation
"""
tf.compat.v1.disable_eager_execution()
"""
Explanation: Note: Starting from TF 2.4, multi-worker mirrored strategy fails with estimators if run with eager execution enabled (the default). The error in TF 2.4 is TypeError: cannot pickle '_thread.lock' object; see issue #46556 for details. The workaround is to disable eager execution.
End of explanation
"""
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def input_fn(mode, input_context=None):
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN else
datasets['test'])
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
if input_context:
mnist_dataset = mnist_dataset.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
return mnist_dataset.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
"""
Explanation: Input function
This tutorial uses the MNIST dataset from TensorFlow Datasets. The code here is similar to the multi-GPU training tutorial with one key difference: when using Estimator for multi-worker training, it is necessary to shard the dataset by the number of workers to ensure model convergence. The input data is sharded by worker index, so that each worker processes 1/num_workers distinct portions of the dataset.
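Conceptually, Dataset.shard(num_shards, index) keeps the elements whose index modulo num_shards equals index. A plain-Python illustration of that round-robin split (not the tutorial's code):

```python
# round-robin sharding of 10 elements across 2 workers, one shard per worker
data = list(range(10))
num_workers = 2
shards = [data[worker_id::num_workers] for worker_id in range(num_workers)]
# worker 0 sees [0, 2, 4, 6, 8]; worker 1 sees [1, 3, 5, 7, 9]
```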
End of explanation
"""
LEARNING_RATE = 1e-4
def model_fn(features, labels, mode):
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
logits = model(features, training=False)
if mode == tf.estimator.ModeKeys.PREDICT:
predictions = {'logits': logits}
return tf.estimator.EstimatorSpec(labels=labels, predictions=predictions)
optimizer = tf.compat.v1.train.GradientDescentOptimizer(
learning_rate=LEARNING_RATE)
loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction=tf.keras.losses.Reduction.NONE)(labels, logits)
loss = tf.reduce_sum(loss) * (1. / BATCH_SIZE)
if mode == tf.estimator.ModeKeys.EVAL:
return tf.estimator.EstimatorSpec(mode, loss=loss)
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=optimizer.minimize(
loss, tf.compat.v1.train.get_or_create_global_step()))
"""
Explanation: Another reasonable approach to achieve convergence would be to shuffle the dataset with distinct seeds at each worker.
Multi-worker configuration
One of the key differences in this tutorial (compared to the multi-GPU training tutorial) is the multi-worker setup. The TF_CONFIG environment variable is the standard way to specify the cluster configuration to each worker that is part of the cluster.
There are two components of TF_CONFIG: cluster and task. cluster provides information about the entire cluster, namely the workers and parameter servers in the cluster. task provides information about the current task. The first component cluster is the same for all workers and parameter servers in the cluster, and the second component task is different on each worker and parameter server and specifies its own type and index. In this example, the task type is worker and the task index is 0.
For illustration purposes, this tutorial shows how to set a TF_CONFIG with 2 workers on localhost. In practice, you would create multiple workers on an external IP address and port, and set TF_CONFIG on each worker appropriately, i.e. modify the task index.
Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail. See the keras version of this tutorial for an example of how you can test run multiple workers on a single machine.
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"]
},
'task': {'type': 'worker', 'index': 0}
})
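For completeness, the second worker would receive the same cluster spec with task index 1 (a sketch mirroring the example above; like it, this should not be executed in Colab):

```python
import json

tf_config_worker1 = {
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 1}
}
# on the second machine only:
# os.environ['TF_CONFIG'] = json.dumps(tf_config_worker1)
```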
Define the model
Write the layers, the optimizer, and the loss function for training. This tutorial defines the model with Keras layers, similar to the multi-GPU training tutorial.
End of explanation
"""
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
"""
Explanation: Note: Although the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size.
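One common heuristic (an assumption on our part, not something this tutorial prescribes) is linear scaling: multiply the single-worker learning rate by the number of workers, since the global batch size grows by that factor:

```python
base_learning_rate = 1e-4  # LEARNING_RATE tuned for a single worker
num_workers = 2            # length of the 'worker' list in TF_CONFIG
scaled_learning_rate = base_learning_rate * num_workers
```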
MultiWorkerMirroredStrategy
To train the model, use an instance of tf.distribute.experimental.MultiWorkerMirroredStrategy. MultiWorkerMirroredStrategy creates copies of all variables in the model's layers on each device across all workers. It uses CollectiveOps, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The tf.distribute.Strategy guide has more details about this strategy.
End of explanation
"""
config = tf.estimator.RunConfig(train_distribute=strategy)
classifier = tf.estimator.Estimator(
model_fn=model_fn, model_dir='/tmp/multiworker', config=config)
tf.estimator.train_and_evaluate(
classifier,
train_spec=tf.estimator.TrainSpec(input_fn=input_fn),
eval_spec=tf.estimator.EvalSpec(input_fn=input_fn)
)
"""
Explanation: Train and evaluate the model
Next, specify the distribution strategy in the RunConfig for the estimator, and train and evaluate by invoking tf.estimator.train_and_evaluate. This tutorial distributes only the training by specifying the strategy via train_distribute. It is also possible to distribute the evaluation via eval_distribute.
End of explanation
"""
|
jorisvandenbossche/geopandas | doc/source/gallery/polygon_plotting_with_folium.ipynb | bsd-3-clause | import geopandas as gpd
import folium
import matplotlib.pyplot as plt
"""
Explanation: Plotting polygons with Folium
This example demonstrates how to plot polygons on a Folium map.
End of explanation
"""
path = gpd.datasets.get_path('nybb')
df = gpd.read_file(path)
df.head()
"""
Explanation: Load geometries
This example uses the nybb dataset, which contains polygons of New York boroughs.
End of explanation
"""
df.plot(figsize=(6, 6))
plt.show()
"""
Explanation: Plot from the original dataset
End of explanation
"""
df.crs
"""
Explanation: Notice that the values of the polygon geometries do not directly represent the values of latitude or longitude in a geographic coordinate system.
To view the coordinate reference system of the geometry column, access the crs attribute:
End of explanation
"""
# Use WGS 84 (epsg:4326) as the geographic coordinate system
df = df.to_crs(epsg=4326)
print(df.crs)
df.head()
df.plot(figsize=(6, 6))
plt.show()
"""
Explanation: The epsg:2263 crs is a projected coordinate reference system with linear units (ft in this case).
As folium (i.e. leaflet.js) by default accepts values of latitude and longitude (angular units) as input, we need to project the geometry to a geographic coordinate system first.
End of explanation
"""
m = folium.Map(location=[40.70, -73.94], zoom_start=10, tiles='CartoDB positron')
m
"""
Explanation: Create Folium map
End of explanation
"""
for _, r in df.iterrows():
# Without simplifying the representation of each borough,
# the map might not be displayed
sim_geo = gpd.GeoSeries(r['geometry']).simplify(tolerance=0.001)
geo_j = sim_geo.to_json()
geo_j = folium.GeoJson(data=geo_j,
style_function=lambda x: {'fillColor': 'orange'})
folium.Popup(r['BoroName']).add_to(geo_j)
geo_j.add_to(m)
m
"""
Explanation: Add polygons to map
Overlay the boundaries of boroughs on map with borough name as popup:
End of explanation
"""
# Project to NAD83 projected crs
df = df.to_crs(epsg=2263)
# Access the centroid attribute of each polygon
df['centroid'] = df.centroid
"""
Explanation: Add centroid markers
In order to properly compute geometric properties, in this case centroids, of the geometries, we need to project the data to a projected coordinate system.
End of explanation
"""
# Project to WGS84 geographic crs
# geometry (active) column
df = df.to_crs(epsg=4326)
# Centroid column
df['centroid'] = df['centroid'].to_crs(epsg=4326)
df.head()
for _, r in df.iterrows():
lat = r['centroid'].y
lon = r['centroid'].x
folium.Marker(location=[lat, lon],
popup='length: {} <br> area: {}'.format(r['Shape_Leng'], r['Shape_Area'])).add_to(m)
m
"""
Explanation: Since we're again adding a new geometry to the Folium map, we need to project the geometry back to a geographic coordinate system with latitude and longitude values.
End of explanation
"""
|
yvesdubief/UVM-ME249-CFD | ME249-Lecture-0.ipynb | gpl-2.0 | %matplotlib inline
# plots graphs within the notebook
%config InlineBackend.figure_format='svg' # render inline figures as SVG for sharper plots
import matplotlib.pyplot as plt # imports the plotting library, hereafter referred to as plt
import numpy as np
"""
Explanation: Figure 1. Sketch of a cell (top left) with the horizontal (red) and vertical (green) velocity nodes and the cell-centered node (blue). Definition of the normal vector to "surface" (segment) $S_{i+\frac{1}{2},j}$ and $S_{i,j+\frac{1}{2}}$ (top right). Sketch of uniform grid (bottom).
<h1>Derivation of 1D Transport Equation</h1>
<h2>1.1 1D Transport Without Diffusion</h2>
Consider a small control surface (cell) of dimensions $\Delta x\times\Delta y$ within which, we know the velocities on the surfaces $u_{i\pm\frac{1}{2},j}$ and $v_{i,j\pm\frac{1}{2}}$ and a quantity $\phi_{i,j}$ at the center of the cell. This quantity may be temperature, or the concentration of chemical specie. The variation in time of $\phi$ within the cell is equal to the amount of $\phi$ that is flowing in and out of the cell through the boundaries of cell. The velocity vector is defined as
$$
\vec{u}=u\vec{e}_x+v\vec{e}_y
$$
The fluxes of $\phi$ across the right-hand-side and left-hand-side vertical boundaries are, respectively:
$$
\int_{S_{i+1/2,j}}\phi(\vec{u}_{i+\frac{1}{2},j}\cdot\vec{n}_{i+\frac{1}{2},j})dy\text{ and }\int_{S_{i-1/2,j}}\phi(\vec{u}_{i-\frac{1}{2},j}\cdot\vec{n}_{i-\frac{1}{2},j})dy
$$
In the configuration depicted in Figure 1, the mass or heat variation is equal to the flux of $\phi$ entering the cell minus the flux exiting the cell, or:
$$
-\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j}\Delta y + \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}\Delta y \text{, when $\Delta y\rightarrow 0$}
$$
Assuming that there is no vertical velocity ($v=0$), this sum is equal to the variation of $\phi$ within the cell,
$$
\frac{\partial}{\partial t}\iint_{V_{i,j}}\phi dxdy\approx\frac{\partial \phi_{i,j}}{\partial t}\Delta x\Delta y \text{, when $\Delta x\rightarrow 0$ and $\Delta y\rightarrow 0$}
$$
yielding
$$
\frac{\partial \phi_{i,j}}{\partial t}\Delta x\Delta y=-\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j}\Delta y + \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}\Delta y\;,
$$
reducing to
$$
\frac{\partial \phi_{i,j}}{\partial t}=-\frac{\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j} - \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}}{\Delta x}\;.
$$
In the limit of $\Delta x\rightarrow 0$, we obtain the conservative form of the pure advection equation:
<p class='alert alert-danger'>
$$
\frac{\partial \phi}{\partial t}+\frac{\partial u\phi}{\partial x}=0
$$
</p>
<h2>1.2 Coding the Pure Advection Equation</h2>
The following takes you through the steps to solve numerically the pure advection equation with python. The boundary conditions are (all variables are non-dimensional):
<ol>
<li> Length of the domain: $0\leq x\leq L$ and $L=8\pi$ </li>
<li> Constant velocity $u_0=1$</li>
<li> Inlet $x=0$ and outlet $x=L$: zero-flux variation (in space)</li>
<li> Initial condition:
$$\phi(x,t=0)=\begin{cases}
1+\cos\left(x-\frac{L}{2}\right)&,\text{ for }\left\vert x-\frac{L}{2}\right\vert\leq\pi\\
0&,\text{ for }\left\vert x-\frac{L}{2}\right\vert>\pi
\end{cases}
$$
</li>
</ol>
Here you will <b>discretize</b> your domain in $N$ small control volumes, such that the size of each control volume is
<p class='alert alert-danger'>
$$
\Delta x = \frac{L}{N}
$$
</p>
You will simulate the system defined so far of a time $T$, to be decided, discretized by small time-steps
<p class='alert alert-danger'>
$$
\Delta t = \frac{T}{N_t}
$$
</p>
We adopt the following index convention:
<ul>
<li> Each cell is labeled by a unique integer $i$ with $i\in[0,N-1]$. This is a python convention that vector and matrices start with index 0, instead of 1 for matlab.</li>
<li> A variable defined at the center of cell $i$ is noted with the subscript $i$: $\phi_i$.</li>
<li> A variable defined at the surface of cell $i$ is noted with the subscript $i\pm1/2$: $\phi_{i\pm 1/2}$</li>
<li> The solution $\phi(x_i,t_n)$, where
$$
x_i = i\Delta x\text{ with $i\in[0,N-1]$, and }t_n=n\Delta t\text{ with $n\in[0,N_t]$,}
$$
is noted $\phi_i^n$.</li>
</ul>
At first we will try to solve the advection equation with the following discretization:
$$
\frac{\phi_i^{n+1}-\phi_i^n}{\Delta t}=-\frac{\phi^n_{i+\frac{1}{2}}u_{i+\frac{1}{2}} - \phi^n_{i-\frac{1}{2}}u_{i-\frac{1}{2}}}{\Delta x}
$$
or
<p class='alert alert-info'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(\phi^n_{i+\frac{1}{2}}u_{i+\frac{1}{2}} - \phi^n_{i-\frac{1}{2}}u_{i-\frac{1}{2}}\right)
$$
</p>
The velocity $u$ is constant and therefore defined anywhere in the system (cell centers or cell surfaces); however, $\phi$ is defined only at the cell center, requiring an interpolation at the cell surfaces $i\pm 1/2$. For now you will consider a mid-point interpolation:
<p class='alert alert-info'>
$$
\phi^n_{i+\frac{1}{2}} = \frac{\phi^n_{i+1}+\phi^n_i}{2}
$$
</p>
Lastly, our governing equation can be recast in terms of the flux of $\phi$ across the cell surfaces:
<p class='alert alert-info'>
$$
F^n_{i\pm\frac{1}{2}}=\phi^n_{i\pm\frac{1}{2}}u_{i\pm\frac{1}{2}}=\frac{\phi^n_{i\pm 1}+\phi^n_i}{2}u_{i\pm\frac{1}{2}}
$$
</p>
yielding the equation you will attempt to solve:
<p class='alert alert-danger'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}\right)
$$
</p>
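The update rule above can be sketched as a single function. This is a minimal sketch, not the full solver developed below: the interior fluxes use the mid-point interpolation, and the boundary faces simply copy the adjacent cell value (one possible reading of the zero-flux-variation boundary condition):

```python
import numpy as np

def advect_step(phi, u, dt, dx):
    """One explicit step: phi^{n+1} = phi^n - dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    F = np.zeros(len(phi) + 1)
    # interior faces: mid-point interpolation of phi times the face velocity
    F[1:-1] = 0.5 * (phi[:-1] + phi[1:]) * u[1:-1]
    # boundary faces: zero-gradient, copy the adjacent cell value
    F[0] = phi[0] * u[0]
    F[-1] = phi[-1] * u[-1]
    return phi - (dt / dx) * (F[1:] - F[:-1])

# sanity check: a uniform field is unchanged, since in- and out-fluxes balance
phi1 = advect_step(np.ones(10), np.ones(11), dt=0.01, dx=0.1)
```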
<h3> Step 1: Import libraries</h3>
Python has a huge collection of libraries contained functions to plot, build matrices, performed mathematical operations, etc. To avoid overloading the CPU and to allow you to choose the best library for your code, you need to first import the libraries you will need, here:
<ul>
<li> <FONT FACE="courier" style="color:blue">matplotlib </FONT>: <a href="http://matplotlib.org">http://matplotlib.org</a> for examples of plots you can make in python.</li>
<li><FONT FACE="courier" style="color:blue">numpy </FONT>: <a href="http://docs.scipy.org/doc/numpy/user/index.html">http://docs.scipy.org/doc/numpy/user/index.html</a> Library for operations on matrices and vectors.</li>
</ul>
Loading a libray in python is done by the command <FONT FACE="courier" style="color:blue">import</FONT>. The best practice is to take the habit to use
<FONT FACE="courier" style="color:blue">import [library] as [library_nickname]</FONT>
For example, the library <FONT FACE="courier" style="color:blue">numpy</FONT> contains vector and matrices operations such <FONT FACE="courier" style="color:blue">zeros</FONT>, which allocate memory for a vector or a matrix of specified dimensions and set all components of the vector and matrix to zero. If you import numpy as np,
<FONT FACE="courier" style="color:blue">import numpy as np</FONT>
the allocation of memory for matrix A of dimensions n and m becomes
<FONT FACE="courier" style="color:blue">A = np.zeros((n,m))</FONT>
The following is a standard initialization for the python codes you will write in this course:
End of explanation
"""
L = 8*np.pi
N = 200
dx = L/N
u_0 = 1.
phi = np.zeros(N)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
"""
Explanation: The first two lines deal with the ability to show your graphs (generated via matplotlib) within this notebook; the remaining two lines import matplotlib's sublibrary pyplot as <FONT FACE="courier" style="color:blue">plt</FONT> and numpy as <FONT FACE="courier" style="color:blue">np</FONT>.
<h3>Step 2: Initialization of variables and allocations of memory</h3>
The first real coding task is to define your variables, with the exception of the time-related variables (you will understand why). Note that in our equation, we can store $\phi^n$ into one variable provided that we create a flux variable $F$.
<h3 style="color:red"> Q1: Explain why.</h3>
End of explanation
"""
def init_simulation(x_phi,N):
phi = np.zeros(N)
phi = 1.+np.cos(x_phi-L/2.)
xmask = np.where(np.abs(x_phi-L/2.) > np.pi)
phi[xmask] = 0.
return phi
phi = init_simulation(x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
"""
Explanation: <h3 style="color:red"> Q2: Search numpy function linspace and describe what <FONT FACE="courier">x_phi</FONT> and <FONT FACE="courier">x_u</FONT> define. Why are the dimensions different?</h3>
<h3>Step 3: Initialization</h3>
Now we define a function to initialize our variables. In python, <b>indentation matters!</b> A function is defined by the command <FONT FACE="courier">def</FONT> followed by the name of the function and the arguments given to the function. The variables passed as arguments to the function are local, meaning they may or may not have the same names as the variables in the core code. Any other variable used within the function needs to be defined in the function or before.
Note that python accepts implicit loops. Here <FONT FACE="courier">phi</FONT> and <FONT FACE="courier">x_phi</FONT> are two vectors of dimension $N$.
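As a standalone illustration of these two ideas — not part of the notebook's simulation code — the snippet below applies an implicit (vectorized) loop over all elements and then zeroes out part of the result using a <FONT FACE="courier">np.where</FONT> mask, exactly the pattern used in <FONT FACE="courier">init_simulation</FONT>:

```python
import numpy as np

x = np.linspace(0., 2.*np.pi, 5)   # 5 equally spaced points on [0, 2*pi]
y = 1. + np.cos(x)                 # implicit loop: operates on every element at once
mask = np.where(x > np.pi)         # indices where the condition holds
y[mask] = 0.                       # set those components to zero
print(y)                           # approximately [2. 1. 0. 0. 0.]
```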
End of explanation
"""
def init_simulation_slow(u,phi,x_phi,N):
for i in range(N):
if (np.abs(x_phi[i]-L/2.) > np.pi):
phi[i] = 0.
else:
phi[i] = 1.+np.cos(x_phi[i]-L/2.)
return phi
phi = init_simulation_slow(u,phi,x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
"""
Explanation: A slower but easier to understand version of this function is shown below. The tag slow is explained shortly after.
End of explanation
"""
%%timeit
flux0 = np.zeros(N+1)
for i in range(1,N):
flux0[i] = 0.5*(phi[i-1]+phi[i])*u[i]
%%timeit
flux1 = np.zeros(N+1)
flux1[1:N] = 0.5*(phi[0:N-1]+phi[1:N])*u[1:N]
"""
Explanation: <h3>Step 3: Code your interpolation/derivation subroutine</h3>
Before we can simulate our system, we need to write and test our spatial interpolation and derivative procedures. Below we test the speed of two approaches: the first uses a for loop, whereas the second uses the rules of array indexing in python.
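Outside the notebook's <FONT FACE="courier">%%timeit</FONT> magic, the same comparison can be sketched with <FONT FACE="courier">time.perf_counter</FONT>. The exact speed-up depends on the machine, but the vectorized version is typically one to two orders of magnitude faster:

```python
import time
import numpy as np

N = 200_000
phi = np.random.rand(N)
u = np.ones(N+1)

t0 = time.perf_counter()
flux0 = np.zeros(N+1)
for i in range(1, N):                              # explicit python loop
    flux0[i] = 0.5*(phi[i-1] + phi[i])*u[i]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
flux1 = np.zeros(N+1)
flux1[1:N] = 0.5*(phi[0:N-1] + phi[1:N])*u[1:N]    # vectorized slicing
t_vec = time.perf_counter() - t0

print("loop / vectorized time ratio:", t_loop/t_vec)
```

Both versions compute identical fluxes; only the time spent differs.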
End of explanation
"""
def compute_flux(a,v,N):
f=np.zeros(N+1)
f[1:N] = 0.5*(a[0:N-1]+a[1:N])*v[1:N]
f[0] = f[1]
f[N] = f[N-1]
return f
"""
Explanation: The choice for the interpolation is obvious:
End of explanation
"""
F_exact = np.zeros(N+1)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
plt.plot(x_u,F_exact,lw=2,label="exact")
plt.plot(x_u,F,'r--',lw=2,label="interpolated")
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.show()
"""
Explanation: <h3>Step 4: Verification</h3>
The interpolation and derivation operations are critical components of the simulation that must be verified. Since the velocity is unity, $F_{i\pm1/2}=\phi_{i\pm1/2}$.
End of explanation
"""
N = 200
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error = np.sqrt(np.sum(np.power(F-F_exact,2)))
errorx = np.power(F-F_exact,2)
plt.plot(x_u,errorx)
plt.show()
print('error norm L 2= %1.4e' %error)
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros(Nerror)
order = np.zeros(Nerror)
for ierror in range(Nerror):
N = Narray[ierror]
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error[ierror] = np.linalg.norm(F-F_exact)
#error[ierror] = np.sqrt(np.sum(np.power(F-F_exact,2)))
print('error norm L 2= %1.4e' %error[ierror])
order = 0.1*delta**(2)
plt.loglog(delta,error,lw=2,label='interpolate')
plt.loglog(delta,order,lw=2,label='$\propto\Delta x^2$')
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.show
"""
Explanation: Although the plot suggests that the interpolation works, a visual proof can be deceptive. It is best to calculate the error between the exact and interpolated solution. Here we use an $l^2$-norm:
$$
\Vert F\Vert_2=\sqrt{\sum_{i=0}^{N}\left(F_i-F_i^e\right)^2}
$$
where $F_e$ is the exact solution for the flux.
End of explanation
"""
Nscheme = 4
Scheme = np.array(['CS','US1','US2','US3'])
g_1 = np.array([1./2.,0.,0.,3./8.])
g_2 = np.array([0.,0.,1./2.,1./8.])
def compute_flux_advanced(a,v,N,num_scheme):
imask = np.where(Scheme == num_scheme)
g1 = g_1[imask]
g2 = g_2[imask]
f=np.zeros(N+1)
f[2:N] = ((1.-g1+g2)*a[1:N-1]+g1*a[2:N]-g2*a[0:N-2])*v[2:N]
if (num_scheme == 'US2') or (num_scheme == 'US3'):
f[1] = ((1.-g1)*a[0]+g1*a[1])*v[1]
f[0] = f[1]
f[N] = f[N-1]
return f
table = ListTable()
table.append(['Scheme', '$g_1$', '$g_2$'])
for i in range(4):
table.append([Scheme[i],g_1[i], g_2[i]])
table
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros((Nerror,Nscheme))
order = np.zeros((Nerror,Nscheme))
for ischeme in range(Nscheme):
num_scheme = Scheme[ischeme]
for ierror in range(Nerror):
N = Narray[ierror]
dx = L/N
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux_advanced(phi,u,N,num_scheme)
error[ierror,ischeme] = np.linalg.norm(F-F_exact)
#print('error norm L 2= %1.4e' %error[ierror,ischeme])
for ischeme in range(Nscheme):
plt.loglog(delta,error[:,ischeme],lw=2,label=Scheme[ischeme])
order = 1.0*(delta/delta[0])
plt.loglog(delta,order,'k:',lw=2,label='$\propto\Delta x$')
order = 1.0*(delta/delta[0])**(2)
plt.loglog(delta,order,'k-',lw=2,label='$\propto\Delta x^2$')
order = 1.0*(delta/delta[0])**(3)
plt.loglog(delta,order,'k--',lw=2,label='$\propto\Delta x^3$')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.xlim(L/300,L/9.)
plt.ylim(1e-5,1e2)
plt.show
"""
Explanation: For reasons that will become clearer later, we want to consider other interpolation schemes:
$$
\phi_{i+\frac{1}{2}}=g_1\phi_{i+1}-g_2\phi_{i-1}+(1-g_1+g_2)\phi_i
$$
The scheme CS is the interpolation scheme we have used so far. Let us test them all; to do so, however, we have to modify the interpolation function.
End of explanation
"""
def flux_divergence(f,N,dx):
df = np.zeros(N)
df[0:N] = (f[1:N+1]-f[0:N])/dx
return df
"""
Explanation: <h3 style="color:red">Q3: What do you observe? </h3>
<h3 style="color:red">Q4: Write a code to verify the divergence subroutine. </h3>
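One possible verification — a sketch, not the notebook's official answer to Q4 — feeds the divergence routine a flux whose derivative is known analytically (here $F=\sin x$, so $dF/dx=\cos x$ at the cell centers) and checks that the discrete $l^2$ error decreases at second order as the grid is refined:

```python
import numpy as np

def flux_divergence(f, N, dx):
    df = np.zeros(N)
    df[0:N] = (f[1:N+1] - f[0:N])/dx
    return df

L = 8.*np.pi
errs = []
for N in (50, 100, 200):
    dx = L/N
    x_u = np.linspace(0., L, N+1)            # flux points
    x_phi = np.linspace(dx/2., L-dx/2., N)   # cell centers
    df = flux_divergence(np.sin(x_u), N, dx)
    # sqrt(dx)-weighted l2 norm of the error against the exact derivative cos(x)
    errs.append(np.sqrt(dx)*np.linalg.norm(df - np.cos(x_phi)))
print(errs)
print("error ratios:", errs[0]/errs[1], errs[1]/errs[2])  # ~4 each: second order
```

Each halving of $\Delta x$ divides the error by about 4, confirming the centered difference is second-order accurate.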
End of explanation
"""
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'US2'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi -= dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
"""
Explanation: <h3>Step 5: Writing the simulation code</h3>
The first code solves:
<p class='alert alert-info'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}\right)
$$
</p>
for whatever scheme you choose. Play with the different schemes. Consider that the analytical solution is:
$$
\phi(x,t)=\begin{cases}
1+\cos\left[x-\left(\frac{L}{2}+u_0t\right)\right]&,\text{ for }\left\vert x-\left(\frac{L}{2}+u_0t\right)\right\vert\leq\pi\\
0&,\text{ for }\left\vert x-\left(\frac{L}{2}+u_0t\right)\right\vert>\pi
\end{cases}
$$
End of explanation
"""
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'CS'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phiold = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
rk_coef = np.array([0.5,1.])
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
phiold = phi
for irk in range(2):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi = phiold-rk_coef[irk]*dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
"""
Explanation: The discretization of the time derivative is crude. A better discretization is the 2<sup>nd</sup>-order Runge-Kutta:
<p class='alert alert-info'>
\begin{eqnarray}
\phi_i^{n+1/2}&=&\phi_i^n-\frac{\Delta t}{2}\frac{F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}}{\Delta x}\\
\phi_i^{n+1}&=&\phi_i^n-\Delta t\frac{F^{n+1/2}_{i+\frac{1}{2}} - F^{n+1/2}_{i-\frac{1}{2}}}{\Delta x}
\end{eqnarray}
</p>
End of explanation
"""
|
elan4u/CI-sample | basics.ipynb | gpl-2.0 | # This function will return the Scrabble score of a word
def scrabble_score(word):
#Dictionary of our scrabble scores
score_lookup = {
"a": 1,
"b": 3,
"c": 3,
"d": 2,
"e": 1,
"f": 4,
"g": 2,
"h": 4,
"i": 1,
"j": 8,
"k": 5,
"l": 1,
"m": 3,
"n": 1,
"o": 1,
"p": 3,
"q": 10,
"r": 1,
"s": 1,
"t": 1,
"u": 1,
"v": 4,
"w": 4,
"x": 8,
"y": 4,
"z": 10,
"\n": 0, #just in case a new line character jumps in here
" ":0 #normally single words don't have spaces but we'll put this here just in case
}
total_score = 0
#We look up each letter in the scoring dictionary and add it to a running total
#to make our dictionary shorter we are just using lowercase letters so we need to
#change all of our input to lowercase with .lower()
for letter in word:
total_score = total_score + score_lookup[letter.lower()]
return total_score
"""
Explanation: <a href="https://colab.research.google.com/github/elan4u/CI-sample/blob/master/basics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Introduction to Text Analysis with Python
Welcome to the Digital Scholarship Lab introduction to Text Analysis with Python class. In this class we'll learn the basics of text analysis:
parsing text
analyzing the text
We'll use our own homemade analysis tool first, then we'll use a python library called TextBlob to use some built-in analysis tools.
This workshop assumes you've completed our Intro to Python workshop.
We'll use Zoom's chat feature to interact.
Be sure to enable line numbers by looking for the 'gear' icon and checking the box in the 'Editor' panel.
EG. Scrabble!
<img src="https://upload.wikimedia.org/wikipedia/commons/5/5d/Scrabble_game_in_progress.jpg" width="500">
Scrabble is a popular game where players try to score points by spelling words and placing them on the game board. We'll use Scrabble scoring for our first attempt at text analysis. This will demonstrate the basics of how text analysis works.
The function below gives you the Scrabble score of any word you give it.
End of explanation
"""
name = ""
print("Score for my name is:", scrabble_score(name))
"""
Explanation: Text Analysis is a process comprised of three basic steps:
1. Identifying the text (or corpus) that you'd like to an analyze
1. Apply the analysis to your prepared text
1. Review the results
In our very basic example of Scrabble we are just interested in finding the points we would get for spelling a specific word.
In a more complex example with a larger corpus you can do any of the following types of analysis:
- determine the sentiment (positive / negative tone) of the text
- quantify how complex a piece of writing is based on the vocabulary it uses
- determine what topics are in your corpus
- classify your text into different categories based on what it is about
Of course, there are many other outcomes you can get from performing text analysis.
Try questions Q1 - Q2 and type "All Done" in the chat box when you are done.
Q1
Score your name by creating the text variable name on line 1.
How many Points do you get for your name? Complete the expression below to find out the scrabble score of your name
End of explanation
"""
pet_name = ""
print("Score for my pet's name is:",scrabble_score(pet_name))
#Compare to see which gets more points!
if scrabble_score(pet_name) > scrabble_score(name):
print("My pet's name scores more points!")
else:
print("My name scores more (or the same) amount of points as my pets name")
"""
Explanation: Q2
Score your pet's name (or favorite character from a story) by creating the text variable pet_name on line 1.
Does your name or the name of your pet score higher in Scrabble?
End of explanation
"""
#Install textblob using magic commands
#Only needed once
%pip install textblob
#%python -m textblob.download_corpora
#%pip install textblob.download_corpora
from textblob import TextBlob
import pandas as pd
import nltk
import matplotlib.pyplot as plt
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('brown')
#Let's make sure our previews show more information
pd.set_option('display.max_colwidth', 999)
#Classifier for laster
from textblob.classifiers import NaiveBayesClassifier
"""
Explanation: Beyond the basics
We just completed a very basic text analysis where we analyzed two different bits of text to see which one scores higher in Scrabble. Let's expand this idea to a more complex example using the TextBlob Python library. There are other, more complex libraries that you can use for text analysis; we are using simpler tools so we can spend more time looking at results rather than setting up code.
Installing and Loading the Libraries
This next cell will install and load the required libraries that will do the text analysis.
End of explanation
"""
winnie_corpus = pd.read_csv('https://raw.githubusercontent.com/BrockDSL/Text_Analysis_with_Python/master/winnie_corpus.txt', header = None, delimiter="\t")
winnie_corpus.columns = ["page","date","entry"]
winnie_corpus['date'] = pd.to_datetime(winnie_corpus['date'])
winnie_corpus['entry'] = winnie_corpus.entry.astype(str)
#preview our top entries
winnie_corpus.head()
"""
Explanation: Corpus
Corpus is a fancy way of saying the text that we will be looking at. Cleaning up a corpus and getting it ready for analysis is a big part of the process; once that is done, the rest is easy. For our example we are going to be looking at some entries from the 1900 diary of Winnie Beam. The next cell will load this corpus into a Pandas dataframe and show us a few entries.
End of explanation
"""
happy_sentence = "Python is the best programming language ever!"
sad_sentence = "Python is difficult to use, and very frustrating"
print("Sentiment of happy sentence ", TextBlob(happy_sentence).sentiment)
print("Sentiment of sad sentence ", TextBlob(sad_sentence).sentiment)
# polarity ranges from -1 to 1.
# subjectvity ranges from 0 to 1.
"""
Explanation: Measuring Sentiment
We can analyze the sentiment of the text (more details.) The next cell demonstrates this:
End of explanation
"""
test_sentence = """
"""
print("Score of test sentence is ", TextBlob(test_sentence).sentiment)
"""
Explanation: Q3
Try a couple of different sentences in the code cell below. See if you can create something that scores -1 and another that scores 1 for polarity. See if you can minimize the subjectivity of your sentence. Share your answers in the chat box.
(We can create a multi-line string of text by putting it in triple quotes, as in the following cell.)
End of explanation
"""
#Apply sentiment analysis from TextBlob
polarity = []
subjectivity = []
for day in winnie_corpus.entry:
#print(day,"\n")
score = TextBlob(day)
polarity.append(score.sentiment.polarity)
subjectivity.append(score.sentiment.subjectivity)
winnie_corpus['polarity'] = polarity
winnie_corpus['subjectivity'] = subjectivity
#Let's look at our new top entries
winnie_corpus.head()
"""
Explanation: Adding Sentiment to our Diary entries
This next cell will score each diary entry in a new column that will be added to the dataframe. We loop through each entry and calculate the two scores that represent the sentiment. After all the scores are computed, we add them to the dataframe.
End of explanation
"""
#Let's graph out the sentiment as it changes day to day.
plt.plot(winnie_corpus["date"],winnie_corpus["polarity"])
plt.xticks(rotation='45')
plt.title("Sentiment of Winnie's Diary Entries")
plt.show()
"""
Explanation: Now that we have daily sentiment values, let's try to visualize how they go up and down over the course of the first 3 months of the year.
End of explanation
"""
#Very Negative
bad_sentiment = winnie_corpus["polarity"].min()
#Reduce this number by 20%
bad_sentiment = bad_sentiment - (bad_sentiment * 0.20)
winnie_corpus[winnie_corpus["polarity"] <= bad_sentiment]
#Very Positive
good_sentiment = winnie_corpus["polarity"].max()
#Reduce this number by 20%
good_sentiment = good_sentiment - (good_sentiment * 0.20)
winnie_corpus[winnie_corpus["polarity"] >= good_sentiment]
"""
Explanation: Interesting spikes?
We see some really strong negative and positive spikes in the sentiment. Let's just take a look at some of those entries. Run the next two cells to look at the individual negative and positive entries.
End of explanation
"""
entry_number = 22
bit_of_corpus = TextBlob(winnie_corpus["entry"][entry_number])
bit_of_corpus
"""
Explanation: Q4
Do you agree with the sentiment scores that are applied in the above two cells? Share your thoughts in the chat.
What else can we get from the text?
We've seen some details about sentiment, but what else can we get from the text? Let's grab a random entry and see what we can find out about it. We'll choose the 22nd entry.
End of explanation
"""
for sentence in bit_of_corpus.sentences:
print(sentence)
print(sentence.sentiment,"\n")
"""
Explanation: Sentences and Sentiment
We applied sentiment analysis to daily entries, but we can also apply it to individual sentences to see how the score fluctuates.
End of explanation
"""
for sentence in bit_of_corpus.sentences:
for word in sentence.words:
print(word)
"""
Explanation: Words in sentences
You can parse through words in a sentence using TextBlob as well. The next cell illustrates this.
End of explanation
"""
#Pick a value between 1 and this number
len(winnie_corpus)
en_no =
another_bit_of_corpus = TextBlob(winnie_corpus["entry"][en_no])
print("Random Entry: \n")
print(another_bit_of_corpus,"\n")
#Go through all of the sentences of this entry and determine their sentiment
for sentence in another_bit_of_corpus.sentences:
print(sentence)
print(sentence.sentiment,"\n")
"""
Explanation: Q5
Another random journal entry. Pick a random number between 1 and the length of the dataframe and update en_no in line 1. If you get an interesting result, share it with the class in the chat box.
End of explanation
"""
for np in bit_of_corpus.noun_phrases:
print(np)
"""
Explanation: Noun Phrases
We can get a good idea of what a corpus is about by looking at the different nouns that show up in it. Nouns that show up a lot give us an idea of the contents of the text.
End of explanation
"""
#January Entries
jan_corpus = winnie_corpus[(winnie_corpus['date'] >= '1900-01-01') & (winnie_corpus['date'] <= '1900-01-31')]
"""
Explanation: A closer look at the corpus
Let's look at the January Diary entries
End of explanation
"""
jan_phrases = dict()
for entry in jan_corpus.entry:
tb = TextBlob(entry)
for np in tb.noun_phrases:
if np in jan_phrases:
jan_phrases[np] += 1
else:
jan_phrases[np] = 1
#Print the top 10 things she mentioned in January
for np in sorted(jan_phrases, key=jan_phrases.get, reverse=True)[0:10]:
print(np, jan_phrases[np])
"""
Explanation: Let's see what Winnie talks about the most in the month. We can do this by extracting the noun phrases in her entries. We put them in a dictionary to count how many times each phrase is used.
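As an aside — not part of the original notebook — Python's built-in collections.Counter does this dictionary bookkeeping for us, including the sorted top-N lookup. The phrase list below is hypothetical, standing in for the noun phrases extracted from the entries:

```python
from collections import Counter

# hypothetical noun phrases extracted from a handful of entries
phrases = ["skating", "school", "skating", "mother", "school", "skating"]
counts = Counter(phrases)
for phrase, n in counts.most_common(2):   # the 2 most frequent phrases
    print(phrase, n)
```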
End of explanation
"""
#February Entries
feb_corpus = winnie_corpus[(winnie_corpus['date'] >= '1900-02-01') & (winnie_corpus['date'] <= '1900-02-28')]
feb_phrases = dict()
for entry in feb_corpus.entry:
tb = TextBlob(entry)
for np in tb.noun_phrases:
if np in feb_phrases:
feb_phrases[np] += 1
else:
feb_phrases[np] = 1
#Print the top 10 things she mentioned in February
for np in sorted(feb_phrases, key=feb_phrases.get, reverse=True)[0:10]:
print(np, feb_phrases[np])
#March Entries
mar_corpus = winnie_corpus[(winnie_corpus['date'] >= '1900-03-01') & (winnie_corpus['date'] <= '1900-03-31')]
mar_phrases = dict()
for entry in mar_corpus.entry:
tb = TextBlob(entry)
for np in tb.noun_phrases:
if np in mar_phrases:
mar_phrases[np] += 1
else:
mar_phrases[np] = 1
#Print the top 10 things she mentioned in March
for np in sorted(mar_phrases, key=mar_phrases.get, reverse=True)[0:10]:
print(np, mar_phrases[np])
#April Entries
april_corpus = winnie_corpus[(winnie_corpus['date'] >= '1900-04-01') & (winnie_corpus['date'] <= '1900-04-30')]
april_phrases = dict()
for entry in april_corpus.entry:
tb = TextBlob(entry)
for np in tb.noun_phrases:
if np in april_phrases:
april_phrases[np] += 1
else:
april_phrases[np] = 1
#Print the top 10 things she mentioned in April
for np in sorted(april_phrases, key=april_phrases.get, reverse=True)[0:10]:
print(np, april_phrases[np])
#May Entries
may_corpus = winnie_corpus[(winnie_corpus['date'] >= '1900-05-01') & (winnie_corpus['date'] <= '1900-05-31')]
may_phrases = dict()
for entry in may_corpus.entry:
tb = TextBlob(entry)
for np in tb.noun_phrases:
if np in may_phrases:
may_phrases[np] += 1
else:
may_phrases[np] = 1
#Print the top 10 things she mentioned in may
for np in sorted(may_phrases, key=may_phrases.get, reverse=True)[0:10]:
print(np, may_phrases[np])
#June Entries
june_corpus = winnie_corpus[(winnie_corpus['date'] >= '1900-06-01') & (winnie_corpus['date'] <= '1900-06-30')]
june_phrases = dict()
for entry in june_corpus.entry:
tb = TextBlob(entry)
for np in tb.noun_phrases:
if np in june_phrases:
june_phrases[np] += 1
else:
june_phrases[np] = 1
#Print the top 10 things she mentioned in june
for np in sorted(june_phrases, key=june_phrases.get, reverse=True)[0:10]:
print(np, june_phrases[np])
"""
Explanation: Q6
Let's compare against the first 6 months of the year. Run the following set of cells.
What can you say about Winnie's topics over the first half of the year? Share your thoughts in the chat box.
End of explanation
"""
ex_corpus = """
**Put your text in here***
"""
eTB = TextBlob(ex_corpus)
#Sentiment
print("Sentiment:\n")
print(eTB.sentiment)
#Noun Phrases
print("\nNoun Phrases:\n")
ex_phrases = dict()
for np in eTB.noun_phrases:
if np in ex_phrases:
ex_phrases[np] += 1
else:
ex_phrases[np] = 1
for np in sorted(ex_phrases, key=ex_phrases.get, reverse=True):
print(np, ex_phrases[np])
"""
Explanation: Q7
Get a piece of text and put it through some analysis. You can try to get something from:
- CBC news
- New York Times
- The text of a tweet...
- What else?
Share the text you've analyzed by sharing a link in the chat box
End of explanation
"""
train = [
('I think Twitter is stupid', 'sub'),
('Lots of people send too much time on Twitter.', 'obj'),
('Twitter is a waste of time.', 'sub'),
('Twitter can be used to find information.', 'obj'),
('Many celebrites have Twitter accounts.', 'obj'),
('I think there is too much misinformation on Twitter', 'sub'),
("I don't like Twitter.", 'sub'),
("Twitter is the best ever", 'sub'),
('Twitter is great because all of my friends us it', 'sub'),
('Twitter is a fortune 500 company', 'obj')
]
test = [
('Twitter is a company', 'obj'),
("You can't communicate well with such short sentences", 'sub'),
    ("Twitter is disruptive to society", 'sub'),
("Over 500 million people use Twitter", 'obj'),
('A Twitter message can have 280 characters', 'obj'),
("A Twitter message is always stupid", 'sub')
]
#Builds the classifer and run the training data through it
cl = NaiveBayesClassifier(train)
#Classify each item in the test set to see how well the classifier works.
for item in test:
print("Item: ",item[0],"\t\t Classification guess: ",cl.classify(item[0]),"\t Actual: ",item[1])
print("\nAccuracy of guesses", cl.accuracy(test))
# We can have the classifer tells us some things it has noticed with the samples
cl.show_informative_features(3)
"""
Explanation: A very basic classifier
We looked at how to score the sentiment of a corpus. We can also create a classifier of our own if we provide training and testing data. In our example we are going to look at whether some statements about Twitter are subjective (sub) or objective (obj).
End of explanation
"""
train_2 = [
('I love this sandwich.', 'pos'),
('','pos'), #add a positive sentence
('','pos'), #add a positive sentence
('','pos'), #add a positive sentence
('I do not like this restaurant', 'neg'),
('','neg'), #add a negative sentence
('','neg'), #add a negative sentence
('','neg') #add a negative sentence
]
cl_2 = NaiveBayesClassifier(train_2)
"""
Explanation: Q8
As our last activity try to create your own classifier in the next code cell. You'll just need to provide examples for the classifer to train on.
End of explanation
"""
print("\nInput a sentence you wish to classify")
test_sentence = input()
print("Classification category: ", cl_2.classify(test_sentence))
"""
Explanation: Run the following cell as often as you'd like to have the classifier attempt more sentences.
End of explanation
"""
|
jobovy/wendy | examples/WendyScaling.ipynb | mit | def initialize_selfgravitating_disk(N):
totmass= 1.
sigma= 1.
zh= sigma**2./totmass # twopiG = 1. in our units
tdyn= zh/sigma
x= numpy.arctanh(2.*numpy.random.uniform(size=N)-1)*zh*2.
v= numpy.random.normal(size=N)*sigma
v-= numpy.mean(v)
m= numpy.ones_like(x)/N
return (x,v,m,tdyn)
"""
Explanation: Scaling of wendy with particle number N
We will investigate how wendy scales with the number $N$ of particles using a self-gravitating disk. The following function initializes a self-gravitating disk:
End of explanation
"""
Ns= numpy.round(10.**numpy.linspace(1.,5.,11)).astype('int')
ntrials= 3
T= numpy.empty((len(Ns),ntrials))
ncoll= numpy.empty((len(Ns),ntrials))
tdyn_fac_norm= 3000
for ii,N in enumerate(Ns):
for jj in range(ntrials):
x,v,m,tdyn= initialize_selfgravitating_disk(N)
g= wendy.nbody(x,v,m,tdyn*(tdyn_fac_norm/N)**2.,maxcoll=10000000,full_output=True)
tx,tv,tncoll, time_elapsed= next(g)
if tncoll > 0:
T[ii,jj]= time_elapsed / tncoll
ncoll[ii,jj]= tncoll*(N/tdyn_fac_norm)**2.
else:
T[ii,jj]= numpy.nan
ncoll[ii,jj]= numpy.nan
plot(Ns,numpy.nanmean(T,axis=1)*10.**6,'o')
p= numpy.polyfit(numpy.log(Ns),numpy.log(numpy.nanmean(T,axis=1)*10.**6/numpy.log(Ns)),deg=1)
plot(Ns,numpy.exp(numpy.polyval(p,numpy.log(Ns)))*numpy.log(Ns))
pyplot.text(10,4.5,r'$\Delta t \approx %.2f\,\mu\mathrm{s} \times N^{%.2f}\log N$' % (numpy.exp(p[1]),p[0]),size=16.)
gca().set_xscale('log')
xlim(5,150000)
ylim(0.,5.)
xlabel(r'$N$')
ylabel(r'$\Delta t / \mathrm{collision}\,(\mu\mathrm{s})$')
"""
Explanation: Time/collision for exact solver
We believe that this should go as $\log(N)$:
End of explanation
"""
plot(Ns,numpy.nanmean(ncoll,axis=1),'o-')
gca().set_xscale('log')
gca().set_yscale('log')
xlim(5,150000)
#ylim(0.,2.)
xlabel(r'$N$')
ylabel(r'$\#\ \mathrm{of\ collisions} / t_{\mathrm{dyn}}$')
"""
Explanation: The behavior seems to be more like $(\log N)^2$ or $N^{1/4}$, probably because the implementation of the binary search tree to determine the next collision is not optimal.
Number of collisions per dynamical time
In a dynamical time, every particle will cross every other particle, so the total number of collisions per dynamical time should scale as $N^2$, which is indeed what we observe:
End of explanation
"""
Ns= numpy.round(10.**numpy.linspace(1.,6.,11)).astype('int')
ntrials= 3
T= numpy.empty((len(Ns),ntrials))
dE= numpy.empty((len(Ns),ntrials))
E= numpy.empty((len(Ns),ntrials))
tdyn_fac_norm= 3000
for ii,N in enumerate(Ns):
tnleap= int(numpy.ceil((1000*tdyn_fac_norm)/N))
for jj in range(ntrials):
x,v,m,tdyn= initialize_selfgravitating_disk(N)
E[ii,jj]= wendy.energy(x,v,m)
g= wendy.nbody(x,v,m,tdyn*(tdyn_fac_norm/N),approx=True,
nleap=tnleap,full_output=True)
tx,tv, time_elapsed= next(g)
T[ii,jj]= time_elapsed*(N/tdyn_fac_norm)
dE[ii,jj]= (E[ii,jj]-wendy.energy(tx,tv,m))/E[ii,jj]
"""
Explanation: The full, exact algorithm to run a system for a dynamical time therefore runs in order $N^2\,\log N$ time, which becomes prohibitively slow for large $N$.
Approximate solver
wendy also contains an approximate solver, which computes the gravitational force exactly at each time step, but uses leapfrog integration to increment the dynamics. The gravitational force can be computed exactly for $N$ particles using a sort and should therefore scale as $N \log N$. The number of time steps necessary to conserve energy to something like 1 part in $10^6$ should be relatively insensitive to $N$, so the overall algorithm should scale as $N\log N$.
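A minimal sketch of why a sort suffices — an illustration under the notebook's units ($2\pi G = 1$), not wendy's actual implementation: in 1D each sheet attracts every other with a force independent of distance, so the acceleration of particle $i$ is just the total mass to its right minus the total mass to its left. After sorting, both cumulative masses come from a single pass:

```python
import numpy as np

def accel_1d(x, m):
    """O(N log N) acceleration for 1D sheets: a_i = M_right(i) - M_left(i), with 2piG = 1."""
    order = np.argsort(x)
    m_sorted = m[order]
    csum = np.cumsum(m_sorted)
    m_left = csum - m_sorted        # total mass strictly to the left of each sheet
    m_right = csum[-1] - csum       # total mass strictly to the right
    a = np.empty_like(x)
    a[order] = m_right - m_left
    return a
```

This matches the brute-force $O(N^2)$ sum $a_i = \sum_j m_j\,\mathrm{sign}(x_j - x_i)$, but the cost is dominated by the sort.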
End of explanation
"""
plot(Ns,numpy.nanmean(T,axis=1)*10.**3,'o')
p= numpy.polyfit(numpy.log(Ns),numpy.log(numpy.nanmean(T,axis=1)*10.**3/numpy.log(Ns)),deg=1)
plot(Ns,numpy.exp(numpy.polyval(p,numpy.log(Ns)))*numpy.log(Ns))
pyplot.text(10,10.**4.8,r'$\Delta t \approx %.2f \,\mu\mathrm{s} \times N^{%.2f}\log N$' %
(numpy.exp(p[1])*10.**3.,p[0]),size=16.)
gca().set_xscale('log')
gca().set_yscale('log')
xlabel(r'$N$')
ylabel(r'$\Delta t / t_{\mathrm{dyn}}\,(\mathrm{ms})$')
"""
Explanation: The algorithm indeed scales close to $N\log N$ with a fixed time step:
End of explanation
"""
plot(Ns,numpy.nanmean(numpy.fabs(dE),axis=1)*10.**3,'o')
gca().set_xscale('log')
gca().set_yscale('log')
ylabel(r'$\Delta E$')
xlabel(r'$N$')
"""
Explanation: However, with the same time step, energy is much better conserved for large $N$ than for small $N$:
End of explanation
"""
Ns= numpy.round(10.**numpy.linspace(1.,6.,11)).astype('int')
ntrials= 3
T= numpy.empty((len(Ns),ntrials))
E= numpy.empty((len(Ns),ntrials))
dE= numpy.empty((len(Ns),ntrials))
tdyn_fac_norm= 10**4
for ii,N in enumerate(Ns):
tnleap= int(numpy.ceil(((3000.*tdyn_fac_norm)/N)**1.))
tdt= tdyn*(tdyn_fac_norm/N)**.15/10.
for jj in range(ntrials):
x,v,m,tdyn= initialize_selfgravitating_disk(N)
E[ii,jj]= wendy.energy(x,v,m)
g= wendy.nbody(x,v,m,tdt,approx=True,
nleap=tnleap,full_output=True)
tx,tv, time_elapsed= next(g)
T[ii,jj]= time_elapsed/tdt
dE[ii,jj]= (E[ii,jj]-wendy.energy(tx,tv,m))/E[ii,jj]
plot(Ns,numpy.nanmean(numpy.fabs(dE),axis=1)*10.**3,'o')
gca().set_xscale('log')
#gca().set_yscale('log')
ylabel(r'$\Delta E$')
xlabel(r'$N$')
"""
Explanation: We can design the time step such that energy is conserved to about the same degree for different $N$:
End of explanation
"""
plot(Ns,numpy.nanmean(T,axis=1),'o')
p= numpy.polyfit(numpy.log(Ns),numpy.log(numpy.nanmean(T,axis=1)/numpy.log(Ns)),deg=1)
plot(Ns,numpy.exp(numpy.polyval(p,numpy.log(Ns)))*numpy.log(Ns))
pyplot.text(10,10.**1.8,r'$\Delta t \approx %.2f \mathrm{s} \times N^{%.2f}\log N\,$' %
(numpy.exp(p[1]),p[0]),size=16.)
gca().set_xscale('log')
gca().set_yscale('log')
xlabel(r'$N$')
ylabel(r'$\Delta t / t_{\mathrm{dyn}}\,(\mathrm{s})$')
"""
Explanation: In this case, the algorithm for integrating the self-gravitating $\mathrm{sech}^2$ disk scales much better, approximately as $N^{1/4}\log N$:
End of explanation
"""
|
Kyubyong/numpy_exercises | 11_Set_routines.ipynb | mit | import numpy as np
np.__version__
author = 'kyubyong. longinglove@nate.com'
"""
Explanation: Set routines
End of explanation
"""
x = np.array([1, 2, 6, 4, 2, 3, 2])
"""
Explanation: Making proper sets
Q1. Get the unique elements of x and the reconstruction indices. Then reconstruct x from them.
End of explanation
"""
x = np.array([0, 1, 2, 5, 0])
y = np.array([0, 1])
"""
Explanation: Boolean operations
Q2. Create a boolean array of the same shape as x. If each element of x is present in y, the result will be True, otherwise False.
End of explanation
"""
x = np.array([0, 1, 2, 5, 0])
y = np.array([0, 1, 4])
"""
Explanation: Q3. Find the unique intersection of x and y.
End of explanation
"""
x = np.array([0, 1, 2, 5, 0])
y = np.array([0, 1, 4])
"""
Explanation: Q4. Find the unique elements of x that are not present in y.
End of explanation
"""
x = np.array([0, 1, 2, 5, 0])
y = np.array([0, 1, 4])
"""
Explanation: Q5. Find the xor elements of x and y.
End of explanation
"""
x = np.array([0, 1, 2, 5, 0])
y = np.array([0, 1, 4])
"""
Explanation: Q6. Find the union of x and y.
End of explanation
"""
|
letsgoexploring/beapy-package | .ipynb_checkpoints/beapyExample-checkpoint.ipynb | mit | import numpy as np
import pandas as pd
import urllib
import datetime
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
import beapy
apiKey = '3EDEAA66-4B2B-4926-83C9-FD2089747A5B'
bea = beapy.initialize(apiKey=apiKey)
"""
Explanation: beapy
beapy is a Python package for obtaining data from the API of the Bureau of Economic Analysis.
End of explanation
"""
# Get a list of the the data sets available from the BEA along with descriptions.
bea.getDataSetList()
# The getDataSetList() method adds a dataSetList attribute that is a list of the available datasets:
print(bea.dataSetList)
# Get a list of the the parameters for the NIPA dataset
bea.getParameterList('NIPA')
# The getParameterList() method adds a parameterList attribute that is a list of the parameters of the chosen dataset.
print(bea.parameterList)
# Get a list of the values that the Frequency parameter in the NIPA dataset can take:
bea.getParameterValues('NIPA','Frequency')
# Download data from Table 1.1.5, TableID: 5. and plot
results = bea.getNipa(TableID=5,Frequency='A',Year='X')
frame = results['data']
np.log(frame['Gross domestic product']).plot(grid=True,lw=3)
"""
Explanation: Methods for searching for data
getDataSetList(): returns the available datasets.
getParameterList(dataSetName): returns the parameters of the specified dataset.
getParameterValues(dataSetName,ParameterName): returns the values accepted for a parameter of the specified dataset.
End of explanation
"""
bea.getParameterValues('RegionalData','KeyCode')
bea.getParameterValues('RegionalData','GeoFips')
bea.getParameterValues('RegionalData','Year')
bea.getParameterValues('RegionalData','KeyCode')
"""
Explanation: Datasets
There are 10 datasets available through the BEA API:
RegionalData (statistics by state, county, and MSA)
NIPA (National Income and Product Accounts)
~~NIUnderlyingDetail (National Income and Product Accounts)~~
Fixed Assets
~~Direct Investment and Multinational Enterprises (MNEs)~~
Gross Domestic Product by Industry (GDPbyIndustry)
ITA (International Transactions)
IIP (International Investment Position)
Regional Income (detailed regional income and employment data sets)
RegionalProduct (detailed state and MSA product data sets)
beapy provides a separate method for accessing the data in each datset:
getRegionalData(KeyCode=None, GeoFips='STATE', Year='ALL')
getNipa(TableID=None, Frequency=None, Year='X', ShowMillions='N')
~~getNIUnderlyingDetail()~~
getFixedAssets()
~~getDirectInvestmentMNEs()~~
getGrossDomesticProductByIndustry()
getIta()
getIip()
getRegionalIncome()
getRegionalProduct()
Datasets and methods with a ~~strikethrough~~ are not currently accessible with the package.
Regional Data
getRegionalData(KeyCode=None, GeoFips='STATE', Year='ALL')
Method for accessing data from the US at county, state, and regional levels.
End of explanation
"""
# Get per capita personal income at the state level for all years.
result = bea.getRegionalData(KeyCode='PCPI_SI',GeoFips = 'STATE', Year = 'ALL')
frame = result['data']
# For each state including Washington, D.C., find the percentage difference between state pc income and US pc income.
for state in frame.columns:
f = 100*(frame[state] - frame['United States'])/frame['United States']
f.plot(grid=True)
"""
Explanation: Example: Converging relative per capita incomes in the US
End of explanation
"""
|
ajgeers/3dracta | data_analysis.ipynb | bsd-2-clause | %matplotlib inline
import os
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
"""
Explanation: Reproducibility of hemodynamic simulations of cerebral aneurysms across imaging modalities 3DRA and CTA
Arjan Geers
This notebook reproduces* the data analysis presented in:
Geers AJ, Larrabide I, Radaelli AG, Bogunovic H, Kim M, Gratama van Andel HAF, Majoie CB, VanBavel E, Frangi AF. Patient-specific computational hemodynamics of intracranial aneurysms from 3D rotational angiography and CT angiography: An in vivo reproducibility study. American Journal of Neuroradiology, 32(3):581–586, 2011.
The goal of the study was to determine the reproducibility of blood flow simulations of cerebral aneurysms. Patients with a total of 10 cerebral aneurysms were imaged with both 3D rotational angiography (3DRA) and computed tomographic angiography (CTA). Each image independently was segmented to obtain a vascular model, the same boundary conditions were imposed, and a CFD simulation was obtained.
*Originally, data was analyzed in MATLAB R2010b and the boxplot was created in Mathematica 7.
Preamble
End of explanation
"""
df_input = pd.read_csv(os.path.join('data', '3dracta.csv'), index_col=[0, 1])
df_input
"""
Explanation: Data
The data used in this notebook is also available on FigShare:
Geers AJ, Larrabide I, Radaelli AG, Bogunovic H, Kim M, Gratama van Andel HAF, Majoie CB, VanBavel E, Frangi AF. Reproducibility of hemodynamic simulations of cerebral aneurysms across imaging modalities 3DRA and CTA: Geometric and hemodynamic data. FigShare, 2015. DOI: 10.6084/m9.figshare.1354056
Variables are defined as follows (TA: time-averaged; PS: peak systole; ED: end diastole):
* A_N: Aneurysm neck area
* V_A: Aneurysm volume
* Q_P: TA flow rate in the parent vessel just proximal to the aneurysm
* Q_A: TA flow rate into the aneurysm
* NQ_A: Q_A / Q_P
* WSS_P: Average TA WSS on the wall of a parent vessel segment just proximal to the aneurysm
* WSS_A: Average TA WSS on the aneurysm wall
* NWSS_A: WSS_A / WSS_P
* LWSS_A: Portion of the aneurysm wall with WSS < 0.4 Pa at ED
* MWSS_A: Maximum WSS on the aneurysm wall at PS
* 90WSS_A: 90th percentile value of the WSS on the aneurysm wall at PS
* N90WSS_A: 90WSS_A normalized by the average WSS on the aneurysm wall at PS
End of explanation
"""
df_3dra = df_input.xs('3dra', level='modality')
df_cta = df_input.xs('cta', level='modality')
"""
Explanation: Extract separate dataframes for 3DRA and CTA.
End of explanation
"""
df_reldiff = 100 * abs(df_3dra - df_cta)/df_3dra
s_mean = df_reldiff.mean()
s_standarderror = pd.Series(stats.sem(df_reldiff), index=df_input.columns)
"""
Explanation: Statistics
Calculate the relative difference between 3DRA and CTA wrt 3DRA. Per variable, get the mean and standard error of this relative difference over all aneurysms.
End of explanation
"""
pvalue = np.empty(len(df_input.columns))
for i, variable in enumerate(df_input.columns):
pvalue[i] = stats.wilcoxon(df_3dra[variable], df_cta[variable])[1]
s_pvalue = pd.Series(pvalue, index=df_input.columns)
"""
Explanation: Test differences between 3DRA and CTA with the Wilcoxon signed rank test.
Note: MATLAB was used to perform this test for the paper. Its 'signrank' function defaults to using the 'exact method' if a dataset has 15 or fewer observations and the 'approximate method' otherwise. See the documentation for more details. SciPy's 'wilcoxon' function has currently (version 1.3.0) no equivalent option and always uses the 'approximate method'.
End of explanation
"""
numberofcases = np.empty(len(df_input.columns))
for i, variable in enumerate(df_input.columns):
numberofcases[i] = sum(df_3dra.loc[j, variable] > df_cta.loc[j, variable]
for j in df_input.index.levels[0])
s_numberofcases = pd.Series(numberofcases, index=df_input.columns)
"""
Explanation: Determine the number of aneurysms for which a variable is lower for CTA than for 3DRA.
End of explanation
"""
d = {'M': s_numberofcases,
'P': s_pvalue,
'Mean (%)': s_mean,
'SE (%)': s_standarderror}
df_output = pd.DataFrame(d, columns=['M', 'P', 'Mean (%)', 'SE (%)'])
df_output
"""
Explanation: Compose a dataframe with the obtained statistical results, corresponding to the 'online table' of the journal paper.
End of explanation
"""
# extract arrays to plot from dataframe
array_yticklabels = ['$\mathregular{' + variable.replace('%', '\%') + '}$'
for variable in df_reldiff.columns]
array_reldiff = df_reldiff.values
# create plot
fig, ax = plt.subplots()
bp = ax.boxplot(array_reldiff, sym='+', vert=0, patch_artist=True)
# set labels
ax.set_xlabel('Relative difference (%)', fontsize=18)
ax.set_xlim(0, 130)
ax.set_yticklabels(array_yticklabels, fontsize=12)
# format box, whiskers, etc.
plt.setp(ax.get_xticklabels(), fontsize=12)
plt.setp(bp['boxes'], color='black')
plt.setp(bp['medians'], color='white')
plt.setp(bp['whiskers'], color='black', linestyle='-')
plt.setp(bp['fliers'], color='black', markersize=5)
plt.tight_layout()
"""
Explanation: Boxplot
Make boxplots showing the distributions of the relative differences over all aneurysms.
End of explanation
"""
|
GoogleChromeLabs/dynamic-web-bundle-serving | compression_experiments/js_dataset_compression.ipynb | apache-2.0 | import numpy as np
import json
import matplotlib.pyplot as plt
from tqdm import tqdm
import random
import subprocess
import time
import os
"""
Explanation: Copyright 2020 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); <br>
you may not use this file except in compliance with the License.<br>
You may obtain a copy of the License at<br>
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br>
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
"""
# js_scripts.txt contains the paths to the JS files
with open("js_dataset/js_scripts.txt") as file:
scripts = file.read().strip().split('\n')
# dirs_data.txt contains the names of the directories in the data directory of the js 150 dataset
# we assume that different directories indicate different js apps
with open("js_dataset/dirs_data.txt") as file:
dirs = file.read().strip().split('\n')
# group script paths by directories
scripts_by_dirs = []
for directory in tqdm(dirs):
dir_scripts = []
for script in scripts:
if script.startswith("data/" + directory):
dir_scripts.append(script)
if len(dir_scripts):
scripts_by_dirs.append(dir_scripts)
"""
Explanation: Read the data
End of explanation
"""
def get_seconds(time):
min_ind = time.find('m')
mins = int(time[:min_ind])
second = float(time[min_ind + 1:-1])
return mins * 60 + second
def log(file, msg):
f = open(file, 'a+')
f.write(msg + '\n')
f.close()
rates_gzip = []
rates_brotli = []
times_gzip = []
times_brotli = []
speed_gzip = []
speed_brotli = []
init_sizes = []
for i in range(len(scripts_by_dirs)):
#concatenate all scripts inside the directory to simulate web bundle
script_concatenated = ""
for url in scripts_by_dirs[i]:
if url == "":
continue
if not os.path.exists("js_dataset/" + url):
print("DOESN'T EXIST: ", url)
continue
try:
with open("js_dataset/" + url) as file:
script_concatenated += file.read()
except:
print("didn't read")
rates_gzip_compressed = []
rates_brotli_compressed = []
times_gzip_compressed = []
times_brotli_compressed = []
speed_gzip_compressed = []
speed_brotli_compressed = []
with open("example2.txt", "w") as file:
file.write(script_concatenated)
size_non_compressed = os.stat("example2.txt").st_size
init_sizes.append(size_non_compressed)
# do the gzip compression with different levels
for level in range(4, 10):
result = subprocess.run(["bash", "gzip_compress.sh", str(level), "time2.txt",
"example_gzip2.txt.gz", "example2.txt"])
with open("time2.txt") as file:
user_sys = file.read().strip().split('\n')[1:]
time = get_seconds(user_sys[0].split('\t')[1]) + get_seconds(user_sys[1].split('\t')[1])
size_gzip_compressed = os.stat("example_gzip2.txt.gz").st_size
rates_gzip_compressed.append(size_non_compressed / size_gzip_compressed)
times_gzip_compressed.append(time)
speed_gzip_compressed.append(size_non_compressed / time)
# do the brotli compression with different levels
for level in range(4, 12):
result = subprocess.run(["bash", "brotli_compress.sh", str(level), "time2.txt",
"example_brotli2.txt.br", "example2.txt"])
with open("time2.txt") as file:
user_sys = file.read().strip().split('\n')[1:]
time = get_seconds(user_sys[0].split('\t')[1]) + get_seconds(user_sys[1].split('\t')[1])
size_br_compressed = os.stat("example_brotli2.txt.br").st_size
rates_brotli_compressed.append(size_non_compressed / size_br_compressed)
times_brotli_compressed.append(time)
speed_brotli_compressed.append(size_non_compressed / time)
rates_gzip.append(rates_gzip_compressed)
rates_brotli.append(rates_brotli_compressed)
times_gzip.append(times_gzip_compressed)
times_brotli.append(times_brotli_compressed)
speed_gzip.append(speed_gzip_compressed)
speed_brotli.append(speed_brotli_compressed)
if i != 0 and i % 500 == 0:
log("logs4.txt", "rates_gzip: " + str(np.mean(rates_gzip, axis=0)))
log("logs4.txt", "rates_brotli: " + str(np.mean(rates_brotli, axis=0)))
log("logs4.txt", "times_gzip: " + str(np.mean(times_gzip, axis=0)))
log("logs4.txt", "times_brotli: " + str(np.mean(times_brotli, axis=0)))
log("logs4.txt", "speed_gzip: " + str(np.mean(speed_gzip, axis=0)))
log("logs4.txt", "speed_brotli: " + str(np.mean(speed_brotli, axis=0)))
import pandas as pd
frame = pd.DataFrame()
frame["name"] = ["gzip 4", "gzip 5", "gzip 6", "gzip 7", "gzip 8", "gzip 9",
"brotli 4", "brotli 5", "brotli 6", "brotli 7", "brotli 8", "brotli 9", "brotli 10", "brotli 11"]
frame["rates"] = np.hstack((np.mean(rates_gzip, axis=0), np.mean(rates_brotli, axis=0)))
frame["savings"] = 1 - 1 / np.hstack((np.mean(rates_gzip, axis=0), np.mean(rates_brotli, axis=0)))
frame["speed(MB/s)"] = np.hstack((np.mean(speed_gzip, axis=0), np.mean(speed_brotli, axis=0))) / 1000000
frame
print("non compressed size range {}MB-{}MB".format(np.min(init_sizes) / 1000000, np.max(init_sizes)/ 1000000))
"""
Explanation: Perform compression
End of explanation
"""
splits = [0, 100000, 1000000, 519170072]
init_sizes = np.array(init_sizes)
group1 = np.where((init_sizes >= 0)*(init_sizes <= 100000))[0]
group2 = np.where((init_sizes > 100000)*(init_sizes <= 1000000))[0]
group3 = np.where((init_sizes > 1000000)*(init_sizes <= 519170072))[0]
print(0, "-", 100000, "bytes")
frame = pd.DataFrame()
frame["name"] = ["gzip 4", "gzip 5", "gzip 6", "gzip 7", "gzip 8", "gzip 9",
"brotli 4", "brotli 5", "brotli 6", "brotli 7", "brotli 8", "brotli 9", "brotli 10", "brotli 11"]
frame["rates"] = np.hstack((np.mean(np.array(rates_gzip)[group1], axis=0), np.mean(np.array(rates_brotli)[group1], axis=0)))
frame["savings"] = 1 - 1 / np.hstack((np.mean(np.array(rates_gzip)[group1], axis=0), np.mean(np.array(rates_brotli)[group1], axis=0)))
frame["speed(MB/s)"] = np.hstack((np.mean(np.array(speed_gzip)[group1], axis=0), np.mean(np.array(speed_brotli)[group1], axis=0))) / 1000000
frame
print(100000, "-", 1000000, "bytes")
frame = pd.DataFrame()
frame["name"] = ["gzip 4", "gzip 5", "gzip 6", "gzip 7", "gzip 8", "gzip 9",
"brotli 4", "brotli 5", "brotli 6", "brotli 7", "brotli 8", "brotli 9", "brotli 10", "brotli 11"]
frame["rates"] = np.hstack((np.mean(np.array(rates_gzip)[group2], axis=0), np.mean(np.array(rates_brotli)[group2], axis=0)))
frame["savings"] = 1 - 1 / np.hstack((np.mean(np.array(rates_gzip)[group2], axis=0), np.mean(np.array(rates_brotli)[group2], axis=0)))
frame["speed(MB/s)"] = np.hstack((np.mean(np.array(speed_gzip)[group2], axis=0), np.mean(np.array(speed_brotli)[group2], axis=0))) / 1000000
frame
print(1000000, "-", 519170072, "bytes")
frame = pd.DataFrame()
frame["name"] = ["gzip 4", "gzip 5", "gzip 6", "gzip 7", "gzip 8", "gzip 9",
"brotli 4", "brotli 5", "brotli 6", "brotli 7", "brotli 8", "brotli 9", "brotli 10", "brotli 11"]
frame["rates"] = np.hstack((np.mean(np.array(rates_gzip)[group3], axis=0), np.mean(np.array(rates_brotli)[group3], axis=0)))
frame["savings"] = 1 - 1 / np.hstack((np.mean(np.array(rates_gzip)[group3], axis=0), np.mean(np.array(rates_brotli)[group3], axis=0)))
frame["speed(MB/s)"] = np.hstack((np.mean(np.array(speed_gzip)[group3], axis=0), np.mean(np.array(speed_brotli)[group3], axis=0))) / 1000000
frame
"""
Explanation: Group results by non compressed size ranges
End of explanation
"""
|
bayesimpact/bob-emploi | data_analysis/notebooks/datasets/bmo/bmo_rome_mapping.ipynb | gpl-3.0 |
import codecs
import os
import pandas as pd
import seaborn as sns
data_path = '../../../data'
"""
Explanation: Author: Valentin Lehuger
Skip the run test because the ROME version has to be updated to make it work in the exported repository. TODO: Update ROME and remove the skiptest flag.
BMO ROME analysis
This notebook is about how the mapping between BMO and ROME works and how to interpret the differents job categroy identifiers.
The ROME is a job classification created by the french employment agency "pole emploi" and the BMO is a study of labour market emitted by a Statistics agency.
End of explanation
"""
bmo_df = pd.read_csv(os.path.join(data_path, 'bmo/bmo_2015.csv'))
bmo_df.sample(frac=0.0001)
# Select useful columns of codes and names
bmo_df = bmo_df[[u'PROFESSION_FAMILY_CODE', u'PROFESSION_FAMILY_NAME', u'FAP_CODE', u'FAP_NAME']]
bmo_df = bmo_df.sort_values(['PROFESSION_FAMILY_CODE', 'FAP_CODE'])
# create correspondance profession_family/fap codes df
FAP_profession_family = bmo_df[[u'PROFESSION_FAMILY_CODE', u'FAP_CODE']].drop_duplicates()
# Create correspondance code/name dfs
profession_family_correspondance = bmo_df[[u'PROFESSION_FAMILY_CODE', u'PROFESSION_FAMILY_NAME']].drop_duplicates()
FAP_correspondance = bmo_df[[u'FAP_CODE', u'FAP_NAME']].drop_duplicates().sort_values([u'FAP_CODE'])
"""
Explanation: Load BMO data
End of explanation
"""
rome_df = pd.read_csv(os.path.join(data_path, 'rome/csv/unix_referentiel_appellation_v332_utf8.csv'))
# Select useful columns of codes and names
rome_df = rome_df[['code_ogr', 'libelle_appellation_court', 'code_rome']]
rome_df.columns = [u'OGR_CODE', u'ROME_PROFESSION_SHORT_NAME', u'ROME_PROFESSION_CARD_CODE']
rome_df = rome_df[[u'OGR_CODE', u'ROME_PROFESSION_SHORT_NAME', u'ROME_PROFESSION_CARD_CODE']].drop_duplicates().sort_values([u'OGR_CODE', u'ROME_PROFESSION_CARD_CODE'])
print("{} uniques romes.".format(len(rome_df.ROME_PROFESSION_CARD_CODE.unique())))
rome_df[rome_df.ROME_PROFESSION_CARD_CODE == "L1503"]
"""
Explanation: This document (http://travail-emploi.gouv.fr/IMG/pdf/FAP-2009_Introduction_et_table_de_correspondance.pdf) gives a very good explanation of how the FAP codes are built.
The first character is the professional field. (A = Agriculture, seafaring, fishing / B = Civil engineering / C = Electricity, electronics, etc.)
The second and third characters are used to group jobs into 87 FAP categories.
The fourth character indicates the qualification level. (0 = undefined, 2 = unskilled worker to 9 = engineer and manager)
The fifth character is used to group the professional families into 225 more specific categories.
Load ROME data
End of explanation
"""
def parse_faprome_file(filename):
with codecs.open(filename, 'r', 'latin-1') as txtfile:
table = pd.DataFrame([x.replace('"', '').split("=") for x in txtfile.readlines() if x.startswith('"')])
return table
bmo_rome = parse_faprome_file(os.path.join(data_path, 'crosswalks/passage_fap2009_romev3.txt'))
bmo_rome[0] = bmo_rome.apply(lambda x: [s.strip() for s in x[0].split(',')], axis=1)
bmo_rome[1] = bmo_rome.apply(lambda x: x[1].replace('\n', '').replace('\r', '').replace('\t', '').strip(), axis=1)
bmo_rome.columns = [u"ROME", u"FAP"]
s = bmo_rome.ROME.apply(pd.Series, 1).stack()
s.index = s.index.droplevel(-1)
s.name = u"ROME"
bmo_rome = bmo_rome[[u"FAP"]].join(s)
bmo_rome_entire = bmo_rome
bmo_rome = bmo_rome[bmo_rome.ROME.str.len() == 5]
print("{} uniques romes.".format(len(bmo_rome.ROME.unique())))
bmo_rome.head()
A = pd.merge(bmo_rome, rome_df, left_on="ROME", right_on=u"ROME_PROFESSION_CARD_CODE")[["FAP", "ROME", "OGR_CODE", "ROME_PROFESSION_SHORT_NAME"]]
bmo_rome_merged = pd.merge(A, bmo_df, left_on="FAP", right_on="FAP_CODE").drop_duplicates()[["FAP", "ROME", "OGR_CODE", "ROME_PROFESSION_SHORT_NAME", "PROFESSION_FAMILY_CODE", "PROFESSION_FAMILY_NAME", "FAP_NAME"]]
bmo_rome_merged.head()
bmo_rome_merged.sample(frac=0.01).head()
"""
Explanation: Load BMO/ROME correspondance
End of explanation
"""
rome_df[rome_df.ROME_PROFESSION_CARD_CODE == "L1503"].head()
"""
Explanation: There are 4 kinds of codes to describe jobs in ROME and BMO datasets
Identifiers created by Pole emploi : ROME and OGR_CODE
Identifiers created by DARES (Statistics Agency) : FAP and PROFESSION_FAMILY_CODE
From larger to smaller groups, we get :
PROFESSION_FAMILY_CODE > FAP > ROME_CODE > OGR_CODE
In the ROME classification, the OGR_CODE is the most accurate job identifier
(example: props or pyrotechnist or marketing director).
End of explanation
"""
bmo_rome_merged[bmo_rome_merged.FAP == "U1Z80"].sample(frac=0.05)
"""
Explanation: A ROME_PROFESSION_CARD_CODE is a group of OGR_CODE for very similar jobs in one field on the same hierarchical level.
(example in the entertainment field: props, pyrotechnist, and steward fall under the same ROME_PROFESSION_CARD_CODE)
End of explanation
"""
bmo_rome_merged[bmo_rome_merged.PROFESSION_FAMILY_CODE == "C"].head()
"""
Explanation: The FAP code is a larger group of jobs which can include multiple ROME_PROFESSION_CARD_CODE in the same field with different hierarchical levels.
(example: props, pyrotechnist, and steward are grouped with production manager and ballet director)
End of explanation
"""
unique_fap_rome_couples = bmo_rome_merged[["FAP", "ROME"]].drop_duplicates()
rome_by_fap_count = unique_fap_rome_couples.groupby("FAP")["ROME"].count()
rome_by_fap_count.hist(bins=rome_by_fap_count.max())
print("mean : {0:.4f}".format(rome_by_fap_count.mean()))
print("standard deviation : {0:.4f}".format(rome_by_fap_count.std()))
print("{0:.2f}% of FAP contains less than 5 ROME.".format(rome_by_fap_count[rome_by_fap_count <= 4].count() / 130. * 100))
"""
Explanation: PROFESSION_FAMILY_CODE is the largest group of all identifiers. Each classification id includes many FAP codes.
It contains 7 classes of jobs, among them administrative jobs, social and medical jobs, etc.
Distribution of ROME count by FAP
End of explanation
"""
FAP_correspondance[FAP_correspondance.FAP_CODE.isin(rome_by_fap_count[rome_by_fap_count < 5].index)].sample(frac=0.3)
"""
Explanation: The FAP code seems to be a pretty low level of job groups. 2/3 FAP contains one or two ROME.
End of explanation
"""
FAP_correspondance[FAP_correspondance.FAP_CODE.isin(rome_by_fap_count[rome_by_fap_count >= 5].index)]
"""
Explanation: The FAP codes containing fewer than 5 ROME codes have very specific designations. For example: doctors (V2Z90), dentists (V2Z91), pharmacists (V2Z93), telemarketers (R1Z67)
End of explanation
"""
|
0x4a50/udacity-0x4a50-deep-learning-nanodegree | first-neural-network/Your_first_neural_network.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            #### Implement the forward pass here ####
            ### Forward pass ###
            hidden_outputs = self.calc_hidden_outputs(X)
            final_outputs = self.calc_final_outputs(hidden_outputs)
            #### Implement the backward pass here ####
            ### Backward pass ###
            # Output layer error is the difference between desired target and actual output
            error = y - final_outputs
            # The output activation is f(x) = x, whose derivative is 1
            output_error_term = error * 1
            # The hidden layer's contribution to the error
            hidden_error = np.dot(self.weights_hidden_to_output, error)
            # Backpropagated error term (sigmoid derivative expressed via its output)
            hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
            # Weight step (input to hidden)
            delta_weights_i_h += hidden_error_term * X[:, None]
            # Weight step (hidden to output)
            delta_weights_h_o += output_error_term * hidden_outputs[:, None]
        # Update the weights with one gradient descent step, averaged over n_records
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records

    def calc_hidden_outputs(self, features):
        hidden_inputs = np.dot(features, self.weights_input_to_hidden)  # signals into hidden layer
        return self.activation_function(hidden_inputs)  # signals from hidden layer

    def calc_final_outputs(self, hidden_outputs):
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
        return final_inputs  # f(x) = x activation: signals from final output layer

    def run(self, features):
        ''' Run a forward pass through the network with input features

        Arguments
        ---------
        features: 1D array of feature values
        '''
        return self.calc_final_outputs(self.calc_hidden_outputs(features))


def MSE(y, Y):
    return np.mean((y - Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node, i.e. the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, taking a threshold into account, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons of the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()
        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[ 0.37275328],
                                              [-0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[ 0.10562014, -0.20185996],
                                              [ 0.39775194,  0.50074398],
                                              [-0.29887597,  0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()
        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
"""
import sys
### Set the hyperparameters here ###
iterations = 4000
learning_rate = 0.6
hidden_nodes = 12
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
    network.train(X, y)
    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations))
                     + "% ... Training loss: " + str(train_loss)[:5]
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()
    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
|
Leguark/pynoddy | docs/notebooks/Feature-Analysis.ipynb | gpl-2.0 | from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
import sys, os
import matplotlib.pyplot as plt
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import numpy as np
%matplotlib inline
"""
Explanation: Analysis of classification results
Objective: read the classification results back in and compare them to the original model
End of explanation
"""
import pynoddy.output
reload(pynoddy.output)
output_name = "feature_out"
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('x',
colorbar = True, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
"""
Explanation: Load original model:
End of explanation
"""
f_set1 = open("../../sandbox/jack/features_lowres-5 with class ID.csv").readlines()
f_set1[0]
# initialise classification results array
cf1 = np.empty_like(nout.block)
# iterate through results and append
for f in f_set1[1:]:
    fl = f.rstrip().split(",")
    cf1[int(fl[0]), int(fl[1]), int(fl[2])] = int(fl[6])
f_set1[2:6]
nout.plot_section('x', data = cf1,
colorbar = True, title="", layer_labels = range(5),
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd')
# compare to original model:
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
nout.plot_section('x', ax = ax1,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
nout.plot_section('x', data = cf1,ax = ax2,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd')
"""
Explanation: Load sample classification results
The implemented classification method does not return a single best-fit model, but an ensemble of probable models (as it is MCMC sampling from the posterior). As a first test, we will therefore import single models and check the misclassification rate, defined as:
$$\mbox{MCR} = \frac{\mbox{Number of misclassified voxels}}{\mbox{Total number of voxels}}$$
End of explanation
"""
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(cf1[15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
print np.unique(nout.block)
print np.unique(cf1)
# define id mapping from cluster results to original:
# id_mapping = {2:1, 3:2, 4:5, 5:3, 1:4}
# remapping for result 4:
# id_mapping = {4:5, 3:4, 1:3, 5:2, 2:1}
# remapping for result 5:
id_mapping = {2:5, 1:4, 3:3, 5:2, 4:1}
"""
Explanation: Results of the classification do not necessarily contain the same ids as the units in the initial model. This seems to be the case here, as well. Re-sort:
End of explanation
"""
def re_map(id_val):
    return id_mapping[id_val]
re_map_vect = np.vectorize(re_map)
cf1_remap = re_map_vect(cf1)
# compare to original model:
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
nout.plot_section('x', ax = ax1,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
nout.plot_section('x', data = cf1_remap, ax = ax2,
colorbar = False, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd')
feature_diff = (nout.block != cf1_remap)
nout.plot_section('x', data = feature_diff,
colorbar = False, title="Difference between real and matched model",
cmap = 'YlOrRd')
# Calculate the misclassification:
np.sum(feature_diff) / float(nout.n_total)
# Export misclassification to VTK:
misclass = feature_diff.astype('int')
nout.export_to_vtk(vtk_filename = "misclass", data=misclass)
"""
Explanation: Now remap results and compare again:
Note: create a vectorised function to enable a direct re-mapping of the entire array while keeping the structure!
End of explanation
"""
def calc_misclassification(nout, filename):
    """Calculate misclassification for classification results data stored in file

    **Arguments**:
        - *nout* = NoddyOutput: original model (Noddy object)
        - *filename* = filename (with path): file with classification results
    """
    f_set1 = open(filename).readlines()
    # initialise classification results array
    cf1 = np.empty_like(nout.block)
    # iterate through results and append
    for f in f_set1[1:]:
        fl = f.rstrip().split(",")
        cf1[int(fl[0]), int(fl[1]), int(fl[2])] = int(fl[6])
    # remap ids
    cf1_remap = re_map_vect(cf1)
    # determine differences in class ids:
    feature_diff = (nout.block != cf1_remap)
    # Calculate the misclassification:
    misclass = np.sum(feature_diff) / float(nout.n_total)
    return misclass
filename = r"../../sandbox/jack/features_lowres-4 with class ID.csv"
calc_misclassification(nout, filename)
"""
Explanation: Combined analysis in a single function
Note: the function assumes correct EOL characters in the data file (check/adjust with vi: %s/\r/\r/g)
Problem: the remapping is unfortunately not identical!
End of explanation
"""
# f_set1 = open("../../sandbox/jack/features_lowres-6 with class ID and Prob.csv").readlines()
f_set1 = open("../../sandbox/jack/features_lowres-8 with Prob (weak Beta).csv").readlines()
f_set1[0]
# initialise classification results array
cf1 = np.empty_like(nout.block)
# Initialise probability array
probs = np.empty((5, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
    fl = f.rstrip().split(",")
    i, j, k = int(fl[0]), int(fl[1]), int(fl[2])
    # cf1[i,j,k] = int(fl[6])
    for i2 in range(5):
        probs[i2, i, j, k] = float(fl[i2+6])
"""
Explanation: Determine validity of uncertainty estimate
In addition to single model realisations, an estimate of model uncertainty is calculated (this is, actually, also one of the main "selling points" of the paper). So, we will now check if the correct model is actually within the range of the estimated model uncertainty bounds (i.e.: if all voxel values from the original model actually have a non-zero probability in the estimated model)!
First step: load estimated class probabilities:
End of explanation
"""
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(probs[4,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# Note: map now ids from original model to probability fields in results:
prob_mapping = {4:0, 5:1, 3:2, 1:3, 2:4}
# Check membership for each class in original model
for i in range(1, 6):
    tmp = np.ones_like(nout.block) * (nout.block == i)
    # test if voxels have non-zero probability by checking conjunction with zero-prob voxels
    prob_zero = probs[prob_mapping[i], :, :, :] == 0
    misidentified = np.sum(tmp * prob_zero)
    print i, misidentified
prob_zero = probs[prob_mapping[1],:,:,:] == 0
"""
Explanation: We now need to perform the remapping similar to before, but now for the probability fields:
End of explanation
"""
f_set1 = open("../../sandbox/jack/features_lowres-7 with 151 realizations.csv").readlines()
# Initialise results array
all_results = np.empty((152, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
    fl = f.rstrip().split(",")
    i, j, k = int(fl[0]), int(fl[1]), int(fl[2])
    # cf1[i,j,k] = int(fl[6])
    for i2 in range(152):
        try:
            all_results[i2, i, j, k] = float(fl[i2+5])
        except IndexError:
            print i2, i, j, k
"""
Explanation: Determination of misclassification statistics
Next step: use multiple results from one chain to determine misclassification statistics.
End of explanation
"""
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(all_results[5,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# mapping from results to original:
id_mapping = {2:5, 1:4, 3:3, 5:2, 4:1}
def re_map(id_val):
    return id_mapping[id_val]
re_map_vect = np.vectorize(re_map)
# Apply remapping to all but first result (seems to be original feature)
all_results_remap = re_map_vect(all_results[1:,:,:,:])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[30,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
# plt.colorbar(im1)
im2 = ax2.imshow(all_results_remap[85,30,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
"""
Explanation: First, we again need to check the assignment of the units/ class ids:
End of explanation
"""
all_misclass = np.empty(151)
for i in range(151):
    # determine differences in class ids:
    feature_diff = (nout.block != all_results_remap[i, :, :, :])
    # Calculate the misclassification:
    all_misclass[i] = np.sum(feature_diff) / float(nout.n_total)
plt.plot(all_misclass)
plt.title("Misclassification of suite lowres-7")
plt.xlabel("Model id")
plt.ylabel("MCR")
"""
Explanation: We can now determine the misclassification for all results:
End of explanation
"""
f_set1 = open("../../sandbox/jack/features_lowres-9 with 151 realizations.csv").readlines()
# Initialise results array
all_results = np.empty((151, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
    fl = f.rstrip().split(",")
    i, j, k = int(fl[0]), int(fl[1]), int(fl[2])
    # cf1[i,j,k] = int(fl[6])
    for i2 in range(151):
        try:
            all_results[i2, i, j, k] = float(fl[i2+6])
        except IndexError:
            print i2, i, j, k
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(all_results[20,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# define re-mapping
id_mapping = {2:5, 1:4, 3:3, 5:2, 4:1}
# Apply remapping to all but first result (seems to be original feature)
all_results_remap = re_map_vect(all_results[1:,:,:,:])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[30,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
# plt.colorbar(im1)
im2 = ax2.imshow(all_results_remap[115,30,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
all_misclass = np.empty(150)
for i in range(150):
    # determine differences in class ids:
    feature_diff = (nout.block != all_results_remap[i, :, :, :])
    # Calculate the misclassification:
    all_misclass[i] = np.sum(feature_diff) / float(nout.n_total)
plt.plot(all_misclass)
plt.title("Misclassification of suite lowres-9")
plt.xlabel("Model id")
plt.ylabel("MCR")
f_set1 = open("../../sandbox/jack/features_lowres-10 with 2000 realizations.csv").readlines()
# Initialise results array
all_results = np.empty((2000, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
    fl = f.rstrip().split(",")
    i, j, k = int(fl[0]), int(fl[1]), int(fl[2])
    # cf1[i,j,k] = int(fl[6])
    for i2 in range(2000):
        try:
            all_results[i2, i, j, k] = float(fl[i2+6])
        except IndexError:
            print i2, i, j, k
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(all_results[20,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# define re-mapping
id_mapping = {3:5, 4:4, 2:3, 1:2, 5:1, 0:0}
# Apply remapping to all but first result (seems to be original feature)
all_results_remap = re_map_vect(all_results[2:,:,:,:])
np.unique(all_results[1999,:,:,:])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[30,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
# plt.colorbar(im1)
im2 = ax2.imshow(all_results_remap[115,30,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
all_misclass = np.empty(1998)
for i in range(1998):
    # determine differences in class ids:
    feature_diff = (nout.block != all_results_remap[i, :, :, :])
    # Calculate the misclassification:
    all_misclass[i] = np.sum(feature_diff) / float(nout.n_total)
plt.plot(all_misclass[100:])
plt.title("Misclassification of suite lowres-10")
plt.xlabel("Model id")
plt.ylabel("MCR")
plt.hist(all_misclass[100:])
"""
Explanation: It seems to be the case that the upper thin layer vanishes after approimately 30-40 iterations. From then on, the misclassification rate is approximately constant at around 9.5 percent (which is still quite acceptable!).
Let's compare this now to classifications with another (lower) beta value (which should put more weight to the data?):
End of explanation
"""
# f_set1 = open("../../sandbox/jack/features_lowres-6 with class ID and Prob.csv").readlines()
f_set1 = open("../../sandbox/jack/features_lowres-10 with Prob (weak Beta).csv").readlines()
# initialise classification results array
cf1 = np.empty_like(nout.block)
f_set1[0]
# Initialise probability array
probs = np.empty((5, cf1.shape[0], cf1.shape[1], cf1.shape[2]))
# iterate through results and append
for f in f_set1[1:]:
    fl = f.rstrip().split(",")
    i, j, k = int(fl[0]), int(fl[1]), int(fl[2])
    # cf1[i,j,k] = int(fl[6])
    for i2 in range(5):
        probs[i2, i, j, k] = float(fl[i2+6])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(probs[0,15,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
# Note: map now ids from original model to probability fields in results:
prob_mapping = {2:0, 3:1, 5:2, 4:3, 1:4}
# Check membership for each class in original model
for i in range(1, 6):
    tmp = np.ones_like(nout.block) * (nout.block == i)
    # test if voxels have non-zero probability by checking conjunction with zero-prob voxels
    prob_zero = probs[prob_mapping[i], :, :, :] == 0
    misidentified = np.sum(tmp * prob_zero)
    print i, misidentified
info_entropy = np.zeros_like(nout.block)
for prob in probs:
    info_entropy[prob > 0] -= prob[prob > 0] * np.log2(prob[prob > 0])
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
im1 = ax1.imshow(nout.block[15,:,:].transpose(),
interpolation = 'none', cmap = 'YlOrRd', origin = 'lower left')
plt.colorbar(im1)
im2 = ax2.imshow(info_entropy[1,:,:].transpose(),
interpolation = 'none',
cmap = 'YlOrRd', origin = 'lower left')
nout.export_to_vtk(vtk_filename = "../../sandbox/jack/info_entropy", data = info_entropy)
np.max(probs)
np.max(info_entropy)
"""
Explanation: Determine validity of estimated probability
End of explanation
"""
|
subhankarb/Machine-Learning-PlayGround | Machine-Learning-Specialization/machine_learning_regression/week2/numpy-tutorial.ipynb | apache-2.0 | import numpy as np # importing this way allows us to refer to numpy as np
"""
Explanation: Numpy Tutorial
Numpy is a computational library for Python that is optimized for operations on multi-dimensional arrays. In this notebook we will use numpy to work with 1-d arrays (often called vectors) and 2-d arrays (often called matrices).
For the full user guide and reference for numpy see: http://docs.scipy.org/doc/numpy/
End of explanation
"""
mylist = [1., 2., 3., 4.]
mynparray = np.array(mylist)
mynparray
"""
Explanation: Creating Numpy Arrays
New arrays can be made in several ways. We can take an existing list and convert it to a numpy array:
End of explanation
"""
one_vector = np.ones(4)
print one_vector # using print removes the array() portion
one2Darray = np.ones((2, 4)) # a 2D array with 2 "rows" and 4 "columns"
print one2Darray
zero_vector = np.zeros(4)
print zero_vector
"""
Explanation: You can initialize an array (of any dimension) of all ones or all zeroes with the ones() and zeros() functions:
End of explanation
"""
empty_vector = np.empty(5)
print empty_vector
"""
Explanation: You can also initialize an empty array which will be filled with values. This is the fastest way to initialize a fixed-size numpy array however you must ensure that you replace all of the values.
End of explanation
"""
mynparray[2]
"""
Explanation: Accessing array elements
Accessing an array is straightforward. For vectors you access an element by referring to its index inside square brackets. Recall that indices in Python start with 0.
End of explanation
"""
my_matrix = np.array([[1, 2, 3], [4, 5, 6]])
print my_matrix
print my_matrix[1, 2]
"""
Explanation: 2D arrays are accessed similarly by referring to the row and column index separated by a comma:
End of explanation
"""
print my_matrix[0:2, 2] # recall 0:2 = [0, 1]
print my_matrix[0, 0:3]
"""
Explanation: Sequences of indices can be accessed using ':' for example
End of explanation
"""
fib_indices = np.array([1, 1, 2, 3])
random_vector = np.random.random(10) # 10 random numbers between 0 and 1
print random_vector
print random_vector[fib_indices]
"""
Explanation: You can also pass a list of indices.
End of explanation
"""
my_vector = np.array([1, 2, 3, 4])
select_index = np.array([True, False, True, False])
print my_vector[select_index]
"""
Explanation: You can also use true/false values to select values
End of explanation
"""
select_cols = np.array([True, False, True]) # 1st and 3rd column
select_rows = np.array([False, True]) # 2nd row
print my_matrix[select_rows, :] # just 2nd row but all columns
print my_matrix[:, select_cols] # all rows and just the 1st and 3rd column
"""
Explanation: For 2D arrays you can select specific columns and specific rows. Passing ':' selects all rows/columns
End of explanation
"""
my_array = np.array([1., 2., 3., 4.])
print my_array*my_array
print my_array**2
print my_array - np.ones(4)
print my_array + np.ones(4)
print my_array / 3
print my_array / np.array([2., 3., 4., 5.]) # = [1.0/2.0, 2.0/3.0, 3.0/4.0, 4.0/5.0]
"""
Explanation: Operations on Arrays
You can use the operations '*', '**', '/', '+' and '-' on numpy arrays and they operate elementwise.
End of explanation
"""
print np.sum(my_array)
print np.average(my_array)
print np.sum(my_array)/len(my_array)
"""
Explanation: You can compute the sum with np.sum() and the average with np.average()
End of explanation
"""
array1 = np.array([1., 2., 3., 4.])
array2 = np.array([2., 3., 4., 5.])
print np.dot(array1, array2)
print np.sum(array1*array2)
"""
Explanation: The dot product
An important mathematical operation in linear algebra is the dot product.
When we compute the dot product between two vectors we are simply multiplying them elementwise and adding them up. In numpy you can do this with np.dot()
End of explanation
"""
array1_mag = np.sqrt(np.dot(array1, array1))
print array1_mag
print np.sqrt(np.sum(array1*array1))
"""
Explanation: Recall that the Euclidean length (or magnitude) of a vector is the squareroot of the sum of the squares of the components. This is just the squareroot of the dot product of the vector with itself:
End of explanation
"""
my_features = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
print my_features
my_weights = np.array([0.4, 0.5])
print my_weights
my_predictions = np.dot(my_features, my_weights) # note that the weights are on the right
print my_predictions # which has 4 elements since my_features has 4 rows
"""
Explanation: We can also use the dot product when we have a 2D array (or matrix). When you have an vector with the same number of elements as the matrix (2D array) has columns you can right-multiply the matrix by the vector to get another vector with the same number of elements as the matrix has rows. For example this is how you compute the predicted values given a matrix of features and an array of weights.
End of explanation
"""
my_matrix = my_features
my_array = np.array([0.3, 0.4, 0.5, 0.6])
print np.dot(my_array, my_matrix) # which has 2 elements because my_matrix has 2 columns
"""
Explanation: Similarly if you have a vector with the same number of elements as the matrix has rows you can left multiply them.
End of explanation
"""
matrix_1 = np.array([[1., 2., 3.],[4., 5., 6.]])
print matrix_1
matrix_2 = np.array([[1., 2.], [3., 4.], [5., 6.]])
print matrix_2
print 2 * np.dot(matrix_1, matrix_2)
print np.dot(matrix_1, matrix_2)
"""
Explanation: Multiplying Matrices
If we have two 2D arrays (matrices) matrix_1 and matrix_2 where the number of columns of matrix_1 is the same as the number of rows of matrix_2 then we can use np.dot() to perform matrix multiplication.
End of explanation
"""
|
wesleybeckner/salty | examples/salty_eScience_chalk_talk.ipynb | mit | import statistics
import requests
import json
import pickle
import salty
import numpy as np
import matplotlib.pyplot as plt
import numpy.linalg as LINA
from scipy import stats
from scipy.stats import uniform as sp_rand
from scipy.stats import mode
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
import os
import sys
import pandas as pd
from collections import OrderedDict
from numpy.random import randint
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RandomizedSearchCV
from math import log
from time import sleep
%matplotlib inline
class dev_model():
    def __init__(self, coef_data, data):
        self.Coef_data = coef_data
        self.Data = data
"""
Explanation: <small>This notebook was put together by wesley beckner</small>
<a id='top'></a>
Contents
scrape data
create descriptors
optimize LASSO
create confidence intervals for coefficients
multi-layer perceptron (MLP) regressor
create static files
End of explanation
"""
paper_url = "http://ilthermo.boulder.nist.gov/ILT2/ilsearch?"\
"cmp=&ncmp=1&year=&auth=&keyw=&prp=lcRG"
r = requests.get(paper_url)
header = r.json()['header']
papers = r.json()['res']
i = 1
data_url = 'http://ilthermo.boulder.nist.gov/ILT2/ilset?set={paper_id}'
for paper in papers[:1]:
    r = requests.get(data_url.format(paper_id=paper[0]))
    data = r.json()['data']
    with open("../salty/data/MELTING_POINT/%s.json" % i, "w") as outfile:
        json.dump(r.json(), outfile)
    # then do whatever you want with the data, like writing it to a file
    sleep(0.5)  # important step to avoid getting banned by the server
    i += 1
"""
Explanation: <a id='scrape'></a>
Scrape ILThermo Data
back to top
ILThermo has specific 4-letter tags for the properties in the database. These can be determined by inspecting the web elements on their website.
Melting point: prp=lcRG (note this in the paper_url string)
All that needs to be changed to scrape other property data is the 4-letter tag and the directory in which to save the information.
End of explanation
"""
###add JSON files to density.csv
outer_old = pd.DataFrame()
outer_new = pd.DataFrame()
number_of_files = 2266
for i in range(10):
with open("../salty/data/DENSITY/%s.json" % str(i+1)) as json_file:
#grab data, data headers (names), the salt name
json_full = json.load(json_file)
json_data = pd.DataFrame(json_full['data'])
json_datanames = np.array(json_full['dhead'])
json_data.columns = json_datanames
json_saltname = pd.DataFrame(json_full['components'])
print(json_saltname.iloc[0][3])
inner_old = pd.DataFrame()
inner_new = pd.DataFrame()
#loop through the columns of the data, note that some of the
#json files are missing pressure data.
for indexer in range(len(json_data.columns)):
grab=json_data.columns[indexer]
col_values = json_data[grab]  # renamed from `list` to avoid shadowing the builtin
my_list = [entry[0] for entry in col_values]
dfmy_list = pd.DataFrame(my_list)
dfmy_list.columns = [json_datanames[indexer][0]]
inner_new = pd.concat([dfmy_list, inner_old], axis=1)
inner_old = inner_new
#add the name of the salt
inner_old['salt_name']=json_saltname.iloc[0][3]
#add to the growing dataframe
outer_new = pd.concat([inner_old, outer_old], axis=0)
outer_old = outer_new
print(outer_old)
# pd.DataFrame.to_csv(outer_old, path_or_buf='../salty/data/density.csv', index=False)
"""
Explanation: <a id='descriptors'></a>
Create Descriptors
back to top
The scraped data is in the form of a json file. The json files contain all the experimental information NIST has archived, including methods and experimental error!
Unfortunately the IUPAC names in the database are imperfect. We address this after the following cell.
End of explanation
"""
###a hacky hack solution to cleaning raw ILThermo data
# df = pd.read_csv("../salty/data/viscosity_full.csv")
df = pd.read_csv('../salty/data/density.csv',delimiter=',')
salts = pd.DataFrame(df["salt_name"])
salts = salts.rename(columns={"salt_name": "salts"})
anions= []
cations= []
missed = 0
for i in range(df.shape[0]):
if len(salts['salts'].iloc[i].split()) == 2:
cations.append(salts['salts'].iloc[i].split()[0])
anions.append(salts['salts'].iloc[i].split()[1])
elif len(salts['salts'].iloc[i].split()) == 3:
#two word cation
if"tris(2-hydroxyethyl) methylammonium" in salts['salts'].iloc[i]:
first = salts['salts'].iloc[i].split()[0]
second = salts['salts'].iloc[i].split()[1]
anions.append(salts['salts'].iloc[i].split()[2])
cations.append(first + ' ' + second)
#these strings have two word anions
elif("sulfate" in salts['salts'].iloc[i] or\
"phosphate" in salts['salts'].iloc[i] or\
"phosphonate" in salts['salts'].iloc[i] or\
"carbonate" in salts['salts'].iloc[i]):
first = salts['salts'].iloc[i].split()[1]
second = salts['salts'].iloc[i].split()[2]
cations.append(salts['salts'].iloc[i].split()[0])
anions.append(first + ' ' + second)
elif("bis(trifluoromethylsulfonyl)imide" in salts['salts'].iloc[i]):
#this string contains 2 word cations
first = salts['salts'].iloc[i].split()[0]
second = salts['salts'].iloc[i].split()[1]
third = salts['salts'].iloc[i].split()[2]
cations.append(first + ' ' + second)
anions.append(third)
else:
print(salts['salts'].iloc[i])
missed += 1
elif len(salts['salts'].iloc[i].split()) == 4:
#this particular string block contains (1:1) at end of name
if("1,1,2,3,3,3-hexafluoro-1-propanesulfonate" in salts['salts'].iloc[i]):
first = salts['salts'].iloc[i].split()[0]
second = salts['salts'].iloc[i].split()[1]
cations.append(first + ' ' + second)
anions.append(salts['salts'].iloc[i].split()[2])
else:
#and two word anion
first = salts['salts'].iloc[i].split()[1]
second = salts['salts'].iloc[i].split()[2]
anions.append(first + ' ' + second)
cations.append(salts['salts'].iloc[i].split()[0])
elif("2-aminoethanol-2-hydroxypropanoate" in salts['salts'].iloc[i]):
#one of the ilthermo salts is missing a space between cation/anion
anions.append("2-hydroxypropanoate")
cations.append("2-aminoethanol")
elif len(salts['salts'].iloc[i].split()) == 5:
if("bis[(trifluoromethyl)sulfonyl]imide" in salts['salts'].iloc[i]):
anions.append("bis(trifluoromethylsulfonyl)imide")
first = salts['salts'].iloc[i].split()[0]
second = salts['salts'].iloc[i].split()[1]
third = salts['salts'].iloc[i].split()[2]
fourth = salts['salts'].iloc[i].split()[3]
cations.append(first + ' ' + second + ' ' + third + ' ' + fourth)
if("trifluoro(perfluoropropyl)borate" in salts['salts'].iloc[i]):
anions.append("trifluoro(perfluoropropyl)borate")
cations.append("N,N,N-triethyl-2-methoxyethan-1-aminium")
else:
print(salts['salts'].iloc[i])
missed += 1
anions = pd.DataFrame(anions, columns=["name-anion"])
cations = pd.DataFrame(cations, columns=["name-cation"])
salts=pd.read_csv('../salty/data/salts_with_smiles.csv',delimiter=',')
new_df = pd.concat([salts["name-cation"], salts["name-anion"], salts["Temperature, K"],\
salts["Pressure, kPa"], salts["Specific density, kg/m<SUP>3</SUP>"]],\
axis = 1)
print(missed)
"""
Explanation: Dealing with messy data is commonplace, even with highly vetted data like ILThermo's.
I addressed inaccuracies in the IUPAC naming by first parsing the IUPAC names into two strings (cation and anion) and then hand-checking the strings that had more than two components. I then matched these unusual IUPAC names to their correct SMILES representations. These are stored in the salty database files cationInfo.csv and anionInfo.csv.
I've taken care of most of them, but I've left a few unaddressed; you can see these after executing the cell below.
End of explanation
"""
cationDescriptors = salty.load_data("cationDescriptors.csv")
cationDescriptors.columns = [str(col) + '-cation' for col in cationDescriptors.columns]
anionDescriptors = salty.load_data("anionDescriptors.csv")
anionDescriptors.columns = [str(col) + '-anion' for col in anionDescriptors.columns]
# new_df = pd.concat([cations, anions, df["Temperature, K"], df["Pressure, kPa"],\
# df["Specific density, kg/m<SUP>3</SUP>"]], axis=1)
new_df = pd.merge(cationDescriptors, new_df, on="name-cation", how="right")
new_df = pd.merge(anionDescriptors, new_df, on="name-anion", how="right")
new_df.dropna(inplace=True) #remove entries not in the SMILES database
pd.DataFrame.to_csv(new_df, path_or_buf='../salty/data/density_premodel.csv', index=False)
"""
Explanation: After appending SMILES to the dataframe, we're ready to add RDKit descriptors. Because the descriptors are specific to a given cation and anion, and there are many repeats of these within the data (~10,000 datapoints with ~300 cations and ~150 anions) it is much faster to use pandas to append existing descriptor dataframes to our growing dataframe from ILThermo.
End of explanation
"""
property_model = "density"
df = pd.read_csv('../salty/data/%s_premodel.csv' % property_model)  # DataFrame.from_csv is deprecated
metaDf = df.select_dtypes(include=["object"])
dataDf = df.select_dtypes(include=[np.number])
property_scale = dataDf["Specific density, kg/m<SUP>3</SUP>"].apply(lambda x: log(float(x)))
cols = dataDf.columns.tolist()
instance = StandardScaler()
data = pd.DataFrame(instance.fit_transform(dataDf.iloc[:,:-1]), columns=cols[:-1])
df = pd.concat([data, property_scale, metaDf], axis=1)
mean_std_of_coeffs = pd.DataFrame([instance.mean_,instance.scale_], columns=cols[:-1])
density_devmodel = dev_model(mean_std_of_coeffs, df)  # renamed: this cell builds the density model
pickle_out = open("../salty/data/%s_devmodel.pkl" % property_model, "wb")
pickle.dump(density_devmodel, pickle_out)
pickle_out.close()
"""
Explanation: <a id='optimize'></a>
Optimize LASSO (alpha hyperparameter)
back to top
I like to shrink my feature space before feeding it into a neural network.
This is useful for two reasons. We can combat overfitting in our neural network and we can speed up our genetic search algorithm by reducing the number of computations needed in our fitness test--more on this later.
Scikit-learn has a random search algorithm that is pretty easy to implement and useful. I've personally used bootstrap, cross validation, and shuffle-split to parameterize LASSO on ILThermo data and they all agree pretty well with each other.
End of explanation
"""
pickle_in = open("../salty/data/%s_devmodel.pkl" % property_model, "rb")
devmodel = pickle.load(pickle_in)
df = devmodel.Data
metaDf = df.select_dtypes(include=["object"])
dataDf = df.select_dtypes(include=[np.number])
X_train = dataDf.values[:,:-1]
Y_train = dataDf.values[:,-1]
#metaDf["Specific density, kg/m<SUP>3</SUP>"].str.split().apply(lambda x: log(float(x[0])))
"""
Explanation: At this point I introduce a new class of objects called devmodel.
devmodel is a pickle-able object. self.Data contains the scaled/centered feature data and log of the property data as well as the original IUPAC names and SMILES. This makes it easy to consistently unpickle the devmodel and begin using it in an sklearn algorithm without making changes to the dataframe. self.Coef_data contains the mean and standard deviation of the features so that structure candidates in our genetic algorithm can be scaled and centered appropriately.
End of explanation
"""
#param_grid = {"alpha": sp_rand(0,0.1), "hidden_layer_sizes" : [randint(10)]}
# model = MLPRegressor(max_iter=10000,tol=1e-8)
param_grid = {"alpha": sp_rand(0.001,0.1)}
model = Lasso(max_iter=100000, tol=1e-8)  # max_iter must be an integer
grid = RandomizedSearchCV(estimator=model, param_distributions=param_grid, n_jobs=-1,\
n_iter=15)
grid_result = grid.fit(X_train, Y_train)
print(grid_result.best_estimator_)
"""
Explanation: And now we can parameterize our LASSO model:
End of explanation
"""
iterations=2
averages=np.zeros(iterations)
variances=np.zeros(iterations)
test_MSE_array=[]
property_model = "density"
pickle_in = open("../salty/data/%s_devmodel.pkl" % property_model, "rb")
devmodel = pickle.load(pickle_in)
df = devmodel.Data
df = df.sample(frac=1)
# df["Viscosity, Pas"] = df["Viscosity, Pas"].str.split().apply(lambda x: log(float(x[0])))
metadf = df.select_dtypes(include=["object"])
datadf = df.select_dtypes(include=[np.number])
data=np.array(datadf)
n = data.shape[0]
d = data.shape[1]
d -= 1
n_train = int(n*0.8) #set fraction of data to be for training
n_test = n - n_train
deslist=datadf.columns
score=np.zeros(len(datadf.columns))
feature_coefficients=np.zeros((len(datadf.columns),iterations))
test_MSE_array=[]
model_intercept_array=[]
for i in range(iterations):
data = np.random.permutation(data)
X_train = np.zeros((n_train,d)) #prepare train/test arrays
X_test = np.zeros((n_test,d))
Y_train = np.zeros((n_train))
Y_test = np.zeros((n_test))
###sample from training set with replacement
for k in range(n_train):
x = randint(0,n_train)
X_train[k] = data[x,:-1]
Y_train[k] = (float(data[x,-1]))
n = data.shape[0]
###sample from test set with replacement
for k in range(n_test):
x = randint(n_train,n)
X_test[k] = data[x,:-1]
Y_test[k] = (float(data[x,-1]))
###train the lasso model
model = Lasso(alpha=0.007115873059701538,tol=1e-10,max_iter=4000)
model.fit(X_train,Y_train)
###Check what features are selected
p=0
avg_size=[]
for a in range(len(data[0])-1):
if model.coef_[a] != 0:
score[a] = score[a] + 1
feature_coefficients[a,i] = model.coef_[a] ###append the model coefs
p+=1
avg_size.append(p)
###Calculate the test set MSE
Y_hat = model.predict(X_test)
n = len(Y_test)
test_MSE = np.sum((Y_test - Y_hat)**2) / n
test_MSE_array.append(test_MSE)
###Grab intercepts
model_intercept_array.append(model.intercept_)
print("{}\t{}".format("average feature length:", np.average(avg_size)))
print("{}\t{}".format("average y-intercept:", "%.2f" % np.average(model_intercept_array)))
print("{}\t{}".format("average test MSE:", "%.2E" % np.average(test_MSE_array)))
print("{}\t{}".format("average MSE std dev:", "%.2E" % np.std(test_MSE_array)))
select_score=[]
select_deslist=[]
feature_coefficient_averages=[]
feature_coefficient_variance=[]
feature_coefficients_all=[]
for a in range(len(deslist)):
if score[a] != 0:
select_score.append(score[a])
select_deslist.append(deslist[a])
feature_coefficient_averages.append(np.average(feature_coefficients[a,:]))
feature_coefficient_variance.append(np.std(feature_coefficients[a,:]))
feature_coefficients_all.append(feature_coefficients[a,:])
"""
Explanation: <a id='ci_coeff'></a>
Determine Confidence Intervals for LASSO Coefficients
back to top
It can be incredibly useful to look at our coefficient response to changes in the underlying training data (e.g. does it look like one of our features is being selected because of a single type of training datum, a single category of salt, etc.?)
This can be assessed using the bootstrap.
End of explanation
"""
#save the selected feature coeffs and their scores
df = pd.DataFrame(select_score, select_deslist)
df.to_pickle("../salty/data/bootstrap_list_scores.pkl")
#save the selected feature coefficients
df = pd.DataFrame(data=np.array(feature_coefficients_all).T, columns=select_deslist)
df = df.T.sort_values(by=1, ascending=False)
df.to_pickle("../salty/data/bootstrap_coefficients.pkl")
#save all the bootstrap data to create a box & whiskers plot
df = pd.DataFrame(data=[feature_coefficient_averages,\
feature_coefficient_variance], columns=select_deslist)
df = df.T.sort_values(by=1, ascending=False)
df.to_pickle("../salty/data/bootstrap_coefficient_estimates.pkl")
#save the coefficients sorted by their abs() values
df = pd.DataFrame(select_score, select_deslist)
df = df.sort_values(by=0, ascending=False).iloc[:]
cols = df.T.columns.tolist()
df = pd.read_pickle('../salty/data/bootstrap_coefficient_estimates.pkl')
df = df.loc[cols]
med = df.T.median()
med.sort_values()
newdf = df.T[med.index]
newdf.to_pickle('../salty/data/bootstrap_coefficient_estimates_top_sorted.pkl')
df = pd.DataFrame(select_score, select_deslist)
df.sort_values(by=0, ascending=False)
model = pd.read_pickle('../salty/data/bootstrap_coefficient_estimates_top_sorted.pkl')
model2 = model.abs()
df = model2.T.sort_values(by=0, ascending=False).iloc[:]
cols = df.T.columns.tolist()
df = pd.read_pickle('../salty/data/bootstrap_coefficients.pkl')
df = df.loc[cols]
med = df.T.median()
med.sort_values()
newdf = df.T[med.index]
newdf = newdf.replace(0, np.nan)
props = dict(boxes=tableau20[0], whiskers=tableau20[8], medians=tableau20[4],\
caps=tableau20[6])
newdf.abs().plot(kind='box', figsize=(5,12), subplots=False, fontsize=18,\
showmeans=True, logy=False, sharey=True, sharex=True, whis='range', showfliers=False,\
color=props, vert=False)
plt.xticks(np.arange(0,0.1,0.02))
print(df.shape)
# plt.savefig(filename='paper_images/Box_Plot_All_Salts.eps', bbox_inches='tight', format='eps',\
# transparent=True)
"""
Explanation: Executing the following cell will overwrite saved files that were generated from runs with many bootstrap iterations.
End of explanation
"""
df = pd.read_pickle('../salty/data/bootstrap_coefficients.pkl')
med = df.T.median()
med.sort_values()
newdf = df.T[med.index]
df = newdf
for index, string in enumerate(newdf.columns):
print(string)
#get mean, std, N, and SEM from our sample
samplemean=np.mean(df[string])
print('sample mean', samplemean)
samplestd=np.std(df[string],ddof=1)
print('sample std', samplestd)
sampleN=1000 #note: assumes 1000 bootstrap iterations were run; should match the iterations variable above
samplesem=stats.sem(df[string])
print('sample SEM', samplesem)
#t, the significance level of our sample mean is defined as
#samplemean - 0 / standard error of sample mean
#in other words, the number of standard deviations
#the coefficient value is from 0
#the t value by itself does not tell us very much
t=(samplemean)/samplesem
print('t', t)
#the p-value tells us the probability of achieving a value
#at least as extreme as the one for our dataset if the null
#hypothesis were true
p=stats.t.sf(np.abs(t),sampleN-1)*2 #multiply by two for two-sided test
print('p', p)
#test rejection of the null hypothesis based on
#significance level of 0.05
alpha=0.05
if p < alpha:
print('reject null hypothesis')
else:
print('fail to reject null hypothesis')
"""
Explanation: It can also be useful to evaluate the t-scores for the coefficients.
End of explanation
"""
mse_scores=[]
for i in range(df.shape[0]):
model = pd.read_pickle('../salty/data/bootstrap_coefficient_estimates_top_sorted.pkl')
model2 = model.abs()
df = model2.T.sort_values(by=0, ascending=False).iloc[:i]
cols = df.T.columns.tolist()
model = model[cols]
cols = model.columns.tolist()
cols.append("Specific density, kg/m<SUP>3</SUP>")
property_model = "density"
pickle_in = open("../salty/data/%s_devmodel.pkl" % property_model, "rb")
devmodel = pickle.load(pickle_in)
df = devmodel.Data
df = df.sample(frac=1)
metadf = df.select_dtypes(include=["object"])
datadf = df.select_dtypes(include=[np.number])
df = datadf.T.loc[cols]
data=np.array(df.T)
n = data.shape[0]
d = data.shape[1]
d -= 1
n_train = 0 #int(n*0.8); here the entire dataset is used as the test set
n_test = n - n_train
X_train = np.zeros((n_train,d)) #prepare train/test arrays
X_test = np.zeros((n_test,d))
Y_train = np.zeros((n_train))
Y_test = np.zeros((n_test))
X_train[:] = data[:n_train,:-1] #fill arrays according to train/test split
Y_train[:] = (data[:n_train,-1].astype(float))
X_test[:] = data[n_train:,:-1]
Y_test[:] = (data[n_train:,-1].astype(float))
Y_hat = np.dot(X_test, model.loc[0])+np.mean(Y_test[:] - np.dot(X_test[:], model.loc[0]))
n = len(Y_test)
test_MSE = np.sum((Y_test-Y_hat)**2) / n
mse_scores.append(test_MSE)
with plt.style.context('seaborn-whitegrid'):
fig=plt.figure(figsize=(5,5), dpi=300)
ax=fig.add_subplot(111)
ax.plot(mse_scores)
ax.grid(False)
# plt.xticks(np.arange(0,31,10))
# plt.yticks(np.arange(0,1.7,.4))
"""
Explanation: Create Models Progressively Dropping Features
back to top
A last check that I find very useful is progressively dropping features from the LASSO model (based on their average coefficients--see box and whiskers plot above). At some point we should see that the inclusion of additional features doesn't improve the performance of the model. In this case we see improvement fall off at about 15-20 features.
End of explanation
"""
####Create dataset according to LASSO selected features
df = pd.read_pickle("../salty/data/bootstrap_list_scores.pkl")
df = df.sort_values(by=0, ascending=False)
avg_selected_features=20
df = df.iloc[:avg_selected_features]
# coeffs = mean_std_of_coeffs[cols]
property_model = "density"
pickle_in = open("../salty/data/%s_devmodel.pkl" % property_model, "rb")
devmodel = pickle.load(pickle_in)
rawdf = devmodel.Data
rawdf = rawdf.sample(frac=1)
metadf = rawdf.select_dtypes(include=["object"])
datadf = rawdf.select_dtypes(include=[np.number])
to_add=[]
for i in range(len(df)):
to_add.append(df.index[i])
cols = [col for col in datadf.columns if col in to_add]
cols.append("Specific density, kg/m<SUP>3</SUP>")
df = datadf.T.loc[cols]
data=np.array(df.T) #use the feature-selected frame (df), not the full datadf
n = data.shape[0]
d = data.shape[1]
d -= 1
n_train = int(n*0.8) #set fraction of data to be for training
n_test = n - n_train
X_train = np.zeros((n_train,d)) #prepare train/test arrays
X_test = np.zeros((n_test,d))
Y_train = np.zeros((n_train))
Y_test = np.zeros((n_test))
X_train[:] = data[:n_train,:-1] #fill arrays according to train/test split
Y_train[:] = (data[:n_train,-1].astype(float))
X_test[:] = data[n_train:,:-1]
Y_test[:] = (data[n_train:,-1].astype(float))
"""
Explanation: <a id='nn'></a>
MLPRegressor
back to top
I set avg_selected_features to the number of features I want to include based on the box-whiskers plot, t-tests, and progressively dropped features model. I've set this value to 20 in the cell bellow.
End of explanation
"""
###Randomized Search NN Characterization
param_grid = {"activation": ["identity", "logistic", "tanh", "relu"],\
"solver": ["lbfgs", "sgd", "adam"], "alpha": sp_rand(),\
"learning_rate" :["constant", "invscaling", "adaptive"],\
"hidden_layer_sizes": [randint(100)]}
model = MLPRegressor(max_iter=400,tol=1e-8)
grid = RandomizedSearchCV(estimator=model, param_distributions=param_grid,\
n_jobs=-1, n_iter=10)
grid_result = grid.fit(X_train, Y_train)
print(grid_result.best_estimator_)
model = MLPRegressor(activation='logistic', alpha=0.92078, batch_size='auto',
beta_1=0.9, beta_2=0.999, early_stopping=False, epsilon=1e-08,
hidden_layer_sizes=75, learning_rate='constant',
learning_rate_init=0.001, max_iter=100000000, momentum=0.9,
nesterovs_momentum=True, power_t=0.5, random_state=None,
shuffle=True, solver='lbfgs', tol=1e-08, validation_fraction=0.1,
verbose=False, warm_start=False)
model.fit(X_train,Y_train)
with plt.style.context('seaborn-whitegrid'):
fig=plt.figure(figsize=(6,6), dpi=300)
ax=fig.add_subplot(111)
ax.plot(np.exp(Y_test),np.exp(model.predict(X_test)),\
marker=".",linestyle="")
"""
Explanation: I usually optimize my MLP regressor hyperparameters with any new type of dataset. This takes a long time to run, so I use the Hyak supercomputer.
End of explanation
"""
# prod_model is assumed to be a small container class analogous to dev_model,
# with coeffs the selected-feature scaling frame (see the commented line in the cell above)
prodmodel = prod_model(coeffs, model)
pickle_out = open("../salty/data/%s_prodmodel.pkl" % property_model, "wb")
pickle.dump(prodmodel, pickle_out)
pickle_out.close()
"""
Explanation: <a id='static'></a>
Save the final model to be used in the GAINS fitness test
back to top
End of explanation
"""
|
dudektria/notebooks | computational-chemistry/reaction-mechanisms/reaction-mechanisms.ipynb | mit | # Import matplotlib and seaborn (plotting).
# Set parameters for plotting.
%matplotlib inline
import seaborn as sns
sns.set_style("white")
sns.set_context("poster")
sns.set_palette("colorblind", color_codes=True)
"""
Explanation: Things to be done:
1. Calculate Eyring rates between ground and transition states and store them in edges. Do the same for equilibrium constants and adjacent ground states. You can check for ground/transition states by counting the number of imaginary frequencies (0/1).
2. Check what rnxlvls does. Do better.
3. Allow user to hide the diagram frame completely (https://stackoverflow.com/a/14913405/4039050)
4. Use fig, ax API instead of direct use of plt.
5. Create path for understanding IRCs, scans of PES and for applying activation-strain model.
1. path should deal with a different type of data structure than a digraph since we are dealing with an almost continuous input when we use path.
2. Furthermore, an IRC is normally done in two steps in different directions on the PES; we should thus make it easy for this continuous input to be inverted and concatenated.
3. The activation-strain model requires prior treatment of the data (energy differences of different continuous paths, etc.); we need to make it easy to do so before feeding path
End of explanation
"""
%ls *out
"""
Explanation: Reactions mechanisms
This is a simple example of use of eyring for analysing reaction mechanisms from computational chemistry log files.
First, we have the following ORCA log files available:
End of explanation
"""
from eyring import reactions as rxn
reactant = "reactant.out"
product = "product.out"
ts = "ts.out"
names = {reactant: r"$\mathbf{A}$",
product: r"$\mathbf{B}$",
ts: r"$\mathbf{A^\neq}$"}
"""
Explanation: Now we will import eyring.reactions and set some variables to help us out in producing pretty diagrams.
End of explanation
"""
G = rxn.mechanism([reactant, ts, product], concentration=1.)
rxn.diagram(G, reactant, product, names)
"""
Explanation: Below we produce the digraph G containing information about our reaction.
This will be used to generate the diagram.
End of explanation
"""
|
misken/hillmaker | hillmaker/examples/basic_usage_shortstay_unit.ipynb | apache-2.0 | import pandas as pd
import hillmaker as hm
"""
Explanation: Hillmaker - basic usage
In this notebook we'll focus on basic use of Hillmaker for analyzing occupancy in a typical hospital setting. The data is fictitious data from a hospital short stay unit. Patients flow through a short stay unit for a variety of procedures, tests or therapies. Let's assume patients can be classified into one of five categories of patient types: ART (arterialgram), CAT (post cardiac-cath), MYE (myelogram), IVT (IV therapy), and OTH (other). From one of our hospital information systems we were able to get raw data about the entry and exit times of each patient. For simplicity, the data is in a csv file.
This example assumes you are already familiar with statistical occupancy analysis using the old version of Hillmaker or some similar such tool. It also assumes some knowledge of using Python for analytical work.
The following blog posts are helpful:
Computing occupancy statistics with Python - Part 1 of 3
Computing occupancy statistics with Python - Part 2 of 3
Current status of code
Hillmaker is implemented as a Python module which can be used by importing hillmaker and then calling the main Hillmaker function, make_hills() (or any component function included in the module). This new version of Hillmaker is in what I'd call an alpha state. The output does match the Access version for the ShortStay database that I included in the original Hillmaker. I've been actively using it to process thousands of simulation output log files as part of a research project on OB patient flow. More testing is needed before I release it publicly, but it does appear to be doing its primary job correctly. Please let me know if you think it's computing something incorrectly. Before using for any real project work, you should do your own testing to confirm that it is working correctly. Use at your own risk.
User interface plans
Here's where I'd like some input from you, user of the old Access version of Hillmaker. Over the years, I (and many others) have used Hillmaker in a variety of ways, including:
MS Access form based GUI
run main Hillmaker sub from Access VBA Immediate Window
run Hillmaker main sub (and/or components subs) via custom VBA procedures
I'd like users to be able to use the new Python based version in a number of different ways as well. As I'll show in this IPython notebook, it can be used by importing the hillmaker module and then calling Hillmaker functions via:
an IPython notebook (or any Python terminal such as an IPython shell or QT console, or IDLE)
a Python script with the input arguments set and passed via Python statements
While these two options provide tons of flexibility for power users, I also want to create other interfaces that don't require users to write Python code. At a minimum, I plan to create a command line interface (CLI) as well as a GUI that is similar to the old Access version.
A CLI for Hillmaker
Python has several nice tools for creating CLI's. Both docopt and argparse are part of the standard library. Layered on top of these are tools like Click. See http://docs.python-guide.org/en/latest/scenarios/cli/ for more. A well designed CLI will make it easy to use Python from the command line in either Windows or Linux. It shouldn't take me long at all to create this.
A GUI for Hillmaker
This is uncharted territory for me. Python has a number of frameworks/toolkits for creating GUI apps. This is not the highest priority for me but I do plan on creating a GUI for Hillmaker. If anyone wants to help with this, awesome.
Installing Hillmaker
Whereas the old Hillmaker required MS Access, the new one requires an installation of Python 3 along with several Python modules that are widely used for analytics and data science work.
Getting Python and a jillion analytical packages via Anaconda
An very easy way to get Python 3 pre-configured with tons of analytical Python packages is to use the Anaconda distro for Python. From their Downloads page:
Anaconda is a completely free Python distribution (including for commercial use and redistribution).
It includes more than 300 of the most popular Python packages for science, math, engineering, and
data analysis. See the packages included with Anaconda and the Anaconda changelog.
Make sure you download Python 3.x (3.5 is latest version as of January, 2016)
There are several really nice reasons to use the Anaconda Python distro for data science work:
it comes preconfigured with hundreds of the most popular data science Python packages installed and they just work
large community of Anaconda data science users and vibrant user community on places like StackOverflow
it has a companion package manager called Conda which makes it easy to install new packages as well as to create and manage virtual environments
Getting Hillmaker
Eventually Hillmaker will be publicly available from the Python Package Index known as PyPI as well as Anaconda Cloud. They are similar to CRAN for R. Source code will also be available from my GitHub site (currently I have it marked as a private project) and it will be an open-source project. There will be a companion project on GitHub called hillmaker-examples which will contain, well, examples of hillmaker use cases. For now, I'm just providing the package to you directly.
Installing Hillmaker
It helps if you know a little bit about Python package installation. Again, the Python Packaging User Guide is helpful. You can use either pip or conda to install Hillmaker. I suggest learning about Python virtual environments and either using pyenv, virtualenv or conda (preferred) to create a Python virtual environment and then install Hillmaker into it. This way you avoid mixing developmental third-party packages like Hillmaker with your base Anaconda Python environment.
For now, by far the easiest thing to do is to simply use Conda to install hillmaker into your Anaconda root environment. You can always uninstall it using Conda as well.
Step 1 - Download hillmaker
Presumably, I've given you a link from which you can download the binary package for hillmaker. Download it. It will be called something like hillmaker-0.1.0-py34_0.tar.bz2. It doesn't matter where you download it to. Let's assume you've made a subfolder of your Documents folder called hillmaker and you've put it in there.
Step 2 - Open a command shell
In Windows, just run cmd.exe from the Start Menu. Then use the cd command to get yourself into the \Documents\hillmaker folder you just created. You can use the MS-DOS dir command to make sure the downloaded hillmaker archive is in this folder.
sh
dir
If a dependency such as pandas is missing (it ships with Anaconda), install it with pip (pip install pandas) or conda:
sh
conda install pandas
Module imports
To run Hillmaker we only need to import a few modules. Since the main Hillmaker function uses Pandas DataFrames for both data input and output, we need to import pandas in addition to hillmaker.
End of explanation
"""
file_stopdata = '../data/ShortStay.csv'
stops_df = pd.read_csv(file_stopdata, parse_dates=['InRoomTS','OutRoomTS'])
stops_df.info() # Check out the structure of the resulting DataFrame
"""
Explanation: Read main stop data file
Here's the first few lines from our csv file containing the patient stop data:
PatID,InRoomTS,OutRoomTS,PatType
1,1/1/1996 7:44,1/1/1996 8:50,IVT
2,1/1/1996 8:28,1/1/1996 9:20,IVT
3,1/1/1996 11:44,1/1/1996 13:30,MYE
4,1/1/1996 11:51,1/1/1996 12:55,CAT
5,1/1/1996 12:10,1/1/1996 13:00,IVT
6,1/1/1996 14:16,1/1/1996 15:35,IVT
7,1/1/1996 14:40,1/1/1996 15:25,IVT
Read the short stay data from a csv file into a DataFrame and tell Pandas which fields to treat as dates.
End of explanation
"""
stops_df.head(7)
stops_df.tail(5)
"""
Explanation: Check out the top and bottom of stops_df.
End of explanation
"""
help(hm.make_hills)
"""
Explanation: No obvious problems. We'll assume the data was all read in correctly.
Creating occupancy summaries
The primary function in Hillmaker is called make_hills and plays the same role as the Hillmaker function in the original Access VBA version of Hillmaker. Let's get a little help on this function.
End of explanation
"""
# Required inputs
scenario = 'ss_example_1'
in_fld_name = 'InRoomTS'
out_fld_name = 'OutRoomTS'
cat_fld_name = 'PatType'
start = '1/1/1996'
end = '3/30/1996 23:45'
# Optional inputs
verbose = 1
"""
Explanation: Most of the parameters are similar to those in the original VBA version, though a few new ones have been added. For example, the cat_to_exclude parameter allows you to specify a list of category values for which you do not want occupancy statistics computed. Also, since the VBA version used an Access database as the container for its output, new parameters were added to control output to csv files instead.
Example 1: 60 minute bins, all categories, export to csv
Specify values for all the required inputs:
End of explanation
"""
hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end, cat_fld_name, verbose=verbose)
"""
Explanation: Now we'll call the main make_hills function. We won't capture the return values but will simply take the default behavior of having the summaries exported to csv files. You'll see that the filenames will contain the scenario value.
End of explanation
"""
# Required inputs - same as Example 1 except for scenario name
scenario = 'ss_example_2'
in_fld_name = 'InRoomTS'
out_fld_name = 'OutRoomTS'
cat_fld_name = 'PatType'
start = '1/1/1996'
end = '3/30/1996 23:45'
# Optional inputs
tot_fld_name = 'CAT_IVT' # Just to make it clear that it's only these patient types
bin_mins = 30 # Half-hour time bins
exclude = ['ART','MYE','OTH'] # Tell Hillmaker to ignore these patient types
"""
Explanation: Here's a screenshot of the current folder containing this IPython notebook (basic_usage_shortstay_unit.ipynb) and the csv files created by Hillmaker.
If you've used the previous version of Hillmaker, you'll recognize these files. A few more statistics have been added, but otherwise they are the same. These csv files can be imported into a spreadsheet application for plot creation. Of course, we can also make plots in Python. We'll do that in the next example.
The files with 'cat' in their name are new. They contain overall summary statistics by category. In other words, they are NOT by time of day and day of week.
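What those category-level files contain can be sketched with a plain pandas groupby over toy stop records (column names mirror the data above; the exact statistics Hillmaker writes may differ):

```python
import pandas as pd

# Toy stop records with a precomputed length-of-stay column (minutes).
stops = pd.DataFrame({
    'PatType': ['IVT', 'IVT', 'MYE', 'CAT', 'IVT'],
    'los_mins': [66, 52, 106, 64, 50],
})

# Overall summary statistics by category -- not by time of day or day of week.
cat_summary = stops.groupby('PatType')['los_mins'].agg(['count', 'mean', 'min', 'max'])
print(cat_summary)
```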
Example 2: 30 minute bins, only CAT and IVT, return values to DataFrames
End of explanation
"""
results_ex2 = hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end, cat_fld_name,
total_str=tot_fld_name, bin_size_minutes=bin_mins,
cat_to_exclude=exclude, return_dataframes=True)
results_ex2.keys()
occ_df = results_ex2['occupancy']
occ_df.head()
occ_df.tail()
occ_df.info()
"""
Explanation: Now we'll call make_hills and tuck the results (a dictionary of DataFrames) into a local variable. Then we can explore them a bit with Pandas.
End of explanation
"""
import pandas as pd
import hillmaker as hm
file_stopdata = '../data/ShortStay.csv'
# Required inputs
scenario = 'sstest_60'
in_fld_name = 'InRoomTS'
out_fld_name = 'OutRoomTS'
cat_fld_name = 'PatType'
start = '1/1/1996'
end = '3/30/1996 23:45'
# Optional inputs
tot_fld_name = 'SSU'
bin_mins = 60
df = pd.read_csv(file_stopdata, parse_dates=[in_fld_name, out_fld_name])
hm.make_hills(scenario, df, in_fld_name, out_fld_name,
start, end, cat_fld_name,
tot_fld_name, bin_mins,
cat_to_exclude=None,
verbose=1)
"""
Explanation: Example 3 - Running via a Python script
Of course, you don't have to run Python statements through an IPython notebook. You can simply create a short Python script and run that directly in a terminal. An example, test_shortstay.py, can be found in the scripts subfolder of the hillmaker-examples project. Here's what it looks like - you can modify as necessary for your needs. There is another example in that folder as well, test_obsim_log.py, that is slightly more complex in that the input data has raw simulation times (i.e. minutes past t=0) and we need to do some datetime math to turn them into calendar based inputs.
End of explanation
"""
import glob

for log_fn in glob.glob('logs/*.csv'):
    # Read the log file and filter by included categories
    stops_df = pd.read_csv(log_fn, parse_dates=[in_fld_name, out_fld_name])
    hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end, cat_fld_name)
    ...
"""
Explanation: More elaborate versions of scripts like test_shortstay.py can be envisioned. For example, an entire folder of input data files could be processed by simply enclosing the hm.make_hills call inside a loop over the collection of input files:
End of explanation
"""
|
robertoalotufo/ia898 | dev/2017-01-05-RAL+Ferramentas+de+Edicao+HTLM+Notebook.ipynb | mit | from IPython.display import YouTubeVideo
# a talk about IPython at Sage Days at U. Washington, Seattle.
# Video credit: William Stein.
YouTubeVideo('1j_HxD4iLn8')
"""
Explanation: HTML editing tools
This document illustrates the main tools for editing the notebook using Markdown text cells:
Adding Internet links
Downloading a file to your local computer
Bold, italic, and raw text
Colored text
Text itemization
Titles and subtitles
Separator line
Inserting videos and websites
Showing images in a markdown cell
Adding Internet links
Write [Link to the DL course announcement](http://adessowiki.fee.unicamp.br/rnpi)
See how it appears: Link to the DL course announcement
To make a link to a section within the notebook itself, use the HTML syntax for in-file references. Remember to replace the spaces in the headings with hyphens.
For example:
Link to [Adding Internet links](#Adding-Internet-links) becomes
Link to Adding Internet links
Downloading a file to your local computer
A simple way to transfer a file generated by Jupyter on the server to your local computer is to compress it with the Linux gzip command and create a link to it. Since the extension will be gz, clicking the link makes the browser download the file to your local computer.
Put this in a code cell:
!gzip <file name>
Put this in a Markdown cell:
[download link](<file name>.gz)
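The same gzip step can also be done from Python with the standard library; a small sketch (the file name results.csv is purely illustrative):

```python
import gzip
import os
import shutil
import tempfile

# Illustrative file; in practice you would compress the actual notebook output.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "results.csv")
with open(src, "w") as f:
    f.write("PatID,PatType\n1,IVT\n")

# Equivalent of the shell command `!gzip results.csv`: writes results.csv.gz.
with open(src, "rb") as f_in, gzip.open(src + ".gz", "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)

# A Markdown cell would then offer it with: [download link](results.csv.gz)
print(sorted(os.listdir(workdir)))  # ['results.csv', 'results.csv.gz']
```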
Bold, italic, and raw text
Bold is set with two asterisks. For example:
- the text **This text in bold** appears as This text in bold
Italic text uses a single asterisk. For example
- the text *This text in italic* appears as This text in italic
To show raw text, wrap it in backticks. For example
- this text appears with fixed-width spacing: This is raw text
Colored text
A simple way to create colored text is to use HTML syntax:
- writing <code style="color:red">red</code>, the text appears as <code style="color:red">red</code>
Text itemization
Itemization uses characters such as -, * and numbers. To create nested items, use deeper indentation. See these examples:
item 1
item 1.1
item 2
Numbered itemization
item 1
item 1.a
this text belongs to item 1.a
item 2
Titles and subtitles
Titles and subtitles are set with #, ##, ### or more # as follows
Third-level subtitle
Fourth-level subtitle
Separator line
A separator line is made by placing three _ characters at the beginning of a line:
Inserting videos and websites
Videos and websites can only be inserted in Python code cells. IPython's display module has a specific class for embedding YouTube videos:
End of explanation
"""
from IPython.display import IFrame
IFrame('http://adessowiki.fee.unicamp.br/rnpi', width=700, height=350)
"""
Explanation: To insert a website, IPython's display module supports showing an IFrame, in which you can provide the address of a website:
End of explanation
"""
|
ImAlexisSaez/deep-learning-specialization-coursera | course_1/week_4/assignment_1/building_your_deep_neural_network_step_by_step_v4.ipynb | mit | import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
"""
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations.
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
"""
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
"""
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
"""
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
"""
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l \\
m & n & o \\
p & q & r
\end{bmatrix} \;\;\; X = \begin{bmatrix}
a & b & c \\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b = \begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li) + s \\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t \\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri) + u
\end{bmatrix}\tag{3} $$
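This broadcasting behavior is easy to confirm with a small numpy experiment mirroring the 3×3 example above:

```python
import numpy as np

W = np.arange(9).reshape(3, 3)        # plays the role of W, shape (3, 3)
X = np.arange(9, 18).reshape(3, 3)    # plays the role of X, shape (3, 3)
b = np.array([[1.0], [2.0], [3.0]])   # column vector, shape (3, 1)

# b has shape (3, 1) while W @ X has shape (3, 3): numpy broadcasts b
# across the columns, adding b[i] to every entry of row i.
Z = np.dot(W, X) + b
print(Z.shape)  # (3, 3)

# Same result as explicitly tiling b into a full (3, 3) matrix.
print(np.allclose(Z, np.dot(W, X) + np.tile(b, (1, 3))))  # True
```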
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
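As a sanity check on the shape rules above, the expected parameter shapes can be enumerated directly from layer_dims; a small sketch using the [2,4,1] example from the text:

```python
layer_dims = [2, 4, 1]   # 2 inputs, one hidden layer with 4 units, 1 output unit

# W[l] has shape (n[l], n[l-1]); b[l] has shape (n[l], 1).
shapes = {}
for l in range(1, len(layer_dims)):
    shapes['W' + str(l)] = (layer_dims[l], layer_dims[l - 1])
    shapes['b' + str(l)] = (layer_dims[l], 1)

print(shapes)
# {'W1': (4, 2), 'b1': (4, 1), 'W2': (1, 4), 'b2': (1, 1)}
```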
End of explanation
"""
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
"""
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
End of explanation
"""
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
"""
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLU is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
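The sigmoid and relu helpers themselves come from dnn_utils_v2 and are not shown in this notebook; a plausible sketch of what they compute, matching the (activation, cache-holding-Z) return convention described above, is:

```python
import numpy as np

def sigmoid(Z):
    # Sigmoid activation; returns the activation A and a cache holding Z.
    A = 1 / (1 + np.exp(-Z))
    return A, Z

def relu(Z):
    # ReLU activation; returns the activation A and a cache holding Z.
    A = np.maximum(0, Z)
    return A, Z

Z = np.array([[-1.0, 0.0, 2.0]])
A_sig, _ = sigmoid(Z)
A_rel, _ = relu(Z)
print(A_sig)  # approx [[0.26894142 0.5 0.88079708]]
print(A_rel)  # [[0. 0. 2.]]
```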
End of explanation
"""
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
"""
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
"""
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = -1 / m * np.sum(np.multiply(Y, np.log(AL)) + np.multiply((1 - Y), np.log(1 - AL)))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
"""
Explanation: <table style="width:40%">
<tr>
<td> **AL** </td>
<td > [[ 0.17007265 0.2524272 ]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 2</td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)) \tag{7}$$
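A quick hand-computed numeric check of formula (7), independent of the assignment's test cases:

```python
import numpy as np

Y = np.array([[1, 0]])      # true labels for m = 2 examples
AL = np.array([[0.9, 0.2]]) # predicted probabilities

# Formula (7): average cross-entropy over the m examples.
m = Y.shape[1]
cost = -1 / m * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))

# By hand: -(log(0.9) + log(0.8)) / 2 = (0.10536 + 0.22314) / 2 = 0.16425...
print(round(cost, 5))
```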
End of explanation
"""
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1 / m * np.dot(dZ, A_prev.T)
db = 1 / m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
"""
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l] (i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
Exercise: Use the 3 formulas above to implement linear_backward().
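Formulas (8)-(10) can also be spot-checked numerically. The sketch below picks a toy cost that averages the column sums of $Z$ (so the per-example gradient dZ is all ones) and compares formula (8) against a central finite difference; this is an illustrative check, not part of the assignment:

```python
import numpy as np

rng = np.random.default_rng(0)
A_prev = rng.standard_normal((3, 5))   # 3 units in, m = 5 examples
W = rng.standard_normal((2, 3))        # 2 units out
b = rng.standard_normal((2, 1))
m = A_prev.shape[1]

def cost(W):
    # Per-example loss = sum of the linear outputs; cost = average over examples.
    Z = np.dot(W, A_prev) + b
    return Z.sum() / m

# With this loss the per-example gradient dZ is all ones,
# so formula (8) gives dW = (1/m) * dZ @ A_prev.T.
dZ = np.ones((2, 5))
dW = np.dot(dZ, A_prev.T) / m

# Central finite-difference estimate of dJ/dW[0, 0].
eps = 1e-6
W_plus = W.copy(); W_plus[0, 0] += eps
W_minus = W.copy(); W_minus[0, 0] -= eps
approx = (cost(W_plus) - cost(W_minus)) / (2 * eps)

print(np.isclose(dW[0, 0], approx, atol=1e-6))  # True
```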
End of explanation
"""
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
"""
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function, linear_activation_backward, that merges the two helper functions: linear_backward and the backward step for the activation.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for the SIGMOID unit. You can call it as follows:

```python
dZ = sigmoid_backward(dA, activation_cache)
```

- relu_backward: Implements the backward propagation for the RELU unit. You can call it as follows:

```python
dZ = relu_backward(dA, activation_cache)
```
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
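For reference, here is a sketch of what these two provided helpers compute, assuming the activation cache simply stores Z (a simplification of the real cache):

```python
import numpy as np

def sigmoid_backward(dA, activation_cache):
    # dZ = dA * sigma'(Z), with sigma'(Z) = sigma(Z) * (1 - sigma(Z))
    Z = activation_cache
    s = 1.0 / (1.0 + np.exp(-Z))
    return dA * s * (1.0 - s)

def relu_backward(dA, activation_cache):
    # ReLU passes the gradient through where Z > 0 and blocks it elsewhere
    Z = activation_cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0.0
    return dZ
```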
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
End of explanation
"""
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L - 1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation = "sigmoid")
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache, activation = "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
"""
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
Expected output with relu
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus, which you don't need in-depth knowledge of):

```python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))  # derivative of cost with respect to AL
```
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
"""
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
"""
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/probability/examples/FFJORD_Demo.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip install -q dm-sonnet
#@title Imports (tf, tfp with adjoint trick, etc)
import numpy as np
import tqdm as tqdm
import sklearn.datasets as skd
# visualization
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import kde
# tf and friends
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
import sonnet as snt
tf.enable_v2_behavior()
tfb = tfp.bijectors
tfd = tfp.distributions
def make_grid(xmin, xmax, ymin, ymax, gridlines, pts):
xpts = np.linspace(xmin, xmax, pts)
ypts = np.linspace(ymin, ymax, pts)
xgrid = np.linspace(xmin, xmax, gridlines)
ygrid = np.linspace(ymin, ymax, gridlines)
xlines = np.stack([a.ravel() for a in np.meshgrid(xpts, ygrid)])
ylines = np.stack([a.ravel() for a in np.meshgrid(xgrid, ypts)])
return np.concatenate([xlines, ylines], 1).T
grid = make_grid(-3, 3, -3, 3, 4, 100)
#@title Helper functions for visualization
def plot_density(data, axis):
x, y = np.squeeze(np.split(data, 2, axis=1))
levels = np.linspace(0.0, 0.75, 10)
kwargs = {'levels': levels}
return sns.kdeplot(x, y, cmap="viridis", shade=True,
shade_lowest=True, ax=axis, **kwargs)
def plot_points(data, axis, s=10, color='b', label=''):
x, y = np.squeeze(np.split(data, 2, axis=1))
axis.scatter(x, y, c=color, s=s, label=label)
def plot_panel(
grid, samples, transformed_grid, transformed_samples,
dataset, axarray, limits=True):
if len(axarray) != 4:
raise ValueError('Expected 4 axes for the panel')
ax1, ax2, ax3, ax4 = axarray
plot_points(data=grid, axis=ax1, s=20, color='black', label='grid')
plot_points(samples, ax1, s=30, color='blue', label='samples')
plot_points(transformed_grid, ax2, s=20, color='black', label='ode(grid)')
plot_points(transformed_samples, ax2, s=30, color='blue', label='ode(samples)')
ax3 = plot_density(transformed_samples, ax3)
ax4 = plot_density(dataset, ax4)
if limits:
set_limits([ax1], -3.0, 3.0, -3.0, 3.0)
set_limits([ax2], -2.0, 3.0, -2.0, 3.0)
set_limits([ax3, ax4], -1.5, 2.5, -0.75, 1.25)
def set_limits(axes, min_x, max_x, min_y, max_y):
if isinstance(axes, list):
for axis in axes:
set_limits(axis, min_x, max_x, min_y, max_y)
else:
axes.set_xlim(min_x, max_x)
axes.set_ylim(min_y, max_y)
"""
Explanation: FFJORD
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/FFJORD_Demo"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/FFJORD_Demo.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/FFJORD_Demo.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/FFJORD_Demo.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Setup
First install packages used in this demo.
End of explanation
"""
#@title Dataset
DATASET_SIZE = 1024 * 8 #@param
BATCH_SIZE = 256 #@param
SAMPLE_SIZE = DATASET_SIZE
moons = skd.make_moons(n_samples=DATASET_SIZE, noise=.06)[0]
moons_ds = tf.data.Dataset.from_tensor_slices(moons.astype(np.float32))
moons_ds = moons_ds.prefetch(tf.data.experimental.AUTOTUNE)
moons_ds = moons_ds.cache()
moons_ds = moons_ds.shuffle(DATASET_SIZE)
moons_ds = moons_ds.batch(BATCH_SIZE)
plt.figure(figsize=[8, 8])
plt.scatter(moons[:, 0], moons[:, 1])
plt.show()
"""
Explanation: FFJORD bijector
In this colab we demonstrate FFJORD bijector, originally proposed in the paper by Grathwohl, Will, et al. arxiv link.
In a nutshell, the idea behind this approach is to establish a correspondence between a known base distribution and the data distribution.
To establish this connection, we need to
Define a bijective map $\mathcal{T}_{\theta}:\mathbf{x} \rightarrow \mathbf{y}$ and its inverse $\mathcal{T}_{\theta}^{-1}:\mathbf{y} \rightarrow \mathbf{x}$ between the space $\mathcal{Y}$ on which the base distribution is defined and the space $\mathcal{X}$ of the data domain.
Efficiently keep track of the deformations we perform to transfer the notion of probability onto $\mathcal{X}$.
The second condition is formalized in the following expression for probability
distribution defined on $\mathcal{X}$:
$$
\log p_{\mathbf{x}}(\mathbf{x})=\log p_{\mathbf{y}}(\mathbf{y})-\log \operatorname{det}\left|\frac{\partial \mathcal{T}_{\theta}(\mathbf{y})}{\partial \mathbf{y}}\right|
$$
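The change-of-variables identity itself can be sanity-checked numerically without any of the FFJORD machinery. Here is a NumPy-only sketch using a simple affine bijection; the map and distributions are illustrative assumptions, not part of this demo:

```python
import numpy as np

def log_standard_normal(z):
    # log density of the standard normal base distribution p_y
    return -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi)

# Invertible map y = (x - 1) / 2, so |dy/dx| = 1/2 and the identity gives
# log p_x(x) = log p_y(y) + log|dy/dx|
x = np.linspace(-9.0, 11.0, 20001)   # wide grid centered on the mean
y = (x - 1.0) / 2.0
log_px = log_standard_normal(y) + np.log(0.5)

# If the bookkeeping is right, p_x is a proper density (here it is N(1, 2^2)),
# so it should integrate to approximately 1 over a wide enough interval
total = np.sum(np.exp(log_px)) * (x[1] - x[0])
print(total)  # -> approximately 1.0
```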
FFJORD bijector accomplishes this by defining a transformation
$$
\mathcal{T_{\theta}}: \mathbf{x} = \mathbf{z}(t_{0}) \rightarrow \mathbf{y} = \mathbf{z}(t_{1}) \quad : \quad \frac{d \mathbf{z}}{dt} = \mathbf{f}(t, \mathbf{z}, \theta)
$$
This transformation is invertible, as long as function $\mathbf{f}$ describing the evolution of the state $\mathbf{z}$ is well behaved and the log_det_jacobian can be calculated by integrating the following expression.
$$
\log \operatorname{det}\left|\frac{\partial \mathcal{T}_{\theta}(\mathbf{y})}{\partial \mathbf{y}}\right| =
-\int_{t_{0}}^{t_{1}} \operatorname{Tr}\left(\frac{\partial \mathbf{f}(t, \mathbf{z}, \theta)}{\partial \mathbf{z}(t)}\right) d t
$$
In this demo we will train a FFJORD bijector to warp a Gaussian distribution onto the distribution defined by the moons dataset. This will be done in 3 steps:
* Define base distribution
* Define FFJORD bijector
* Minimize exact log-likelihood of the dataset
First, we load the data
End of explanation
"""
base_loc = np.array([0.0, 0.0]).astype(np.float32)
base_sigma = np.array([0.8, 0.8]).astype(np.float32)
base_distribution = tfd.MultivariateNormalDiag(base_loc, base_sigma)
"""
Explanation: Next, we instantiate a base distribution
End of explanation
"""
class MLP_ODE(snt.Module):
"""Multi-layer NN ode_fn."""
def __init__(self, num_hidden, num_layers, num_output, name='mlp_ode'):
super(MLP_ODE, self).__init__(name=name)
self._num_hidden = num_hidden
self._num_output = num_output
self._num_layers = num_layers
self._modules = []
for _ in range(self._num_layers - 1):
self._modules.append(snt.Linear(self._num_hidden))
self._modules.append(tf.math.tanh)
self._modules.append(snt.Linear(self._num_output))
self._model = snt.Sequential(self._modules)
def __call__(self, t, inputs):
inputs = tf.concat([tf.broadcast_to(t, inputs.shape), inputs], -1)
return self._model(inputs)
#@title Model and training parameters
LR = 1e-2 #@param
NUM_EPOCHS = 80 #@param
STACKED_FFJORDS = 4 #@param
NUM_HIDDEN = 8 #@param
NUM_LAYERS = 3 #@param
NUM_OUTPUT = 2
"""
Explanation: We use a multi-layer perceptron to model state_derivative_fn.
While not necessary for this dataset, it is often beneficial to make state_derivative_fn dependent on time. Here we achieve this by concatenating t to the inputs of our network.
End of explanation
"""
#@title Building bijector
solver = tfp.math.ode.DormandPrince(atol=1e-5)
ode_solve_fn = solver.solve
trace_augmentation_fn = tfb.ffjord.trace_jacobian_exact
bijectors = []
for _ in range(STACKED_FFJORDS):
mlp_model = MLP_ODE(NUM_HIDDEN, NUM_LAYERS, NUM_OUTPUT)
next_ffjord = tfb.FFJORD(
state_time_derivative_fn=mlp_model,ode_solve_fn=ode_solve_fn,
trace_augmentation_fn=trace_augmentation_fn)
bijectors.append(next_ffjord)
stacked_ffjord = tfb.Chain(bijectors[::-1])
"""
Explanation: Now we construct a stack of FFJORD bijectors. Each bijector is provided with ode_solve_fn, trace_augmentation_fn, and its own state_derivative_fn model, so that they represent a sequence of different transformations.
End of explanation
"""
transformed_distribution = tfd.TransformedDistribution(
distribution=base_distribution, bijector=stacked_ffjord)
"""
Explanation: Now we can use TransformedDistribution which is the result of warping base_distribution with stacked_ffjord bijector.
End of explanation
"""
#@title Training
@tf.function
def train_step(optimizer, target_sample):
with tf.GradientTape() as tape:
loss = -tf.reduce_mean(transformed_distribution.log_prob(target_sample))
variables = tape.watched_variables()
gradients = tape.gradient(loss, variables)
optimizer.apply(gradients, variables)
return loss
#@title Samples
@tf.function
def get_samples():
base_distribution_samples = base_distribution.sample(SAMPLE_SIZE)
transformed_samples = transformed_distribution.sample(SAMPLE_SIZE)
return base_distribution_samples, transformed_samples
@tf.function
def get_transformed_grid():
transformed_grid = stacked_ffjord.forward(grid)
return transformed_grid
"""
Explanation: Now we define our training procedure. We simply minimize negative log-likelihood of the data.
End of explanation
"""
evaluation_samples = []
base_samples, transformed_samples = get_samples()
transformed_grid = get_transformed_grid()
evaluation_samples.append((base_samples, transformed_samples, transformed_grid))
panel_id = 0
panel_data = evaluation_samples[panel_id]
fig, axarray = plt.subplots(
1, 4, figsize=(16, 6))
plot_panel(
grid, panel_data[0], panel_data[2], panel_data[1], moons, axarray, False)
plt.tight_layout()
learning_rate = tf.Variable(LR, trainable=False)
optimizer = snt.optimizers.Adam(learning_rate)
for epoch in tqdm.trange(NUM_EPOCHS // 2):
base_samples, transformed_samples = get_samples()
transformed_grid = get_transformed_grid()
evaluation_samples.append(
(base_samples, transformed_samples, transformed_grid))
for batch in moons_ds:
_ = train_step(optimizer, batch)
panel_id = -1
panel_data = evaluation_samples[panel_id]
fig, axarray = plt.subplots(
1, 4, figsize=(16, 6))
plot_panel(grid, panel_data[0], panel_data[2], panel_data[1], moons, axarray)
plt.tight_layout()
"""
Explanation: Plot samples from base and transformed distributions.
End of explanation
"""
|
amueller/scipy-2017-sklearn | notebooks/02.Scientific_Computing_Tools_in_Python.ipynb | cc0-1.0 | import numpy as np
# Setting a random seed for reproducibility
rnd = np.random.RandomState(seed=123)
# Generating a random array
X = rnd.uniform(low=0.0, high=1.0, size=(3, 5)) # a 3 x 5 array
print(X)
"""
Explanation: Jupyter Notebooks
You can run a cell by pressing [shift] + [Enter] or by pressing the "play" button in the menu.
You can get help on a function or object by pressing [shift] + [tab] after the opening parenthesis function(
You can also get help by executing function?
Numpy Arrays
Manipulating numpy arrays is an important part of doing machine learning
(or, really, any type of scientific computation) in python. This will likely
be a short review for most. In any case, let's quickly go through some of the most important features.
End of explanation
"""
# Accessing elements
# get a single element
# (here: an element in the first row and column)
print(X[0, 0])
# get a row
# (here: 2nd row)
print(X[1])
# get a column
# (here: 2nd column)
print(X[:, 1])
# Transposing an array
print(X.T)
"""
Explanation: (Note that NumPy arrays use 0-indexing just like other data structures in Python.)
End of explanation
"""
# Creating a row vector
# of evenly spaced numbers over a specified interval.
y = np.linspace(0, 12, 5)
print(y)
# Turning the row vector into a column vector
print(y[:, np.newaxis])
# Getting the shape or reshaping an array
# Generating a random array
rnd = np.random.RandomState(seed=123)
X = rnd.uniform(low=0.0, high=1.0, size=(3, 5)) # a 3 x 5 array
print(X.shape)
print(X.reshape(5, 3))
# Indexing by an array of integers (fancy indexing)
indices = np.array([3, 1, 0])
print(indices)
X[:, indices]
"""
Explanation: $$\begin{bmatrix}
1 & 2 & 3 & 4 \\
5 & 6 & 7 & 8
\end{bmatrix}^T
=
\begin{bmatrix}
1 & 5 \\
2 & 6 \\
3 & 7 \\
4 & 8
\end{bmatrix}
$$
End of explanation
"""
from scipy import sparse
# Create a random array with a lot of zeros
rnd = np.random.RandomState(seed=123)
X = rnd.uniform(low=0.0, high=1.0, size=(10, 5))
print(X)
# set the majority of elements to zero
X[X < 0.7] = 0
print(X)
# turn X into a CSR (Compressed-Sparse-Row) matrix
X_csr = sparse.csr_matrix(X)
print(X_csr)
# Converting the sparse matrix to a dense array
print(X_csr.toarray())
"""
Explanation: There is much, much more to know, but these few operations are fundamental to what we'll
do during this tutorial.
SciPy Sparse Matrices
We won't make very much use of these in this tutorial, but sparse matrices are very nice
in some situations. In some machine learning tasks, especially those associated
with textual analysis, the data may be mostly zeros. Storing all these zeros is very
inefficient, and representing in a way that only contains the "non-zero" values can be much more efficient. We can create and manipulate sparse matrices as follows:
End of explanation
"""
# Create an empty LIL matrix and add some items
X_lil = sparse.lil_matrix((5, 5))
for i, j in np.random.randint(0, 5, (15, 2)):
X_lil[i, j] = i + j
print(X_lil)
print(type(X_lil))
X_dense = X_lil.toarray()
print(X_dense)
print(type(X_dense))
"""
Explanation: (You may have stumbled upon an alternative method for converting sparse to dense representations: the sparse matrix method todense; toarray returns a NumPy array, whereas todense returns a NumPy matrix. In this tutorial, we will be working with NumPy arrays, not matrices; the latter are not supported by scikit-learn.)
The CSR representation can be very efficient for computations, but it is not
as good for adding elements. For that, the LIL (List of Lists) representation
is better:
End of explanation
"""
X_csr = X_lil.tocsr()
print(X_csr)
print(type(X_csr))
"""
Explanation: Often, once an LIL matrix is created, it is useful to convert it to a CSR format
(many scikit-learn algorithms require CSR or CSC format)
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
# Plotting a line
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x));
# Scatter-plot points
x = np.random.normal(size=500)
y = np.random.normal(size=500)
plt.scatter(x, y);
# Showing images using imshow
# - note that origin is at the top-left by default!
x = np.linspace(1, 12, 100)
y = x[:, np.newaxis]
im = y * np.sin(x) * np.cos(y)
print(im.shape)
plt.imshow(im);
# Contour plots
# - note that origin here is at the bottom-left by default!
plt.contour(im);
# 3D plotting
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
xgrid, ygrid = np.meshgrid(x, y.ravel())
ax.plot_surface(xgrid, ygrid, im, cmap=plt.cm.viridis, cstride=2, rstride=2, linewidth=0);
"""
Explanation: The available sparse formats that can be useful for various problems are:
CSR (compressed sparse row)
CSC (compressed sparse column)
BSR (block sparse row)
COO (coordinate)
DIA (diagonal)
DOK (dictionary of keys)
LIL (list of lists)
The scipy.sparse submodule also has a lot of functions for sparse matrices
including linear algebra, sparse solvers, graph algorithms, and much more.
matplotlib
Another important part of machine learning is the visualization of data. The most common
tool for this in Python is matplotlib. It is an extremely flexible package, and
we will go over some basics here.
Since we are using Jupyter notebooks, let us use one of IPython's convenient built-in "magic functions", the "matplotlib inline" mode, which will draw the plots directly inside the notebook.
End of explanation
"""
# %load http://matplotlib.org/mpl_examples/pylab_examples/ellipse_collection.py
"""
Explanation: There are many, many more plot types available. One useful way to explore these is by
looking at the matplotlib gallery.
You can test these examples out easily in the notebook: simply copy the Source Code
link on each page, and put it in a notebook using the %load magic.
For example:
End of explanation
"""
|
dj2441/Course_NumMethods | InClassAssignment1/error-group-work-template.ipynb | gpl-3.0 | # We can use the formulas you derieved above to calculate the actual numbers
# CODE HERE - Make sure to print out the results
def e_Approx(x):
return (2.718**x)
print("Approximation of e:")
print(e_Approx(1))
#Without using taylor expansion
print("\nHigher precision of e (from numpy):")
print(np.e)
#Absolute Error
print("\nAbsolute error of e and 2.718")
absError = abs(np.e - e_Approx(1))
print(absError)
#Relative Error
print("\nRelative error of e and 2.718")
relError = absError/abs(np.e)
print(relError)
"""
Explanation: Error Definitions
Following is an example for the concept of absolute error, relative error and decimal precision:
We shall test the approximation to common mathematical constant, $e$. Compute the absolute and relative errors along with the decimal precision if we take the approximate value of $e = 2.718$.
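Beyond the absolute and relative errors, the decimal precision can be estimated from the relative error. A sketch using the common half-unit convention (one of several possible definitions of decimal precision):

```python
import numpy as np

true_val = np.e
approx = 2.718
rel_err = abs(true_val - approx) / abs(true_val)

# approx agrees with true_val to n significant digits (half-unit rule) when
# rel_err <= 0.5 * 10 ** (1 - n)
n_digits = int(np.floor(1.0 - np.log10(2.0 * rel_err)))
print(n_digits)  # -> 4 significant digits
```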
End of explanation
"""
# Model Error
time = [0, 1, 2, 3, 4, 5] # hours
growth = [20, 40, 75, 150, 297, 510] # Bacteria Population
time = np.array(time)
growth = np.array(growth)
# First we can just plot the data to visualize it
plt.plot(time,growth,'rs')
plt.title("Scatter plot for the Bacteria population growth over time")
plt.xlabel('Time (hrs)')
plt.ylabel('Population')
plt.show()
# Now we can use the Exponential Model, y = ab^x, to fit the data
a = 20.5122; b = 1.9238;
y = a*b**time[:]
ErrRelExp = abs(y-growth)/abs(growth)
plt.plot(time,growth,'rs',time,y,'-b')
plt.title("Exponential model fit")
plt.xlabel('Time (hrs)')
plt.ylabel('Population')
plt.legend(["Data", "Exponential Fit"], loc=4)
plt.show()
# Now we can use the Power Model, y = ax^b, to fit the data
a = 32.5846; b = 1.572;
y = a*time[:]**b
ErrRelPow = abs(y-growth)/abs(growth)
plt.plot(time,growth,'rs',time,y,'-b')
plt.title("Power model fit")
plt.xlabel('Time (hrs)')
plt.ylabel('Population')
plt.legend(["Data", "Power Fit"], loc=4)
plt.show()
plt.plot(time,(ErrRelExp*100),"-r",time,(ErrRelPow*100),"-b")
plt.title("Relative Error of Exponential and Power Models")
plt.xlabel('Time (hrs)')
plt.ylabel('Relative Error (%)')
plt.legend(["Exp Model","Pow Model"], loc=1)
plt.show()
##Comments from Dan Judkins##
# The Power Model does not appear to fit the data as provided as well as the
# Exponential Model. However, the measurement accuracy is not given and the
# time span is short. If the study were performed over a longer period, the
# power model may end up being the more appropriate model given resource
# constraints (i.e. space, food, etc.)
"""
Explanation: Model Error
Model error arises in various forms, here we are gonna take some population data and fit two different models and
analyze which model is better for the given data. Take a look at the code below and comment on the results.
End of explanation
"""
a = 4.0/3.0
b = a - 1.0
c = 3.0 * b
eps = 1.0 - c
print('Value of a is %s' % a)
print('Value of b is %s' % b)
print('Value of c is %s' % c)
print('Value of epsilon is %s' % eps)
val = (1.0 + eps) - 1.0
print('\nvalue of (1.0 + epsilon) - 1.0 is %s' % val)
#Another calc of machine epsilon
a2 = 10.0 / 9.0
b2 = a2 - 1.0
c2 = 9.0 * b2
eps2 = 1.0 - c2
print('\nValue of a2 is %s' % a2)
print('Value of b2 is %s' % b2)
print('Value of c2 is %s' % c2)
print('Value of epsilon2 is %s' % eps2)
#Another calc of machine epsilon
a3 = 5.0/9.0
b3 = 10.0 * a3
c3 = (b3 - 5.0) * 9.0
eps3 = c3 - 5.0
print('\nValue of a3 is %s' % a3)
print('Value of b3 is %s' % b3)
print('Value of c3 is %s' % c3)
print('Value of eps3 is %s' % eps3)
"""
Explanation: Machine Epsilon
Machine epsilon is a very important concept in floating point error. The value, even though small, can easily compound over a period to cause huge problems.
Below we see a problem demonstrating how easily machine error can creep into a simple piece of code. Play with different ways to compute this and see what happens
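A standard alternative is to find machine epsilon directly with a halving loop and compare it to NumPy's reported value (a sketch):

```python
import numpy as np

eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps = eps / 2.0

print(eps)                         # machine epsilon found by halving
print(np.finfo(float).eps)         # NumPy's reported value for float64
print(eps == np.finfo(float).eps)  # -> True
```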
End of explanation
"""
loopVal = c
maxVal = 30
for i in range(maxVal):
    loopVal = loopVal * 10
print('After %s iterations,' % maxVal)
print('loopVal is %s.' % loopVal)
epsLoop = (1.0 * 10.0**maxVal) - loopVal
print('machine epsilon is %s' % epsLoop)
"""
Explanation: Ideally eps should be 0, but instead we see the machine epsilon, and while the value is small it can lead to issues. Write a loop that multiplies the value c above by 10 and see how the error propagates.
End of explanation
"""
largestFloat = np.finfo(float).max
ExtraLarge = largestFloat + 10.0
Diff = ExtraLarge - largestFloat
print('largest floating point number is %s' % largestFloat)
print('add 10.0 to largest float number is %s' % ExtraLarge)
print('Diff between the two is %s' % Diff)
"""
Explanation: The largest floating point number
Use the system library to find the largest floating point value. Now try to compute some things with this number and see what happens.
End of explanation
"""
smallestFloat = np.finfo(float).min
ExtraSmall = smallestFloat - 10.0
Diff2 = ExtraSmall - smallestFloat
print('smallest floating point number is %s' % smallestFloat)
print('remove 10.0 from smallest float number is %s' % ExtraSmall)
print('Diff between the two is %s' % Diff2)
"""
Explanation: The smallest floating point number
Do the same with the smallest number.
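Note that np.finfo(float).min is the most negative representable value, not the smallest magnitude; the smallest positive normal float is np.finfo(float).tiny, and subnormal numbers below it provide gradual underflow (a sketch):

```python
import numpy as np

print(np.finfo(float).min)   # most negative representable float64
print(np.finfo(float).tiny)  # smallest positive *normal* float64 (about 2.2e-308)

t = float(np.finfo(float).tiny)
print(t / 2.0 > 0.0)         # -> True: subnormals keep the value nonzero
print(t / 1e20 == 0.0)       # -> True: eventually we underflow to exactly zero
```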
End of explanation
"""
xInterval = np.linspace(-np.pi, np.pi, 30)
ySine = np.sin( xInterval )
yX = xInterval
RelErr = abs(ySine - yX)/abs(ySine)
plt.plot(xInterval, ySine, '-r', xInterval, yX, '-b')
plt.title("Sin(x) and approximation X")
plt.xlabel('X')
plt.ylabel('Function Value')
plt.legend(["Sin(x)","X"], loc=4)
plt.show()
plt.plot(xInterval,(RelErr*100),'-g')
plt.title('Relative Error between Sin(x) and X')
plt.xlabel('X')
plt.ylabel('% Error')
plt.show()
"""
Explanation: Truncation Error
Truncation error is a very common form of error you will keep seeing in the area of Numerical Analysis/Computing.
Here we will look at the classic Calculus example of the approximation $\sin(x) \approx x$ near 0. We can plot them together to visualize the approximation and also plot the error to understand the behavior of the truncation error.
First plot the error of the approximation to $\sin x$ with $x$ on the interval $[-\pi, \pi]$.
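By Taylor's theorem this truncation error has the closed-form bound $|\sin x - x| \le |x|^{3}/6$, which can be checked numerically (a sketch):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 1000)
trunc_err = np.abs(np.sin(x) - x)
bound = np.abs(x) ** 3 / 6.0

print(np.all(trunc_err <= bound))  # -> True everywhere on the interval
print(trunc_err.max())             # largest error, at the endpoints (about pi)
```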
End of explanation
"""
xInterval = np.linspace(-0.5, 0.5, 30)
ySine = np.sin( xInterval )
yX = xInterval
RelErr = abs(ySine - yX)/abs(ySine)
plt.plot(xInterval, ySine, '-r', xInterval, yX, '-b')
plt.title("Sin(x) and approximation X")
plt.xlabel('X')
plt.ylabel('Function Value')
plt.legend(["Sin(x)","X"], loc=4)
plt.show()
plt.plot(xInterval,(RelErr*100),'-g')
plt.title('Relative Error between Sin(x) and X')
plt.xlabel('X')
plt.ylabel('% Error')
plt.show()
"""
Explanation: Now try the interval $[-0.5, 0.5]$
End of explanation
"""
AbsError = abs(ySine - yX)
plt.plot(xInterval, AbsError, '-g')
plt.title('Absolute Error between Sin(x) and X')
plt.xlabel('X')
plt.ylabel('Error')
plt.show()
"""
Explanation: Now plot the absolute error
End of explanation
"""
plt.plot(xInterval,(RelErr*100),'-r')
plt.title('Relative Error between Sin(x) and X')
plt.xlabel('X')
plt.ylabel('% Error')
plt.show()
"""
Explanation: Finally the relative error.
End of explanation
"""
|
rishuatgithub/MLPy | torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/04-CNN-on-Custom-Images.ipynb | apache-2.0
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models # add models to the list
from torchvision.utils import make_grid
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# ignore harmless warnings
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
CNN on Custom Images
For this exercise we're using a collection of Cats and Dogs images inspired by the classic <a href='https://www.kaggle.com/c/dogs-vs-cats'>Kaggle competition</a>.
In the last section we downloaded the files, looked at the directory structure, examined the images, and performed a variety of transforms in preparation for training.
In this section we'll define our model, then feed images through a training and validation sequence using DataLoader.
Image files directory tree
<pre>.
└── Data
└── CATS_DOGS
├── test
│ ├── CAT
│ │ ├── 9374.jpg
│ │ ├── 9375.jpg
│ │ └── ... (3,126 files)
│ └── DOG
│ ├── 9374.jpg
│ ├── 9375.jpg
│ └── ... (3,125 files)
│
└── train
├── CAT
│ ├── 0.jpg
│ ├── 1.jpg
│ └── ... (9,371 files)
└── DOG
├── 0.jpg
├── 1.jpg
└── ... (9,372 files)</pre>
Perform standard imports
End of explanation
"""
train_transform = transforms.Compose([
transforms.RandomRotation(10), # rotate +/- 10 degrees
transforms.RandomHorizontalFlip(), # reverse 50% of images
transforms.Resize(224), # resize shortest side to 224 pixels
transforms.CenterCrop(224), # crop longest side to 224 pixels at center
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
test_transform = transforms.Compose([
transforms.Resize(224),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
"""
Explanation: Define transforms
In the previous section we looked at a variety of transforms available for data augmentation (rotate, flip, etc.) and normalization.<br>
Here we'll combine the ones we want, including the <a href='https://discuss.pytorch.org/t/normalization-in-the-mnist-example/457/22'>recommended normalization parameters</a> for mean and std per channel.
End of explanation
"""
root = '../Data/CATS_DOGS'
train_data = datasets.ImageFolder(os.path.join(root, 'train'), transform=train_transform)
test_data = datasets.ImageFolder(os.path.join(root, 'test'), transform=test_transform)
torch.manual_seed(42)
train_loader = DataLoader(train_data, batch_size=10, shuffle=True)
test_loader = DataLoader(test_data, batch_size=10, shuffle=True)
class_names = train_data.classes
print(class_names)
print(f'Training images available: {len(train_data)}')
print(f'Testing images available: {len(test_data)}')
"""
Explanation: Prepare train and test sets, loaders
We're going to take advantage of a built-in torchvision dataset tool called <a href='https://pytorch.org/docs/stable/torchvision/datasets.html#imagefolder'><tt><strong>ImageFolder</strong></tt></a>.
End of explanation
"""
# Grab the first batch of 10 images
for images,labels in train_loader:
break
# Print the labels
print('Label:', labels.numpy())
print('Class:', *np.array([class_names[i] for i in labels]))
im = make_grid(images, nrow=5) # the default nrow is 8
# Inverse normalize the images
inv_normalize = transforms.Normalize(
mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
std=[1/0.229, 1/0.224, 1/0.225]
)
im_inv = inv_normalize(im)
# Print the images
plt.figure(figsize=(12,4))
plt.imshow(np.transpose(im_inv.numpy(), (1, 2, 0)));
"""
Explanation: Display a batch of images
To verify that the training loader selects cat and dog images at random, let's show a batch of loaded images.<br>
Recall that imshow clips float pixel values to the range [0, 1], so normalized tensors (which contain negative values) display with poor contrast. We'll apply a quick inverse transform to the input tensor so that images show their "true" colors.
End of explanation
"""
class ConvolutionalNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 3, 1)
self.conv2 = nn.Conv2d(6, 16, 3, 1)
self.fc1 = nn.Linear(54*54*16, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 2)
def forward(self, X):
X = F.relu(self.conv1(X))
X = F.max_pool2d(X, 2, 2)
X = F.relu(self.conv2(X))
X = F.max_pool2d(X, 2, 2)
X = X.view(-1, 54*54*16)
X = F.relu(self.fc1(X))
X = F.relu(self.fc2(X))
X = self.fc3(X)
return F.log_softmax(X, dim=1)
"""
Explanation: Define the model
We'll start by using a model similar to the one we applied to the CIFAR-10 dataset, except that here we have a binary classification (2 output channels, not 10). Also, we'll add another set of convolution/pooling layers.
End of explanation
"""
torch.manual_seed(101)
CNNmodel = ConvolutionalNetwork()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(CNNmodel.parameters(), lr=0.001)
CNNmodel
"""
Explanation: <div class="alert alert-info"><strong>Why <tt>(54x54x16)</tt>?</strong><br>
With 224 pixels per side, the kernels and pooling layers result in $\;(((224-2)/2)-2)/2 = 54.5\;$ which rounds down to 54 pixels per side.</div>
Instantiate the model, define loss and optimization functions
We're going to call our model "CNNmodel" to differentiate it from an "AlexNetmodel" we'll use later.
End of explanation
"""
def count_parameters(model):
params = [p.numel() for p in model.parameters() if p.requires_grad]
for item in params:
print(f'{item:>8}')
print(f'________\n{sum(params):>8}')
count_parameters(CNNmodel)
"""
Explanation: Looking at the trainable parameters
End of explanation
"""
import time
start_time = time.time()
epochs = 3
max_trn_batch = 800
max_tst_batch = 300
train_losses = []
test_losses = []
train_correct = []
test_correct = []
for i in range(epochs):
trn_corr = 0
tst_corr = 0
# Run the training batches
for b, (X_train, y_train) in enumerate(train_loader):
# Limit the number of batches
if b == max_trn_batch:
break
b+=1
# Apply the model
y_pred = CNNmodel(X_train)
loss = criterion(y_pred, y_train)
# Tally the number of correct predictions
predicted = torch.max(y_pred.data, 1)[1]
batch_corr = (predicted == y_train).sum()
trn_corr += batch_corr
# Update parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Print interim results
if b%200 == 0:
print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/8000] loss: {loss.item():10.8f} \
accuracy: {trn_corr.item()*100/(10*b):7.3f}%')
train_losses.append(loss)
train_correct.append(trn_corr)
# Run the testing batches
with torch.no_grad():
for b, (X_test, y_test) in enumerate(test_loader):
# Limit the number of batches
if b == max_tst_batch:
break
# Apply the model
y_val = CNNmodel(X_test)
# Tally the number of correct predictions
predicted = torch.max(y_val.data, 1)[1]
tst_corr += (predicted == y_test).sum()
loss = criterion(y_val, y_test)
test_losses.append(loss)
test_correct.append(tst_corr)
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
"""
Explanation: Train the model
In the interests of time, we'll limit the number of training batches to 800, and the number of testing batches to 300. We'll train the model on 8000 of 18743 available images, and test it on 3000 out of 6251 images.
End of explanation
"""
torch.save(CNNmodel.state_dict(), 'CustomImageCNNModel.pt')
"""
Explanation: Save the trained model
End of explanation
"""
plt.plot(train_losses, label='training loss')
plt.plot(test_losses, label='validation loss')
plt.title('Loss at the end of each epoch')
plt.legend();
plt.plot([t/80 for t in train_correct], label='training accuracy')
plt.plot([t/30 for t in test_correct], label='validation accuracy')
plt.title('Accuracy at the end of each epoch')
plt.legend();
print(test_correct)
print(f'Test accuracy: {test_correct[-1].item()*100/3000:.3f}%')
"""
Explanation: Evaluate model performance
End of explanation
"""
AlexNetmodel = models.alexnet(pretrained=True)
AlexNetmodel
"""
Explanation: Download a pretrained model
Torchvision has a number of proven models available through <a href='https://pytorch.org/docs/stable/torchvision/models.html#classification'><tt><strong>torchvision.models</strong></tt></a>:
<ul>
<li><a href="https://arxiv.org/abs/1404.5997">AlexNet</a></li>
<li><a href="https://arxiv.org/abs/1409.1556">VGG</a></li>
<li><a href="https://arxiv.org/abs/1512.03385">ResNet</a></li>
<li><a href="https://arxiv.org/abs/1602.07360">SqueezeNet</a></li>
<li><a href="https://arxiv.org/abs/1608.06993">DenseNet</a></li>
<li><a href="https://arxiv.org/abs/1512.00567">Inception</a></li>
<li><a href="https://arxiv.org/abs/1409.4842">GoogLeNet</a></li>
<li><a href="https://arxiv.org/abs/1807.11164">ShuffleNet</a></li>
<li><a href="https://arxiv.org/abs/1801.04381">MobileNet</a></li>
<li><a href="https://arxiv.org/abs/1611.05431">ResNeXt</a></li>
</ul>
These have all been trained on the <a href='http://www.image-net.org/'>ImageNet</a> database of images. Our only task is to reduce the output of the fully connected layers from (typically) 1000 categories to just 2.
To access the models, you can construct a model with random weights by calling its constructor:<br>
<pre>resnet18 = models.resnet18()</pre>
You can also obtain a pre-trained model by passing pretrained=True:<br>
<pre>resnet18 = models.resnet18(pretrained=True)</pre>
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].
Feel free to investigate the different models available. Each one will be downloaded to a cache directory the first time they're accessed - from then on they'll be available locally.
For its simplicity and effectiveness, we'll use AlexNet:
End of explanation
"""
for param in AlexNetmodel.parameters():
param.requires_grad = False
"""
Explanation: <div class="alert alert-info">This model uses <a href='https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool2d'><tt><strong>torch.nn.AdaptiveAvgPool2d(<em>output_size</em>)</strong></tt></a> to convert the large matrix coming out of the convolutional layers to a (6x6)x256 matrix being fed into the fully connected layers.</div>
Freeze feature parameters
We want to freeze the pre-trained weights & biases. We set <tt>.requires_grad</tt> to False so we don't backprop through them.
End of explanation
"""
torch.manual_seed(42)
AlexNetmodel.classifier = nn.Sequential(nn.Linear(9216, 1024),
nn.ReLU(),
nn.Dropout(0.4),
nn.Linear(1024, 2),
nn.LogSoftmax(dim=1))
AlexNetmodel
# These are the TRAINABLE parameters:
count_parameters(AlexNetmodel)
"""
Explanation: Modify the classifier
Next we need to modify the fully connected layers to produce a binary output. The section is labeled "classifier" in the AlexNet model.<br>
Note that when we assign new layers, their parameters default to <tt>.requires_grad=True</tt>.
End of explanation
"""
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(AlexNetmodel.classifier.parameters(), lr=0.001)
"""
Explanation: Define loss function & optimizer
We only want to optimize the classifier parameters, as the feature parameters are frozen.
End of explanation
"""
import time
start_time = time.time()
epochs = 1
max_trn_batch = 800
max_tst_batch = 300
train_losses = []
test_losses = []
train_correct = []
test_correct = []
for i in range(epochs):
trn_corr = 0
tst_corr = 0
# Run the training batches
for b, (X_train, y_train) in enumerate(train_loader):
if b == max_trn_batch:
break
b+=1
# Apply the model
y_pred = AlexNetmodel(X_train)
loss = criterion(y_pred, y_train)
# Tally the number of correct predictions
predicted = torch.max(y_pred.data, 1)[1]
batch_corr = (predicted == y_train).sum()
trn_corr += batch_corr
# Update parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Print interim results
if b%200 == 0:
print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/8000] loss: {loss.item():10.8f} \
accuracy: {trn_corr.item()*100/(10*b):7.3f}%')
train_losses.append(loss)
train_correct.append(trn_corr)
# Run the testing batches
with torch.no_grad():
for b, (X_test, y_test) in enumerate(test_loader):
if b == max_tst_batch:
break
# Apply the model
y_val = AlexNetmodel(X_test)
# Tally the number of correct predictions
predicted = torch.max(y_val.data, 1)[1]
tst_corr += (predicted == y_test).sum()
loss = criterion(y_val, y_test)
test_losses.append(loss)
test_correct.append(tst_corr)
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
print(test_correct)
print(f'Test accuracy: {test_correct[-1].item()*100/3000:.3f}%')
"""
Explanation: Train the model
Remember, we're only training the fully connected layers. The convolutional layers have fixed weights and biases. For this reason, we only need to run one epoch.
End of explanation
"""
x = 2019
im = inv_normalize(test_data[x][0])
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)));
test_data[x][0].shape
# CNN Model Prediction:
CNNmodel.eval()
with torch.no_grad():
new_pred = CNNmodel(test_data[x][0].view(1,3,224,224)).argmax()
print(f'Predicted value: {new_pred.item()} {class_names[new_pred.item()]}')
# AlexNet Model Prediction:
AlexNetmodel.eval()
with torch.no_grad():
new_pred = AlexNetmodel(test_data[x][0].view(1,3,224,224)).argmax()
print(f'Predicted value: {new_pred.item()} {class_names[new_pred.item()]}')
"""
Explanation: Run a new image through the model
We can also pass a single image through the model to obtain a prediction.<br>
Pick a number from 0 to 6250, assign it to "x", and we'll use that value to select an image from the Cats and Dogs test set.
End of explanation
"""
|
Autodesk/molecular-design-toolkit | moldesign/_notebooks/Tutorial 1. Making a molecule.ipynb | apache-2.0
import moldesign as mdt
import moldesign.units as u
"""
Explanation: <span style="float:right"><a href="http://moldesign.bionano.autodesk.com/" target="_blank" title="About">About</a> <a href="https://github.com/autodesk/molecular-design-toolkit/issues" target="_blank" title="Issues">Issues</a> <a href="http://bionano.autodesk.com/MolecularDesignToolkit/explore.html" target="_blank" title="Tutorials">Tutorials</a> <a href="http://autodesk.github.io/molecular-design-toolkit/" target="_blank" title="Documentation">Documentation</a></span>
<br>
<center><h1>Tutorial 1: Making a molecule</h1></center>
This notebook gets you started with MDT - you'll build a small molecule, visualize it, and run a basic calculation.
Contents
1. Import the toolkit
A. Optional: Set up your computing backend
2. Build it
3. View it
4. Simulate it
5. Minimize it
6. Write it
7. Examine it
1. Import the toolkit
This cell loads the toolkit and its unit system. To execute a cell, click on it, then press <kbd>shift</kbd> + <kbd>enter</kbd>. (If you're new to the notebook environment, you may want to check out this helpful cheat sheet).
End of explanation
"""
mdt.configure()
"""
Explanation: Optional: configuration options
If you'd like to set some basic MDT configuration options, you can execute the following cell to create a GUI configuration editor:
End of explanation
"""
molecule = mdt.read('data/butane.xyz')
"""
Explanation: 2. Read in a molecular structure
Let's get started by reading in a molecular structure file.
When you execute this cell, you'll use mdt.read function to parse an XYZ-format file to create an MDT molecule object named, appropriately enough, molecule:
End of explanation
"""
molecule
"""
Explanation: Jupyter notebooks will automatically print out the value of the last statement in any cell. When you evaluate a Molecule, as in the cell below, you'll get some quick summary data:
End of explanation
"""
viewer = molecule.draw()
viewer # we tell Jupyter to draw the viewer by putting it on the last line of the cell
"""
Explanation: 3. Visualize it
MDT molecules have three built-in visualization methods - draw, draw2d, and draw3d. Try them out!
End of explanation
"""
print(viewer.selected_atoms)
"""
Explanation: Try clicking on some of the atoms in the visualization you've just created.
Afterwards, you can retrieve a list of the Python objects representing the atoms you clicked on:
End of explanation
"""
molecule.set_energy_model(mdt.models.RHF, basis='sto-3g')
properties = molecule.calculate()
print(properties.keys())
print('Energy: ', properties['potential_energy'])
molecule.draw_orbitals()
"""
Explanation: 4. Simulate it
So far, we've created a 3D molecular structure and visualized it right in the notebook.
If you sat through VSEPR theory in P. Chem, you might notice this molecule (butane) is looking decidedly non-optimal. Luckily, we can use simulation to predict a better structure.
We're specifically going to run a basic type of Quantum Chemistry calculation called "Hartree-Fock", which will give us information about the molecule's orbitals and energy.
End of explanation
"""
mintraj = molecule.minimize()
mintraj.draw_orbitals()
"""
Explanation: 5. Minimize it
Next, an energy minimization - that is, we're going to move the atoms around in order to find a minimum energy conformation. This is a great way to start cleaning up the messy structure we started with. The calculation might take a second or two ...
End of explanation
"""
molecule.write('my_first_molecule.xyz')
mintraj.write('my_first_minimization.P.gz')
"""
Explanation: 6. Write it
End of explanation
"""
mdt.widgets.GeometryBuilder(molecule)
molecule.calculate_potential_energy()
"""
Explanation: 7. Play with it
There are any number of directions to go from here. See how badly you can distort the geometry:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_object_epochs.ipynb | bsd-3-clause
from __future__ import print_function
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
"""
Explanation: The :class:Epochs <mne.Epochs> data structure: epoched data
End of explanation
"""
data_path = mne.datasets.sample.data_path()
# Load a dataset that contains events
raw = mne.io.read_raw_fif(
op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'),
add_eeg_ref=False)
# If your raw object has a stim channel, you can construct an event array
# easily
events = mne.find_events(raw, stim_channel='STI 014')
# Show the number of events (number of rows)
print('Number of events:', len(events))
# Show all unique event codes (3rd column)
print('Unique event codes:', np.unique(events[:, 2]))
# Specify event codes of interest with descriptive labels.
# This dataset also has visual left (3) and right (4) events, but
# to save time and memory we'll just look at the auditory conditions
# for now.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
"""
Explanation: :class:Epochs <mne.Epochs> objects are a way of representing continuous
data as a collection of time-locked trials, stored in an array of
shape(n_events, n_channels, n_times). They are useful for many statistical
methods in neuroscience, and make it easy to quickly overview what occurs
during a trial.
:class:Epochs <mne.Epochs> objects can be created in three ways:
1. From a :class:Raw <mne.io.RawFIF> object, along with event times
2. From an :class:Epochs <mne.Epochs> object that has been saved as a
.fif file
3. From scratch using :class:EpochsArray <mne.EpochsArray>. See
tut_creating_data_structures
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1,
baseline=(None, 0), preload=True, add_eeg_ref=False)
print(epochs)
"""
Explanation: Now, we can create an :class:mne.Epochs object with the events we've
extracted. Note that epochs constructed in this manner will not have their
data available until explicitly read into memory, which you can do with
:func:get_data <mne.Epochs.get_data>. Alternatively, you can use
preload=True.
Expose the raw data as epochs, cut from -0.1 s to 1.0 s relative to the event
onsets
End of explanation
"""
print(epochs.events[:3], epochs.event_id, sep='\n\n')
"""
Explanation: Epochs behave similarly to :class:mne.io.Raw objects. They have an
:class:info <mne.Info> attribute that has all of the same
information, as well as a number of attributes unique to the events contained
within the object.
End of explanation
"""
print(epochs[1:5])
print(epochs['Auditory/Right'])
"""
Explanation: You can select subsets of epochs by indexing the :class:Epochs <mne.Epochs>
object directly. Alternatively, if you have epoch names specified in
event_id then you may index with strings instead.
End of explanation
"""
# These will be epochs objects
for i in range(3):
print(epochs[i])
# These will be arrays
for ep in epochs[:2]:
print(ep)
"""
Explanation: It is also possible to iterate through :class:Epochs <mne.Epochs> objects
in this way. Note that behavior is different if you iterate on Epochs
directly rather than indexing:
End of explanation
"""
epochs.drop([0], reason='User reason')
epochs.drop_bad(reject=dict(grad=2500e-13, mag=4e-12, eog=200e-6), flat=None)
print(epochs.drop_log)
epochs.plot_drop_log()
print('Selection from original events:\n%s' % epochs.selection)
print('Removed events (from numpy setdiff1d):\n%s'
% (np.setdiff1d(np.arange(len(events)), epochs.selection).tolist(),))
print('Removed events (from list comprehension -- should match!):\n%s'
% ([li for li, log in enumerate(epochs.drop_log) if len(log) > 0]))
"""
Explanation: You can manually remove epochs from the Epochs object by using
:func:epochs.drop(idx) <mne.Epochs.drop>, or by using rejection or flat
thresholds with :func:epochs.drop_bad(reject, flat) <mne.Epochs.drop_bad>.
You can also inspect the reason why epochs were dropped by looking at the
list stored in epochs.drop_log or plot them with
:func:epochs.plot_drop_log() <mne.Epochs.plot_drop_log>. The indices
from the original set of events are stored in epochs.selection.
End of explanation
"""
epochs_fname = op.join(data_path, 'MEG', 'sample', 'sample-epo.fif')
epochs.save(epochs_fname)
"""
Explanation: If you wish to save the epochs as a file, you can do it with
:func:mne.Epochs.save. To conform to MNE naming conventions, the
epochs file names should end with '-epo.fif'.
End of explanation
"""
epochs = mne.read_epochs(epochs_fname, preload=False)
"""
Explanation: Later on you can read the epochs with :func:mne.read_epochs. For reading
EEGLAB epochs files see :func:mne.read_epochs_eeglab. We can also use
preload=False to save memory, loading the epochs from disk on demand.
End of explanation
"""
ev_left = epochs['Auditory/Left'].average()
ev_right = epochs['Auditory/Right'].average()
f, axs = plt.subplots(3, 2, figsize=(10, 5))
_ = f.suptitle('Left / Right auditory', fontsize=20)
_ = ev_left.plot(axes=axs[:, 0], show=False)
_ = ev_right.plot(axes=axs[:, 1], show=False)
plt.tight_layout()
"""
Explanation: If you wish to look at the average across trial types, then you may do so,
creating an :class:Evoked <mne.Evoked> object in the process. Instances
of Evoked are usually created by calling :func:mne.Epochs.average. For
creating Evoked from other data structures see :class:mne.EvokedArray and
tut_creating_data_structures.
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.1/tutorials/plotting.ipynb | gpl-3.0
!pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Plotting
This tutorial explains the high-level interface to plotting provided by the Bundle. You are of course always welcome to access arrays and plot manually.
As of PHOEBE 2.1, PHOEBE uses autofig as an intermediate layer for highend functionality to matplotlib.
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
"""
Explanation: This first line is only necessary for ipython noteboooks - it allows the plots to be shown on this page instead of in interactive mode
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.8
b['ecc'] = 0.1
b['irrad_method'] = 'none'
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('orb', times=np.linspace(0,4,1000), dataset='orb01', component=['primary', 'secondary'])
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
b.add_dataset('lc', times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')
"""
Explanation: And we'll attach some dummy datasets. See Datasets for more details.
End of explanation
"""
b.set_value('incl@orbit', 90)
b.run_compute(model='run_with_incl_90')
b.set_value('incl@orbit', 85)
b.run_compute(model='run_with_incl_85')
b.set_value('incl@orbit', 80)
b.run_compute(model='run_with_incl_80')
"""
Explanation: And run the forward models. See Computing Observables for more details.
End of explanation
"""
afig, mplfig = b.plot(show=True)
"""
Explanation: Showing and Saving
NOTE: in IPython notebooks calling plot will display directly below the call to plot. When not in IPython you have several options for viewing the figure:
call b.show or b.savefig after calling plot
use the returned autofig and matplotlib figures however you'd like
pass show=True to the plot method.
pass save='myfilename' to the plot method. (same as calling plt.savefig('myfilename'))
Default Plots
To see the options for plotting that are dataset-dependent see the tutorials on that dataset method:
ORB dataset
MESH dataset
LC dataset
RV dataset
LP dataset
By calling the plot method on the bundle (or any ParameterSet) without any arguments, a plot or series of subplots will be built based on the contents of that ParameterSet.
End of explanation
"""
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True)
"""
Explanation: Any call to plot returns 2 objects - the autofig and matplotlib figure instances. Generally we won't need to do anything with these, but having them returned could come in handy if you want to manually edit either before drawing/saving the image.
In this example with so many different models and datasets, it is quite simple to build a single plot by filtering the bundle and calling the plot method on the resulting ParameterSet.
End of explanation
"""
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, show=True)
"""
Explanation: Time (highlight and uncover)
The built-in plot method also provides convenience options to either highlight the interpolated point for a given time, or only show the dataset up to a given time.
Highlight
The higlight option is enabled by default so long as a time (or times) is passed to plot. It simply adds an extra marker at the sent time - interpolating in the synthetic model if necessary.
End of explanation
"""
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, highlight_marker='s', highlight_color='g', highlight_ms=20, show=True)
"""
Explanation: To change the style of the "highlighted" points, you can pass matplotlib recognized markers, colors, and markersizes to the highlight_marker, highlight_color, and highlight_ms keywords, respectively.
End of explanation
"""
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, highlight=False, show=True)
"""
Explanation: To disable highlighting, simply send highlight=False
End of explanation
"""
afig, mplfig = b['orb@run_with_incl_80'].plot(time=0.5, uncover=True, show=True)
"""
Explanation: Uncover
Uncover shows the observations or synthetic model up to the provided time and is disabled by default, even when a time is provided, but is enabled simply by providing uncover=True. There are no additional options available for uncover.
End of explanation
"""
afig, mplfig = b['primary@orb@run_with_incl_80'].plot(show=True)
afig, mplfig = b.plot(component='primary', kind='orb', model='run_with_incl_80', show=True)
afig, mplfig = b.plot('primary@orb@run_with_incl_80', show=True)
"""
Explanation: Selecting Datasets
In addition to filtering and calling plot on the resulting ParameterSet, plot can accept a twig or filter on any of the available parameter tags.
For this reason, any of the following give identical results:
End of explanation
"""
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', y='vus', show=True)
"""
Explanation: Selecting Arrays
So far, each plotting call automatically chose default arrays from that dataset to plot along each axis. To override these defaults, simply point to the qualifier of the array that you'd like plotted along a given axis.
End of explanation
"""
b['orb01@primary@run_with_incl_80'].qualifiers
"""
Explanation: To see the list of available qualifiers that could be passed for x or y, call the qualifiers (or twigs) property on the ParameterSet.
End of explanation
"""
afig, mplfig = b['lc01@dataset'].plot(x='phases', z=0, show=True)
"""
Explanation: For more information on each of the available arrays, see the relevant tutorial on that dataset method:
ORB dataset
MESH dataset
LC dataset
RV dataset
LP dataset
Selecting Phase
And to plot in phase we just send x='phases' or x='phases:binary'.
Setting x='phases' will use the ephemeris from the top-level of the hierarchy
(as if you called b.get_ephemeris()), whereas passing a string after the colon,
will use the ephemeris of that component.
End of explanation
"""
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xunit='AU', yunit='AU', show=True)
"""
Explanation: Units
Likewise, each array that is plotted is automatically plotted in its default units. To override these defaults, simply provide the unit (as a string or as a astropy units object) for a given axis.
End of explanation
"""
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlabel='X POS', ylabel='Z POS', show=True)
"""
Explanation: WARNING: when plotting two arrays with the same dimensions, PHOEBE attempts to set the aspect ratio to equal, but overriding to use two different units will result in undesired results. This may be fixed in the future, but for now can be avoided by using consistent units for the x and y axes when they have the same dimensions.
Axes Labels
Axes labels are automatically generated from the qualifier of the array and the plotted units. To override these defaults, simply pass a string for the label of a given axis.
End of explanation
"""
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlim=(-2,2), show=True)
"""
Explanation: Axes Limits
Axes limits are determined by the data automatically. To set custom axes limits, either use matplotlib methods on the returned axes objects, or pass limits as a list or tuple.
End of explanation
"""
afig, mplfig = b['lc01@dataset'].plot(yerror='sigmas', show=True)
"""
Explanation: Errorbars
In the cases of observational data, errorbars can be added by passing the name of the column.
End of explanation
"""
afig, mplfig = b['lc01@dataset'].plot(yerror=None, show=True)
"""
Explanation: To disable the errorbars, simply set yerror=None.
End of explanation
"""
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(c='r', show=True)
"""
Explanation: Colors
Colors of points and lines, by default, cycle according to matplotlib's color policy. To manually set the color, simply pass a matplotlib recognized color to the 'c' keyword.
End of explanation
"""
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', show=True)
"""
Explanation: In addition, you can point to an array in the dataset to use as color.
End of explanation
"""
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', cmap='spring', show=True)
"""
Explanation: Choosing colors works slightly differently for meshes (ie you can set fc for facecolor and ec for edgecolor). For more details, see the tutorial on the MESH dataset.
Colormaps
The colormap is determined automatically based on the parameter used for coloring (e.g. RVs will use a red-blue colormap). To override this, pass a matplotlib recognized colormap to the cmap keyword.
End of explanation
"""
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', draw_sidebars=True, show=True)
"""
Explanation: Adding a Colorbar
To add a colorbar (or sizebar, etc), send draw_sidebars=True to the plot call.
End of explanation
"""
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True)
"""
Explanation: Labels and Legends
To add a legend, include legend=True.
For details on placement and formatting of the legend see matplotlib's documentation.
End of explanation
"""
afig, mplfig = b['primary@orb@run_with_incl_80'].plot(label='primary')
afig, mplfig = b['secondary@orb@run_with_incl_80'].plot(label='secondary', legend=True, show=True)
"""
Explanation: The legend labels are generated automatically, but can be overridden by passing a string to the label keyword.
End of explanation
"""
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True, legend_kwargs={'loc': 'center', 'facecolor': 'r'})
"""
Explanation: To override the position or styling of the legend, you can pass valid options to legend_kwargs which will be passed on to plt.legend
End of explanation
"""
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(linestyle=':', s=0.1, show=True)
"""
Explanation: Other Plotting Options
Valid plotting options that are directly passed to matplotlib include:
- linestyle
- marker
Note that sizes (markersize, linewidth) should be handled by passing the size to 's' and attempting to set markersize or linewidth directly will raise an error. See also the autofig documentation on size scales.
End of explanation
"""
afig, mplfig = b['orb@run_with_incl_80'].plot(time=0, projection='3d', show=True)
"""
Explanation: 3D Axes
To plot in 3D, simply pass projection='3d' to the plot call. To override the defaults for the z-direction, pass a twig or array just as you would for x or y.
End of explanation
"""
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-2', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
On which grid is sea ice horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step of the sea ice model thermodynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step of the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers, specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories, specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories, specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the rheology, i.e. the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install -q "tensorflow-text==2.8.*"
import requests
import tensorflow as tf
import tensorflow_text as tf_text
"""
Explanation: Tokenizing with TF Text
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/guide/tokenizers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/tokenizers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/guide/tokenizers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/guide/tokenizers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/zh_segmentation/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
</td>
</table>
Overview
Tokenization is the process of breaking up a string into tokens. Commonly, these tokens are words, numbers, and/or punctuation. The tensorflow_text package provides a number of tokenizers available for preprocessing text required by your text-based models. By performing the tokenization in the TensorFlow graph, you will not need to worry about differences between the training and inference workflows and managing preprocessing scripts.
This guide discusses the many tokenization options provided by TensorFlow Text, when you might want to use one option over another, and how these tokenizers are called from within your model.
Setup
End of explanation
"""
tokenizer = tf_text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
"""
Explanation: Splitter API
The main interfaces are Splitter and SplitterWithOffsets which have single methods split and split_with_offsets. The SplitterWithOffsets variant (which extends Splitter) includes an option for getting byte offsets. This allows the caller to know which bytes in the original string the created token was created from.
The Tokenizer and TokenizerWithOffsets are specialized versions of the Splitter that provide the convenience methods tokenize and tokenize_with_offsets respectively.
Generally, for any N-dimensional input, the returned tokens are in a N+1-dimensional RaggedTensor with the inner-most dimension of tokens mapping to the original individual strings.
```python
class Splitter {
@abstractmethod
def split(self, input)
}
class SplitterWithOffsets(Splitter) {
@abstractmethod
def split_with_offsets(self, input)
}
```
There is also a Detokenizer interface. Any tokenizer implementing this interface can accept a N-dimensional ragged tensor of tokens, and normally returns a N-1-dimensional tensor or ragged tensor that has the given tokens assembled together.
python
class Detokenizer {
@abstractmethod
def detokenize(self, input)
}
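These contracts can be sketched in plain Python (dimensions only; this is an illustration, not a real TF Text implementation):

```python
# A toy splitter/detokenizer obeying the same shape contract:
# split maps N-dimensional string input to N+1-dimensional token lists,
# detokenize maps back down one dimension.
class ToyWhitespaceSplitter:
    def split(self, inputs):
        return [s.split() for s in inputs]

    def detokenize(self, token_lists):
        return [" ".join(ts) for ts in token_lists]

sp = ToyWhitespaceSplitter()
tokens = sp.split(["never tell me the odds"])
print(tokens)                 # [['never', 'tell', 'me', 'the', 'odds']]
print(sp.detokenize(tokens))  # ['never tell me the odds']
```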
Tokenizers
Below is the suite of tokenizers provided by TensorFlow Text. String inputs are assumed to be UTF-8. Please review the Unicode guide for converting strings to UTF-8.
Whole word tokenizers
These tokenizers attempt to split a string by words, and is the most intuitive way to split text.
WhitespaceTokenizer
The text.WhitespaceTokenizer is the most basic tokenizer which splits strings on ICU defined whitespace characters (eg. space, tab, new line). This is often good for quickly building out prototype models.
End of explanation
"""
tokenizer = tf_text.UnicodeScriptTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
"""
Explanation: You may notice a shortcoming of this tokenizer: punctuation is included with the word to make up a token. To split the words and punctuation into separate tokens, the UnicodeScriptTokenizer should be used.
UnicodeScriptTokenizer
The UnicodeScriptTokenizer splits strings based on Unicode script boundaries. The script codes used correspond to International Components for Unicode (ICU) UScriptCode values. See: http://icu-project.org/apiref/icu4c/uscript_8h.html
In practice, this is similar to the WhitespaceTokenizer with the most apparent difference being that it will split punctuation (USCRIPT_COMMON) from language texts (eg. USCRIPT_LATIN, USCRIPT_CYRILLIC, etc) while also separating language texts from each other. Note that this will also split contraction words into separate tokens.
End of explanation
"""
tokenizer = tf_text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
"""
Explanation: Subword tokenizers
Subword tokenizers can be used with a smaller vocabulary, and allow the model to have some information about novel words from the subwords that make them up.
We briefly discuss the Subword tokenization options below, but the Subword Tokenization tutorial goes more in depth and also explains how to generate the vocab files.
WordpieceTokenizer
WordPiece tokenization is a data-driven tokenization scheme which generates a set of sub-tokens. These sub tokens may correspond to linguistic morphemes, but this is often not the case.
The WordpieceTokenizer expects the input to already be split into tokens. Because of this prerequisite, you will often want to split using the WhitespaceTokenizer or UnicodeScriptTokenizer beforehand.
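As a sketch of the idea behind WordPiece (not TF Text's actual implementation), sub-tokens can be produced by greedy longest-match-first lookup against a vocabulary of word-initial pieces and `##`-prefixed continuation pieces; the tiny vocabulary below is made up for illustration:

```python
def wordpiece(word, vocab, unk="[UNK]"):
    # Greedy longest-match-first segmentation (simplified sketch).
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece       # continuation pieces are prefixed
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:                    # no piece matches: unknown token
            return [unk]
        pieces.append(cur)
        start = end
    return pieces

vocab = {"un", "##aff", "##able"}
print(wordpiece("unaffable", vocab))       # ['un', '##aff', '##able']
```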
End of explanation
"""
url = "https://github.com/tensorflow/text/blob/master/tensorflow_text/python/ops/test_data/test_wp_en_vocab.txt?raw=true"
r = requests.get(url)
filepath = "vocab.txt"
open(filepath, 'wb').write(r.content)
subtokenizer = tf_text.WordpieceTokenizer(filepath)
subtokens = subtokenizer.tokenize(tokens)
print(subtokens.to_list())
"""
Explanation: After the string is split into tokens, the WordpieceTokenizer can be used to split into subtokens.
End of explanation
"""
tokenizer = tf_text.BertTokenizer(filepath, token_out_type=tf.string, lower_case=True)
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
"""
Explanation: BertTokenizer
The BertTokenizer mirrors the original implementation of tokenization from the BERT paper. This is backed by the WordpieceTokenizer, but also performs additional tasks such as normalization and tokenizing to words first.
End of explanation
"""
url = "https://github.com/tensorflow/text/blob/master/tensorflow_text/python/ops/test_data/test_oss_model.model?raw=true"
sp_model = requests.get(url).content
tokenizer = tf_text.SentencepieceTokenizer(sp_model, out_type=tf.string)
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
"""
Explanation: SentencepieceTokenizer
The SentencepieceTokenizer is a sub-token tokenizer that is highly configurable. This is backed by the Sentencepiece library. Like the BertTokenizer, it can include normalization and token splitting before splitting into sub-tokens.
End of explanation
"""
tokenizer = tf_text.UnicodeCharTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
"""
Explanation: Other splitters
UnicodeCharTokenizer
This splits a string into UTF-8 characters. It is useful for CJK languages that do not have spaces between words.
End of explanation
"""
characters = tf.strings.unicode_encode(tf.expand_dims(tokens, -1), "UTF-8")
bigrams = tf_text.ngrams(characters, 2, reduction_type=tf_text.Reduction.STRING_JOIN, string_separator='')
print(bigrams.to_list())
"""
Explanation: The output is Unicode codepoints. This can also be useful for creating character n-grams, such as bigrams, after converting the codepoints back into UTF-8 characters.
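The joining step is equivalent to this plain-Python bigram construction on a single word:

```python
# Character bigrams by pairing each character with its successor.
chars = list("What")
bigrams = ["".join(p) for p in zip(chars, chars[1:])]
print(bigrams)  # ['Wh', 'ha', 'at']
```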
End of explanation
"""
MODEL_HANDLE = "https://tfhub.dev/google/zh_segmentation/1"
segmenter = tf_text.HubModuleTokenizer(MODEL_HANDLE)
tokens = segmenter.tokenize(["新华社北京"])
print(tokens.to_list())
"""
Explanation: HubModuleTokenizer
This is a wrapper around models deployed to TF Hub to make the calls easier since TF Hub currently does not support ragged tensors. Having a model perform tokenization is particularly useful for CJK languages when you want to split into words, but do not have spaces to provide a heuristic guide. At this time, we have a single segmentation model for Chinese.
End of explanation
"""
def decode_list(x):
if type(x) is list:
return list(map(decode_list, x))
return x.decode("UTF-8")
def decode_utf8_tensor(x):
return list(map(decode_list, x.to_list()))
print(decode_utf8_tensor(tokens))
"""
Explanation: It may be difficult to view the results of the UTF-8 encoded byte strings. Decode the list values to make viewing easier.
End of explanation
"""
strings = ["新华社北京"]
labels = [[0, 1, 1, 0, 1]]
tokenizer = tf_text.SplitMergeTokenizer()
tokens = tokenizer.tokenize(strings, labels)
print(decode_utf8_tensor(tokens))
"""
Explanation: SplitMergeTokenizer
The SplitMergeTokenizer & SplitMergeFromLogitsTokenizer have a targeted purpose of splitting a string based on provided values that indicate where the string should be split. This is useful when building your own segmentation models like the previous Segmentation example.
For the SplitMergeTokenizer, a value of 0 is used to indicate the start of a new string, and the value of 1 indicates the character is part of the current string.
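The labelling scheme can be illustrated with a few lines of plain Python (a sketch, not the TF Text implementation):

```python
# 0 starts a new token; 1 merges the character into the current token.
def split_merge(chars, labels):
    tokens = []
    for c, l in zip(chars, labels):
        if l == 0 or not tokens:
            tokens.append(c)
        else:
            tokens[-1] += c
    return tokens

print(split_merge("新华社北京", [0, 1, 1, 0, 1]))  # ['新华社', '北京']
```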
End of explanation
"""
strings = [["新华社北京"]]
labels = [[[5.0, -3.2], [0.2, 12.0], [0.0, 11.0], [2.2, -1.0], [-3.0, 3.0]]]
tokenizer = tf_text.SplitMergeFromLogitsTokenizer()
tokens = tokenizer.tokenize(strings, labels)
print(decode_utf8_tensor(tokens))
"""
Explanation: The SplitMergeFromLogitsTokenizer is similar, but it instead accepts logit value pairs from a neural network that predict if each character should be split into a new string or merged into the current one.
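A sketch of how such logit pairs reduce to the 0/1 labels used above, assuming the first logit of each pair scores the "start a new token" action and the second scores "merge":

```python
# Take the argmax of each [split_score, merge_score] pair.
logits = [[5.0, -3.2], [0.2, 12.0], [0.0, 11.0], [2.2, -1.0], [-3.0, 3.0]]
labels = [0 if split > merge else 1 for split, merge in logits]
print(labels)  # [0, 1, 1, 0, 1]
```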
End of explanation
"""
splitter = tf_text.RegexSplitter(r"\s")
tokens = splitter.split(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
"""
Explanation: RegexSplitter
The RegexSplitter is able to segment strings at arbitrary breakpoints defined by a provided regular expression.
End of explanation
"""
tokenizer = tf_text.UnicodeScriptTokenizer()
(tokens, start_offsets, end_offsets) = tokenizer.tokenize_with_offsets(['Everything not saved will be lost.'])
print(tokens.to_list())
print(start_offsets.to_list())
print(end_offsets.to_list())
"""
Explanation: Offsets
When tokenizing strings, it is often desired to know where in the original string the token originated from. For this reason, each tokenizer which implements TokenizerWithOffsets has a tokenize_with_offsets method that will return the byte offsets along with the tokens. The start_offsets lists the bytes in the original string each token starts at, and the end_offsets lists the bytes immediately after the point where each token ends. To rephrase, the start offsets are inclusive and the end offsets are exclusive.
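Plain-Python slicing shows how such offsets map back into the original byte string (the offsets below are worked out by hand for this ASCII example):

```python
# Start offsets are inclusive, end offsets are exclusive.
s = b'Everything not saved will be lost.'
start_offsets = [0, 11, 15, 21, 26, 29, 33]
end_offsets = [10, 14, 20, 25, 28, 33, 34]
tokens = [s[b:e] for b, e in zip(start_offsets, end_offsets)]
print(tokens)  # [b'Everything', b'not', b'saved', b'will', b'be', b'lost', b'.']
```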
End of explanation
"""
tokenizer = tf_text.UnicodeCharTokenizer()
tokens = tokenizer.tokenize(["What you know you can't explain, but you feel it."])
print(tokens.to_list())
strings = tokenizer.detokenize(tokens)
print(strings.numpy())
"""
Explanation: Detokenization
Tokenizers which implement the Detokenizer provide a detokenize method which attempts to combine the strings. This has the chance of being lossy, so the detokenized string may not always match exactly the original, pre-tokenized string.
End of explanation
"""
docs = tf.data.Dataset.from_tensor_slices([['Never tell me the odds.'], ["It's a trap!"]])
tokenizer = tf_text.WhitespaceTokenizer()
tokenized_docs = docs.map(lambda x: tokenizer.tokenize(x))
iterator = iter(tokenized_docs)
print(next(iterator).to_list())
print(next(iterator).to_list())
"""
Explanation: TF Data
TF Data is a powerful API for creating an input pipeline for training models. Tokenizers work as expected with the API.
End of explanation
"""
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
"""
Explanation: Deep Learning
Assignment 3
Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
"""
Explanation: First reload the data we generated in notmnist.ipynb.
End of explanation
"""
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
"""
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
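The 1-hot mapping in `reformat` can be seen in isolation on a tiny example (four hypothetical classes):

```python
import numpy as np

labels = np.array([2, 0, 3])
num_labels = 4
# Same broadcasting trick as in reformat: compare each label against 0..3.
onehot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
print(onehot)
# [[0. 0. 1. 0.]
#  [1. 0. 0. 0.]
#  [0. 0. 0. 1.]]
```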
End of explanation
"""
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD, Adadelta
from keras.callbacks import RemoteMonitor
"""
Explanation: Simple MLP demo
This notebook demonstrates how to create a simple MLP for recognizing phonemes from speech. To do this, we will use a training dataset prepared in a different notebook titled VoxforgeDataPrep, so take a look at that before you start working on this demo.
In this example, we will use the excellent Keras library, which depends upon either Theano or TensorFlow, so you will need to install one of those as well. Just follow the instructions on the Keras website - it is recommended to use the latest GitHub versions of both Keras and Theano.
I also have the convenience of using the GPU for the actual computation. This code will work just as well on the CPU, but it's much faster on a good GPU.
We start by importing numpy (for loading and working with the data) and the neccessary Keras classes. Feel free to add more here if you wish to experiment with them.
End of explanation
"""
import sys
sys.path.append('../python')
from data import Corpus
with Corpus('../data/mfcc_train_small.hdf5',load_normalized=True,merge_utts=True) as corp:
train,dev=corp.split(0.9)
test=Corpus('../data/mfcc_test.hdf5',load_normalized=True,merge_utts=True)
tr_in,tr_out_dec=train.get()
dev_in,dev_out_dec=dev.get()
tst_in,tst_out_dec=test.get()
"""
Explanation: First let's load our data. In the VoxforgeDataPrep notebook, we created two arrays - inputs and outputs. The input has the dimensions (num_samples, num_features) and the output is simply a 1D vector of ints of length (num_samples). In this step, we split the training data into actual training (90%) and dev (10%) and merge that with the test data. Finally we save the indices for all the sets (instead of actual arrays).
End of explanation
"""
input_dim=tr_in.shape[1]
output_dim=np.max(tr_out_dec)+1
hidden_num=256
batch_size=256
epoch_num=100
def dec2onehot(dec):
num=dec.shape[0]
ret=np.zeros((num,output_dim))
ret[range(0,num),dec]=1
return ret
tr_out=dec2onehot(tr_out_dec)
dev_out=dec2onehot(dev_out_dec)
tst_out=dec2onehot(tst_out_dec)
print 'Samples num: {}'.format(tr_in.shape[0]+dev_in.shape[0]+tst_in.shape[0])
print ' of which: {} in train, {} in dev and {} in test'.format(tr_in.shape[0],dev_in.shape[0],tst_in.shape[0])
print 'Input size: {}'.format(input_dim)
print 'Output size (number of classes): {}'.format(output_dim)
"""
Explanation: Next we define some constants for our program. Input and output dimensions can be inferred from the data, but the hidden layer size has to be defined manually.
We also redefine our outputs as a 1-of-N matrix instead of an int vector. The old outputs were simply a list of integers (from 0 to 39) defining the phoneme (as listed in ../data/phones.list) class for each sample given at input. The new matrix has dimensions (num_samples, num_classes) and is mostly 0 with a single 1 put in place corresponding to the class index in the old output vector.
End of explanation
"""
model = Sequential()
model.add(Dense(input_dim=input_dim,output_dim=hidden_num))
model.add(Activation('sigmoid'))
model.add(Dense(output_dim=output_dim))
model.add(Activation('softmax'))
#optimizer = SGD(lr=0.01, momentum=0.9, nesterov=True)
optimizer= Adadelta()
loss='categorical_crossentropy'
"""
Explanation: Model definition
Here we define our model using the Keras interface. There are two main model types in Keras: sequential and graph. Sequential is much more common and easy to use, so we start with that.
Next we define the MLP topology. Here we have 3 layers: input, hidden and output. They are interconnected with two sets of Dense weight connections and a layer of activation functions after these weights. When defining the Dense weight layers, we need to provide the size: input and output are neccessary only for the first layer, subsequent layers use the output size of the previous layer as their input size.
We also define the type of optimizer and loss function we want to use. There are a few optimizers to choose from in the library and they are all interchangable. The differences between them are not too large in this example (feel free to experiment). The loss function chosen here is the cross-entropy function. Another option would be the simpler MSE (mean square error). Again, there doesn't seem to be much of a difference, but cross-entropy does seem like performing a bit better overall.
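As a sanity check on this topology, the number of trainable parameters can be computed by hand. The hidden size (256) and the 40 output classes come from the notebook; the input dimension is an assumed placeholder here, since the real value is inferred from the data:

```python
# Dense layers contribute a weight matrix plus a bias vector each.
input_dim, hidden_num, output_dim = 39, 256, 40   # input_dim is assumed
hidden_params = input_dim * hidden_num + hidden_num
output_params = hidden_num * output_dim + output_dim
total = hidden_params + output_params
print(total)  # 20520
```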
End of explanation
"""
model.compile(loss=loss, optimizer=optimizer)
print model.summary()
"""
Explanation: After defining the model and all its parameters, we can compile it. This literally means compiling, because the model is converted into C++ code in the background and compiled with lots of optimizations to work as efficiently as possible. The process can take a while, but is worth the added speed in training.
End of explanation
"""
from keras.utils import visualize_util
from IPython.display import SVG
SVG(visualize_util.to_graph(model,show_shape=True).create(prog='dot', format='svg'))
"""
Explanation: We can also try and visualize the model using the builtin Dot painter:
End of explanation
"""
val=(dev_in,dev_out)
hist=model.fit(tr_in, tr_out, shuffle=True, batch_size=batch_size, nb_epoch=epoch_num, verbose=0, validation_data=val)
"""
Explanation: Finally, we can start training the model. We provide the training function with both training and validation data and define a few parameters: batch size and number of training epochs. Changing the batch size can affect both the training speed and final accuracy. This value is also closely related to the number of epochs. Generally, you want to run the training for as many epochs as needed for the model to converge on some value. The value of 100 should be fine for a quick comparison but up to 1k may be necessary to be absolutely sure (especially when testing larger models).
End of explanation
"""
import matplotlib.pyplot as P
%matplotlib inline
P.plot(hist.history['loss'])
"""
Explanation: The training method returns an object that contains the trained model parameters and the training history:
End of explanation
"""
res=model.evaluate(tst_in,tst_out,batch_size=batch_size,show_accuracy=True,verbose=0)
print 'Loss: {}'.format(res[0])
print 'Accuracy: {:%}'.format(res[1])
"""
Explanation: You can get better graphs and more data if you overload the training callback method, which will provide you with the model parameters after each epoch during training.
After the model is trained, we can easily test it using the evaluate method. The show_accuracy argument is required to compute the accuracy of the decision variable. The returned result is a 2-element list, where the first value is the loss of the model on the test data and the second is the accuracy:
End of explanation
"""
out = model.predict_classes(tst_in,batch_size=256,verbose=0)
confusion=np.zeros((output_dim,output_dim))
for s in range(len(out)):
confusion[out[s],tst_out_dec[s]]+=1
#normalize by class - because some classes occur much more often than others
for c in range(output_dim):
confusion[c,:]/=np.sum(confusion[c,:])
with open('../data/phones.list') as f:
ph=f.read().splitlines()
P.figure(figsize=(15,15))
P.pcolormesh(confusion,cmap=P.cm.gray)
P.xticks(np.arange(0,output_dim)+0.5)
P.yticks(np.arange(0,output_dim)+0.5)
ax=P.axes()
ax.set_xticklabels(ph)
ax.set_yticklabels(ph)
print ''
"""
Explanation: One other way to look at this is to check where the errors occur by looking at what's known as the confusion matrix. The confusion matrix counts the predicted outputs with respect to how they should have been predicted. All the values on the diagonal (where the predicted class is equal to the reference) are correct results. Any values outside of the diagonal are the errors, or confusions of one class with another. For example, you can see that 'g' is confused with 'k' (both have the same place of articulation, but different voicing), 'r' with 'er' (same thing, but the latter is a diphone), 't' with 'ch' (again the same place of articulation, but slightly different pronunciation) and so on...
End of explanation
"""
import pandas as pd
import numpy
from numpy.random import choice
from sklearn.datasets import load_boston
import h2o
h2o.init()
# transfer the boston data from pandas to H2O
boston_data = load_boston()
X = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names)
X["Median_value"] = boston_data.target
X = h2o.H2OFrame(python_obj=X.to_dict("list"))
# select 10% for validation
r = X.runif(seed=123456789)
train = X[r < 0.9,:]
valid = X[r >= 0.9,:]
h2o.export_file(train, "Boston_housing_train.csv", force=True)
h2o.export_file(valid, "Boston_housing_test.csv", force=True)
"""
Explanation: H2O Tutorial
Author: Spencer Aiello
Contact: spencer@h2oai.com
This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce, through a complete example, H2O's capabilities from Python. Also, to help those who are accustomed to Scikit-Learn and Pandas, the demo includes specific call-outs for differences between H2O and those packages; this is intended to help anyone who needs to do machine learning on really big data make the transition. It is not meant to be a tutorial on machine learning or algorithms.
Detailed documentation about H2O and its Python API is available at http://docs.h2o.ai.
Setting up your system for this demo
The following code creates two csv files using data from the Boston Housing dataset which is built into scikit-learn and adds them to the local directory
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Enable inline plotting in the Jupyter Notebook
End of explanation
"""
fr = h2o.import_file("Boston_housing_train.csv")
"""
Explanation: Intro to H2O Data Munging
Read csv data into H2O. This loads the data into H2O's column-compressed, in-memory, key-value store.
End of explanation
"""
fr.head()
"""
Explanation: View the top of the H2O frame.
End of explanation
"""
fr.tail()
"""
Explanation: View the bottom of the H2O Frame
End of explanation
"""
fr["CRIM"].head() # Tab completes
"""
Explanation: Select a column
fr["VAR_NAME"]
End of explanation
"""
columns = ["CRIM", "RM", "RAD"]
fr[columns].head()
"""
Explanation: Select a few columns
End of explanation
"""
fr[2:7,:] # explicitly select all columns with :
"""
Explanation: Select a subset of rows
Unlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection.
End of explanation
"""
# The columns attribute is exactly like Pandas
print "Columns:", fr.columns, "\n"
print "Columns:", fr.names, "\n"
print "Columns:", fr.col_names, "\n"
# There are a number of attributes to get at the shape
print "length:", str( len(fr) ), "\n"
print "shape:", fr.shape, "\n"
print "dim:", fr.dim, "\n"
print "nrow:", fr.nrow, "\n"
print "ncol:", fr.ncol, "\n"
# Use the "types" attribute to list the column types
print "types:", fr.types, "\n"
"""
Explanation: Key attributes:
* columns, names, col_names
* len, shape, dim, nrow, ncol
* types
Note:
Since the data is not in local python memory
there is no "values" attribute. If you want to
pull all of the data into the local python memory
then do so explicitly with h2o.export_file and
reading the data into python memory from disk.
End of explanation
"""
fr.shape
"""
Explanation: Select rows based on value
End of explanation
"""
mask = fr["CRIM"]>1
fr[mask,:].shape
"""
Explanation: Boolean masks can be used to subselect rows based on a criteria.
End of explanation
"""
fr.describe()
"""
Explanation: Get summary statistics of the data and additional data distribution information.
End of explanation
"""
x = fr.names
y="Median_value"
x.remove(y)
"""
Explanation: Set up the predictor and response column names
Using H2O algorithms, it's easier to reference predictor and response columns
by name in a single frame (i.e., don't split up X and y)
End of explanation
"""
model = h2o.random_forest(x=fr[:400,x],y=fr[:400,y],seed=42) # Define and fit first 400 points
model.predict(fr[400:fr.nrow,:]) # Predict the rest
"""
Explanation: Machine Learning With H2O
H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented.
Unlike Scikit-learn, H2O allows for categorical and missing data.
The basic work flow is as follows:
* Fit the training data with a machine learning algorithm
* Predict on the testing data
Simple model
End of explanation
"""
perf = model.model_performance(fr[400:fr.nrow,:])
perf.r2() # get the r2 on the holdout data
perf.mse() # get the mse on the holdout data
perf # display the performance object
"""
Explanation: The performance of the model can be checked using the holdout dataset
End of explanation
"""
r = fr.runif(seed=12345) # build random uniform column over [0,1]
train= fr[r<0.75,:] # perform a 75-25 split
test = fr[r>=0.75,:]
model = h2o.random_forest(x=train[x],y=train[y],seed=42)
perf = model.model_performance(test)
perf.r2()
"""
Explanation: Train-Test Split
Instead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data.
End of explanation
"""
model = h2o.random_forest(x=fr[x],y=fr[y], nfolds=10) # build a 10-fold cross-validated model
scores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute
print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96)
print "Scores:", scores.round(2)
"""
Explanation: There was a massive jump in the R^2 value. This is because the original data is not shuffled.
Cross validation
H2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits).
In conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either:
* AUTO: Perform random assignment
* Random: Each row has an equal (1/nfolds) chance of being in any fold.
* Modulo: Observations are assigned to folds by taking the row number modulo nfolds
End of explanation
"""
from sklearn.cross_validation import cross_val_score
from h2o.cross_validation import H2OKFold
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
"""
Explanation: However, you can still make use of the cross_val_score from Scikit-Learn
Cross validation: H2O and Scikit-Learn
End of explanation
"""
model = H2ORandomForestEstimator(seed=42)
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
scores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv)
print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96)
print "Scores:", scores.round(2)
"""
Explanation: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is analogous to the scikit-learn RandomForestRegressor object, with its own fit method
End of explanation
"""
h2o.__PROGRESS_BAR__=False
h2o.no_progress()
"""
Explanation: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage.
Since the progress bar print out gets annoying let's disable that
End of explanation
"""
from sklearn import __version__
sklearn_version = __version__
print sklearn_version
"""
Explanation: Grid Search
Grid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties)
Randomized grid search: H2O and Scikit-Learn
End of explanation
"""
%%time
from h2o.estimators.random_forest import H2ORandomForestEstimator # Import model
from sklearn.grid_search import RandomizedSearchCV # Import grid search
from scipy.stats import randint, uniform
model = H2ORandomForestEstimator(seed=42) # Define model
params = {"ntrees": randint(20,50),
"max_depth": randint(1,10),
"min_rows": randint(1,10), # scikit's min_samples_leaf
"mtries": randint(2,fr[x].shape[1]),} # Specify parameters to test
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
random_search = RandomizedSearchCV(model, params,
n_iter=30,
scoring=scorer,
cv=custom_cv,
random_state=42,
n_jobs=1) # Define grid search object
random_search.fit(fr[x], fr[y])
print "Best R^2:", random_search.best_score_, "\n"
print "Best params:", random_search.best_params_
"""
Explanation: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions).
The steps to perform a randomized grid search:
1. Import model and RandomizedSearchCV
2. Define model
3. Specify parameters to test
4. Define grid search object
5. Fit data to grid search object
6. Collect scores
All the steps will be repeated from above.
Because 0.16.1 is installed, we use scipy to define specific distributions
ADVANCED TIP:
Turn off reference counting for spawning jobs in parallel (n_jobs=-1, or n_jobs > 1).
We'll turn it back on again in the aftermath of a Parallel job.
If you don't want to run jobs in parallel, don't turn off the reference counting.
Pattern is:
>>> h2o.turn_off_ref_cnts()
>>> .... parallel job ....
>>> h2o.turn_on_ref_cnts()
End of explanation
"""
def report_grid_score_detail(random_search, charts=True):
"""Input fit grid search estimator. Returns df of scores with details"""
df_list = []
for line in random_search.grid_scores_:
results_dict = dict(line.parameters)
results_dict["score"] = line.mean_validation_score
results_dict["std"] = line.cv_validation_scores.std()*1.96
df_list.append(results_dict)
result_df = pd.DataFrame(df_list)
result_df = result_df.sort("score", ascending=False)
if charts:
for col in get_numeric(result_df):
if col not in ["score", "std"]:
plt.scatter(result_df[col], result_df.score)
plt.title(col)
plt.show()
for col in list(result_df.columns[result_df.dtypes == "object"]):
cat_plot = result_df.score.groupby(result_df[col]).mean()
cat_plot.sort()
cat_plot.plot(kind="barh", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2))
plt.show()
return result_df
def get_numeric(X):
"""Return list of numeric dtypes variables"""
return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith(("float", "int", "bool")))].index.tolist()
report_grid_score_detail(random_search).head()
"""
Explanation: We might be tempted to think that we just had a large improvement; however, we must be cautious. The function below creates a more detailed report.
End of explanation
"""
%%time
params = {"ntrees": randint(30,40),
"max_depth": randint(4,10),
"mtries": randint(4,10),}
custom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big
# impact on the std of the resulting scores. More
random_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher
n_iter=10, # variation per sample
scoring=scorer,
cv=custom_cv,
random_state=43,
n_jobs=1)
random_search.fit(fr[x], fr[y])
print "Best R^2:", random_search.best_score_, "\n"
print "Best params:", random_search.best_params_
report_grid_score_detail(random_search)
"""
Explanation: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs:
End of explanation
"""
from h2o.transforms.preprocessing import H2OScaler
from h2o.transforms.decomposition import H2OPCA
"""
Explanation: Transformations
Rule of machine learning: Don't use your testing data to inform your training data. Unfortunately, this happens all the time when preparing a dataset for the final model. But on smaller datasets, you must be especially careful.
At the moment, there are no classes for managing data transformations. On the one hand, this requires the user to tote around some extra state, but on the other, it allows the user to be more explicit about transforming H2OFrames.
Basic steps:
Remove the response variable from transformations.
Import transformer
Define transformer
Fit train data to transformer
Transform test and train data
Re-attach the response variable.
First let's normalize the data using the means and standard deviations of the training data.
Then let's perform a principal component analysis on the training data and select the top 5 components.
Using these components, let's use them to reduce the train and test design matrices.
End of explanation
"""
y_train = train.pop("Median_value")
y_test = test.pop("Median_value")
norm = H2OScaler()
norm.fit(train)
X_train_norm = norm.transform(train)
X_test_norm = norm.transform(test)
print X_test_norm.shape
X_test_norm
"""
Explanation: Normalize Data: Use the means and standard deviations from the training data.
End of explanation
"""
pca = H2OPCA(n_components=5)
pca.fit(X_train_norm)
X_train_norm_pca = pca.transform(X_train_norm)
X_test_norm_pca = pca.transform(X_test_norm)
# prop of variance explained by top 5 components?
print X_test_norm_pca.shape
X_test_norm_pca[:5]
model = H2ORandomForestEstimator(seed=42)
model.fit(X_train_norm_pca,y_train)
y_hat = model.predict(X_test_norm_pca)
h2o_r2_score(y_test,y_hat)
"""
Explanation: Then, we can apply PCA and keep the top 5 components.
End of explanation
"""
from h2o.transforms.preprocessing import H2OScaler
from h2o.transforms.decomposition import H2OPCA
from h2o.estimators.random_forest import H2ORandomForestEstimator
from sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown>
model = H2ORandomForestEstimator(seed=42)
pipe = Pipeline([("standardize", H2OScaler()), # Define pipeline as a series of steps
("pca", H2OPCA(n_components=5)),
("rf", model)]) # Notice the last step is an estimator
pipe.fit(train, y_train) # Fit training data
y_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator)
h2o_r2_score(y_test, y_hat) # Notice the final score is identical to before
"""
Explanation: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers.
Pipelines
"Tranformers unite!"
If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple.
Steps:
Import Pipeline, transformers, and model
Define pipeline. The first and only argument is a list of tuples where the first element of each tuple is a name you give the step and the second element is a defined transformer. The last step is optionally an estimator class (like a RandomForest).
Fit the training data to pipeline
Either transform or predict the testing data
End of explanation
"""
pipe = Pipeline([("standardize", H2OScaler()),
("pca", H2OPCA()),
("rf", H2ORandomForestEstimator(seed=42))])
params = {"standardize__center": [True, False], # Parameters to test
"standardize__scale": [True, False],
"pca__n_components": randint(2, 6),
"rf__ntrees": randint(50,80),
"rf__max_depth": randint(4,10),
"rf__min_rows": randint(5,10), }
# "rf__mtries": randint(1,4),} # gridding over mtries is
# problematic with pca grid over
# n_components above
from sklearn.grid_search import RandomizedSearchCV
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
custom_cv = H2OKFold(fr, n_folds=5, seed=42)
random_search = RandomizedSearchCV(pipe, params,
n_iter=30,
scoring=make_scorer(h2o_r2_score),
cv=custom_cv,
random_state=42,
n_jobs=1)
random_search.fit(fr[x],fr[y])
results = report_grid_score_detail(random_search)
results.head()
"""
Explanation: This is so much easier!!!
But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score.
Combining randomized grid search and pipelines
"Yo dawg, I heard you like models, so I put models in your models to model models."
Steps:
Import Pipeline, grid search, transformers, and estimators <Not shown below>
Define pipeline
Define parameters to test in the form: "(Step name)__(argument name)" A double underscore separates the two words.
Define grid search
Fit to grid search
End of explanation
"""
best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search
h2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline
save_path = h2o.save_model(h2o_model, path=".", force=True)
print save_path
# assumes new session
my_model = h2o.load_model(path=save_path)
my_model.predict(fr)
"""
Explanation: Currently Under Development (drop-in scikit-learn pieces):
* Richer set of transforms (only PCA and Scale are implemented)
* Richer set of estimators (only RandomForest is available)
* Full H2O Grid Search
Other Tips: Model Save/Load
It is useful to save constructed models to disk and reload them between H2O sessions. Here's how:
End of explanation
"""
|
rishuatgithub/MLPy | torch/PYTORCH_NOTEBOOKS/00-Crash-Course-Topics/01-Crash-Course-Pandas/08-Pandas-Exercises-Solutions.ipynb | apache-2.0 | # CODE HERE
import pandas as pd
"""
Explanation: <a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
Pandas Exercises - Solutions
TASK: Import pandas
End of explanation
"""
df = pd.read_csv('bank.csv')
"""
Explanation: TASK: Read in the bank.csv file that is located under the 01-Crash-Course-Pandas folder. Pay close attention to where the .csv file is located! Please don't post to the QA forums if you can't figure this one out; instead, run our solutions notebook directly to see how it's done.
End of explanation
"""
# CODE HERE
df.head()
"""
Explanation: TASK: Display the first 5 rows of the data set
End of explanation
"""
# CODE HERE
df['age'].mean()
"""
Explanation: TASK: What is the average (mean) age of the people in the dataset?
End of explanation
"""
# CODE HERE
df['age'].idxmin()
df.iloc[503]['marital']
"""
Explanation: TASK: What is the marital status of the youngest person in the dataset?
HINT
End of explanation
"""
# CODE HERE
df['job'].nunique()
"""
Explanation: TASK: How many unique job categories are there?
End of explanation
"""
# CODE HERE
df['job'].value_counts()
"""
Explanation: TASK: How many people are there per job category? (Take a peek at the expected output)
End of explanation
"""
#CODE HERE
# Many, many ways to do this one! Here is just one way:
100*df['marital'].value_counts()['married']/len(df)
# df['marital'].value_counts()
"""
Explanation: TASK: What percent of people in the dataset were married?
End of explanation
"""
df['default code'] = df['default'].map({'no':0,'yes':1})
df.head()
"""
Explanation: TASK: There is a column labeled "default". Use pandas' .map() method to create a new column called "default code" which contains a 0 if there was no default, or a 1 if there was a default. Then show the head of the dataframe with this new column.
Helpful Hint Link One
Helpful Hint Link Two
End of explanation
"""
# CODE HERE
df['marital code'] = df['marital'].apply(lambda status: status[0])
df.head()
"""
Explanation: TASK: Using pandas' .apply() method, create a new column called "marital code". This column will contain only a shortened code: the first letter of the marital status. (For example, "m" for "married", "s" for "single", etc.) See if you can do this with a lambda expression. Lots of ways to do this one!
Hint Link
End of explanation
"""
# CODE HERE
df['duration'].max()
"""
Explanation: TASK: What was the longest lasting duration?
End of explanation
"""
# CODE HERE
df[df['job']=='unemployed']['education'].value_counts()
"""
Explanation: TASK: What is the most common education level for people who are unemployed?
End of explanation
"""
# CODE HERE
df[df['job']=='unemployed']['age'].mean()
"""
Explanation: TASK: What is the average (mean) age for being unemployed?
End of explanation
"""
|
meppe/tensorflow-deepq | notebooks/karpathy_game.ipynb | mit | g.plot_reward(smoothing=100)
"""
Explanation: Average Reward over time
End of explanation
"""
g.__class__ = KarpathyGame
np.set_printoptions(formatter={'float': (lambda x: '%.2f' % (x,))})
x = g.observe()
new_shape = (x[:-2].shape[0]//g.eye_observation_size, g.eye_observation_size)
print(x[:-2].reshape(new_shape))
print(x[-2:])
g.to_html()
"""
Explanation: Visualizing what the agent is seeing
Starting with the ray pointing all the way right, we have one row per ray in clockwise order.
The numbers for each ray are the following:
- first three numbers are normalized distances to the closest visible (intersecting with the ray) object. If no object is visible then all of them are $1$. If there's many objects in sight, then only the closest one is visible. The numbers represent distance to friend, enemy and wall in order.
- the last two numbers represent the speed of moving object (x and y components). Speed of wall is ... zero.
Finally the last two numbers in the representation correspond to speed of the hero.
End of explanation
"""
|
janpipek/physt | doc/adaptive_histogram.ipynb | mit | # Necessary import evil
import physt
from physt import h1, h2, histogramdd
import numpy as np
import matplotlib.pyplot as plt
# Create an empty histogram
h = h1(None, "fixed_width", bin_width=10, name="People height", axis_name="cm", adaptive=True)
h
"""
Explanation: Adaptive histogram
This type of histogram automatically adapts its bins when new values are added. Note that only the fixed-width continuous binning scheme is currently supported.
End of explanation
"""
# Add a first value
h.fill(157)
h.plot()
h
# Add a second value
h.fill(173)
h.plot()
# Add a few more values, including weights
h.fill(173, 2)
h.fill(186, 5)
h.fill(188, 3)
h.fill(193, 1)
h.plot(errors=True, show_stats=True);
"""
Explanation: Adding single values
End of explanation
"""
ha = h1(None, "fixed_width", bin_width=10, adaptive=True)
ha.plot(show_stats=True);
# Beginning
ha.fill_n([10, 11, 34])
ha.plot();
# Add a distant value
ha.fill_n([234], weights=[10])
ha.plot(show_stats=True);
# Let's create a huge dataset
values = np.random.normal(130, 20, 100000)
%%time
# Add lots of values (no loop in Python)
hn = h1(None, "fixed_width", bin_width=10, adaptive=True)
hn.fill_n(values)
# ha.plot()
%%time
# Comparison with Python loop
hp = h1(None, "fixed_width", bin_width=10, adaptive=True)
for value in values:
hp.fill(value)
# Hopefully equal results
print("Equal?", hp == hn)
hp.plot(show_stats=True);
"""
Explanation: Adding multiple values at once
End of explanation
"""
ha1 = h1(None, "fixed_width", bin_width=5, adaptive=True)
ha1.fill_n(np.random.normal(100, 10, 1000))
ha2 = h1(None, "fixed_width", bin_width=5, adaptive=True)
ha2.fill_n(np.random.normal(70, 10, 500))
ha = ha1 + ha2
fig, ax= plt.subplots()
ha1.plot(alpha=0.1, ax=ax, label="1", color="red")
ha2.plot(alpha=0.1, ax=ax, label="2")
ha.plot("scatter", label="sum", ax=ax, errors=True)
ax.legend(loc=2); # TODO? Why don't we show the sum???
"""
Explanation: Adding two adaptive histograms together
End of explanation
"""
|
donaghhorgan/COMP9033 | labs/03 - Finding outliers.ipynb | gpl-3.0 | %matplotlib inline
import pandas as pd
"""
Explanation: Lab 03: Finding outliers
Introduction
This week's lab is focused on outlier detection and data cleaning. At the end of the lab, you should be able to use pandas to:
Create histograms and boxplots to help find outliers visually.
Remove data from a data frame.
Replace data in a data frame.
Getting started
Let's start by making sure that plots are displayed inline by issuing the magic command %matplotlib inline and importing pandas in the usual way.
End of explanation
"""
path_to_csv = "data/iris.csv"
"""
Explanation: Next, let's load the data. Write the path to your iris.csv file (i.e. the one from Lab 02) in the cell below:
End of explanation
"""
df = pd.read_csv(path_to_csv, index_col=['species', 'sample_number'])
df.head()
"""
Explanation: Execute the cell below to load the data into a pandas data frame and index that data frame by the species and sample_number columns:
End of explanation
"""
df.plot(kind='hist');
"""
Explanation: Finding outliers
Histograms
Last week, we looked at how pandas can be used to plot histograms for columns in our data frame. For instance, to create a histogram for each column, we can write:
End of explanation
"""
versicolor = df.loc['versicolor']
versicolor.plot(kind='hist');
"""
Explanation: We also saw how data frame indexing can be used to limit our view of the data to just one species of Iris. For instance, to plot a histogram for each column in our data frame, but only for the rows corresponding to Iris versicolor, we can write:
End of explanation
"""
versicolor.plot(kind='hist', subplots=True, layout=(2,2), figsize=(12,6));
"""
Explanation: Plotting multiple histograms on one chart can be a little cluttered though. We also saw how we could create individual charts for each column by passing subplots=True when we call the plot method, like this:
End of explanation
"""
versicolor.plot(kind='hist', subplots=True, layout=(2,2), figsize=(12,6), bins=30);
"""
Explanation: This is much more useful, but the histograms look a bit chunky because the default number of bins is set to ten. We can change this easily though, by passing the optional bins argument to the plot method, like in the cell below.
Note: By default, bins=10 unless other specified. Increasing the number of bins results in a "higher resolution" histogram, but comes at the cost of additional visual complexity. The trade off here is important. If we set the number of bins to be a very large number, the histogram will become much more detailed, but also more difficult to understand and interpret. On the other hand, if the number of bins is too small, then the bin widths will be very wide (i.e. the histogram will look "chunky") and important details may be hidden.
Choosing the right number of bins depends on your data and how much detail you're looking for, so it can change from situation to situation. As a general rule, you should stick with the default setting initially, and only increase or decrease this if you feel that it is necessary.
End of explanation
"""
versicolor.boxplot();
"""
Explanation: Increasing the number of bins gives us a more detailed view of how the data is behaving, which can often make it easier to detect outliers visually. In this instance, however, it seems that all of the data is reasonably well behaved - there are no obvious extreme values.
Boxplots
Boxplots offer an alternative method for visually detecting outliers. In pandas, boxplots aren't supported through the standard plot method, but instead through a separate boxplot method. However, apart from this, they operate in more or less the same way, like in the cell below.
Note: Depending on the version of pandas you are running, calling the boxplot method may generate a warning about the return_type argument not being set. This is just a warning to users that this functionality may change in a future release, and can safely be ignored as the behaviour in either case will not affect the result of the plotting call for our purposes.
End of explanation
"""
# Here, q1 = first quartile, q3 = third quartile, iqr = interquartile range, lw = lower whisker, uw = upper whisker
q1 = versicolor.quantile(0.25)
q3 = versicolor.quantile(0.75)
iqr = q3 - q1
lw = q1 - 1.5 * iqr
uw = q3 + 1.5 * iqr
# Outliers are below the lower whisker OR above the upper whisker
outliers = (versicolor < lw) | (versicolor > uw)
# Print the last few rows of "outliers"
outliers.tail()
"""
Explanation: As you can see, pandas creates a boxplot for each column in our data frame and places all four boxplots in the same chart, so that we can compare the distributions of the data in the columns side by side.
Inspecting the boxplots, it becomes clear that (at least according to the logic of the boxplot test) there are some outlying observations in our petal length data. In this instance, the outlier is not far from the lower whisker of the box plot (i.e. it's not a very extreme value), and so we may not want to go to the effort of dealing with it because it may not affect the outcome of any further analysis very severely. However, let's consider that it is an undesirable observation and we want to deal with it in some fashion.
Removing and replacing data
As we discussed in this week's lecture, we have three options for dealing with outliers:
Remove them.
Replace them with a "reasonable" value.
Adjust how we model the data.
In this instance, we don't have a particular modelling technique in mind, so adjusting how we model the data isn't really an option. However, we can choose to either remove the data or replace it with some value that would be considered reasonable.
Removing data
In order to remove an observation, we must first identify its indices in the data frame. We can do this by manually computing the whisker values and using them to identify the locations of the outliers:
Note: Typically, the lower whisker in a boxplot is set to be $1.5 \times \text{IQR}$ below the bottom edge of the box, while the upper whisker is set to be $1.5 \times \text{IQR}$ above the top edge of the box, where $\text{IQR}$ is the interquartile range, i.e. the distance betwen the top and bottom edges of the box.
End of explanation
"""
versicolor[~outliers].tail()
"""
Explanation: As you can see, the outlier occurs in the 49th row of the data frame.
To remove the row containing the outlying value, we first compute a copy of the data frame without the outlying value. To do this, we can just select all the entries not contained (~) in the outliers variable we computed earlier, like this:
End of explanation
"""
removed = versicolor[~outliers].dropna()
removed.tail() # Just show the last five rows
"""
Explanation: Next, we call the dropna method on the dataframe to remove all the rows containing outlying values:
End of explanation
"""
versicolor.median()
"""
Explanation: As you can see, the 49th row (where the outlier was) has now been removed.
Replacing data
If we have multiple rows and columns of data, then removing one point means we must remove the entire row or the entire column it belongs to. This is often inconvenient because we end up removing several more data points than just the one we intended to, and so our sample becomes smaller.
One alternative to removing a data point is to replace it with a suitable substitute value. Determining an appropriate substitution can be subjective, but two commonly used choices are the mean and the median. Let's replace the outlying point in our original data frame (i.e. df) with the median value of the sample it belongs to. To do this, we must first compute the median value of the sample, which we can do using the median method of the data frame, just like in Lab 02:
End of explanation
"""
replaced = versicolor[~outliers].fillna(versicolor.median())
replaced.tail() # Just show the last five rows
"""
Explanation: To set the new value, we first compute a copy of the data frame without the outlying value, just like earlier. Then, we can call the fillna method to fill any missing column values with the median values of those columns, like this:
End of explanation
"""
|
csieber/alpha-dataset | notebooks/segments.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pylab as plt
dfsegs = pd.read_csv("../data/videos/CRZbG73SX3s_segments.csv")
"""
Explanation: Video Segments
The following example shows how to read the video segments files:
End of explanation
"""
segment_duration = 5
"""
Explanation: The duration of the segments has to be defined manually. It is 5s for all provided video segment files:
End of explanation
"""
fig = plt.figure(figsize=(14, 9))
ax = fig.add_subplot(111)
cmap = plt.get_cmap('copper')
colors = iter(cmap(np.linspace(0,1,3)))
labels = ['low', 'medium', 'high']
ql_cols = reversed(["quality_%d" % i for i in range(1,6)])
ql_labels = ['720p', '480p', '360p', '240p', '144p']
for ql, ql_label in zip(ql_cols, ql_labels):
dfql_segs = dfsegs.loc[:,ql] / 1024 / 1024
dfql_segs = dfql_segs.repeat(segment_duration)
dfql_segs = dfql_segs.reset_index(drop=True)
ax.plot(dfql_segs.index, dfql_segs, label=ql_label)
ax.grid()
ax.legend()
ax.set_xlabel("Time (s)")
ax.set_ylabel("Bitrate (Mbps)")
ax.set_xlim([0, 550])
"""
Explanation: Plot the video bit-rate for all quality levels based on the segment sizes:
End of explanation
"""
|
badlands-model/BayesLands | Examples/mountain/mountain.ipynb | gpl-3.0 | from pyBadlands.model import Model as badlandsModel
# Initialise model
model = badlandsModel()
# Define the XmL input file
model.load_xml('test','mountain.xml')
"""
Explanation: Orogenic landscapes modelling
In this example, we simulate landscape evolution in response to two simple climatic scenarios:
+ uniform and
+ orographic precipitation.
<div align="center">
<img src="images/oro_rain.jpg" alt="orographic precipitation" width="450" height="200"/>
</div>
We investigate the drainage network dynamics and the steady-state fluvial patterns that emerge from an application of these climatic forcing mechanisms.
The first part of the scenario starts from a flat topography subjected to a constant and uniform rate of tectonic rock uplift (>1 mm/a) and precipitation (1 m/a). The domain is rectangular and the four edges are kept at a constant base-level elevation. The area is a 40x80 km domain.
After 8 Ma, the second scenario is applied and consists of a linearly varying rainfall pattern corresponding to an orographic precipitation with the same uniform tectonic uplift rate. The Northern part of the domain experiences a 2 m/a precipitation rate and the Southern part is subject to a 0.1 m/a precipitation rate for the next 12 Ma.
Initial settings
For this model, we use the stream power law sediment transport model which scale the incision rate $E$ as a power function of surface water discharge $A$ and slope $S=\nabla z$:
$$ E = \kappa A^m (\nabla z)^n$$
where $\kappa$ is the erodibility coefficient dependent on lithology and mean precipitation rate, channel width, flood frequency, channel hydraulics.
The values given to these parameters ($\kappa$, $m$, $n$) need to be set in the XmL input file.
For this particular setting we do not need to record any deposition as the model is purely erosive. To speed up the model we turn off the deposition computation in Badlands by setting the dep element to 0 in the input file.
Starting pyBadlands
First we initialise the model and set the path to the XmL input file.
You can edit the XmL configuration file at /edit/volume/test/mountain/mountain.xml.
To view the complete XmL options you can follow this link to github page: complete.xml.
End of explanation
"""
import time

start = time.time()
model.run_to_time(10000000)
print 'time', time.time() - start
"""
Explanation: Running pyBadlands
We can run the model for a given period. The end time in the XmL input file is set to 50M years but you might want to run the model for a couple of iterations and check the output before running the model for the entire simulation time. This is done by putting the time in the run_to_time function.
Here we go for the full time directly... it should take less than 5 minutes on a single processor if you keep the initial setting unchanged.
End of explanation
"""
|
scotthuang1989/Python-3-Module-of-the-Week | concurrency/asyncio/Producing Results Asynchronously.ipynb | apache-2.0 | # %load asyncio_future_event_loop.py
import asyncio
def mark_done(future, result):
print('setting future result to {!r}'.format(result))
future.set_result(result)
event_loop = asyncio.get_event_loop()
try:
all_done = asyncio.Future()
print('scheduling mark_done')
event_loop.call_soon(mark_done, all_done, 'the result')
print('entering event loop')
result = event_loop.run_until_complete(all_done)
print('returned result: {!r}'.format(result))
finally:
print('closing event loop')
event_loop.close()
print('future result: {!r}'.format(all_done.result()))
!python asyncio_future_event_loop.py
"""
Explanation: A Future represents the result of work that has not been completed yet. The event loop can watch for a Future object’s state to indicate that it is done, allowing one part of an application to wait for another part to finish some work.
Waiting for a Future
A Future acts like a coroutine, so any techniques useful for waiting for a coroutine can also be used to wait for the future to be marked done. This example passes the future to the event loop’s run_until_complete() method.
End of explanation
"""
# %load asyncio_future_await.py
import asyncio
def mark_done(future, result):
print('setting future result to {!r}'.format(result))
future.set_result(result)
async def main(loop):
all_done = asyncio.Future()
print('scheduling mark_done')
loop.call_soon(mark_done, all_done, 'the result')
result = await all_done
print('returned result: {!r}'.format(result))
event_loop = asyncio.get_event_loop()
try:
event_loop.run_until_complete(main(event_loop))
finally:
event_loop.close()
!python asyncio_future_await.py
"""
Explanation: The state of the Future changes to done when set_result() is called, and the Future instance retains the result given to the method for retrieval later.
A Future can also be used with the await keyword, as in this example.
End of explanation
"""
# %load asyncio_future_callback.py
import asyncio
import functools
def callback(future, n):
print('{}: future done: {}'.format(n, future.result()))
async def register_callbacks(all_done):
print('registering callbacks on future')
all_done.add_done_callback(functools.partial(callback, n=1))
all_done.add_done_callback(functools.partial(callback, n=2))
async def main(all_done):
await register_callbacks(all_done)
print('setting result of future')
all_done.set_result('the result')
event_loop = asyncio.get_event_loop()
try:
all_done = asyncio.Future()
event_loop.run_until_complete(main(all_done))
finally:
event_loop.close()
"""
Explanation: Future Callbacks
In addition to working like a coroutine, a Future can invoke callbacks when it is completed. Callbacks are invoked in the order they are registered.
End of explanation
"""
!python asyncio_future_callback.py
"""
Explanation: The callback should expect one argument, the Future instance. To pass additional arguments to the callbacks, use functools.partial() to create a wrapper.
End of explanation
"""
|
pligor/predicting-future-product-prices | 02_preprocessing/.ipynb_checkpoints/exploration09-price_history_gaussian_process_regressor_clustered_data-checkpoint.ipynb | agpl-3.0 | from __future__ import division
import numpy as np
import pandas as pd
import sys
import math
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
import re
import os
import csv
from helpers.outliers import MyOutliers
from skroutz_mobile import SkroutzMobile
from sklearn.ensemble import IsolationForest
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, r2_score
from skroutz_mobile import SkroutzMobile
from sklearn.model_selection import StratifiedShuffleSplit
from helpers.my_train_test_split import MySplitTrainTest
from sklearn.preprocessing import StandardScaler
from preprocess_price_history import PreprocessPriceHistory
from price_history import PriceHistory
from dfa import dfa
import scipy.signal as ss
from scipy.spatial.distance import euclidean
from fastdtw import fastdtw
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler
import random
from sklearn.metrics import silhouette_score
from os.path import isfile
from preprocess_price_history import PreprocessPriceHistory
from os.path import isfile
from sklearn.gaussian_process import GaussianProcessRegressor
from mobattrs_price_history_merger import MobAttrsPriceHistoryMerger
#from george import kernels
#import george
from sklearn.manifold import TSNE
import matplotlib as mpl
import pickle
import dill
random_state = np.random.RandomState(seed=16011984)
%matplotlib inline
mpl.rc('figure', figsize=(17,7)) #setting the default value of figsize for our plots
#https://matplotlib.org/users/customizing.html
data_path = '../../../../Dropbox/data'
mobattrs_ph_path = data_path + '/mobattrs_price_history'
mobattrs_ph_norm_path = mobattrs_ph_path + '/mobattrs_ph_norm.npy'
sku_ids_groups_path = data_path + '/sku_ids_groups'
npz_sku_ids_group_kmeans_six = sku_ids_groups_path + '/sku_ids_kmeans_six.npz'
"""
Explanation: http://nbviewer.jupyter.org/github/alexminnaar/time-series-classification-and-clustering/blob/master/Time%20Series%20Classification%20and%20Clustering.ipynb
End of explanation
"""
csv_in = "../price_history_03_seq_start_suddens_trimmed.csv"
#csv_out = "../price_history_for_sfa.csv"
#df_fixed_width.to_csv(csv_path, encoding='utf-8', quoting=csv.QUOTE_ALL)
ph = PriceHistory(csv_in)
seq = ph.extractSequenceByLocation(0)
print type(seq)
seq.shape, seq.name
"""
Explanation: Some processing
End of explanation
"""
sku_id_groups = np.load(npz_sku_ids_group_kmeans_six)
for key, val in sku_id_groups.iteritems():
print key, ",", val.shape
chosen_cluster = '3' #str because this is how values are stored as keys in npz files
mobiles_path = data_path + '/mobiles'
mobs_norm_path = mobiles_path + '/mobiles_norm.csv'
assert isfile(mobs_norm_path)
df = pd.read_csv(mobs_norm_path, index_col=0, encoding='utf-8', quoting=csv.QUOTE_ALL)
df.shape
np.all(np.logical_not(np.isnan(df.values.flatten())))
cluster_sku_ids = set(sku_id_groups[chosen_cluster]).intersection(df.index)
len(cluster_sku_ids)
df_cluster = df.loc[cluster_sku_ids]
df_cluster.shape
"""
Explanation: Loading data
End of explanation
"""
obj = MobAttrsPriceHistoryMerger(mobs_norm_path=mobs_norm_path, price_history_csv=csv_in)
%%time
dataframe = obj.get_table(df = df_cluster.drop(labels=SkroutzMobile.PRICE_COLS, axis=1), normalize_dates=True,
normalize_price=True)
dataframe
arr = dataframe.values
arr.shape
#mobattrs_ph_raw_path = mobattrs_ph_path + '/mobattrs_ph_raw.npy'
#np.save(mobattrs_ph_raw_path, arr)
#assert isfile(mobattrs_ph_raw_path)
np.all(np.logical_not(np.isnan(arr.flatten())))
# we are not saving
# np.save(mobattrs_ph_norm_path, arr_norm)
# assert isfile(mobattrs_ph_norm_path)
"""
Explanation: merging
End of explanation
"""
XX = arr[:, :MobAttrsPriceHistoryMerger.PRICE_IND]
XX.shape
yy = arr[:, MobAttrsPriceHistoryMerger.PRICE_IND]
yy.shape
%%time
gp = GaussianProcessRegressor()
gp.fit(XX, yy)
"""
Explanation: Gaussian Process Regressor
End of explanation
"""
# with open('cur_gp.pickle', 'w') as fp: # Python 3: open(..., 'wb')
# pickle.dump(gp, fp)
cur_sku_id = list(cluster_sku_ids)[0]
cur_sku_id
vals = dataframe.loc[cur_sku_id].values
xx = vals[:, :-1]
xx.shape
tars = vals[:, -1]
tars.shape
preds = gp.predict(xx)
preds.shape
plt.figure()
plt.plot(tars, 'r-', label='targets')
plt.plot(preds, 'b.', label='predictions')
plt.legend()
plt.show()
"""
Explanation: Reconstruct time series
We want to get the price values that correspond to a particular SKU, but we did not preserve that information in the merged array, so we recover it from the original price history.
End of explanation
"""
ph = PriceHistory(csv_in)
seqs = ph.extractAllSequences()
selseq = [seq for seq in seqs if seq.name == cur_sku_id]
assert len(selseq) == 1
selseq = selseq[0]
selseq.index
"""
Explanation: Train - Test
End of explanation
"""
|
google/trax | trax/examples/Knowledge_Tracing_Transformer.ipynb | apache-2.0 | # Choose a location for your storage bucket and BigQuery dataset to minimize data egress charges. Once you have
# created them, if you restart your notebook you can run this to see where your colab is running
# and factory reset until you get a location that is near your data.
!curl ipinfo.io
"""
Explanation: Intro
This notebook trains a transformer model on the EdNet dataset using the google/trax library. The EdNet dataset is a large set of student responses to multiple choice questions related to English language learning. A recent Kaggle competition, Riiid! Answer Correctness Prediction, provided a subset of this data, consisting of 100 million responses to 13 thousand questions from 300 thousand students.
The state of the art result, detailed in SAINT+: Integrating Temporal Features for EdNet Correctness Prediction, achieves an AUC ROC of 0.7914. The winning solution in the Riiid! Answer Correctness Prediction competition achieved an AUC ROC of 0.820. This notebook achieves an AUC ROC of 0.776 by implementing an approach similar to the state-of-the-art one, training for 25,000 steps. It demonstrates several techniques that may be useful to those getting started with the google/trax library or deep learning in general. This notebook demonstrates how to:
Use BigQuery to perform feature engineering
Create TFRecords with multiple sequences per record
Modify the trax Transformer model to accommodate a knowledge tracing dataset:
Utilize multiple encoder and decoder embeddings - aggregated either by concatenation or sum
Include a custom metric - AUC ROC
Utilize a combined padding and future mask
Use trax's gin-config integration to specify training parameters
Display training progress using trax's tensorboard integration
End of explanation
"""
# <hide-output>
!git clone https://github.com/google/trax.git
!pip install ./trax
!pip install -U pyarrow
!pip install -U google-cloud-bigquery google-cloud-bigquery-storage
from functools import partial
import json
import math
import os
from pathlib import Path
import subprocess
import sys
import time
import gin
from google.cloud import storage, bigquery
from google.cloud.bigquery import LoadJobConfig, QueryJobConfig, \
SchemaField, SourceFormat
import jax
from jax.config import config
import pandas as pd
import numpy as np
import requests
import sqlite3
import trax
from trax import fastmath
from trax import layers as tl
from trax.fastmath import numpy as tnp
import tensorflow as tf
from tqdm.notebook import tqdm
import zipfile
# Create google credentials and store in drive
# https://colab.research.google.com/drive/1LWhrqE2zLXqz30T0a0JqXnDPKweqd8ET
#
# Create a config.json file with variables for:
# "BUCKET": "",
# "BQ_DATASET": "",
# "KAGGLE_USERNAME": "",
# "KAGGLE_KEY": "",
# "PROJECT": "",
# "LOCATION": ""
from google.colab import drive
DRIVE = Path('/content/drive/My Drive')
PATH = 'riiid-transformer'
if not DRIVE.exists():
drive.mount(str(DRIVE.parent))
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = str(DRIVE/PATH/'google.json')
with open(str(DRIVE/PATH/'config.json')) as f:
CONFIG = json.load(f)
os.environ = {**os.environ, **CONFIG}
from kaggle.api.kaggle_api_extended import KaggleApi
kaggle_api = KaggleApi()
kaggle_api.authenticate()
AUTO = tf.data.experimental.AUTOTUNE
BUCKET = os.getenv('BUCKET', 'riiid-transformer')
BQ_DATASET = os.getenv('BQ_DATASET', 'my_data')
LOCATION = os.getenv('LOCATION', 'us-central1')
PROJECT = os.getenv('PROJECT', 'fastai-caleb')
bucket = storage.Client(project=PROJECT).get_bucket(BUCKET)
dataset = bigquery.Dataset(f'{PROJECT}.{BQ_DATASET}')
bq_client = bigquery.Client(project=PROJECT, location=LOCATION)
%matplotlib inline
from matplotlib import pyplot as plt
%load_ext tensorboard
gin.enter_interactive_mode()
"""
Explanation: Imports
End of explanation
"""
USE_TPU = False
DOWNLOAD_DATASET = False
LOAD_DATA_TO_BQ = False
PERFORM_FEATURE_ENGINEERING = False
TEST_FEATURE_ENGNEERING = False
CREATE_TFRECORDS = False
TEST_TFRECORDS = False
TRAIN_MODEL = False
"""
Explanation: Control Panel
Set these variables to True to run the code in the corresponding sections, or to False to skip sections that have already been run once.
End of explanation
"""
if USE_TPU:
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver_nightly'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
"""
Explanation: Initialize TPU
End of explanation
"""
if DOWNLOAD_DATASET:
kaggle_api.competition_download_cli('riiid-test-answer-prediction')
with zipfile.ZipFile('riiid-test-answer-prediction.zip', 'r') as zip_ref:
zip_ref.extractall()
for f in ['train.csv', 'questions.csv', 'lectures.csv']:
bucket.blob(f).upload_from_filename(f)
if False:
for f in tqdm(['train.csv', 'questions.csv', 'lectures.csv']):
bucket.blob(f).download_to_filename(f)
"""
Explanation: Download Dataset
End of explanation
"""
if False:
delete_contents=False
bq_client.delete_dataset(BQ_DATASET, delete_contents=delete_contents)
print(f'Dataset {dataset.dataset_id} deleted from project {dataset.project}.')
try:
dataset = bq_client.get_dataset(dataset.dataset_id)
print(f'Dataset {dataset.dataset_id} already exists '
f'in location {dataset.location} in project {dataset.project}.')
except:
dataset = bq_client.create_dataset(dataset)
print(f'Dataset {dataset.dataset_id} created '
f'in location {dataset.location} in project {dataset.project}.')
"""
Explanation: Create BigQuery Dataset
End of explanation
"""
dtypes_orig = {
'lectures': {
'lecture_id': 'uint16',
'tag': 'uint8',
'part': 'uint8',
'type_of': 'str',
},
'questions': {
'question_id': 'uint16',
'bundle_id': 'uint16',
'correct_answer': 'uint8',
'part': 'uint8',
'tags': 'str',
},
'train': {
'row_id': 'int64',
'timestamp': 'int64',
'user_id': 'int32',
'content_id': 'int16',
'content_type_id': 'int8',
'task_container_id': 'int16',
'user_answer': 'int8',
'answered_correctly': 'int8',
'prior_question_elapsed_time': 'float32',
'prior_question_had_explanation': 'bool'
}
}
dtypes_new = {
'lectures': {},
'questions': {
'tags_array': 'str'
},
'train': {
'task_container_id_q': 'int16',
'pqet_current': 'int32',
'ts_delta': 'int32'
}
}
dtypes = {}
for table_id in dtypes_orig:
dtypes[table_id] = {
**dtypes_orig[table_id],
**dtypes_new[table_id]
}
"""
Explanation: Dtypes
End of explanation
"""
# <hide-input>
type_map = {
'int64': 'INT64',
'int32': 'INT64',
'int16': 'INT64',
'int8': 'INT64',
'uint8': 'INT64',
'uint16': 'INT64',
'str': 'STRING',
'bool': 'BOOL',
'float32': 'FLOAT64'
}
schemas_orig = {table: [SchemaField(f, type_map[t]) for f, t in
fields.items()] for table, fields in dtypes_orig.items()}
schemas = {}
for table_id, fields in dtypes_new.items():
new_fields = [SchemaField(f, type_map[t]) for
f, t in fields.items() if 'array' not in f]
new_array_feilds = [SchemaField(f, 'INT64', 'REPEATED') for
f, t in fields.items() if 'array' in f]
new_fields += new_array_feilds
schemas[table_id] = schemas_orig[table_id] + new_fields
"""
Explanation: Big Query Table Schemas
End of explanation
"""
def load_job_cb(future):
"""Prints update upon completion to output of last run cell."""
seconds = (future.ended - future.created).total_seconds()
print(f'Loaded {future.output_rows:,d} rows to table {future.job_id.split("_")[0]} in '
f'{seconds:>4,.1f} sec, {int(future.output_rows / seconds):,d} per sec.')
def load_csv_from_uri(table_id, schemas_orig):
full_table_id = f'{BQ_DATASET}.{table_id}'
job_config = LoadJobConfig(
schema=schemas_orig[table_id],
source_format=SourceFormat.CSV,
skip_leading_rows=1
)
uri = f'gs://{BUCKET}/{table_id}.csv'
load_job = bq_client.load_table_from_uri(uri, full_table_id,
job_config=job_config,
job_id_prefix=f'{table_id}_')
print(f'job {load_job.job_id} started')
load_job.add_done_callback(load_job_cb)
return load_job
if LOAD_DATA_TO_BQ:
for table_id in dtypes_orig:
lj = load_csv_from_uri(table_id, schemas_orig).result()
"""
Explanation: Load Tables
End of explanation
"""
if PERFORM_FEATURE_ENGINEERING:
for table_id, schema in schemas.items():
table = bq_client.get_table(f'{BQ_DATASET}.{table_id}')
table.schema = schema
table = bq_client.update_table(table, ['schema'])
"""
Explanation: Update BiqQuery Schemas
Before performing feature engineering, we have to update the table schemas in Big Query to create columns for the new features.
End of explanation
"""
def done_cb(future):
seconds = (future.ended - future.started).total_seconds()
print(f'Job {future.job_id} finished in {seconds} seconds.')
def run_query(query, job_id_prefix=None, wait=True,
use_query_cache=True):
job_config = QueryJobConfig(
use_query_cache=use_query_cache)
query_job = bq_client.query(query, job_id_prefix=job_id_prefix,
job_config=job_config)
print(f'Job {query_job.job_id} started.')
query_job.add_done_callback(done_cb)
if wait:
query_job.result()
return query_job
def get_df_query_bqs(query, dtypes=None, fillna=None):
qj = bq_client.query(query)
df = qj.to_dataframe(create_bqstorage_client=True, progress_bar_type='tqdm_notebook')
if fillna is not None:
df = df.fillna(fillna)
try:
df = df.astype({c: dtypes.get(c, 'int32') for c in df.columns})
except:
print('dtypes not applied.')
finally:
return df
"""
Explanation: Feature Engineering
Using BigQuery for a dataset of 100 million rows is much faster than using local dataframes. In addition, you get to use the full power of SQL, including window functions, which are especially useful for time series feature engineering.
Feature engineering for this problem is fairly minimal and includes:
* Replacing null values for prior_question_elapsed_time and prior_question_had_explanation in the train table
* Replacing one missing tag value in the questions table
* Recalculating the task_container_id as task_container_id_q so that it excludes lecture records and increases monotonically with timestamp, so that the calculations for elapsed time and time delta, which depend on values from the immediately prior and immediately succeeding records, are computed correctly.
* Calculating pqet_current, the time it took on average to answer the questions in the current task_container_id_q.
* Calculating ts_delta, the elapsed time between the last task_container_id_q and the current one.
* Creating folds table, in which users are assigned to one of 20 folds.
* Creating a tags_array field in the questions table that holds an array of six elements populated with the tags assigned to each question, padded with zeros if there are fewer than six.
End of explanation
"""
def update_missing_values(table_id='train', column_id=None, value=None):
return f"""
UPDATE {BQ_DATASET}.{table_id}
SET {column_id} = {value}
WHERE {column_id} is NULL;
""", sys._getframe().f_code.co_name + '_'
if PERFORM_FEATURE_ENGINEERING:
qj = run_query(*update_missing_values('train', 'prior_question_elapsed_time', '0'))
qj = run_query(*update_missing_values('train', 'prior_question_had_explanation', 'false'))
qj = run_query(*update_missing_values('questions', 'tags', '"188"'))
"""
Explanation: Replace Missing Values
End of explanation
"""
def update_task_container_id(table_id='train',
column_id='task_container_id',
excl_lectures=True):
excl_lec = 'WHERE content_type_id = 0' if excl_lectures else ''
return f"""
UPDATE {BQ_DATASET}.{table_id} t
SET {column_id} = target.calc
FROM (
SELECT row_id, DENSE_RANK()
OVER (
PARTITION BY user_id
ORDER BY timestamp
) calc
FROM {BQ_DATASET}.{table_id}
{excl_lec}
) target
WHERE target.row_id = t.row_id
""", sys._getframe().f_code.co_name + '_'
if PERFORM_FEATURE_ENGINEERING:
q = update_task_container_id(table_id='train',
column_id='task_container_id_q ',
excl_lectures=True)
qj = run_query(*q)
"""
Explanation: Recalculate Task Container Ids for Questions Only
End of explanation
"""
def update_pqet_current(table_id='train'):
return f"""
UPDATE {BQ_DATASET}.{table_id} t
SET t.pqet_current = CAST(p.pqet_current AS INT64)
FROM (
SELECT
row_id, LAST_VALUE(prior_question_elapsed_time) OVER (
PARTITION BY user_id ORDER BY task_container_id_q
RANGE BETWEEN 1 FOLLOWING AND 1 FOLLOWING) pqet_current
FROM {BQ_DATASET}.train
WHERE content_type_id = 0
) p
WHERE t.row_id = p.row_id;
UPDATE {BQ_DATASET}.{table_id}
SET pqet_current = 0
WHERE pqet_current IS NULL;
""", sys._getframe().f_code.co_name + '_'
if PERFORM_FEATURE_ENGINEERING:
qj = run_query(*update_pqet_current())
def update_ts_delta(table_id='train'):
return f"""
UPDATE {BQ_DATASET}.{table_id} t
SET t.ts_delta = timestamp - p.ts_prior
FROM (
SELECT
row_id, LAST_VALUE(timestamp) OVER (
PARTITION BY user_id ORDER BY task_container_id_q
RANGE BETWEEN 1 PRECEDING AND 1 PRECEDING) ts_prior
FROM {BQ_DATASET}.train
WHERE content_type_id = 0
) p
WHERE t.row_id = p.row_id;
UPDATE {BQ_DATASET}.{table_id}
SET ts_delta = 0
WHERE ts_delta IS NULL;
""", sys._getframe().f_code.co_name + '_'
if PERFORM_FEATURE_ENGINEERING:
qj = run_query(*update_ts_delta())
"""
Explanation: Calculate Current Question Elapsed Time and Timestamp Delta
End of explanation
"""
def create_table_folds(table_id='folds', n_folds=20):
return f"""
DECLARE f INT64;
CREATE OR REPLACE TABLE {BQ_DATASET}.{table_id} (
user_id INT64,
fold INT64,
record_count INT64
);
INSERT {BQ_DATASET}.{table_id} (user_id, fold, record_count)
SELECT f.user_id, CAST(FLOOR(RAND() * {n_folds}) AS INT64) fold, f.record_count
FROM (
SELECT user_id,
COUNT(row_id) record_count
FROM {BQ_DATASET}.train
WHERE content_type_id = 0
GROUP BY user_id
) f
ORDER BY user_id;
""", sys._getframe().f_code.co_name + '_'
if PERFORM_FEATURE_ENGINEERING:
qj = run_query(*create_table_folds())
if PERFORM_FEATURE_ENGINEERING:
df_folds = get_df_query_bqs(f"""
SELECT *
FROM {BQ_DATASET}.folds
""",
dtypes=dtypes)
if PERFORM_FEATURE_ENGINEERING:
df_folds.groupby('fold').count().user_id.plot(kind='bar', title='Count of Users by Fold');
if PERFORM_FEATURE_ENGINEERING:
df_folds.groupby('fold').mean().record_count.plot(kind='bar', title='Average Records per User by Fold');
if PERFORM_FEATURE_ENGINEERING:
df_fold_ac = get_df_query_bqs(f"""
SELECT fold, SUM(answered_correctly) ac_sum, COUNT(answered_correctly) rec_count
FROM {BQ_DATASET}.train
JOIN {BQ_DATASET}.folds
ON train.user_id = folds.user_id
GROUP BY fold
""",
dtypes=dtypes)
if PERFORM_FEATURE_ENGINEERING:
df_fold_ac.rec_count.plot(kind='bar', title='Count of Records by Fold');
if PERFORM_FEATURE_ENGINEERING:
(df_fold_ac.ac_sum / df_fold_ac.rec_count).plot(kind='bar', title='Percent Answered Correctly by Fold');
"""
Explanation: Create Folds Table
Assign users randomly to one of 20 folds. Store total records to facilitate filtering based on record count.
End of explanation
"""
def update_tags_array(table_id='questions', column_id='tags_array'):
return f"""
UPDATE {BQ_DATASET}.{table_id} q
SET {column_id} = tp.tags_fixed_len
FROM (
WITH tags_padded AS (
WITH tags_table AS (SELECT question_id, tags FROM {BQ_DATASET}.{table_id})
SELECT question_id, ARRAY_CONCAT(ARRAY_AGG(CAST(tag AS INT64) + 1), [0,0,0,0,0]) tags_array
FROM tags_table, UNNEST(SPLIT(tags, ' ')) as tag
GROUP BY question_id
)
SELECT question_id,
ARRAY(SELECT x FROM UNNEST(tags_array) AS x WITH OFFSET off WHERE off < 6 ORDER BY off) tags_fixed_len
FROM tags_padded
) tp
WHERE tp.question_id = q.question_id
""", sys._getframe().f_code.co_name + '_'
if PERFORM_FEATURE_ENGINEERING:
qj = run_query(*update_tags_array())
if PERFORM_FEATURE_ENGINEERING:
df_q = get_df_query_bqs('select * from my_data.questions', dtypes=dtypes)
print(df_q.head())
"""
Explanation: Create Tags Array on Questions Table
We need the tags as an array later when we create TFRecords. We also increment by one and pad with zeros to a fixed length of 6 so that they can be concatenated as a feature for modeling.
End of explanation
"""
if TEST_FEATURE_ENGNEERING:
df_train_samp = pd.read_csv('train.csv', nrows=100000)
df_train_samp.prior_question_had_explanation = df_train_samp.prior_question_had_explanation.fillna(False).astype(bool)
df_train_samp.prior_question_elapsed_time = df_train_samp.prior_question_elapsed_time.fillna(0)
user_ids_samp = df_train_samp.user_id.unique()[:-1]
print(len(user_ids_samp))
df_train_samp = df_train_samp[df_train_samp.user_id.isin(user_ids_samp) & (df_train_samp.content_type_id == 0)].reset_index(drop=True)
print(len(df_train_samp))
"""
Explanation: Feature Engineering Tests
Features come back out of BigQuery with the same values they went in with
ts_delta is equal to the difference between timestamps on consecutive records
pqet_current is equal to prior_question_elapsed_time from next record
visually inspect distributions of ts_delta and pqet_current
Load Sample from train.csv
End of explanation
"""
if TEST_FEATURE_ENGNEERING:
df_bq_samp = get_df_query_bqs(f"""
SELECT *
FROM {BQ_DATASET}.train
WHERE user_id IN ({(',').join(map(str, user_ids_samp))})
AND content_type_id = 0
ORDER BY user_id, timestamp, row_id
""",
dtypes=None)
"""
Explanation: Pull sample of corresponding user_ids from BigQuery
End of explanation
"""
if TEST_FEATURE_ENGNEERING:
# values in columns are the same between train.csv and bq
for c in df_train_samp.columns:
assert all(df_train_samp[c] == df_bq_samp[c]), f'{c} is not the same'
# pqet_current pulls prior_question_elapsed_time back one task_container_id for each user
df_bq_samp_tst = df_bq_samp[['user_id', 'task_container_id_q', 'prior_question_elapsed_time', 'pqet_current']].groupby(['user_id', 'task_container_id_q']).max()
for user_id in user_ids_samp:
assert all(df_bq_samp_tst.loc[user_id].pqet_current.shift(1).iloc[1:] == df_bq_samp_tst.loc[user_id].prior_question_elapsed_time.iloc[1:])
# ts_delta equal to timestamp from current task_container_id_q minus timestamp from prior task_container_id_q
df_bq_samp_tst = df_bq_samp[['user_id', 'task_container_id_q', 'timestamp', 'ts_delta']].groupby(['user_id', 'task_container_id_q']).max()
for user_id in user_ids_samp:
assert all((df_bq_samp_tst.loc[user_id].timestamp - df_bq_samp_tst.loc[user_id].timestamp.shift(1)).iloc[1:] == df_bq_samp_tst.loc[user_id].ts_delta.iloc[1:])
if TEST_FEATURE_ENGNEERING:
df_bq_samp.pqet_current.hist();
if TEST_FEATURE_ENGNEERING:
df_bq_samp.ts_delta.hist();
"""
Explanation: Tests
End of explanation
"""
def _int64_feature(value):
if type(value) != type(list()):
value = [value]
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
def serialize_example(user_id, features):
feature_names = ['content_id', 'answered_correctly', 'part', 'pqet_current', 'ts_delta', 'tags',
'task_container_id', 'timestamp']
feature = {'user_id': _int64_feature(user_id)}
for i, n in enumerate(feature_names):
feature[n] = _int64_feature(features[i])
return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()
def parse_example(example):
feature_names = {'content_id': tf.int32, 'answered_correctly': tf.int32, 'part': tf.int32,
'pqet_current': tf.int32, 'ts_delta': tf.int64, 'tags': tf.int32,
'task_container_id': tf.int32, 'timestamp': tf.int64}
features = {'user_id': tf.io.FixedLenFeature([1], tf.int64)}
for k, v in feature_names.items():
features[k] = tf.io.VarLenFeature(tf.int64)
example = tf.io.parse_single_example(example, features)
for k, v in feature_names.items():
example[k] = tf.cast(example[k].values, v)
example['tags'] = tf.reshape(example['tags'], (tf.size(example['answered_correctly']), 6))
return example
def get_ds_tfrec_raw(folds=[0]):
file_pat = 'gs://{BUCKET}/tfrec/{f:02d}-*.tfrec'
file_pats = [file_pat.format(BUCKET=BUCKET, f=f) for f in folds]
options = tf.data.Options()
ds = (tf.data.Dataset.list_files(file_pats)
.with_options(options)
.interleave(tf.data.TFRecordDataset, num_parallel_calls=AUTO)
.map(parse_example, num_parallel_calls=AUTO)
)
return ds
def get_df_tfrec(folds):
df_tfrec = get_df_query_bqs(f"""
SELECT fold, train.user_id, content_id + 1 content_id,
answered_correctly + 1 answered_correctly, part, pqet_current, ts_delta,
tags_array tags, task_container_id_q task_container_id, timestamp
FROM {BQ_DATASET}.train
JOIN {BQ_DATASET}.folds
ON train.user_id = folds.user_id
JOIN {BQ_DATASET}.questions
ON train.content_id = questions.question_id
WHERE fold IN ({(', ').join(map(str, folds))})
AND content_type_id = 0
ORDER BY user_id, timestamp, row_id
""",
dtypes=None)
return df_tfrec
def write_tfrecords(folds):
df_tfrec = get_df_tfrec(folds)
for f in folds:
groups_dict = (df_tfrec[df_tfrec.fold == f]
.groupby('user_id')
.apply(lambda r: (list(r['content_id'].values),
list(r['answered_correctly'].values),
list(r['part'].values),
list(r['pqet_current'].values.astype(np.int64)),
list(r['ts_delta'].values.astype(np.int64)),
list(np.concatenate(r['tags'].values)),
list(r['task_container_id'].values.astype(np.int64)),
list(r['timestamp'].values.astype(np.int64)),
))).to_dict()
out_path = f'gs://{BUCKET}/tfrec'
filename = f'{f:02d}-{len(groups_dict.keys())}.tfrec'
        record_file = f'{out_path}/{filename}'
with tf.io.TFRecordWriter(record_file) as writer:
for user_id, features in tqdm(groups_dict.items(), desc=f'Fold {f:02d}'):
writer.write(serialize_example(user_id, features))
"""
Explanation: Create TFRecords
We are going to create a set of TFRecords with one user per record and one fold per file. We are going to include the following columns as features:
* user_id - this won't get used as a feature, but is included to be able to tie back to the original data
* content_id - incremented by one to reserve 0 for padding character
* answered_correctly - incremented by one to reserve 0 for padding character
* part
* pqet_current
* ts_delta
* tags - already incremented by one with zeros as padding
* task_container_id - excluding lectures and already indexed to one
* timestamp
End of explanation
"""
if CREATE_TFRECORDS:
fold_splits = np.array_split(np.arange(20), 10)
for folds in tqdm(fold_splits):
write_tfrecords(folds)
"""
Explanation: Write TFRecords
Process in chunks to avoid running out of memory.
End of explanation
"""
def test_tfrecord_folds(folds_test, n_sample=100):
pbar = tqdm(total=n_sample)
ds = get_ds_tfrec_raw(folds_test)
df = get_df_tfrec(folds_test)
for b in ds.shuffle(10000).take(n_sample):
try:
for c in [c for c in df.columns if c not in ['tags', 'fold', 'user_id']]:
try:
assert all(df[df.user_id == b['user_id'].numpy()[0]][c] == b[c].numpy())
except Exception:
print(f"Error for user {b['user_id'].numpy()[0]}")
user_tags = np.concatenate(df[df.user_id == b['user_id'].numpy()[0]].tags.values)
assert all(user_tags == (b['tags'].numpy().flatten()))
except Exception:
print(f"Error for user {b['user_id'].numpy()[0]}")
finally:
pbar.update()
if TEST_TFRECORDS:
folds_test = list(range(20))
ds = get_ds_tfrec_raw(folds=folds_test)
df_folds = get_df_query_bqs(f"""
SELECT *
FROM {BQ_DATASET}.folds
""",
dtypes=dtypes)
user_ids = []
count = 0
for b in ds:
user_ids.append(b['user_id'].numpy()[0])
count += len(b['content_id'].numpy())
assert len(set(user_ids)) == len(df_folds)
assert df_folds.record_count.sum() == count
test_tfrecord_folds([10])
b = next(iter(ds))
print(b)
"""
Explanation: Test TFRecords
* Same number of users and records as in df_folds
* Values in the TFRecords are the same as in the original data
End of explanation
"""
@gin.configurable
def get_ds_tfrec(folds=None, max_len=None, min_len=None):
file_pat = 'gs://{BUCKET}/tfrec/{f:02d}-*.tfrec'
file_pats = [file_pat.format(BUCKET=BUCKET, f=f) for f in folds]
options = tf.data.Options()
ds = (tf.data.Dataset.list_files(file_pats, shuffle=True)
.with_options(options)
.interleave(tf.data.TFRecordDataset, num_parallel_calls=AUTO)
.shuffle(10000)
.map(parse_example, num_parallel_calls=AUTO)
.filter(partial(filter_min_len, min_len=min_len))
.map(example_to_tuple, num_parallel_calls=AUTO)
.map(partial(trunc_seq, max_len=max_len), num_parallel_calls=AUTO)
.map(con_to_cat, num_parallel_calls=AUTO)
)
ds = ds.repeat().prefetch(AUTO)
def gen(generator=None):
del generator
for example in fastmath.dataset_as_numpy(ds):
yield example
return gen
def filter_min_len(e, min_len):
return tf.size(e['content_id']) >= min_len
def example_to_tuple(example):
return (example['content_id'], example['part'], example['tags'], example['task_container_id'],
example['answered_correctly'], example['pqet_current'], example['ts_delta'])
def trunc_seq(*b, max_len=None):
"""Returns a sequence drawn randomly from available tokens with a max length
of max_len.
"""
max_len = tf.constant(max_len)
seq_len = tf.size(b[0])
seq_end_min = tf.minimum(seq_len - 1, max_len)
seq_end = tf.maximum(max_len, tf.random.uniform((), seq_end_min, seq_len, dtype=tf.int32))
def get_seq(m):
return m[seq_end-max_len:seq_end]
return tuple(map(get_seq, b))
# SAINT+ Elapsed Time = prior_question_elapsed_time and Lag Time = time_stamp_1 - timestamp_0
# Elapsed Time categorical - capped at 300 seconds, discrete value for each second
# Lag Time - discretized to minutes 0, 1, 2, 3, 4, 5, 10, 20, 30 ... 1440. 150 discrete values.
ts_delta_lookup = tf.concat([tf.range(6, dtype=tf.int32), tf.repeat(5, 5)], axis=0)
cat = 10
while cat < 1440:
ts_delta_lookup = tf.concat([ts_delta_lookup, tf.repeat(cat, 10)], axis=0)
cat += 10
ts_delta_lookup = tf.concat([ts_delta_lookup, [1440]], axis=0)
def con_to_cat(*b):
def pqet_cat(e, vocab_size=None, val_min=None, val_max=None):
e = tf.clip_by_value(e, val_min, val_max)
val_range = val_max - val_min
e = tf.cast((e - val_min) * (vocab_size - 1) / val_range, tf.int32)
return e
def ts_delta_cat(e):
val_max = tf.cast(tf.reduce_max(ts_delta_lookup) * 60000, tf.float64)
e = tf.clip_by_value(tf.cast(e, tf.float64), 0, val_max)
e = tf.cast(e / 60000, tf.int32)
e = tf.gather(ts_delta_lookup, e)
return e
pqet = pqet_cat(b[-2], vocab_size=300, val_min=0, val_max=300000)
ts_delta = ts_delta_cat(b[-1])
return tuple((*b[:-2], pqet, ts_delta))
"""
Explanation: Dataset Functions
End of explanation
"""
def RocAucScore(num_thresholds=100, pos_label=2):
def f(y_score, y_true, weight):
weight = tnp.expand_dims(tnp.ravel(weight), -1)
softmax=tl.Softmax(axis=-1)
y_score = tnp.ravel(softmax(y_score)[:, :, -1])
y_score = tnp.expand_dims(y_score, -1)
y_true = tnp.expand_dims(tnp.ravel(y_true) == pos_label, -1).astype(tnp.float32)
thresholds = tnp.expand_dims(tnp.linspace(1, 0, num_thresholds), 0)
threshold_counts = y_score > thresholds
tps = tnp.logical_and(threshold_counts, y_true)
fps = tnp.logical_and(threshold_counts, tnp.logical_not(y_true))
tps = tnp.sum(tps * weight, axis=0)
fps = tnp.sum(fps * weight, axis=0)
tpr = tps / tps[-1]
fpr = fps / fps[-1]
return tnp.trapz(tpr, fpr)
return tl.Fn('RocAucScore', f)
metrics = {
'loss': tl.WeightedCategoryCrossEntropy(),
'accuracy': tl.WeightedCategoryAccuracy(),
'sequence_accuracy': tl.MaskedSequenceAccuracy(),
'auc_all': RocAucScore(),
'weights_per_batch_per_core': tl.Serial(tl.Drop(), tl.Drop(), tl.Sum())
}
"""
Explanation: Metrics Functions
End of explanation
"""
@gin.configurable
@tl.assert_shape('bl->b1ll')
def PaddingFutureMask(pad=0, block_self=False, tid=True, pad_end=False):
def f(x):
mask_pad = tnp.logical_not(tnp.equal(x, 0))[:, tnp.newaxis, tnp.newaxis, :]
x_new = x
if pad_end:
x_new = tnp.where(tnp.equal(x, 0), tnp.max(x), x)
if tid:
mask_future = x_new[:, :, tnp.newaxis] >= x_new[:, tnp.newaxis, :] + block_self
mask_future = mask_future[:, tnp.newaxis, :, :]
else:
mask_future = tnp.arange(x.shape[-1])[tnp.newaxis, tnp.newaxis, :, tnp.newaxis] \
>= tnp.arange(x.shape[-1])[tnp.newaxis, :]
return tnp.logical_and(mask_future, mask_pad)
return tl.Fn(f'PaddingFutureMask({pad})', f)
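As a sketch (not the Trax layer itself), the combined mask for a single sequence of task-container ids, with 0 as padding, reduces to this rule:

```python
def padding_future_mask(tids, block_self=False):
    # Query position q may attend to key position k iff k is not padding
    # and k's container id does not exceed q's (strictly, if block_self).
    n = len(tids)
    return [[tids[q] >= tids[k] + block_self and tids[k] != 0
             for k in range(n)]
            for q in range(n)]
```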
# the only thing different here is the shape assertion, to accommodate the
# change in mask shape from b11l to b1ll
@tl.assert_shape('bld,b1ll->bld,b1ll')
@gin.configurable
def KTAttention(d_feature, n_heads=1, dropout=0.0, mode='train'):
return tl.Serial(
tl.Select([0, 0, 0]),
tl.AttentionQKV(
d_feature, n_heads=n_heads, dropout=dropout, mode=mode),
)
def my_add_loss_weights(generator, id_to_mask=None):
for example in generator:
weights = (example[0] != id_to_mask).astype(tnp.float32)
yield (*example, weights)
@gin.configurable
def KTAddLossWeights(id_to_mask=0): # pylint: disable=invalid-name
return lambda g: my_add_loss_weights(g, id_to_mask=id_to_mask)
def trim_tags(generator):
for example in generator:
# content_id, part, tags, tid, ac, pqet, ts_delta
yield (example[0], example[1], example[2][:, :, :6], example[3], example[4], example[5], example[6])
@gin.configurable
def TrimTags():
return lambda g: trim_tags(g)
@gin.configurable
def KTPositionalEncoder(max_position=10000.0, d_model=512, tid=False):
"""This is set up to perform standard positional encoding based on the
position in the sequence, but also to calculate position based on the
id of the task container to which the question belongs.
"""
def f(inputs):
# whether or not to use task_container_id or seq position
if tid:
position = tnp.expand_dims(inputs.astype(tnp.float32), -1)
else:
position = tnp.arange(inputs.shape[1])
position = position.astype(tnp.float32)[tnp.newaxis, :, tnp.newaxis]
i = tnp.expand_dims(tnp.arange(d_model, dtype=tnp.float32), 0)
angles = 1 / tnp.power(max_position, (2 * (i // 2)) /
tnp.array(d_model, dtype=tnp.float32))
angle_rads = position * angles
# apply sin to even index in the array
sines = tnp.sin(angle_rads[:, :, 0::2])
# apply cos to odd index in the array
cosines = tnp.cos(angle_rads[:, :, 1::2])
pos_encoding = tnp.concatenate([sines, cosines], axis=-1)
return pos_encoding
return tl.Fn('KTPositionalEncoder', f)
@gin.configurable
def KTTransformer(d_model,
d_input,
d_part,
d_tags,
d_out,
d_pqet,
d_ts_delta,
d_tid,
embed_concat=False,
d_ff=2048,
n_encoder_layers=6,
n_decoder_layers=6,
n_heads=8,
max_len=2048,
dropout=0.1,
dropout_shared_axes=None,
mode='train',
ff_activation=tl.Relu):
def Embedder(vocab_size, d_embed): # tokens --> vectors
return [
tl.Embedding(vocab_size, d_embed),
tl.Dropout(
rate=dropout, shared_axes=dropout_shared_axes, mode=mode),
]
# Encoder Embeddings
in_embedder = Embedder(*d_input)
part_embedder = Embedder(*d_part)
# Keeps the tags in the data batch tuple, but drops it if it
# isn't included in the embeddings.
if d_tags is not None:
tags_embedder = tl.Serial(Embedder(*d_tags), tl.Sum(axis=-2))
else:
tags_embedder = tl.Drop()
in_pos_encoder = KTPositionalEncoder(*d_tid)
# Decoder Embeddings
out_embedder = Embedder(*d_out)
pqet_embedder = Embedder(*d_pqet)
ts_delta_embedder = Embedder(*d_ts_delta)
out_pos_encoder = KTPositionalEncoder(*d_tid)
encoder_mode = 'eval' if mode == 'predict' else mode
in_encoder = [tl.Parallel(in_embedder, part_embedder, tags_embedder, in_pos_encoder)]
out_encoder = [tl.Parallel(out_embedder, pqet_embedder, ts_delta_embedder, out_pos_encoder)]
if embed_concat:
if d_tags is not None:
in_encoder += [tl.Concatenate(n_items=3), tl.Add()]
else:
in_encoder += [tl.Concatenate(n_items=2), tl.Add()]
out_encoder += [tl.Concatenate(n_items=3), tl.Add()]
else:
if d_tags is not None:
in_encoder += [tl.Add(), tl.Add(), tl.Add()]
else:
in_encoder += [tl.Add(), tl.Add()]
out_encoder += [tl.Add(), tl.Add(), tl.Add()]
encoder_blocks = [
_KTEncoderBlock(d_model, d_ff, n_heads, dropout, dropout_shared_axes,
mode, ff_activation)
for i in range(n_encoder_layers)]
encoder = tl.Serial(
in_encoder,
encoder_blocks,
tl.LayerNorm()
)
encoder_decoder_blocks = [
_KTEncoderDecoderBlock(d_model, d_ff, n_heads, dropout, dropout_shared_axes,
mode, ff_activation)
for i in range(n_decoder_layers)]
# output tuple - leading number is max index
return tl.Serial( # 7: 0:tok_e 1:tok_p 2:tok_t 3:tok_tid 4:tok_d 5:tok_pq, 6:tok_tsd 7:wts_l
tl.Select([0, 1, 2, 3, 3, 3, # 10: 0:tok_e 1:tok_p 2:tok_t 3:tok_tid 4:tok_tid 5: tok_tid
4, 5, 6, 4]), # 6:tok_d 7:tok_pq, 8:tok_tsd 9:tok_d 10:wts_l
# Encode.
tl.Parallel(
tl.Select([0, 1, 2, 3]),
PaddingFutureMask(tid=True)
), # 10: tok_e tok_p tok_t tok_tid mask_combined tok_tid tok_d tok_pq tok_tsd tok_d wts_l
encoder, # 7: vec_e mask_combined tok_tid tok_d tok_pq tok_tsd tok_d wts_l
# Decode.
tl.Select([3, 4, 5, 2, 2, 0]), # 7: tok_d tok_pq tok_tsd tok_tid tok_tid vec_e tok_d wts_l
tl.Parallel(
tl.ShiftRight(mode=mode),
tl.ShiftRight(mode=mode),
tl.ShiftRight(mode=mode),
tl.ShiftRight(mode=mode),
tl.Serial(tl.ShiftRight(),
PaddingFutureMask(tid=False)),
), # 7: tok_d tok_pq tok_tsd tok_tid mask_combined vec_e tok_d wts_l
out_encoder, # 4: vec_d mask_combined vec_e tok_d wts_l
encoder_decoder_blocks, # 4: vec_d mask_combined vec_e tok_d wts_l
tl.LayerNorm(), # 4: vec_d mask_combined vec_e tok_d wts_l
# Map to output vocab.
tl.Select([0], n_in=3), # 3: vec_d tok_d wts_l
tl.Dense(d_out[0]), # vec_d .....
)
def _KTEncoderBlock(d_model, d_ff, n_heads,
dropout, dropout_shared_axes, mode, ff_activation):
"""Same as the default, but changes attention layer to KTAttention to
accept a combined padding and future mask.
"""
attention = KTAttention(
d_model, n_heads=n_heads, dropout=dropout, mode=mode)
feed_forward = _KTFeedForwardBlock(
d_model, d_ff, dropout, dropout_shared_axes, mode, ff_activation)
dropout_ = tl.Dropout(
rate=dropout, shared_axes=dropout_shared_axes, mode=mode)
return [
tl.Residual(
tl.LayerNorm(),
attention,
dropout_,
),
tl.Residual(
feed_forward
),
]
def _KTEncoderDecoderBlock(d_model, d_ff, n_heads,
dropout, dropout_shared_axes, mode, ff_activation):
"""Same as the default, but changes the first layer to KTAttention to
accept a combined padding and future mask.
"""
def _Dropout():
return tl.Dropout(rate=dropout, shared_axes=dropout_shared_axes, mode=mode)
attention = KTAttention(
d_model, n_heads=n_heads, dropout=dropout, mode=mode)
attention_qkv = tl.AttentionQKV(
d_model, n_heads=n_heads, dropout=dropout, mode=mode)
feed_forward = _KTFeedForwardBlock(
d_model, d_ff, dropout, dropout_shared_axes, mode, ff_activation)
return [ # vec_d masks vec_e
tl.Residual(
tl.LayerNorm(), # vec_d ..... .....
attention, # vec_d ..... .....
_Dropout(), # vec_d ..... .....
),
tl.Residual(
tl.LayerNorm(), # vec_d ..... .....
tl.Select([0, 2, 2, 1, 2]), # vec_d vec_e vec_e masks vec_e
attention_qkv, # vec_d masks vec_e
_Dropout(), # vec_d masks vec_e
),
tl.Residual(
feed_forward # vec_d masks vec_e
),
]
def _KTFeedForwardBlock(d_model, d_ff, dropout, dropout_shared_axes,
mode, activation):
"""Same as default.
"""
dropout_middle = tl.Dropout(
rate=dropout, shared_axes=dropout_shared_axes, mode=mode)
dropout_final = tl.Dropout(
rate=dropout, shared_axes=dropout_shared_axes, mode=mode)
return [
tl.LayerNorm(),
tl.Dense(d_ff),
activation(),
dropout_middle,
tl.Dense(d_model),
dropout_final,
]
"""
Explanation: Model Functions
End of explanation
"""
# Configure hyperparameters.
total_steps = 10000
gin.clear_config()
gin.parse_config(f"""
import trax.layers
import trax.models
import trax.optimizers
import trax.data.inputs
import trax.supervised.trainer_lib
# Parameters that will vary between experiments:
# ==============================================================================
# min_len = 12
# max_len = 64
# d_model = 512 # need to make sure this works with concat embeddings
# d_ff = 256
# n_encoder_layers = 2
# n_decoder_layers = 2
# n_heads = 2
# dropout = 0.0
min_len = 12
max_len = 256
d_model = 512 # need to make sure this works with concat embeddings
d_ff = 1024
n_encoder_layers = 6
n_decoder_layers = 6
n_heads = 8
dropout = 0.1
# Set to True to aggregate embeddings by concatenation. If set
# to False aggregation will be by sum.
embed_concat = True
# (Vocab, depth) Uncomment to use with aggregation by concatenation.
d_input = (13500, 384)
d_part = (8, 8)
d_tags = (189, 120)
# (Vocab, depth) Uncomment to use with aggregation by concatenation.
d_out = (3, 384)
d_pqet = (300, 64)
d_ts_delta = (150, 64)
# Used for positional encodings if not None. Positional encoding based
# on sequence in batch if None.
d_tid = (10000, %d_model)
# d_input = (13500, %d_model)
# d_part = (8, %d_model)
# d_tags = (189, %d_model)
# # d_tags = None
# d_out = (3, %d_model)
# d_pqet = (300, %d_model)
# d_ts_delta = (150, %d_model)
# d_tid = (10000, %d_model)
total_steps = {total_steps}
# Parameters for learning rate schedule:
# ==============================================================================
warmup_and_rsqrt_decay.n_warmup_steps = 3000
warmup_and_rsqrt_decay.max_value = 0.001
# multifactor.constant = 0.01
# multifactor.factors = 'constant * linear_warmup * cosine_decay'
# multifactor.warmup_steps = 4000
# multifactor.steps_per_cycle = %total_steps
# multifactor.minimum = .0001
# Parameters for Adam:
# ==============================================================================
# Adam.weight_decay_rate=0.0
Adam.b1 = 0.9
Adam.b2 = 0.999
Adam.eps = 1e-8
# Parameters for input pipeline:
# ==============================================================================
get_ds_tfrec.min_len = %min_len
get_ds_tfrec.max_len = %max_len
train/get_ds_tfrec.folds = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
eval/get_ds_tfrec.folds = [19]
BucketByLength.boundaries = [32, 64, 128]
BucketByLength.batch_sizes = [512, 256, 128, 64]
# BucketByLength.batch_sizes = [16, 8, 4, 2]
BucketByLength.strict_pad_on_len = True
KTAddLossWeights.id_to_mask = 0
train/make_additional_stream.stream = [
@train/get_ds_tfrec(),
@BucketByLength(),
@TrimTags(),
@KTAddLossWeights()
]
eval/make_additional_stream.stream = [
@eval/get_ds_tfrec(),
@BucketByLength(),
@TrimTags(),
@KTAddLossWeights()
]
make_inputs.train_stream = @train/make_additional_stream()
make_inputs.eval_stream = @eval/make_additional_stream()
# Parameters for KTPositionalEncoder:
# ==============================================================================
KTPositionalEncoder.d_model = %d_model
# Set to True to calculate positional encodings based on position in original
# full length sequence, False to be based on position in batch sequence.
KTPositionalEncoder.tid = False
# Parameters for PaddingFutureMask:
# ==============================================================================
PaddingFutureMask.pad_end = False
# Set to True to calculate the future mask based on task container id (questions
# are delivered to users in groups identified by task_container_id), or False
# to base it on the next question only.
PaddingFutureMask.tid = False
# Parameters for KTTransformer:
# ==============================================================================
KTTransformer.d_model = %d_model
KTTransformer.d_input = %d_input
KTTransformer.d_part = %d_part
KTTransformer.d_tags = %d_tags
KTTransformer.d_out = %d_out
KTTransformer.d_pqet = %d_pqet
KTTransformer.d_ts_delta = %d_ts_delta
KTTransformer.d_tid = %d_tid
KTTransformer.embed_concat = %embed_concat
KTTransformer.d_ff = %d_ff
KTTransformer.n_encoder_layers = %n_encoder_layers
KTTransformer.n_decoder_layers = %n_decoder_layers
KTTransformer.n_heads = %n_heads
KTTransformer.dropout = %dropout
# Parameters for train:
# ==============================================================================
train.inputs = @make_inputs
train.eval_frequency = 200
train.eval_steps = 20
train.checkpoints_at = {list(range(0,total_steps + 1, 2000))}
train.optimizer = @trax.optimizers.Adam
train.steps = %total_steps
train.model = @KTTransformer
train.lr_schedule_fn = @trax.supervised.lr_schedules.warmup_and_rsqrt_decay
""")
if False:
inputs = trax.data.inputs.make_inputs()
train_stream = inputs.train_stream(trax.fastmath.device_count())
train_eval_stream = inputs.train_eval_stream(trax.fastmath.device_count())
b = next(train_stream)
for i, m in enumerate(b):
print(i, m.shape)
b
if False:
model = KTTransformer()
model.init(trax.shapes.signature(b))
outs = model(b)
for i, m in enumerate(outs):
print(i, m.shape)
outs
"""
Explanation: Configuration
End of explanation
"""
run_no = 0
prefix = f'model_runs/{run_no:02d}'
output_dir = f'gs://{BUCKET}/{prefix}'
log_dir = output_dir[:-3]
%tensorboard --logdir $log_dir
if TRAIN_MODEL:
if False:
init_checkpoint = f'{output_dir}/model.pkl.gz'
else:
bucket.delete_blobs(list(bucket.list_blobs(prefix=prefix)))
loop = trax.supervised.trainer_lib.train(output_dir, metrics=metrics)
"""
Explanation: Training
End of explanation
"""
|
thomasyangrenqin/Udacity_Data_Analyst_Nanodegree | P3-Wrangle OpenStreetMap Data/Data wrangling part.ipynb | mit | import xml.etree.ElementTree as ET # Use cElementTree or lxml if too slow
OSM_FILE = "/Users/yangrenqin/udacity/P3/san-francisco.osm" # Replace this with your osm file
SAMPLE_FILE = "/Users/yangrenqin/udacity/P3/sample1.osm"
k = 30 # Parameter: take every k-th top level element
def get_element(osm_file, tags=('node', 'way', 'relation')):
context = iter(ET.iterparse(osm_file, events=('start', 'end')))
_, root = next(context)
for event, elem in context:
if event == 'end' and elem.tag in tags:
yield elem
root.clear()
with open(SAMPLE_FILE, 'w') as output:
output.write('<?xml version="1.0" encoding="UTF-8"?>\n')
output.write('<osm>\n ')
# Write every kth top level element
for i, element in enumerate(get_element(OSM_FILE)):
if i % k == 0:
output.write(ET.tostring(element, encoding='unicode'))
output.write('</osm>')
"""
Explanation: Create a relatively small sample file from the whole OSM file
End of explanation
"""
from collections import defaultdict
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
def count_tags(filename):
tags=defaultdict(int)
for _,elem in (ET.iterparse(filename)):
tags[elem.tag] += 1
return tags
count_tags(filename)
"""
Explanation: Note that this sample file is not actually used later: the entire wrangling, auditing, and cleaning process below was run against the full original OSM file.
Count the number of different tags
End of explanation
"""
import re
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
lower = re.compile(r'^([a-z]|_)*$')
lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$')
lower_colons=re.compile(r'^([a-z]|_)*(:([a-z]|_)*)+$')
problemchars = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]')
def key_type(element, keys,other):
if element.tag == "tag":
a=lower.search(element.attrib['k'])
b=lower_colon.search(element.attrib['k'])
c=problemchars.search(element.attrib['k'])
d=lower_colons.search(element.attrib['k'])
if a:
keys['lower'] += 1
elif b:
keys['lower_colon'] += 1
elif c:
keys['problemchars'] += 1
elif d:
keys['lower_colons'] += 1
else:
keys['other'] += 1
other.append(element.attrib['k'])
return keys,other
def process_map(filename):
keys = {"lower": 0, "lower_colon": 0, "lower_colons":0, "problemchars": 0, "other": 0}
other=[]
for _, element in ET.iterparse(filename):
keys,other = key_type(element, keys,other)
return keys,other
keys,others=process_map(filename)
print(keys)
"""
Explanation: Find the different "k" attributes of tags and count them
End of explanation
"""
import xml.etree.ElementTree as ET
from collections import defaultdict
import re
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
expected = ["Street", "Avenue", "Boulevard", "Drive", "Court", "Place", "Square", "Lane", "Road",
"Trail", "Parkway", "Commons", "Way", "Highway", "Path", "Terrace", "Alley", "Center",
"Circle", "Plaza", "Real"]
street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)
street_types=defaultdict(set)
def audit_street_type(street_types, street_name):
m = street_type_re.search(street_name)
if m:
street_type = m.group()
if street_type not in expected:
street_types[street_type].add(street_name)
def is_street_name(elem):
return (elem.attrib['k'] == "addr:street")
def audit(osmfile):
osm_file = open(osmfile, "r")
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_street_name(tag):
audit_street_type(street_types, tag.attrib['v'])
osm_file.close()
return street_types
error_street_type=audit(filename)
error_street_type
"""
Explanation: Audit the street types and find the unexpected ones
End of explanation
"""
street_mapping = { "St": "Street",
"St.": "Street",
"Steet": "Street",
"st": "Street",
"street": "Street",
"Ave": "Avenue",
"Ave.": "Avenue",
"ave": "Avenue",
"avenue": "Avenue",
"Rd.": "Road",
"Rd": "Road",
"Blvd": "Boulevard",
"Blvd,": "Boulevard",
"Blvd.": "Boulevard",
"Boulavard": "Boulevard",
"Boulvard": "Boulevard",
"Dr": "Drive",
"Dr.": "Drive",
"Pl": "Plaza",
"Plz": "Plaza",
"square": "Square"
}
postcode_mapping={"CA 94030": "94030",
"CA 94133": "94133",
"CA 94544": "94544",
"CA 94103": "94103",
"CA:94103": "94103"
}
error_postcode={'1087', '515', 'CA'}
cityname_mapping={"Berkeley, CA": "Berkeley",
"Fremont ": "Fremont",
"Oakland, CA": "Oakland",
"Oakland, Ca": "Oakland",
"San Francisco, CA": "San Francisco",
"San Francisco, CA 94102": "San Francisco",
"San Francicsco": "San Francisco",
"San Fransisco": "San Francisco",
"San Francsico": "San Francisco",
"Artherton": "Atherton"
}
error_cityname={'155', '157'}
"""
Explanation: Mapping dictionaries used to update and clean the correctable data
End of explanation
"""
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
def is_street_postcode(elem):
return (elem.attrib['k'] == "addr:postcode")
def audit(osmfile):
osm_file = open(osmfile, "r")
postcode_types = set()
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_street_postcode(tag):
postcode_types.add(tag.attrib['v'])
osm_file.close()
return postcode_types
postcode=audit(filename)
postcode
"""
Explanation: Find all the postal code values
End of explanation
"""
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
def is_city(elem):
return (elem.attrib['k'] == "addr:city")
def audit(osmfile):
osm_file = open(osmfile, "r")
city_types = set()
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_city(tag):
city_types.add(tag.attrib['v'])
osm_file.close()
return city_types
cityname=audit(filename)
cityname
"""
Explanation: Find all the city name values
End of explanation
"""
import csv
import codecs
import re
import xml.etree.cElementTree as ET
OSM_PATH = "/Users/yangrenqin/udacity/P3/san-francisco.osm"
NODES_PATH = "/Users/yangrenqin/udacity/P3/nodes.csv"
NODE_TAGS_PATH = "/Users/yangrenqin/udacity/P3/nodes_tags.csv"
WAYS_PATH = "/Users/yangrenqin/udacity/P3/ways.csv"
WAY_NODES_PATH = "/Users/yangrenqin/udacity/P3/ways_nodes.csv"
WAY_TAGS_PATH = "/Users/yangrenqin/udacity/P3/ways_tags.csv"
LOWER_COLON = re.compile(r'^([a-z]|_)+:([a-z]|_)+')
PROBLEMCHARS = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]')
street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)
# Make sure the fields order in the csvs matches the column order in the sql table schema
NODE_FIELDS = ['id', 'lat', 'lon', 'user', 'uid', 'version', 'changeset', 'timestamp']
NODE_TAGS_FIELDS = ['id', 'key', 'value', 'type']
WAY_FIELDS = ['id', 'user', 'uid', 'version', 'changeset', 'timestamp']
WAY_TAGS_FIELDS = ['id', 'key', 'value', 'type']
WAY_NODES_FIELDS = ['id', 'node_id', 'position']
def capitalize(a):
    # Capitalize each space-separated word, e.g. 'san francisco' -> 'San Francisco'.
    return ' '.join(w.capitalize() for w in a.split(' ')).strip()
def shape_element(element, node_attr_fields=NODE_FIELDS, way_attr_fields=WAY_FIELDS,
problem_chars=PROBLEMCHARS, default_tag_type='regular'):
node_attribs = {}
way_attribs = {}
way_nodes = []
tags = [] # Handle secondary tags the same way for both node and way elements
if element.tag == 'node':
for i in element.attrib:
if i in node_attr_fields:
node_attribs[i]=element.attrib[i]
if element.getchildren() == []:
pass
else:
for tag in element.iter('tag'):
node_tags={}
k=tag.attrib['k']
if PROBLEMCHARS.search(k):
continue
elif LOWER_COLON.search(k):
if LOWER_COLON.search(k).group() == k:
if k == "addr:street":
m = street_type_re.search(tag.attrib['v'])
if m:
street_type = m.group()
if street_type in error_street_type:
if street_type in street_mapping:
tag.attrib['v']=tag.attrib['v'].replace(street_type,street_mapping[street_type])
else:
continue
node_tags['key']=k.split(':')[1]
node_tags['type']=k.split(':')[0]
node_tags['id']=element.attrib['id']
node_tags['value']=tag.attrib['v']
tags.append(node_tags)
else:
continue
if k == "addr:postcode":
if tag.attrib['v'] in error_postcode:
continue
else:
if tag.attrib['v'] in postcode_mapping:
tag.attrib['v']=postcode_mapping[tag.attrib['v']]
node_tags['key']=k.split(':')[1]
node_tags['type']=k.split(':')[0]
node_tags['id']=element.attrib['id']
node_tags['value']=tag.attrib['v']
tags.append(node_tags)
if k == "addr:city":
if tag.attrib['v'] in error_cityname:
continue
else:
if tag.attrib['v'] in cityname_mapping:
tag.attrib['v']=cityname_mapping[tag.attrib['v']]
node_tags['key']=k.split(':')[1]
node_tags['type']=k.split(':')[0]
node_tags['id']=element.attrib['id']
node_tags['value']=capitalize(tag.attrib['v'])
tags.append(node_tags)
else:
node_tags['key']=k.partition(':')[-1]
node_tags['type']=k.partition(':')[0]
node_tags['id']=element.attrib['id']
node_tags['value']=tag.attrib['v']
tags.append(node_tags)
else:
node_tags['id']=element.attrib['id']
node_tags['value']=tag.attrib['v']
node_tags['key']=k
node_tags['type']=default_tag_type
tags.append(node_tags)
if element.tag == 'way':
for i in element.attrib:
if i in way_attr_fields:
way_attribs[i]=element.attrib[i]
if element.getchildren() == []:
pass
else:
for tag in element.iter('tag'):
way_tags={}
k=tag.attrib['k']
if PROBLEMCHARS.search(k):
continue
elif LOWER_COLON.search(k):
if LOWER_COLON.search(k).group() == k:
if k == "addr:street":
m = street_type_re.search(tag.attrib['v'])
if m:
street_type = m.group()
if street_type in error_street_type:
if street_type in street_mapping:
tag.attrib['v']=tag.attrib['v'].replace(street_type,street_mapping[street_type])
else:
continue
way_tags['key']=k.split(':')[1]
way_tags['type']=k.split(':')[0]
way_tags['id']=element.attrib['id']
way_tags['value']=tag.attrib['v']
tags.append(way_tags)
else:
continue
if k == "addr:postcode":
if tag.attrib['v'] in error_postcode:
continue
else:
if tag.attrib['v'] in postcode_mapping:
tag.attrib['v']=postcode_mapping[tag.attrib['v']]
way_tags['key']=k.split(':')[1]
way_tags['type']=k.split(':')[0]
way_tags['id']=element.attrib['id']
way_tags['value']=tag.attrib['v']
tags.append(way_tags)
if k == "addr:city":
if tag.attrib['v'] in error_cityname:
continue
else:
if tag.attrib['v'] in cityname_mapping:
tag.attrib['v']=cityname_mapping[tag.attrib['v']]
way_tags['key']=k.split(':')[1]
way_tags['type']=k.split(':')[0]
way_tags['id']=element.attrib['id']
way_tags['value']=capitalize(tag.attrib['v'])
tags.append(way_tags)
else:
way_tags['key']=k.partition(':')[-1]
way_tags['type']=k.partition(':')[0]
way_tags['id']=element.attrib['id']
way_tags['value']=tag.attrib['v']
tags.append(way_tags)
else:
way_tags['id']=element.attrib['id']
way_tags['value']=tag.attrib['v']
way_tags['key']=k
way_tags['type']=default_tag_type
tags.append(way_tags)
for i,nd in enumerate(element.iter('nd')):
way_nd={}
way_nd['id']=element.attrib['id']
way_nd['node_id']=nd.attrib['ref']
way_nd['position']=i
way_nodes.append(way_nd)
if element.tag == 'node':
if element.getchildren() == []:
return {'node': node_attribs}
else:
return {'node': node_attribs, 'node_tags': tags}
elif element.tag == 'way':
if element.getchildren() == []:
return {'way': way_attribs}
else:
return {'way': way_attribs, 'way_nodes': way_nodes, 'way_tags': tags}
def get_element(osm_file, tags=('node', 'way')):
"""Yield element if it is the right type of tag"""
context = ET.iterparse(osm_file, events=('start', 'end'))
_, root = next(context)
for event, elem in context:
if event == 'end' and elem.tag in tags:
yield elem
root.clear()
def is_numeric(s):
try:
float(s)
return True
except ValueError:
return False
def keep_numeric(original):
for i,v in original.items():
if is_numeric(v):
if float(v).is_integer():
original[i]=int(float(v))
else:
original[i]=float(v)
return original
def process_map(file_in):
"""Iteratively process each XML element and write to csv(s)"""
with codecs.open(NODES_PATH, 'w') as nodes_file, codecs.open(NODE_TAGS_PATH, 'w') as nodes_tags_file, \
codecs.open(WAYS_PATH, 'w') as ways_file,codecs.open(WAY_NODES_PATH, 'w') as way_nodes_file, \
codecs.open(WAY_TAGS_PATH, 'w') as way_tags_file:
nodes_writer = csv.DictWriter(nodes_file, NODE_FIELDS)
node_tags_writer = csv.DictWriter(nodes_tags_file, NODE_TAGS_FIELDS)
ways_writer = csv.DictWriter(ways_file, WAY_FIELDS)
way_nodes_writer = csv.DictWriter(way_nodes_file, WAY_NODES_FIELDS)
way_tags_writer = csv.DictWriter(way_tags_file, WAY_TAGS_FIELDS)
nodes_writer.writeheader()
node_tags_writer.writeheader()
ways_writer.writeheader()
way_nodes_writer.writeheader()
way_tags_writer.writeheader()
for element in get_element(file_in, tags=('node', 'way')):
el = shape_element(element)
if el:
if element.tag == 'node':
if element.getchildren() == []:
nodes_writer.writerow(keep_numeric(el['node']))
else:
nodes_writer.writerow(keep_numeric(el['node']))
node_tags_writer.writerows([keep_numeric(i) for i in el['node_tags']])
elif element.tag == 'way':
if element.getchildren() == []:
ways_writer.writerow(keep_numeric(el['way']))
else:
ways_writer.writerow(keep_numeric(el['way']))
way_nodes_writer.writerows([keep_numeric(i) for i in el['way_nodes']])
way_tags_writer.writerows([keep_numeric(i) for i in el['way_tags']])
if __name__ == '__main__':
process_map(OSM_PATH)
"""
Explanation: Audit, correct, and write data from XML into CSV files
End of explanation
"""
import sqlite3
import pandas as pd
db = sqlite3.connect('sanfrancisco.db')
c = db.cursor()
query = """
SELECT tags.value, COUNT(*) AS count
FROM (SELECT * FROM nodes_tags
      UNION ALL
      SELECT * FROM ways_tags) tags
WHERE tags.key = 'street'
GROUP BY tags.value
ORDER BY count DESC;
"""
c.execute(query)
rows = pd.DataFrame(c.fetchall(), columns=['Street', 'count'])
db.close()
rows.head(20)
rows.tail(20)
db = sqlite3.connect('sanfrancisco.db')
c = db.cursor()
query = """
SELECT tags.value, COUNT(*) AS count
FROM (SELECT * FROM nodes_tags
      UNION ALL
      SELECT * FROM ways_tags) tags
WHERE tags.key = 'postcode'
GROUP BY tags.value
ORDER BY count DESC;
"""
c.execute(query)
rows = pd.DataFrame(c.fetchall(), columns=['Postcode', 'count'])
db.close()
rows.head(10)
rows.tail(10)
db = sqlite3.connect('sanfrancisco.db')
c = db.cursor()
query = """
SELECT tags.value, COUNT(*) AS count
FROM (SELECT * FROM nodes_tags
      UNION ALL
      SELECT * FROM ways_tags) tags
WHERE tags.key = 'city'
GROUP BY tags.value
ORDER BY count DESC;
"""
c.execute(query)
rows = pd.DataFrame(c.fetchall(), columns=['City', 'count'])
db.close()
rows
"""
Explanation: Verify the result of audit and update
End of explanation
"""
db = sqlite3.connect('sanfrancisco.db')
c = db.cursor()
query = """
SELECT nodes_tags.value, COUNT(*) AS num
FROM nodes_tags,
     (SELECT DISTINCT(id) FROM nodes_tags WHERE value = 'cafe') AS i
WHERE nodes_tags.id = i.id AND nodes_tags.key = 'name'
GROUP BY nodes_tags.value
ORDER BY num DESC
LIMIT 1;
"""
c.execute(query)
result = c.fetchall()
db.close()
result
"""
Explanation: Data overview part
End of explanation
"""
|
Qumulo/python-notebooks | notebooks/Raw REST examples for the Qumulo API with python.ipynb | gpl-3.0 | import os
import requests
import json
import pprint
# python + ssl on MacOSX is rather noisy against dev clusters
requests.packages.urllib3.disable_warnings()
# set your environment variables or fill in the variables below
API_HOSTNAME = os.environ.get('API_HOSTNAME', '{your-cluster-hostname}')
API_USER = os.environ.get('API_USER', '{api-cluster-user}')
API_PASSWORD = os.environ.get('API_PASSWORD', '{api-cluster-password}')
# Setting up URLs and default header parameters
root_url = 'https://' + API_HOSTNAME + ':8000'
who_am_i_url = root_url + '/v1/session/who-am-i'
login_url = root_url + '/v1/session/login'
default_header = {'content-type': 'application/json'}
"""
Explanation: Raw python requests against the Qumulo API via REST
This Python example illustrates how raw RESTful requests can be run against the Qumulo API. These patterns could be used to create bindings for other languages against Qumulo. While they are one way of using Python to interact with a Qumulo cluster, we recommend using Qumulo's Python bindings, installed via <code>pip install qumulo_api</code>
End of explanation
"""
post_data = {'username': API_USER, 'password': API_PASSWORD}
resp = requests.post(login_url,
data=json.dumps(post_data),
headers=default_header,
verify=False)
resp_data = json.loads(resp.text)
# Print the response for the login attempt.
pprint.pprint(resp_data)
"""
Explanation: Login to the Qumulo cluster via a "POST" to the login_url
End of explanation
"""
default_header['Authorization'] = 'Bearer ' + resp_data['bearer_token']
# A look at the current default requests header now
pprint.pprint(default_header)
"""
Explanation: Set up the Authorization bearer token header
End of explanation
"""
resp = requests.get(who_am_i_url,
headers=default_header,
verify=False)
# Print the response. Include the id, sid, and uid
pprint.pprint(json.loads(resp.text))
"""
Explanation: Run who am I via a raw "GET" request
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nuist/cmip6/models/sandbox-2/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-2', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NUIST
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What methodology is used for heat diffusion through snow in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
gaufung/PythonStandardLibrary | Algorithm/Itertools.ipynb | mit | from itertools import chain
for i in chain([1,2,3], ['a', 'b', 'c']):
print(i, end=' ')
from itertools import *
def make_iterables_to_chain():
yield [1, 2, 3]
yield ['a', 'b', 'c']
for i in chain.from_iterable(make_iterables_to_chain()):
print(i, end=' ')
print()
"""
Explanation: 1 Merging and Splitting Iterators
chain() takes several iterators as arguments and returns a single iterator that produces the contents of all of them as though they came from a single sequence.
End of explanation
"""
from itertools import *
r1 = range(3)
r2 = range(2)
for r12 in zip(r1,r2):
print(r12)
print()
print(list(zip_longest(r1,r2)))
print(list(zip_longest(r1,r2, fillvalue='a')))
"""
Explanation: zip() stops when its shortest input is exhausted; zip_longest() processes all of the inputs, substituting fillvalue (None by default) for any missing values.
End of explanation
"""
from itertools import *
print('Stop at 5')
for i in islice(range(100), 5):
print(i, end =' ')
print('\n')
print('start at 5, and stop at 10')
for i in islice(range(100), 5, 10):
print(i, end=' ')
print('\n')
print('by ten to 100')
for i in islice(range(100), 0,100, 10):
print(i, end=' ')
print('\n')
"""
Explanation: islice() returns selected items from the input iterator, by index.
End of explanation
"""
from itertools import *
r = islice(count(), 5)
r1,r2 = tee(r)
print('r1', list(r1))
print('r2', list(r2))
"""
Explanation: tee() returns several independent iterators (two by default) based on a single original input.
End of explanation
"""
from itertools import *
values = [(0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]
for i in starmap(lambda x, y: (x, y, x * y), values):
print('{} * {} = {}'.format(*i))
"""
Explanation: 2 Converting Inputs
End of explanation
"""
from itertools import *
for i in zip(count(1), ['a', 'b', 'c']):
print(i)
"""
Explanation: 3 Producing New Values
End of explanation
"""
import fractions
from itertools import *
start = fractions.Fraction(1, 3)
step = fractions.Fraction(1, 3)
for i in zip(count(start, step), ['a', 'b', 'c']):
print('{}: {}'.format(*i))
from itertools import *
for i in zip(range(7), cycle(['a', 'b', 'c'])):
print(i)
from itertools import *
for i in repeat('over-and-over', 5):
print(i)
"""
Explanation: count() takes optional start and step arguments.
End of explanation
"""
from itertools import *
def should_drop(x):
print('Testing:', x)
return x < 1
for i in dropwhile(should_drop, [-1, 0, 1, 2, -2]):
print('Yielding:', i)
"""
Explanation: 4 Filtering
End of explanation
"""
from itertools import *
def should_take(x):
print('Testing:', x)
return x < 2
for i in takewhile(should_take, [-1, 0, 1, 2, -2]):
print('Yielding:', i)
"""
Explanation: dropwhile() does not filter every item of the input; after the condition is false the first time, all of the remaining items in the input are returned.
End of explanation
"""
from itertools import *
every_third = cycle([False, False, True])
data = range(1, 10)
for i in compress(data, every_third):
print(i, end=' ')
print()
"""
Explanation: As soon as should_take() returns False, takewhile() stops processing the input.
compress() offers another way to filter the contents of an iterable. Instead of calling a function, it uses the values in another iterable to indicate when to accept a value and when to ignore it.
End of explanation
"""
import functools
from itertools import *
import operator
import pprint
@functools.total_ordering
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def __repr__(self):
return '({}, {})'.format(self.x, self.y)
def __eq__(self, other):
return (self.x, self.y) == (other.x, other.y)
def __gt__(self, other):
return (self.x, self.y) > (other.x, other.y)
# Create a dataset of Point instances
data = list(map(Point,
cycle(islice(count(), 3)),
islice(count(), 7)))
print('Data:')
pprint.pprint(data, width=35)
print()
print('Grouped, unsorted:')
for k, g in groupby(data, operator.attrgetter('x')):
print(k, list(g))
print()
# Sort the data
data.sort()
print('Sorted:')
pprint.pprint(data, width=35)
print()
print('Grouped, sorted:')
for k, g in groupby(data, operator.attrgetter('x')):
print(k, list(g))
print()
"""
Explanation: 5 Grouping Data
End of explanation
"""
from itertools import *
import pprint
FACE_CARDS = ('J', 'Q', 'K', 'A')
SUITS = ('H', 'D', 'C', 'S')
DECK = list(
product(
chain(range(2, 11), FACE_CARDS),
SUITS,
)
)
for card in DECK:
print('{:>2}{}'.format(*card), end=' ')
if card[1] == SUITS[-1]:
print()
"""
Explanation: The input sequence needs to be sorted on the key value in order for the groupings to work out as expected.
Nested for loops that iterate over multiple sequences can often be replaced with product(), which produces a single iterable whose values are the Cartesian product of the set of input values.
End of explanation
"""
|
hfoffani/deep-learning | image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
return x / 255.0
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def make_one_hots(n):
one_hots = {}
for i in range(n):
oh = np.zeros(n)
oh[i] = 1
one_hots[i] = oh
return one_hots
one_hots = make_one_hots(10)
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
return np.array([ one_hots[i] for i in x ])
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
x = tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x")
return x
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
y = tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
return y
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
xshape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([
conv_ksize[0], conv_ksize[1], xshape[3], conv_num_outputs], stddev=0.05))
bias = tf.Variable(tf.constant(0.1, shape=[conv_num_outputs]))
padding = 'SAME'
strides = [1, conv_strides[0], conv_strides[1], 1]
conv2d = tf.nn.conv2d(x_tensor, weight, strides, padding) + bias
conv2d = tf.nn.relu(conv2d)
ksize = [1, pool_ksize[0], pool_ksize[1], 1]
strides = [1, pool_strides[0], pool_strides[1], 1]
conv2d = tf.nn.max_pool(conv2d, ksize, strides, padding)
return conv2d
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
dim = np.prod(x_tensor.get_shape().as_list()[1:])
x2 = tf.reshape(x_tensor, [-1, dim])
return x2
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
xshape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([xshape[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.constant(0.1, shape=[num_outputs]))
fully = tf.nn.relu(tf.matmul(x_tensor, weight) + bias)
return fully
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
xshape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([xshape[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.constant(0.1, shape=[num_outputs]))
o = tf.matmul(x_tensor, weight) + bias
return o
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
conv_num_outputs_1 = 16
conv_ksize_1 = (5,5)
conv_strides_1 = (1,1)
pool_ksize_1 = (2,2)
pool_strides_1 = (1,1)
conv_num_outputs_2 = 64
conv_ksize_2 = (5,5)
conv_strides_2 = (1,1)
pool_ksize_2 = (2,2)
pool_strides_2 = (2,2)
conv_num_outputs_3 = 96
conv_ksize_3 = (2,2)
conv_strides_3 = (2,2)
pool_ksize_3 = (2,2)
pool_strides_3 = (2,2)
fully_numouts_1 = 300
fully_numouts_2 = 100
fully_numouts_3 = 20
num_outputs = 10
print('\nMODEL:')
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
x_tensor = conv2d_maxpool(x, conv_num_outputs_1, conv_ksize_1, conv_strides_1, pool_ksize_1, pool_strides_1)
print('CONV', x_tensor.get_shape().as_list())
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs_2, conv_ksize_2, conv_strides_2, pool_ksize_2, pool_strides_2)
print('CONV', x_tensor.get_shape().as_list())
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs_3, conv_ksize_3, conv_strides_3, pool_ksize_3, pool_strides_3)
print('CONV', x_tensor.get_shape().as_list())
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
print('FLAT', x_tensor.get_shape().as_list())
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x_tensor = fully_conn(x_tensor, fully_numouts_1)
print('FC', x_tensor.get_shape().as_list())
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
print('DROP')
x_tensor = fully_conn(x_tensor, fully_numouts_2)
print('FC', x_tensor.get_shape().as_list())
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
print('DROP')
x_tensor = fully_conn(x_tensor, fully_numouts_3)
print('FC', x_tensor.get_shape().as_list())
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
print('DROP')
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
o = output(x_tensor, num_outputs)
print('OUT:', o.get_shape().as_list())
# TODO: return output
return o
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, feed_dict={ x:feature_batch, y:label_batch, keep_prob:keep_probability} )
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
    cst = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Loss %f - Accuracy %.1f%%' % (cst, acc*100))
pass
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 50
batch_size = 64
keep_probability = .5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set it to a common memory-friendly size:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you've got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
mercybenzaquen/foundations-homework | foundations_hw/12/311 time series homework.ipynb | mit | #df = pd.read_csv("small-311-2015.csv")
import pandas as pd
import dateutil.parser
import matplotlib.pyplot as plt
df = pd.read_csv("311-2014.csv", nrows=200000)
df.head(2)
df.info()
def parse_date(str_date):
return dateutil.parser.parse(str_date)
df['created_dt']= df['Created Date'].apply(parse_date)
df.head(3)
df.info()
"""
Explanation: First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.
Importing and preparing your data
Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
End of explanation
"""
df["Complaint Type"].value_counts().head(1)
"""
Explanation: What was the most popular type of complaint, and how many times was it filed?
End of explanation
"""
df["Complaint Type"].value_counts().head(5).sort_values().plot(kind='barh')
"""
Explanation: Make a horizontal bar graph of the top 5 most frequent complaint types.
End of explanation
"""
df["Borough"].value_counts()
people_bronx= 1438159
people_queens= 2321580
people_manhattan=1636268
people_brooklyn= 2621793
people_staten_island= 473279
complaints_per_capita_bronx= 29610/people_bronx
complaints_per_capita_bronx
complaints_per_capita_queens=46824/people_queens
complaints_per_capita_queens
complaints_per_capita_manhattan=42050/people_manhattan
complaints_per_capita_manhattan
# use the borough's complaint count (not its population) as the numerator
complaints_per_capita_staten_island = df["Borough"].value_counts()['STATEN ISLAND'] / people_staten_island
complaints_per_capita_staten_island
complaints_per_capita_brooklyn = df["Borough"].value_counts()['BROOKLYN'] / people_brooklyn
complaints_per_capita_brooklyn
"""
Explanation: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
End of explanation
"""
df.index = df['created_dt']
#del df['Created Date']
df.head()
print("There were", len(df['2015-03']), "cases filed in March")
print("There were", len(df['2015-05']), "cases filed in May")
"""
Explanation: According to your selection of data, how many cases were filed in March? How about May?
End of explanation
"""
df['2015-04-01']
"""
Explanation: I'd like to see all of the 311 complaints called in on April 1st.
Surprise! We couldn't do this in class, but it was just a limitation of our data set
End of explanation
"""
df['2015-04-01']['Complaint Type'].value_counts().head(3)
df.info()
"""
Explanation: What was the most popular type of complaint on April 1st?
What were the most popular three types of complaint on April 1st
End of explanation
"""
df.resample('M').count()
df.resample('M').count().index[0]
import numpy as np
np.__version__
df.resample('M').count().plot(y="Unique Key")
ax= df.groupby(df.index.month).count().plot(y='Unique Key', legend=False)
ax.set_xticks([1,2,3,4,5,6,7,8,9,10,11, 12])
ax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
ax.set_ylabel("Number of Complaints")
ax.set_title("311 complains in 2015")
"""
Explanation: What month has the most reports filed? How many? Graph it.
End of explanation
"""
#df.resample('W').count().head(5)
df.resample('W').count().plot(y="Unique Key", color= "purple")
"""
Explanation: What week of the year has the most reports filed? How many? Graph the weekly complaints.
End of explanation
"""
df[df['Complaint Type'].str.contains("Noise")].head()
noise_df= df[df['Complaint Type'].str.contains("Noise")]
noise_graph= noise_df.groupby(noise_df.index.month).count().plot(y='Unique Key', legend=False)
noise_graph.set_xticks([1,2,3,4,5,6,7,8,9,10,11, 12])
noise_graph.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
noise_graph.set_ylabel("Number of Noise Complaints")
noise_graph.set_title("311 noise complains in 2015")
noise_df.groupby(by=noise_df.index.hour)['Unique Key'].count().plot()
noise_graph= noise_df.groupby(noise_df.index.dayofweek).count().plot(y='Unique Key', legend=False)
noise_graph.set_xticks([0,1,2,3,4,5,6]) # dayofweek is 0-based (Monday == 0)
noise_graph.set_xticklabels(['Mon', 'Tues', 'Wed', 'Thur', 'Fri', 'Sat', 'Sun'])
noise_graph.set_ylabel("Number of Noise Complaints")
noise_graph.set_title("311 noise complains in 2015")
"""
Explanation: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
End of explanation
"""
daily_count= df['Unique Key'].resample('D').count().sort_values(ascending=False)
top_5_days= daily_count.head(5)
top_5_days
ax = top_5_days.plot(kind='bar') # I don't know how to put names to the labels
ax.set_title("Top 5 days")
ax.set_xlabel("Day")
ax.set_ylabel("Complaints")
"""
Explanation: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
End of explanation
"""
hour_graph= df.groupby(df.index.hour).count().plot(y='Unique Key', legend=False)
hour_graph.set_xticks([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23])
hour_graph.set_title("A day of complaints")
hour_graph.set_xlabel("Hours")
hour_graph.set_ylabel("Complaints")
"""
Explanation: What hour of the day are the most complaints? Graph a day of complaints.
End of explanation
"""
twelve_am_complaints= df[df.index.hour <1]
twelve_am_complaints.head()
twelve_am_complaints['Complaint Type'].value_counts().head(5)
one_am_complaints= df[df.index.hour == 1]
one_am_complaints['Complaint Type'].value_counts().head(5)
eleven_pm_complaints= df[df.index.hour == 23]
eleven_pm_complaints['Complaint Type'].value_counts().head(5)
"""
Explanation: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
End of explanation
"""
twelve_am_complaints.groupby(twelve_am_complaints.index.minute).count()
"""
Explanation: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
End of explanation
"""
df['Agency'].value_counts().head(5)
df_NYPD = df[df['Agency'] == 'NYPD']
df_HPD = df[df['Agency'] == 'HPD']
df_DOT = df[df['Agency'] == 'DOT']
df_DPR= df[df['Agency'] == 'DPR']
df_DOHMH= df[df['Agency'] == 'DOHMH']
all_graph = df_NYPD.groupby(by= df_NYPD.index.hour).count().plot(y='Unique Key', label='NYPD complaints')
all_graph.set_xticks([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23])
all_graph.set_title("A day of complaints by the top 5 agencies")
all_graph.set_xlabel("Hours")
all_graph.set_ylabel("Complaints")
df_HPD.groupby(by= df_HPD.index.hour).count().plot(y='Unique Key', ax=all_graph , label='HPD complaints')
df_DOT.groupby(by= df_DOT.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DOT complaints')
df_DPR.groupby(by= df_DPR.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DPR complaints')
df_DOHMH.groupby(by= df_DOHMH.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DOHMH complaints')
"""
Explanation: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
End of explanation
"""
all_graph = df_NYPD.groupby(by= df_NYPD.index.weekofyear).count().plot(y='Unique Key', label='NYPD complaints')
#all_graph.set_xticks([1,50])
all_graph.set_title("A year of complaints by the top 5 agencies")
all_graph.set_xlabel("Weeks")
all_graph.legend(loc='center left', bbox_to_anchor=(1, 0.5))
df_HPD.groupby(by= df_HPD.index.week).count().plot(y='Unique Key', ax=all_graph , label='HPD complaints')
df_DOT.groupby(by= df_DOT.index.week).count().plot(y='Unique Key', ax=all_graph , label='DOT complaints')
df_DPR.groupby(by= df_DPR.index.week).count().plot(y='Unique Key', ax=all_graph , label='DPR complaints')
df_DOHMH.groupby(by= df_DOHMH.index.week).count().plot(y='Unique Key', ax=all_graph , label='DOHMH complaints')
plt.legend(bbox_to_anchor=(0, 1), loc='best', ncol=1)
print("""May and June are the months with more complaints, followed by October, November and December.
In May the NYPD and HPD have an odd number of complaints""")
"""
Explanation: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
End of explanation
"""
August_July = df["2015-07":"2015-08"]
August_July_complaints = August_July['Complaint Type'].value_counts().head(5)
August_July_complaints
May = df['2015-05']
May_complaints= May['Complaint Type'].value_counts().head(5)
May_complaints
# August_July_vs_May= August_July_complaints.plot(y='Unique Key', label='August - July complaints')
# August_July_vs_May.set_ylabel("Number of Complaints")
# August_July_vs_May.set_title("August-July vs May Complaints")
# May['Complaint Type'].value_counts().head(5).plot(y='Unique Key', ax=August_July_vs_May, label='May complaints')
# August_July_vs_May.set_xticks([1,2,3,4,5])
# August_July_vs_May.set_xticklabels(['Illegal Parking', 'Blocked Driveway', 'Noise - Street/Sidewalk', 'Street Condition', 'Noise - Commercial'])
#Most popular complaints of the HPD
df_HPD['Complaint Type'].value_counts().head(5)
summer_complaints= df_HPD["2015-06":"2015-08"]['Complaint Type'].value_counts().head(5)
summer_complaints
winter_complaints= df_HPD["2015-01":"2015-02"]['Complaint Type'].value_counts().head(5)
winter_complaints
winter_complaints_dec= df_HPD["2015-12"]['Complaint Type'].value_counts().head(5)
winter_complaints_dec
# use .add with fill_value=0 so complaint types missing from one period don't become NaN
winter_results = df_HPD["2015-12"]['Complaint Type'].value_counts().add(df_HPD["2015-01":"2015-02"]['Complaint Type'].value_counts(), fill_value=0)
winter_results
"""
Explanation: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
End of explanation
"""
|
boffi/boffi.github.io | dati_2018/03/PieceWise_Exact_Integration.ipynb | mit | T=1.0 # Natural period of the oscillator
w=2*pi # circular frequency of the oscillator
m=1000.0 # oscillator's mass, in kg
k=m*w*w # oscillator stifness, in N/m
z=0.05 # damping ratio over critical
c=2*z*m*w # damping
wd=w*sqrt(1-z*z) # damped circular frequency
ratio=sqrt(1-z*z) # ratio damped/undamped frequencies
"""
Explanation: Piecewise Exact Integration
The Dynamical System
We want to study a damped SDOF system, so characterized
End of explanation
"""
D=0.005 # static displacement, 5mm
P=D*k # force amplitude
"""
Explanation: The excitation is given by a force such that the static displacement is 5 mm, modulated by a sine in resonance with the dynamic sistem, i.e., $\omega=\omega_n$.
End of explanation
"""
def exact(t):
return D*((z*sin(wd*t)/ratio+cos(wd*t))*exp(-z*w*t)-cos(w*t))/(2*z)
t = np.linspace(0.0, 2.0, 1001)
plt.plot(t, exact(t)); plt.grid()
"""
Explanation: For such a system, we know exactly the response. The particular integral is
$$\xi(t)=-\frac{\cos\omega t}{2\zeta}$$
(why?) and imposing initial rest conditions the system response is
$$x(t) = \frac{\Delta_{st}}{2\zeta} \left(\left(\frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_D t + \cos\omega_D t\right)e^{-\zeta\omega t} - \cos\omega t\right),\qquad \omega=\omega_n.
$$
End of explanation
"""
def step(x0,v0,p0,p1,h,cdh,sdh):
dst=p0/k
ddst=(p1-p0)/k
B = x0 - dst + ((2*z)/w)*(ddst/h)
A = (v0 + z*w*B - ddst/h)/wd
x1 = A*sdh + B*cdh + dst + ddst - ddst/h * 2*z/w
v1 = A*(wd*cdh-z*w*sdh) - B*(z*w*cdh+wd*sdh) + ddst/h
return x1, v1
"""
Explanation: Numerical integration
We define a function that, given the initial conditions and the load, returns the displacement and the velocity at the end of the step.
End of explanation
"""
def resp(nstep):
T = np.linspace(0.0, 2.0, 2*nstep + 1)
X = np.zeros(2*nstep + 1)
h=1./float(nstep)
cdh=cos(wd*h)*exp(-z*w*h)
sdh=sin(wd*h)*exp(-z*w*h)
x1=0. ; v1=0. ; p1=0
for i, t in enumerate(T):
X[i] = x1
x0=x1 ; v0=v1 ; p0=p1 ; p1=P*sin(w*(t+h))
x1,v1=step(x0,v0,p0,p1,h, cdh, sdh)
return T, X
"""
Explanation: With those pieces in place, we can define a function that, for a given number of steps per period computes the response on the interval $0 \le t \le 2.0$.
End of explanation
"""
t_x = {n:resp(n) for n in (4, 8, 16)}
"""
Explanation: Let's compute the responses for different numbers of steps, and store them away too...
End of explanation
"""
plt.plot(t, exact(t), label='Analytical Response', lw=1.3)
for np in sorted(t_x.keys()):
plt.plot(*t_x[np], label='Npoints/period = %2d'%np)
plt.grid()
plt.legend(loc=3)
plt.xlabel('Time t/s')
plt.ylabel('Displacement x/m');
"""
Explanation: Eventually we can plot the numerical responses along with the exact response
End of explanation
"""
t16, x16 = t_x[16]
plt.plot(t16, exact(t16)-x16)
"""
Explanation: But... there are only two numerical curves and I've plotted three of them.
Let's plot the difference between the exact response and the response computed at 16 samples per period...
End of explanation
"""
from scipy.interpolate import InterpolatedUnivariateSpline as spline
smooth16 = spline(*t_x[16])
plt.plot(t, exact(t), label='Analytical')
plt.plot(t, smooth16(t), label='Numerical, 16 ppc, smoothed')
plt.legend(loc='best')
plt.grid()
"""
Explanation: As you can see, the max difference is about 0.3 mm, to be compared with a max response of almost 25 mm, hence an error on the order of 1.2% that in the previous plot led to the apparent disappearance of the NSTEP=16 curve.
Just for fun, how could you compute a smooth curve that interpolates the results of the numerical analysis? Easy if you know the answer... smooth16 is, technically speaking, a class instance (it has methods and data) but it is also a callable (a function of sorts)...
End of explanation
"""
|
matmodlab/matmodlab2 | notebooks/MooneyRivlin.ipynb | bsd-3-clause | from bokeh.io import output_notebook
from bokeh.plotting import *
from matmodlab2 import *
from numpy import *
import numpy as np
from plotting_helpers import create_figure
output_notebook()
"""
Explanation: Mooney-Rivlin Hyperelasticity
Overview
A Mooney-Rivlin hyperelastic material is one for which the derivatives of the free energy with respect to the invariants of stretch are constant. The Mooney-Rivlin model
- is a special case of the more general polynomial hyperelastic model
- has 2 elastic constants, typically fit to uniaxial extension/compression, equibiaxial extension/compression, or shear experimental data
- is usually valid for strains less than 100%
See Also
User Defined Materials
Linear Elastic Material
Polynomial Hyperelastic Material
Contents
<a href='#basic'>Fundamental Equations</a>
<a href='#implement'>Model Implementation</a>
<a href='#verify'>Model Verification</a>
End of explanation
"""
%pycat ../matmodlab2/materials/mooney_rivlin.py
"""
Explanation: <a name='basic'></a>
Fundamental Equations
The Mooney-Rivlin material is a special case of the more general polynomial hyperelastic model defined by the following free energy potential
$$
a = c_{10}\left(\overline{I}_1 - 3\right) + c_{01}\left(\overline{I}_2 - 3\right)
+ \frac{1}{D_1}\left(J-1\right)^2
$$
where $c_{10}$, $c_{01}$, and $D_1$ are material parameters, the $\overline{I}_i$ are the isochoric invariants of the right Cauchy deformation tensor $C_{IJ} = F_{kI}F_{kJ}$, $F_{iJ}$ are the components of the deformation gradient tensor, and $J$ is the determinant of the deformation gradient.
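As a quick numerical anchor for these definitions, here is a pure-Python sketch evaluating J, Ī1, and Ī2 for a diagonal deformation gradient; in the undeformed state both isochoric invariants equal 3, which is why the potential above vanishes there:

```python
def isochoric_invariants(a, b, c):
    # J, I1bar, I2bar for a diagonal deformation gradient F = diag(a, b, c)
    J = a * b * c
    C = (a * a, b * b, c * c)                      # principal values of C = F^T F
    I1 = sum(C)
    I2 = C[0] * C[1] + C[1] * C[2] + C[2] * C[0]
    return J, J ** (-2.0 / 3.0) * I1, J ** (-4.0 / 3.0) * I2

print(isochoric_invariants(1.0, 1.0, 1.0))  # (1.0, 3.0, 3.0)
```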
Second Piola-Kirchhoff Stress
The components of the second Piola-Kirchhoff stress $S_{IJ}$ are given by
$$
\frac{1}{2\rho_0} S_{IJ} = \frac{\partial a}{\partial C_{IJ}}
$$
For an isotropic material, the free energy $a$ is a function of $C_{IJ}$ through only its invariants, giving for $S_{IJ}$
$$
\frac{1}{2\rho_0}S_{IJ}
= \frac{\partial a}{\partial\overline{I}_1}\frac{\partial\overline{I}_1}{\partial C_{IJ}}
+ \frac{\partial a}{\partial\overline{I}_2}\frac{\partial\overline{I}_2}{\partial C_{IJ}}
+ \frac{\partial a}{\partial J}\frac{\partial J}{\partial C_{IJ}}
$$
The partial derivatives of the energy with respect to the invariants are
$$
\frac{\partial a}{\partial\overline{I}_1} = c_{10}, \quad
\frac{\partial a}{\partial\overline{I}_2} = c_{01}, \quad
\frac{\partial a}{\partial J} = \frac{2}{D_1}\left(J-1\right)
$$
and the partials of the invariants with respect to $C_{IJ}$ are
$$
\begin{align}
\frac{\partial\overline{I}_1}{\partial C_{IJ}}
&= \frac{\partial\left( J^{-2/3} C_{KK}\right)}{\partial C_{IJ}} \
%&= \frac{\partial J^{-2/3}}{\partial C_{IJ}} I_1 + J^{-2/3} \frac{\partial C_{KK}}{\partial C_{IJ}} \
%&= -\frac{2}{3}J^{-5/3}\frac{\partial J}{\partial C_{IJ}} I_1 + J^{-2/3} \delta_{IJ} \
%&= -\frac{2}{3}J^{-5/3}\frac{1}{2}J C_{IJ}^{-1} I_1 + J^{-2/3} \delta_{IJ} \
&= J^{-2/3}\left(\delta_{IJ} - \frac{1}{3}I_1 C_{IJ}^{-1}\right) \
\end{align}
$$
$$
\begin{align}
\frac{\partial\overline{I}_2}{\partial C_{IJ}}
&= \frac{\partial\left( J^{-4/3}\left(C_{KK}^2 - C_{MN}C_{MN}\right)\right)}{\partial C_{IJ}} \
%&= \frac{\partial J^{-4/3}}{\partial C_{IJ}} \left(C_{KK}^2 - C_{MN}C_{MN}\right) + J^{-4/3} \frac{\partial \left(C_{KK}^2 - C_{MN}C_{MN}\right)}{\partial C_{IJ}} \
&= J^{-4/3}\left(I_1\delta_{IJ} - C_{IJ} - \frac{2}{3} I_2 C_{IJ}^{-1}\right) \
\end{align}
$$
$$
\begin{align}
\frac{\partial J}{\partial C_{IJ}} &= \frac{\partial \sqrt{\det C_{KL}}}{\partial C_{IJ}} \
%&= \frac{1}{2 \sqrt{\det C_{KL}}} \frac{\partial\det C_{KL}}{\partial C_{IJ}} \
%&= \frac{1}{2 J} \frac{\partial\det C_{KL}}{\partial C_{IJ}} \
&= \frac{1}{2} J C_{IJ}^{-1} \
\end{align}
$$
Combining the above results, we arrive at the following expression for the components $S_{IJ}$:
$$
\frac{1}{2\rho_0}S_{IJ}
= c_{10}J^{-2/3}\left(\delta_{IJ} - \frac{1}{3}I_1 C_{IJ}^{-1}\right)
+ c_{01}J^{-4/3}\left(I_1\delta_{IJ} - C_{IJ} - \frac{2}{3} I_2 C_{IJ}^{-1}\right)
+ \frac{J}{D_1}\left(J-1\right) C_{IJ}^{-1}
$$
Cauchy Stress
The components of the Cauchy stress tensor $\sigma_{ij}$ are given by the push-forward $S_{IJ}$:
$$
J \sigma_{ij} = F_{iM}S_{MN}F_{jN}
$$
$$
\begin{align}
\frac{J}{2\rho_0}\sigma_{ij} &= F_{iM}\left[
c_{10}J^{-2/3}\left(\delta_{MN} - \frac{1}{3}I_1 C_{MN}^{-1}\right)
+ c_{01}J^{-4/3}\left(I_1\delta_{MN} - C_{MN} - \frac{2}{3} I_2 C_{MN}^{-1}\right)
+ \frac{J}{D_1}\left(J-1\right) C_{MN}^{-1}\right] F_{jN} \
&= c_{10}J^{-2/3}\left(F_{iN}F_{jN} - \frac{1}{3}I_1 F_{iM}C_{MN}^{-1}F_{jN}\right)
+ c_{01}J^{-4/3}\left(I_1F_{iN}F_{jN} - F_{iM}C_{MN}F_{jN} - \frac{2}{3} I_2 F_{iM}C_{MN}^{-1}F_{jN}\right) \
&\qquad + \frac{J}{D_1}\left(J-1\right) F_{iM}C_{MN}^{-1}F_{jN} \
\end{align}
$$
Recognizing that
$$
C_{MN} = F_{kM}F_{kN} \Rightarrow C_{MN}^{-1} = F_{kN}^{-1}F_{kM}^{-1}
$$
the components of the Cauchy stress can be written as
$$
\begin{align}
\frac{J}{2\rho_0}\sigma_{ij}
&= c_{10}J^{-2/3}\left(F_{iN}F_{jN} - \frac{1}{3}I_1 F_{iM}F_{kN}^{-1}F_{kM}^{-1}F_{jN}\right)
+ c_{01}J^{-4/3}\left(I_1F_{iN}F_{jN} - F_{iM}F_{kM}F_{kN}F_{jN} - \frac{2}{3} I_2 F_{iM}F_{kN}^{-1}F_{kM}^{-1}F_{jN}\right) \
&\qquad + \frac{J}{D_1}\left(J-1\right) F_{iM}F_{kN}^{-1}F_{kM}^{-1}F_{jN} \
&= c_{10}J^{-2/3}\left(B_{ij} - \frac{1}{3}I_1 \delta_{ij} \right)
+ c_{01}J^{-4/3}\left(I_1B_{ij} - B_{ik}B_{kj} - \frac{2}{3} I_2 \delta_{ij}\right)
+ \frac{J}{D_1}\left(J-1\right) \delta_{ij} \
&= J^{-2/3} \left(c_{10} + c_{01}\overline{I}_1 \right)B_{ij}
- c_{01}J^{-4/3}B_{ik}B_{kj}
+ \left(
\frac{J}{D_1}\left(J-1\right) - \frac{1}{3}\left(c_{10}\overline{I}_1 + 2c_{01}\overline{I}_2\right)
\right)\delta_{ij}
\end{align}
$$
where $B_{ij} = F_{iN}F_{jN}$ is the left Cauchy deformation tensor. Finally, the components of the Cauchy stress are given by
$$
\begin{align}
\sigma_{ij} &= \frac{2\rho_0}{J} \left(
J^{-2/3} \left(c_{10} + c_{01}\overline{I}_1 \right)B_{ij}
- c_{01}J^{-4/3}B_{ik}B_{kj}\right)
+ \left(
\frac{2\rho_0}{D_1}\left(J-1\right) - \frac{2\rho_0}{3J}\left(c_{10}\overline{I}_1 + 2c_{01}\overline{I}_2\right)
\right)\delta_{ij} \
&= \frac{2\rho_0}{J} \left(
\left(c_{10} + c_{01}\overline{I}_1 \right)\overline{B}_{ij}
- c_{01}\overline{B}_{ik}\overline{B}_{kj}\right)
+ \left(
\frac{2\rho_0}{D_1}\left(J-1\right) - \frac{2\rho_0}{3J}\left(c_{10}\overline{I}_1 + 2c_{01}\overline{I}_2\right)
\right)\delta_{ij} \
\end{align}
$$
where $\overline{B}_{ij} = J^{-2/3}B_{ij}$
<a name='implement'></a>
Matmodlab Implementation
The Mooney-Rivlin material is implemented as the MooneyRivlinMaterial class in matmodlab2/materials/mooney_rivlin.py. The file can be viewed by executing the following cell.
End of explanation
"""
from sympy import Symbol, Matrix, Rational, symbols, sqrt
lam = Symbol('lambda')
F = Matrix(3, 3, [lam, 0, 0, 0, 1/sqrt(lam), 0, 0, 0, 1/sqrt(lam)])
B = Matrix(3, 3, F.dot(F.T))
Bsq = Matrix(3, 3, B.dot(B))
I = Matrix(3, 3, lambda i,j: 1 if i==j else 0)
I1 = B.trace()
I2 = ((B.trace()) ** 2 - Bsq.trace()) / 2
J = F.det()
X = J ** Rational(1, 3)
C1, C2, D1 = symbols('C10 C01 D1')
I1B = I1 / X ** 2
I2B = I2 / X ** 4
S = 2 / J * (1 / X ** 2 * (C1 + I1B * C2) * B - 1 / X ** 4 * C2 * Bsq) \
+ (2 / D1 * (J - 1) - 2 * (C1 * I1B + 2 * C2 * I2B) / 3) * I
(S[0,0] - S[1,1]).simplify()
"""
Explanation: <a name='verify'></a>
Verification
Uniaxial Stress
For an incompressible isotropic material, uniaxial stress is produced by the following deformation state
$$
[F] = \begin{bmatrix}\lambda & & \\ & \frac{1}{\sqrt{\lambda}} & \\ & & \frac{1}{\sqrt{\lambda}} \end{bmatrix}
$$
The stress difference $\sigma_{\text{axial}} - \sigma_{\text{lateral}}$ is given by
End of explanation
"""
# Hyperelastic parameters, D1 set to a large number to force incompressibility
parameters = {'D1': 1.e12, 'C10': 1e6, 'C01': .1e6}
# stretch to 300%
lam = linspace(.5, 3, 50)
# Set up the simulator
mps = MaterialPointSimulator('test1')
mps.material = MooneyRivlinMaterial(**parameters)
# Drive the *incompressible* material through a path of uniaxial stress by
# prescribing the deformation gradient.
Fij = lambda x: (x, 0, 0, 0, 1/sqrt(x), 0, 0, 0, 1/sqrt(x))
mps.run_step('F', Fij(lam[0]), frames=10)
mps.run_step('F', Fij(1), frames=1)
mps.run_step('F', Fij(lam[-1]), frames=20)
# plot the analytic solution and the simulation
p = create_figure(bokeh=True, x_axis_label='Stretch', y_axis_label='Stress')
C10, C01 = parameters['C10'], parameters['C01']
# analytic solution for true and engineering stress
s = 2*C01*lam - 2*C01/lam**2 + 2*C10*lam**2 - 2*C10/lam
# plot the analytic solutions
p.line(lam, s, color='blue', legend='True', line_width=2)
p.line(lam, s/lam, color='green', legend='Engineering', line_width=2)
lam_ = np.exp(mps.get('E.XX'))
ss = mps.get('S.XX') - mps.get('S.ZZ')
p.circle(lam_, ss, color='orange', legend='Simulation, True')
p.circle(lam_, ss/lam_, color='red', legend='Simulation, Engineering')
p.legend.location = 'top_left'
show(p)
# check the actual solutions
assert abs(amax(ss) - amax(s)) / amax(s) < 1e-6
assert abs(amin(ss) - amin(s)) < 1e-6
"""
Explanation: We now exercise the Mooney-Rivlin material model using Matmodlab
End of explanation
"""
|
ogaway/Matching-Market | One-to-One.ipynb | gpl-3.0 | # coding: UTF-8
%matplotlib inline
import matchfuncs as mf
"""
Explanation: One-to-One Matching
End of explanation
"""
prop_prefs = [[0, 1, 2],
[0, 2, 1],
[2, 0, 1]]
resp_prefs = [[2, 0, 1],
[2, 0, 1],
[1, 2, 0]]
"""
Explanation: The Stable Marriage Problem
Consider a stable marriage problem with three men and three women. Label the men M0, M1, M2 and the women F0, F1, F2, and suppose each person's first through third partner preferences are given by the tables below.
|| 1st choice | 2nd choice | 3rd choice |
|:-----:|:-----------:|:------------:|:------------:|
|M0| F0 | F1 | F2 |
|M1| F0 | F2 | F1 |
|M2| F2 | F0 | F1 |
|| 1st choice | 2nd choice | 3rd choice |
|:-----:|:-----------:|:------------:|:------------:|
|F0| M2 | M0 | M1 |
|F1| M2 | M0 | M1 |
|F2| M1 | M2 | M0 |
Below, with the men proposing, we examine how the resulting couples differ between the naive Normal Matching algorithm (NM) and the Deferred Acceptance algorithm (DA).
End of explanation
"""
prop_matched, resp_matched = mf.BOS(prop_prefs, resp_prefs)
prop_matched
resp_matched
mf.Graph(prop_matched, resp_matched)
"""
Explanation: 1) Normal Matching
Matching process
M0 and M1 propose to F0, while M2 proposes to F2.
M2 faces no competition and pairs with F2, but M0 and M1 compete. Since F0 prefers M0 to M1, she rejects M1 and accepts M0.
Finally, the remaining M1 proposes to the remaining F1 and they pair up.
Matching result
M0 - F0
M1 - F1
M2 - F2
The problem
M1 ends up with his third choice, F1, and F2 ends up with her second choice, M2. But if M1 and F2 paired up instead, M1 would get his second choice and F2 her first choice, so both would be better off. M1 and F2 therefore have an incentive to abandon this matching and "elope", which means the matching is unstable.
End of explanation
"""
prop_matched, resp_matched = mf.DA(prop_prefs, resp_prefs)
prop_matched
resp_matched
mf.Graph(prop_matched, resp_matched)
"""
Explanation: 2) Deferred Acceptance
Matching process
M0 and M1 propose to F0, while M2 proposes to F2.
M2 faces no competition and is tentatively matched with F2, but M0 and M1 compete. Since F0 prefers M0 to M1, she rejects M1 and tentatively accepts M0.
The rejected M1 proposes to his second choice, F2. F2 was tentatively holding M2's proposal, but since she prefers M1 to M2, she rejects M2 and tentatively accepts M1.
The rejected M2 proposes to his second choice, F0. F0 was tentatively holding M0's proposal, but since she prefers M2 to M0, she rejects M0 and tentatively accepts M2.
The rejected M0 proposes to his second choice, F1, and is matched with her without competition.
Matching result
M0 - F1
M1 - F2
M2 - F0
Under this algorithm no pair has an incentive to elope: the matching is stable, and the problem that arose under the previous algorithm is avoided.
End of explanation
"""
|
arnoldlu/lisa | ipynb/examples/energy_meter/EnergyMeter_HWMON.ipynb | apache-2.0 | import logging
from conf import LisaLogging
LisaLogging.setup()
"""
Explanation: Energy Meter Examples
Linux Kernel HWMon
More details can be found at https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#linux-hwmon.
End of explanation
"""
# Generate plots inline
%matplotlib inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
"""
Explanation: Import required modules
End of explanation
"""
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
# Devlib modules to load
"modules" : ["cpufreq"], # Required by rt-app calibration
# Folder where all the results will be collected
"results_dir" : "EnergyMeter_HWMON",
# Energy Meters Configuration for BayLibre's ACME Cape
"emeter" : {
"instrument" : "hwmon",
"conf" : {
# Prefixes of the HWMon labels
'sites' : ['a53', 'a57'],
# Type of hardware monitor to be used
'kinds' : ['energy']
},
'channel_map' : {
'LITTLE' : 'a53',
'big' : 'a57',
}
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
# "rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
"""
Explanation: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
End of explanation
"""
# Create an RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# EnergyMeter Start
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
# EnergyMeter Stop and samples collection
nrg_report = te.emeter.report(te.res_dir)
logging.info("Collected data:")
!tree $te.res_dir
"""
Explanation: Workload Execution and Power Consumptions Samping
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
Each EnergyMeter derived class has two main methods: reset and report.
- The reset method will reset the energy meter and start sampling from channels specified in the target configuration. <br>
- The report method will stop capture and will retrieve the energy consumption data. This returns an EnergyReport composed of the measured channels energy and the report file. Each of the samples can also be obtained, as you can see below.
End of explanation
"""
logging.info("Measured channels energy:")
logging.info("%s", nrg_report.channels)
logging.info("Generated energy file:")
logging.info(" %s", nrg_report.report_file)
!cat $nrg_report.report_file
"""
Explanation: Power Measurements Data
End of explanation
"""
|
tjwei/HackNTU_Data_2017 | Week06/04-Keras-Intro.ipynb | mit | from keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(units=10, input_dim=784))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
from IPython.display import SVG, display
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
model.fit(train_X, train_Y, validation_data=(validation_X, validation_Y), batch_size=128, epochs=15)
# Predict the first 20 rows of test_X
model.predict_classes(test_X[:20])
# Check against the answers
test_y[:20]
# Check the test accuracy
model.evaluate(test_X, test_Y)
"""
Explanation: logistic regression
End of explanation
"""
# Reference answer
#%load q_keras_cnn.py
"""
Explanation: Q
Change the optimizer to "adam"
Change the optimizer to keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
Build a convolutional model
Our previous network architecture:
* convolution 2d kernel=(3,3), filters=32
* relu
* max pool
* convolution 2d kernel=(3,3), filters=64
* relu
* max pool
* dense units=1024
* relu
* dropout (rate=0.8) # skip this layer for now
* dense units = 10
* softmax
Try to build this network,
then train it.
The first few lines can be written like this
python
from keras.layers import Dense, Activation, Conv2D, MaxPool2D, Reshape
model = Sequential()
model.add(Reshape((28, 28, 1), input_shape=(784,) ))
model.add(Conv2D(filters=32, kernel_size=(3,3), padding='same', activation="relu"))
End of explanation
"""
|
plablo09/geo_context | geo_context_pipeline.ipynb | apache-2.0 | import numpy as np
import pandas as pd
from sklearn import preprocessing
from sklearn.cross_validation import train_test_split
from helpers.models import fit_model
from helpers.helpers import make_binary, class_info
# set random state for comparability
random_state = np.random.RandomState(0)
"""
Explanation: Workflow for using Geographic Context as input to classify sentiment in tweets.
General imports
End of explanation
"""
# read data
context = pd.read_csv('data/muestra_variables.csv')
# select variable columns
cols_select = context.columns[6:]
variables = context.ix[:,cols_select]
for c in ['no_se','uname','content','cve_mza']:
del variables[c]
# reclass intervalo as numerical
def intervalo_to_numbers(x):
    equiv = {'sun':0,'mon':1,'tue':2,'wed':3,'thu':4,'fri':5,'sat':6}
interval = 0.16666*int(x.split('.')[1])
day = x.split('.')[0]
valor = equiv[day] + interval
return valor
reclass = variables['intervalo'].apply(intervalo_to_numbers)
# drop old 'intervalo' column and replace it with numerical values
del variables['intervalo']
variables = variables.join(reclass,how='inner')
"""
Explanation: Preprocessing
Read the data and select the study variables.
We also recode the "intervalo" variable as numeric
End of explanation
"""
data = variables.as_matrix()
data_Y = data[:,0]
data_X = data[:,1:]
print("Initial label distribution")
class_info(data_Y)
"""
Explanation: Get the data as an np.array and split it into predictor (X) and target (Y)
End of explanation
"""
data_X, data_Y = data_X[data_Y != 4], data_Y[data_Y != 4]
"""
Explanation: Remove the observations with label 4 (unclear what they represent)
End of explanation
"""
Y_pos_neu = make_binary(data_Y, set((1.,2.)))
Y_neg_neu = make_binary(data_Y, set((3.,2.)))
print("Label distribution after binarization")
print("Pos + Neu")
class_info(Y_pos_neu)
print()
print("Neg + Neu")
class_info(Y_neg_neu)
"""
Explanation: We make two binarizations of the data: in one we merge the Pos and Neu classes (labels 1 and 2), and in the other we merge Neg and Neu (labels 3 and 2).
In the first case, the problem becomes finding all the non-positive tweets, while in the second, the problem is finding all the non-negative ones. So the positive label in the first case is the non-negatives, while in the second case it is the non-positives.
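make_binary is imported from helpers.helpers, which is not shown here. A plausible minimal implementation (an assumption; the real helper may differ) simply collapses the chosen labels to the positive class:

```python
import numpy as np

def make_binary_sketch(y, positive_labels):
    # Hypothetical re-implementation of helpers.helpers.make_binary:
    # labels in `positive_labels` map to 1, everything else to 0.
    return np.array([1 if label in positive_labels else 0 for label in y])

y = np.array([1., 2., 3., 3., 1.])
print(make_binary_sketch(y, {1., 2.}))  # [1 1 0 0 1]
```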
End of explanation
"""
(X_train_pos_neu, X_test_pos_neu,
Y_train_pos_neu, Y_test_pos_neu) = train_test_split(data_X, Y_pos_neu,
test_size=0.4,
random_state=random_state)
(X_train_neg_neu, X_test_neg_neu,
Y_train_neg_neu, Y_test_neg_neu) = train_test_split(data_X, Y_neg_neu,
test_size=0.4,
random_state=random_state)
"""
Explanation: We split both binarizations into test (40%) and training samples.
Later on we could use a folds strategy and iterate over them
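For that folds strategy, here is a minimal sketch of k-fold index generation (scikit-learn's KFold/StratifiedKFold utilities do this, with stratification and more care, out of the box):

```python
import numpy as np

def kfold_indices(n_samples, n_folds, seed=0):
    # Shuffle the indices once, then rotate each chunk out as the test fold.
    rng = np.random.RandomState(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        yield train_idx, test_idx

splits = list(kfold_indices(10, 5))
print(len(splits))  # 5 folds, each with 2 test indices
```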
End of explanation
"""
X_pos_neu_s = preprocessing.scale(X_train_pos_neu)
X_neg_neu_s = preprocessing.scale(X_train_neg_neu)
"""
Explanation: We rescale the training samples
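Note that this notebook applies preprocessing.scale to the training and test sets separately, so each set is standardized with its own statistics. A common alternative is to fit the statistics on the training sample only and reuse them on the test sample (this is what sklearn's StandardScaler does). A numpy sketch of that idea:

```python
import numpy as np

X_train = np.array([[1., 10.], [2., 20.], [3., 30.]])
X_test = np.array([[2., 25.]])

# Fit the scaling statistics on the training sample only...
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

# ...and reuse them on the test sample, so both live on the same scale.
X_train_s = (X_train - mu) / sigma
X_test_s = (X_test - mu) / sigma

print(X_train_s.mean(axis=0))  # ~[0. 0.] by construction
```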
End of explanation
"""
param_grid = {'C': [1, 10, 100, 1000], 'gamma': [0.01,0.001, 0.0001],
'kernel': ['rbf']}
metrics = ['f1','accuracy','average_precision','roc_auc','recall']
"""
Explanation: Training with the unbalanced samples.
First we will train SVMs with different metrics, using the original, unbalanced samples.
The first step is to define the parameter search space param_grid and the metrics to evaluate:
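fit_model comes from helpers.models and is not shown here; presumably it wraps a cross-validated grid search over param_grid for the given scoring metric. The search space itself is just the Cartesian product of the parameter lists:

```python
from itertools import product

param_grid = {'C': [1, 10, 100, 1000],
              'gamma': [0.01, 0.001, 0.0001],
              'kernel': ['rbf']}

def grid_points(grid):
    # Enumerate every parameter combination in the grid.
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

combos = list(grid_points(param_grid))
print(len(combos))  # 4 * 3 * 1 = 12 candidate models per metric
```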
End of explanation
"""
fitted_models_pos_neu = {}
for metric in metrics:
fitted_models_pos_neu[metric] = fit_model(X_pos_neu_s,Y_train_pos_neu,
param_grid,metric,6)
for metric, model in fitted_models_pos_neu.items():
print ("Using metric {}".format(metric))
print("Best parameters set found on development set:")
print()
print(model.best_params_)
print("Grid scores on development set:")
print()
for params, mean_score, scores in model.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print()
"""
Explanation: Now we fit the SVMs with different metrics, first for the Pos + Neu binarization:
End of explanation
"""
X_pos_neu_s_test = preprocessing.scale(X_test_pos_neu)
for metric, model in fitted_models_pos_neu.items():
this_estimator = fitted_models_pos_neu[metric].best_estimator_
this_score = this_estimator.score(X_pos_neu_s_test, Y_test_pos_neu)
    y_pred = this_estimator.predict(X_pos_neu_s_test)
#conf_matrix = confusion_matrix(Y_test_pos_neu,y_pred)
df_confusion = pd.crosstab(Y_test_pos_neu, y_pred,
rownames=['Actual'],
colnames=['Predicted'], margins=True)
print ("Using metric {}".format(metric))
print("Validation score {}".format(this_score))
print("Confusion Matrix:")
print(df_confusion)
print()
"""
Explanation: Now we evaluate on the test sample to obtain the validation scores:
End of explanation
"""
Y_train_neg_neu = np.array([1 if val == 0 else 0 for val in Y_train_neg_neu])
fitted_models_neg_neu = {}
for metric in metrics:
fitted_models_neg_neu[metric] = fit_model(X_neg_neu_s,Y_train_neg_neu,
param_grid,metric,6)
for metric, model in fitted_models_neg_neu.items():
print ("Using metric {}".format(metric))
print("Best parameters set found on development set:")
print()
print(model.best_params_)
print("Grid scores on development set:")
print()
for params, mean_score, scores in model.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print()
"""
Explanation: Now the same with the other binarization; to make the two cases comparable, we flip the class labels:
End of explanation
"""
X_neg_neu_s_test = preprocessing.scale(X_test_neg_neu)
for metric, model in fitted_models_neg_neu.items():
this_estimator = fitted_models_neg_neu[metric].best_estimator_
this_score = this_estimator.score(X_neg_neu_s_test, Y_test_neg_neu)
    y_pred = this_estimator.predict(X_neg_neu_s_test)
#conf_matrix = confusion_matrix(Y_test_pos_neu,y_pred)
df_confusion = pd.crosstab(Y_test_neg_neu, y_pred,
rownames=['Actual'],
colnames=['Predicted'], margins=True)
print ("Using metric {}".format(metric))
print("Validation score {}".format(this_score))
print()
print("Confusion Matrix:")
print(df_confusion)
"""
Explanation: And its metrics on the test sample:
End of explanation
"""
|
lvrzhn/AstroHackWeek2015 | profile_parallel/FasterPython.ipynb | gpl-2.0 | import numpy as np
x = np.random.randn(1000)
"""
Explanation: Make My Python Code Faster
John Parejko, Lia Corrales, Phil Marshall, Andrew Hearin, and Your Name Here
This notebook demonstrates some ways to make your python code go faster.
Step 1: Profile and improve your code
Because how can you optimize something if you haven't first evaluated it?
Step 2: Parallelize your code
Because you probably own more than one CPU.
Profiling
End of explanation
"""
%timeit np.power(x,2)
%timeit x**2
"""
Explanation: Inline Timing
Use %timeit in the notebook, and other commands in functions... Need examples of these!
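A script-friendly equivalent is the standard-library timeit module, which does the same job as the %timeit magic outside the notebook (the actual timings depend on your machine):

```python
import timeit
import numpy as np

x = np.random.randn(1000)

# timeit.repeat runs the statement `number` times per repeat and returns
# the total seconds for each repeat; taking the minimum is the usual practice.
t_power = min(timeit.repeat(lambda: np.power(x, 2), number=1000, repeat=3))
t_square = min(timeit.repeat(lambda: x ** 2, number=1000, repeat=3))

print("np.power(x, 2): %.4f s per 1000 calls" % t_power)
print("x ** 2:         %.4f s per 1000 calls" % t_square)
```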
End of explanation
"""
import cProfile
import pstats
def square(x):
for k in range(1000):
sq = np.power(x,2)
sq = x**2
sq = x*x
return
log = 'square.profile'
cProfile.run('square(x)',filename=log)
stats = pstats.Stats(log)
stats.strip_dirs()
stats.sort_stats('cumtime').print_stats(20)
"""
Explanation: Profiling with cProfile
End of explanation
"""
def bettersquare(x):
def powersquare(x):
return np.power(x,2)
def justsquare(x):
return x**2
def selfmultiply(x):
return x*x
for k in range(1000):
sq = powersquare(x)
sq = justsquare(x)
sq = selfmultiply(x)
return
log = 'bettersquare.profile'
cProfile.run('bettersquare(x)',filename=log)
stats = pstats.Stats(log)
stats.strip_dirs()
stats.sort_stats('cumtime').print_stats(20)
"""
Explanation: OK - so all the time is being taken by the function "square," as expected.
We need to re-write with the lines separated into functions - which is a better way to code anyway.
End of explanation
"""
!pip install --upgrade line_profiler
"""
Explanation: Much better - you can see the cumulative time spent in each function.
Another useful tool is the line_profiler, from rkern on GitHub.
End of explanation
"""
def my_expensive_loop(n):
x = 0
for i in range(int(n)):
for j in range(int(n)):
x += i + j
%timeit my_expensive_loop(1000)
"""
Explanation: We could also run the line_profiler from the command line...
Which means the square function needs writing out to a file...
Can we do this from this notebook?
Cythonization
This is something of a last resort: don't go to cython unless you know it's going to help.
Cython allows us to replace simple lines of math with the equivalent lines of C, while still coding in python.
On the command line,
cython -a file.pyx
makes file.c, but also file.html. The html file shows you the lines that were unwrapped into C.
Can we demo this process from this notebook? Hmm.
Compiling cython with IPython Notebook magic functions
Here's a simple example of a double-for loop that cython speeds up tremendously, and a %magic trick for compiling cython within a Notebook. First, our simple slow pure python function:
End of explanation
"""
%load_ext cython
%%cython
def my_cythonized_loop(int n):
cdef int i, j, x
x = 0
for i in range(int(n)):
for j in range(int(n)):
x += i + j
%timeit my_cythonized_loop(1000)
"""
Explanation: Let's write the same exact function in cython syntax:
End of explanation
"""
"""
The multiprocessing joke.
"""
from __future__ import print_function
import multiprocessing
def print_function(word):
print(word, end=' ')
def tell_the_joke():
print()
print('Why did the parallel chicken cross the road?')
answer = 'To get to the other side.'
print()
# Summon a pool to handle some number of processes.
# Think of N as the number of processors you have?
N = 2
pool = multiprocessing.Pool(processes=N)
# Prepare a list of function inputs:
args = answer.split()
# Pass the function, and its arguments, to the pool:
pool.map(print_function, args)
# Tell the pool members to finish their work.
pool.close()
# "Ask the pool to report that they are done.
pool.join()
print()
print()
return
tell_the_joke()
"""
Explanation: What's happening here is that in the pure python code, at each step of these tight nested loops python is doing a bunch of type-checking on i, j and x. All that cdef declaration does is to tell the cython compiler to declare these variables as c-ints, so that the code will not do this type-checking anymore.
Even if the above pattern is the only one you ever learn in cython, it comes up so, so often that it's worth taking the time to pick up.
Parallelization
Multiprocessing
John's example:
End of explanation
"""
def new_function(word):
return word+' '
def tell_the_joke_better():
print()
print('Why did the parallel chicken cross the road?')
answer = 'To get to the other side.'
print()
# Summon a pool to handle some number of processes.
# Leave N = blank to have multiprocessing guess!
# Or measure it yourself:
N = multiprocessing.cpu_count()
pool = multiprocessing.Pool(processes=N)
# Prepare a list of function inputs:
args = answer.split()
# Pass the function, and its arguments, to the pool:
punchline = pool.map(new_function, args)
# Tell the pool members to finish their work.
pool.close()
# "Ask the pool to report that they are done.
pool.join()
# Use the outputs of the function, which are accessible via the map() method:
print(punchline)
print()
print()
return
tell_the_joke_better()
"""
Explanation: The processes print their output words at semi-random times - in general, you have to be careful with what you do when dealing with a simple pool of processors.
If we make our function return a word, rather than just print it, then we can collect the outputs and display them in the correct order.
End of explanation
"""
|
JasonSanchez/w261 | week12/MIDS-W261-HW-12-TEMPLATE.ipynb | mit | labVersion = 'MIDS_MLS_week12_v_0_9'
"""
Explanation: DATASCI W261: Machine Learning at Scale
W261-1 Fall 2015
Week 12: Criteo CTR Project
November 14, 2015
Student name INSERT STUDENT NAME HERE
Click-Through Rate Prediction Lab
This lab covers the steps for creating a click-through rate (CTR) prediction pipeline. You will work with the Criteo Labs dataset that was used for a recent Kaggle competition.
This lab will cover:
Part 1: Featurize categorical data using one-hot-encoding (OHE)
Part 2: Construct an OHE dictionary
Part 3: Parse CTR data and generate OHE features
Visualization 1: Feature frequency
Part 4: CTR prediction and logloss evaluation
Visualization 2: ROC curve
Part 5: Reduce feature dimension via feature hashing
Visualization 3: Hyperparameter heat map
Note that, for reference, you can look up the details of the relevant Spark methods in Spark's Python API and the relevant NumPy methods in the NumPy Reference
End of explanation
"""
# Data for manual OHE
# Note: the first data point does not include any value for the optional third feature
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDictManual = {}
sampleOHEDictManual[(0,'bear')] = <FILL IN>
sampleOHEDictManual[(0,'cat')] = <FILL IN>
sampleOHEDictManual[(0,'mouse')] = <FILL IN>
sampleOHEDictManual<FILL IN>
sampleOHEDictManual<FILL IN>
sampleOHEDictManual<FILL IN>
sampleOHEDictManual<FILL IN>
# A testing helper
#https://pypi.python.org/pypi/test_helper/0.2
import hashlib
class TestFailure(Exception):
pass
class PrivateTestFailure(Exception):
pass
class Test(object):
passed = 0
numTests = 0
failFast = False
private = False
@classmethod
def setFailFast(cls):
cls.failFast = True
@classmethod
def setPrivateMode(cls):
cls.private = True
@classmethod
def assertTrue(cls, result, msg=""):
cls.numTests += 1
if result == True:
cls.passed += 1
print "1 test passed."
else:
print "1 test failed. " + msg
if cls.failFast:
if cls.private:
raise PrivateTestFailure(msg)
else:
raise TestFailure(msg)
@classmethod
def assertEquals(cls, var, val, msg=""):
cls.assertTrue(var == val, msg)
@classmethod
def assertEqualsHashed(cls, var, hashed_val, msg=""):
cls.assertEquals(cls._hash(var), hashed_val, msg)
@classmethod
def printStats(cls):
print "{0} / {1} test(s) passed.".format(cls.passed, cls.numTests)
@classmethod
def _hash(cls, x):
return hashlib.sha1(str(x)).hexdigest()
# TEST One-hot-encoding (1a)
from test_helper import Test
Test.assertEqualsHashed(sampleOHEDictManual[(0,'bear')],
'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c',
"incorrect value for sampleOHEDictManual[(0,'bear')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'cat')],
'356a192b7913b04c54574d18c28d46e6395428ab',
"incorrect value for sampleOHEDictManual[(0,'cat')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],
'da4b9237bacccdf19c0760cab7aec4a8359010b0',
"incorrect value for sampleOHEDictManual[(0,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')],
'77de68daecd823babbb58edb1c8e14d7106e83bb',
"incorrect value for sampleOHEDictManual[(1,'black')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],
'1b6453892473a467d07372d45eb05abc2031647a',
"incorrect value for sampleOHEDictManual[(1,'tabby')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],
'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',
"incorrect value for sampleOHEDictManual[(2,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],
'c1dfd96eea8cc2b62785275bca38ac261256e278',
"incorrect value for sampleOHEDictManual[(2,'salmon')]")
Test.assertEquals(len(sampleOHEDictManual.keys()), 7,
'incorrect number of keys in sampleOHEDictManual')
"""
Explanation: Part 1: Featurize categorical data using one-hot-encoding
(1a) One-hot-encoding
We would like to develop code to convert categorical features to numerical ones, and to build intuition, we will work with a sample unlabeled dataset with three data points, with each data point representing an animal. The first feature indicates the type of animal (bear, cat, mouse); the second feature describes the animal's color (black, tabby); and the third (optional) feature describes what the animal eats (mouse, salmon).
In a one-hot-encoding (OHE) scheme, we want to represent each tuple of (featureID, category) via its own binary feature. We can do this in Python by creating a dictionary that maps each tuple to a distinct integer, where the integer corresponds to a binary feature. To start, manually enter the entries in the OHE dictionary associated with the sample dataset by mapping the tuples to consecutive integers starting from zero, ordering the tuples first by featureID and next by category.
Later in this lab, we'll use OHE dictionaries to transform data points into compact lists of features that can be used in machine learning algorithms.
End of explanation
"""
import numpy as np
from pyspark.mllib.linalg import SparseVector
# TODO: Replace <FILL IN> with appropriate code
aDense = np.array([0., 3., 0., 4.])
aSparse = <FILL IN>
bDense = np.array([0., 0., 0., 1.])
bSparse = <FILL IN>
w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
print aSparse.dot(w)
print bDense.dot(w)
print bSparse.dot(w)
# TEST Sparse Vectors (1b)
Test.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(aDense.dot(w) == aSparse.dot(w),
'dot product of aDense and w should equal dot product of aSparse and w')
Test.assertTrue(bDense.dot(w) == bSparse.dot(w),
'dot product of bDense and w should equal dot product of bSparse and w')
"""
Explanation: (1b) Sparse vectors
Data points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a few sample vectors represented as dense numpy arrays. Use SparseVector to represent them in a sparse fashion, and verify that both the sparse and dense representations yield the same results when computing dot products (we will later use MLlib to train classifiers via gradient descent, and MLlib will need to compute dot products between SparseVectors and dense parameter vectors).
Use SparseVector(size, *args) to create a new sparse vector where size is the length of the vector and args is either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index). You'll need to create a sparse vector representation of each dense vector aDense and bDense.
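Before turning to MLlib's SparseVector, the storage idea can be illustrated with a plain dictionary (a concept sketch only, not the MLlib API): only the non-zero entries are stored, and a dot product touches only those entries.

```python
# Sparse storage of aDense = [0., 3., 0., 4.]: zeros are simply absent.
a_sparse = {1: 3.0, 3: 4.0}
w = [0.4, 3.1, -1.4, -0.5]

def sparse_dot(sv, dense):
    # Only the stored (non-zero) entries contribute to the dot product.
    return sum(value * dense[i] for i, value in sv.items())

print(sparse_dot(a_sparse, w))  # 3.0*3.1 + 4.0*(-0.5) = 7.3
```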
End of explanation
"""
# Reminder of the sample features
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
sampleOneOHEFeatManual = <FILL IN>
sampleTwoOHEFeatManual = <FILL IN>
sampleThreeOHEFeatManual = <FILL IN>
# TEST OHE Features as sparse vectors (1c)
Test.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector),
'sampleOneOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector),
'sampleTwoOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector),
'sampleThreeOHEFeatManual needs to be a SparseVector')
Test.assertEqualsHashed(sampleOneOHEFeatManual,
'ecc00223d141b7bd0913d52377cee2cf5783abd6',
'incorrect value for sampleOneOHEFeatManual')
Test.assertEqualsHashed(sampleTwoOHEFeatManual,
'26b023f4109e3b8ab32241938e2e9b9e9d62720a',
'incorrect value for sampleTwoOHEFeatManual')
Test.assertEqualsHashed(sampleThreeOHEFeatManual,
'c04134fd603ae115395b29dcabe9d0c66fbdc8a7',
'incorrect value for sampleThreeOHEFeatManual')
"""
Explanation: (1c) OHE features as sparse vectors
Now let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should have the value 1.0. For example, the DenseVector for a point with features 2 and 4 would be [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
"""Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
You should ensure that the indices used to create a SparseVector are sorted.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
"""
<FILL IN>
# Calculate the number of features in sampleOHEDictManual
numSampleOHEFeats = <FILL IN>
# Run oneHotEnoding on sampleOne
sampleOneOHEFeat = <FILL IN>
print sampleOneOHEFeat
# TEST Define an OHE Function (1d)
Test.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual,
'sampleOneOHEFeat should equal sampleOneOHEFeatManual')
Test.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]),
'incorrect value for sampleOneOHEFeat')
Test.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual,
numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]),
'incorrect definition for oneHotEncoding')
"""
Explanation: (1d) Define a OHE function
Next we will use the OHE dictionary from Part (1a) to programatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first sample data point and verify that the result matches the result from Part (1c).
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
sampleOHEData = sampleDataRDD.<FILL IN>
print sampleOHEData.collect()
# TEST Apply OHE to a dataset (1e)
sampleOHEDataValues = sampleOHEData.collect()
Test.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements')
Test.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}),
'incorrect OHE for first sample')
Test.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}),
'incorrect OHE for second sample')
Test.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}),
'incorrect OHE for third sample')
"""
Explanation: (1e) Apply OHE to a dataset
Finally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
sampleDistinctFeats = (sampleDataRDD
<FILL IN>)
# TEST Pair RDD of (featureID, category) (2a)
Test.assertEquals(sorted(sampleDistinctFeats.collect()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'incorrect value for sampleDistinctFeats')
"""
Explanation: Part 2: Construct an OHE dictionary
(2a) Pair RDD of (featureID, category)
To start, create an RDD of distinct (featureID, category) tuples. In our sample dataset, the 7 items in the resulting RDD are (0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon'). Notably 'black' appears twice in the dataset but only contributes one item to the RDD: (1, 'black'), while 'mouse' also appears twice and contributes two items: (0, 'mouse') and (2, 'mouse'). Use flatMap and distinct.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDict = (sampleDistinctFeats
<FILL IN>)
print sampleOHEDict
# TEST OHE Dictionary from distinct features (2b)
Test.assertEquals(sorted(sampleOHEDict.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDict has unexpected keys')
Test.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')
"""
Explanation: (2b) OHE Dictionary from distinct features
Next, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap action. Note that there is no unique mapping from keys to values, as all we require is that each (featureID, category) key be mapped to a unique integer between 0 and the number of keys. In this exercise, any valid mapping is acceptable. Use zipWithIndex followed by collectAsMap.
In our sample dataset, one valid list of key-value tuples is: [((0, 'bear'), 0), ((2, 'salmon'), 1), ((1, 'tabby'), 2), ((2, 'mouse'), 3), ((0, 'mouse'), 4), ((0, 'cat'), 5), ((1, 'black'), 6)]. The dictionary defined in Part (1a) illustrates another valid mapping between keys and integers.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
def createOneHotDict(inputData):
"""Creates a one-hot-encoder dictionary based on the input data.
Args:
inputData (RDD of lists of (int, str)): An RDD of observations where each observation is
made up of a list of (featureID, value) tuples.
Returns:
dict: A dictionary where the keys are (featureID, value) tuples and map to values that are
unique integers.
"""
<FILL IN>
sampleOHEDictAuto = <FILL IN>
print sampleOHEDictAuto
# TEST Automated creation of an OHE dictionary (2c)
Test.assertEquals(sorted(sampleOHEDictAuto.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDictAuto has unexpected keys')
Test.assertEquals(sorted(sampleOHEDictAuto.values()), range(7),
'sampleOHEDictAuto has unexpected values')
"""
Explanation: (2c) Automated creation of an OHE dictionary
Now use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b).
End of explanation
"""
# Run this code to view Criteo's agreement
from IPython.lib.display import IFrame
IFrame("http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/",
600, 350)
# TODO: Replace <FILL IN> with appropriate code
# Just replace <FILL IN> with the url for dac_sample.tar.gz
import glob
import os.path
import tarfile
import urllib
import urlparse
# Paste url, url should end with: dac_sample.tar.gz
url = '<FILL IN>'
url = url.strip()
baseDir = os.path.join('data')
inputPath = os.path.join('w261', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
inputDir = os.path.split(fileName)[0]
def extractTar(check = False):
# Find the zipped archive and extract the dataset
tars = glob.glob('dac_sample*.tar.gz*')
if check and len(tars) == 0:
return False
if len(tars) > 0:
try:
tarFile = tarfile.open(tars[0])
except tarfile.ReadError:
if not check:
print 'Unable to open tar.gz file. Check your URL.'
return False
tarFile.extract('dac_sample.txt', path=inputDir)
print 'Successfully extracted: dac_sample.txt'
return True
else:
print 'You need to retry the download with the correct url.'
print ('Alternatively, you can upload the dac_sample.tar.gz file to your Jupyter root ' +
'directory')
return False
if os.path.isfile(fileName):
print 'File is already available. Nothing to do.'
elif extractTar(check = True):
print 'tar.gz file was already available.'
elif not url.endswith('dac_sample.tar.gz'):
print 'Check your download url. Are you downloading the Sample dataset?'
else:
# Download the file and store it in the same directory as this notebook
try:
urllib.urlretrieve(url, os.path.basename(urlparse.urlsplit(url).path))
except IOError:
print 'Unable to download and store: {0}'.format(url)
extractTar()
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('w261', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
if os.path.isfile(fileName):
rawData = (sc
.textFile(fileName, 2)
.map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data
print rawData.take(1)
"""
Explanation: Part 3: Parse CTR data and generate OHE features
Before we can proceed, you'll first need to obtain the data from Criteo. If you have already completed this step in the setup lab, just run the cells below and the data will be loaded into the rawData variable.
Below is Criteo's data sharing agreement. After you accept the agreement, you can obtain the download URL by right-clicking on the "Download Sample" button and clicking "Copy link address" or "Copy Link Location", depending on your browser. Paste the URL into the # TODO cell below. The file is 8.4 MB compressed. The script below will download the file to the virtual machine (VM) and then extract the data.
If running the cell below does not render a webpage, open the Criteo agreement in a separate browser tab and follow the same steps there.
Note that the download could take a few minutes, depending upon your connection speed.
The Criteo CTR data for HW12.1 is available here (24.3 MB, 100,000 rows):
https://www.dropbox.com/s/m4jlnv6rdbqzzhu/dac_sample.txt?dl=0
Alternatively you can download the sample data directly by following the instructions contained in the cell below (8M compressed).
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
weights = [.8, .1, .1]
seed = 42
# Use randomSplit with weights and seed
rawTrainData, rawValidationData, rawTestData = rawData.<FILL IN>
# Cache the data
<FILL IN>
nTrain = <FILL IN>
nVal = <FILL IN>
nTest = <FILL IN>
print nTrain, nVal, nTest, nTrain + nVal + nTest
print rawData.take(1)
# TEST Loading and splitting the data (3a)
Test.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]),
'you must cache the split data')
Test.assertEquals(nTrain, 79911, 'incorrect value for nTrain')
Test.assertEquals(nVal, 10075, 'incorrect value for nVal')
Test.assertEquals(nTest, 10014, 'incorrect value for nTest')
"""
Explanation: (3a) Loading and splitting the data
We are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. Finally, compute the size of each dataset.
End of explanation
"""
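As intuition for what randomSplit does under the hood, here is a plain-Python sketch (the helper name weightedSplit is made up, and Spark's actual implementation differs — for instance, partition sizes are themselves random, which is why nTrain above is 79,911 rather than exactly 80,000):

```python
import random

def weightedSplit(data, weights, seed):
    """Assign each record independently to one of len(weights) partitions,
    with probability proportional to that partition's weight."""
    rng = random.Random(seed)
    total = float(sum(weights))
    # Cumulative thresholds, e.g. [.8, .1, .1] -> [.8, .9, 1.0]
    thresholds = []
    acc = 0.0
    for w in weights:
        acc += w / total
        thresholds.append(acc)
    thresholds[-1] = 1.0  # guard against float rounding dropping a record
    parts = [[] for _ in weights]
    for record in data:
        draw = rng.random()
        for i, t in enumerate(thresholds):
            if draw <= t:
                parts[i].append(record)
                break
    return parts

train, val, test = weightedSplit(range(1000), [.8, .1, .1], seed=42)
```

Because the generator is seeded, repeating the call with the same seed reproduces the same split — the same property that makes Spark's randomSplit results repeatable.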
# TODO: Replace <FILL IN> with appropriate code
def parsePoint(point):
"""Converts a comma separated string into a list of (featureID, value) tuples.
Note:
featureIDs should start at 0 and increase to the number of features - 1.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
Returns:
list: A list of (featureID, value) tuples.
"""
<FILL IN>
parsedTrainFeat = rawTrainData.map(parsePoint)
numCategories = (parsedTrainFeat
.flatMap(lambda x: x)
.distinct()
.map(lambda x: (x[0], 1))
.reduceByKey(lambda x, y: x + y)
.sortByKey()
.collect())
print numCategories[2][1]
# TEST Extract features (3b)
Test.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint')
Test.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')
"""
Explanation: (3b) Extract features
We will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which is the 0-1 label), and parse the remaining fields (or raw features). To do this, complete the implementation of the parsePoint function.
End of explanation
"""
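The parsing step can be illustrated outside Spark with a small stand-in (parseRecord is a made-up name for illustration only, not necessarily the expected lab solution — note that it also returns the label, which parsePoint discards):

```python
def parseRecord(record):
    """Split a comma-separated record into its label and a list of
    (featureID, value) tuples, with featureIDs starting at 0."""
    fields = record.split(',')
    label = fields[0]
    features = [(i, value) for i, value in enumerate(fields[1:])]
    return label, features

label, feats = parseRecord('0,mouse,black,7')
```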
# TODO: Replace <FILL IN> with appropriate code
ctrOHEDict = <FILL IN>
numCtrOHEFeats = len(ctrOHEDict.keys())
print numCtrOHEFeats
print ctrOHEDict[(0, '')]
# TEST Create an OHE dictionary from the dataset (3c)
Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict')
Test.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')
"""
Explanation: (3c) Create an OHE dictionary from the dataset
Note that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). Note that we will assume for simplicity that all features in our CTR dataset are categorical.
End of explanation
"""
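As a local, non-distributed stand-in for the RDD-based dictionary builder from Part (2c) (createOneHotDict here is an illustrative sketch, not the lab's implementation — the lab's version would use distinct() on an RDD):

```python
def createOneHotDict(inputData):
    """Map each distinct (featureID, value) pair seen in the data to a
    unique consecutive integer index."""
    distinctFeats = sorted(set(pair for point in inputData for pair in point))
    return {pair: index for index, pair in enumerate(distinctFeats)}

sampleData = [[(0, 'mouse'), (1, 'black')],
              [(0, 'cat'), (1, 'tabby')]]
oheDict = createOneHotDict(sampleData)
```

Every distinct (featureID, value) combination gets its own column in the one-hot representation, which is why the full CTR dataset yields 233,286 features.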
from pyspark.mllib.regression import LabeledPoint
# TODO: Replace <FILL IN> with appropriate code
def parseOHEPoint(point, OHEDict, numOHEFeats):
"""Obtain the label and feature vector for this raw observation.
Note:
You must use the function `oneHotEncoding` in this implementation or later portions
of this lab may not function as expected.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.
numOHEFeats (int): The number of unique features in the training dataset.
Returns:
LabeledPoint: Contains the label for the observation and the one-hot-encoding of the
raw features based on the provided OHE dictionary.
"""
<FILL IN>
OHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHETrainData.cache()
print OHETrainData.take(1)
# Check that oneHotEncoding function was used in parseOHEPoint
backupOneHot = oneHotEncoding
oneHotEncoding = None
withOneHot = False
try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)
except TypeError: withOneHot = True
oneHotEncoding = backupOneHot
# TEST Apply OHE to the dataset (3d)
numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))
numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))
Test.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint')
Test.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')
"""
Explanation: (3d) Apply OHE to the dataset
Now let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint: parseOHEPoint is an extension of the parsePoint function from Part (3b) and it uses the oneHotEncoding function from Part (1d).
End of explanation
"""
def bucketFeatByCount(featCount):
"""Bucket the counts by powers of two."""
for i in range(11):
size = 2 ** i
if featCount <= size:
return size
return -1
featCounts = (OHETrainData
.flatMap(lambda lp: lp.features.indices)
.map(lambda x: (x, 1))
.reduceByKey(lambda x, y: x + y))
featCountsBuckets = (featCounts
.map(lambda x: (bucketFeatByCount(x[1]), 1))
.filter(lambda (k, v): k != -1)
.reduceByKey(lambda x, y: x + y)
.collect())
print featCountsBuckets
import matplotlib.pyplot as plt
x, y = zip(*featCountsBuckets)
x, y = np.log(x), np.log(y)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
"""Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))
ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$')
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
"""
Explanation: Visualization 1: Feature frequency
We will now visualize the number of times each of the 233,286 OHE features appears in the training data. We first compute the number of times each feature appears, then bucket the features by these counts. The buckets are sized by powers of 2, so the first bucket corresponds to features that appear exactly once ( $ \scriptsize 2^0 $ ), the second to features that appear twice ( $ \scriptsize 2^1 $ ), the third to features that occur three or four ( $ \scriptsize 2^2 $ ) times, the fourth to features that occur five to eight ( $ \scriptsize 2^3 $ ) times, and so on. The scatter plot below shows the logarithm of the bucket thresholds versus the logarithm of the number of features whose counts fall in those buckets.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
"""Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be
ignored.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
"""
<FILL IN>
OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHEValidationData.cache()
print OHEValidationData.take(1)
# TEST Handling unseen features (3e)
numNZVal = (OHEValidationData
.map(lambda lp: len(lp.features.indices))
.sum())
Test.assertEquals(numNZVal, 372080, 'incorrect number of features')
"""
Explanation: (3e) Handling unseen features
We naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, update the oneHotEncoding() function from Part (1d) to ignore previously unseen categories, and then compute OHE features for the validation data.
End of explanation
"""
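The "ignore unseen categories" behavior can be sketched with plain dictionaries (sparseOneHot is a made-up name for illustration; the lab's oneHotEncoding returns a SparseVector rather than a list of indices):

```python
def sparseOneHot(rawFeats, oheDict, numFeats):
    """Return the sorted OHE indices active in this observation; any
    (featureID, value) pair missing from the dictionary -- i.e. a category
    never seen during training -- is silently skipped."""
    return sorted(oheDict[pair] for pair in rawFeats if pair in oheDict)

oheDict = {(0, 'mouse'): 0, (0, 'cat'): 1, (1, 'black'): 2}
indices = sparseOneHot([(0, 'mouse'), (1, 'black'), (2, 'unseen')], oheDict, 3)
```

The (2, 'unseen') pair contributes nothing to the encoding, which is exactly what we want when validation or test data contains categories absent from the training set.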
from pyspark.mllib.classification import LogisticRegressionWithSGD
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
# TODO: Replace <FILL IN> with appropriate code
model0 = <FILL IN>
sortedWeights = sorted(model0.weights)
print sortedWeights[:5], model0.intercept
# TEST Logistic regression (4a)
Test.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept')
Test.assertTrue(np.allclose(sortedWeights[0:5],
[-0.45899236853575609, -0.37973707648623956, -0.36996558266753304,
-0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights')
"""
Explanation: Part 4: CTR prediction and logloss evaluation
(4a) Logistic regression
We are now ready to train our first CTR classifier. A natural classifier to use in this setting is logistic regression, since it models the probability of a click-through event rather than returning a binary response, and when working with rare events, probabilistic predictions are useful. First use LogisticRegressionWithSGD to train a model using OHETrainData with the given hyperparameter configuration. LogisticRegressionWithSGD returns a LogisticRegressionModel. Next, use the LogisticRegressionModel.weights and LogisticRegressionModel.intercept attributes to print out the model's parameters. Note that these are the names of the object's attributes, which can be accessed using syntax like model.weights for a given model.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
from math import log
def computeLogLoss(p, y):
"""Calculates the value of log loss for a given probability and label.
Note:
log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it
and when p is 1 we need to subtract a small value (epsilon) from it.
Args:
p (float): A probability between 0 and 1.
y (int): A label. Takes on the values 0 and 1.
Returns:
float: The log loss value.
"""
epsilon = 10e-12
<FILL IN>
print computeLogLoss(.5, 1)
print computeLogLoss(.5, 0)
print computeLogLoss(.99, 1)
print computeLogLoss(.99, 0)
print computeLogLoss(.01, 1)
print computeLogLoss(.01, 0)
print computeLogLoss(0, 1)
print computeLogLoss(1, 1)
print computeLogLoss(1, 0)
# TEST Log loss (4b)
Test.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)],
[0.69314718056, 0.0100503358535, 4.60517018599]),
'computeLogLoss is not correct')
Test.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)],
[25.3284360229, 1.00000008275e-11, 25.3284360229]),
'computeLogLoss needs to bound p away from 0 and 1 by epsilon')
"""
Explanation: (4b) Log loss
Throughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as: $$ \begin{align} \scriptsize \ell_{log}(p, y) = \begin{cases} -\log (p) & \text{if } y = 1 \\ -\log(1-p) & \text{if } y = 0 \end{cases} \end{align} $$ where $ \scriptsize p$ is a probability between 0 and 1 and $ \scriptsize y$ is a label of either 0 or 1. Log loss is a standard evaluation criterion when predicting rare-events such as click-through rate prediction (it is also the criterion used in the Criteo Kaggle competition). Write a function to compute log loss, and evaluate it on some sample inputs.
End of explanation
"""
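The clamping idea from the note above can be sketched as follows (boundedLogLoss is a made-up name, not the lab solution; its default epsilon of 1e-11 equals the lab's 10e-12):

```python
from math import log

def boundedLogLoss(p, y, epsilon=1e-11):
    """Log loss with p clamped away from 0 and 1 so log() stays defined."""
    p = min(max(p, epsilon), 1 - epsilon)
    return -log(p) if y == 1 else -log(1 - p)
```

An uninformative prediction of p = 0.5 costs log(2) regardless of the label, while a confidently wrong prediction (p near 0 with y = 1) costs up to -log(epsilon), about 25.3 here.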
# TODO: Replace <FILL IN> with appropriate code
# Note that our dataset has a very high click-through rate by design
# In practice click-through rate can be one to two orders of magnitude lower
classOneFracTrain = <FILL IN>
print classOneFracTrain
logLossTrBase = <FILL IN>
print 'Baseline Train Logloss = {0:.3f}\n'.format(logLossTrBase)
# TEST Baseline log loss (4c)
Test.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain')
Test.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')
"""
Explanation: (4c) Baseline log loss
Next we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training points that correspond to click-through events (i.e., where the label is one). Compute this value (which is simply the mean of the training labels), and then use it to compute the training log loss for the baseline model. The log loss for multiple observations is the mean of the individual log loss values.
End of explanation
"""
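The baseline described above — always predict the training-set click fraction, then average the per-point log losses — can be sketched locally (baselineLogLoss is a made-up helper, not the lab solution):

```python
from math import log

def baselineLogLoss(labels, epsilon=1e-11):
    """Mean log loss of a model that always predicts the positive-label
    fraction of the provided labels."""
    p = sum(labels) / float(len(labels))       # fraction of clicks
    p = min(max(p, epsilon), 1 - epsilon)      # keep log() defined
    perPoint = [-log(p) if y == 1 else -log(1 - p) for y in labels]
    return sum(perPoint) / float(len(perPoint))

labels = [1, 0, 0, 0]          # a toy dataset with a 25% click-through rate
loss = baselineLogLoss(labels)
```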
# TODO: Replace <FILL IN> with appropriate code
from math import exp # exp(-t) = e^-t
def getP(x, w, intercept):
"""Calculate the probability for an observation given a set of weights and intercept.
Note:
We'll bound our raw prediction between 20 and -20 for numerical purposes.
Args:
x (SparseVector): A vector with values of 1.0 for features that exist in this
observation and 0.0 otherwise.
w (DenseVector): A vector of weights (betas) for the model.
intercept (float): The model's intercept.
Returns:
float: A probability between 0 and 1.
"""
rawPrediction = <FILL IN>
# Bound the raw prediction value
rawPrediction = min(rawPrediction, 20)
rawPrediction = max(rawPrediction, -20)
return <FILL IN>
trainingPredictions = <FILL IN>
print trainingPredictions.take(5)
# TEST Predicted probability (4d)
Test.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348),
'incorrect value for trainingPredictions')
"""
Explanation: (4d) Predicted probability
In order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function $ \scriptsize \sigma(t) = (1+ e^{-t})^{-1} $ to return the model's probabilistic prediction. Then compute probabilistic predictions on the training data.
Note that when incorporating an intercept into our predictions, we simply add the intercept to the value of the prediction obtained from the weights and features. Alternatively, if the intercept were included as the first weight, we would need to add a corresponding feature to our data where the feature has the value one. This is not the case here.
End of explanation
"""
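For dense inputs, the raw-score-plus-sigmoid recipe looks like this (sigmoidPrediction is a made-up name operating on plain lists, not the lab's SparseVector-based getP):

```python
from math import exp

def sigmoidPrediction(features, weights, intercept):
    """Dot product of features and weights plus the intercept, passed
    through the sigmoid; the raw score is clipped to [-20, 20] so exp()
    cannot overflow and the probability stays away from exactly 0 or 1."""
    raw = sum(x * w for x, w in zip(features, weights)) + intercept
    raw = max(min(raw, 20.0), -20.0)
    return 1.0 / (1.0 + exp(-raw))
```

A raw score of 0 maps to probability 0.5, and increasingly negative scores push the probability toward 0.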
# TODO: Replace <FILL IN> with appropriate code
def evaluateResults(model, data):
"""Calculates the log loss for the data given the model.
Args:
model (LogisticRegressionModel): A trained logistic regression model.
data (RDD of LabeledPoint): Labels and features for each observation.
Returns:
float: Log loss for the data.
"""
<FILL IN>
logLossTrLR0 = evaluateResults(model0, OHETrainData)
print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTrBase, logLossTrLR0))
# TEST Evaluate the model (4e)
Test.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')
"""
Explanation: (4e) Evaluate the model
We are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
logLossValBase = <FILL IN>
logLossValLR0 = <FILL IN>
print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, logLossValLR0))
# TEST Validation log loss (4f)
Test.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')
Test.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')
"""
Explanation: (4f) Validation log loss
Next, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset.
End of explanation
"""
labelsAndScores = OHEValidationData.map(lambda lp:
(lp.label, getP(lp.features, model0.weights, model0.intercept)))
labelsAndWeights = labelsAndScores.collect()
labelsAndWeights.sort(key=lambda (k, v): v, reverse=True)
labelsByWeight = np.array([k for (k, v) in labelsAndWeights])
length = labelsByWeight.size
truePositives = labelsByWeight.cumsum()
numPositive = truePositives[-1]
falsePositives = np.arange(1.0, length + 1, 1.) - truePositives
truePositiveRate = truePositives / numPositive
falsePositiveRate = falsePositives / (length - numPositive)
# Generate layout and plot data
fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))
ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)
ax.set_ylabel('True Positive Rate (Sensitivity)')
ax.set_xlabel('False Positive Rate (1 - Specificity)')
plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)
plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model
pass
"""
Explanation: Visualization 2: ROC curve
We will now visualize how well the model predicts our target. To do this we generate a plot of the ROC curve. The ROC curve shows us the trade-off between the false positive rate and true positive rate, as we liberalize the threshold required to predict a positive outcome. A random model is represented by the dashed line.
End of explanation
"""
from collections import defaultdict
import hashlib
def hashFunction(numBuckets, rawFeats, printMapping=False):
"""Calculate a feature dictionary for an observation's features based on hashing.
Note:
Use printMapping=True for debug purposes and to better understand how the hashing works.
Args:
numBuckets (int): Number of buckets to use as features.
rawFeats (list of (int, str)): A list of features for an observation. Represented as
(featureID, value) tuples.
printMapping (bool, optional): If true, the mappings of featureString to index will be
printed.
Returns:
dict of int to float: The keys will be integers which represent the buckets that the
features have been hashed to. The value for a given key will contain the count of the
(featureID, value) tuples that have hashed to that key.
"""
mapping = {}
for ind, category in rawFeats:
featureString = category + str(ind)
mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets)
if(printMapping): print mapping
sparseFeatures = defaultdict(float)
for bucket in mapping.values():
sparseFeatures[bucket] += 1.0
return dict(sparseFeatures)
# Reminder of the sample values:
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
# Use four buckets
sampOneFourBuckets = hashFunction(<FILL IN>, sampleOne, True)
sampTwoFourBuckets = hashFunction(<FILL IN>, sampleTwo, True)
sampThreeFourBuckets = hashFunction(<FILL IN>, sampleThree, True)
# Use one hundred buckets
sampOneHundredBuckets = hashFunction(<FILL IN>, sampleOne, True)
sampTwoHundredBuckets = hashFunction(<FILL IN>, sampleTwo, True)
sampThreeHundredBuckets = hashFunction(<FILL IN>, sampleThree, True)
print '\t\t 4 Buckets \t\t\t 100 Buckets'
print 'SampleOne:\t {0}\t\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets)
print 'SampleTwo:\t {0}\t\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets)
print 'SampleThree:\t {0}\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets)
# TEST Hash function (5a)
Test.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets')
Test.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0},
'incorrect value for sampThreeHundredBuckets')
"""
Explanation: Part 5: Reduce feature dimension via feature hashing
(5a) Hash function
As we just saw, using a one-hot-encoding featurization can yield a model with good statistical accuracy. However, the number of distinct categories across all features is quite large -- recall that we observed 233K categories in the training data in Part (3c). Moreover, the full Kaggle training dataset includes more than 33M distinct categories, and the Kaggle dataset itself is just a small subset of Criteo's labeled data. Hence, featurizing via a one-hot-encoding representation would lead to a very large feature vector. To reduce the dimensionality of the feature space, we will use feature hashing.
Below is the hash function that we will use for this part of the lab. We will first use this hash function with the three sample data points from Part (1a) to gain some intuition. Specifically, run code to hash the three sample points using two different values for numBuckets and observe the resulting hashed feature dictionaries.
End of explanation
"""
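For readers on Python 3, the same hashing scheme can be written in a self-contained form (hashToBuckets is a made-up name; note that hashlib.md5 requires bytes on Python 3, hence the encode call, whereas the lab's Python 2 code hashes the str directly — the digests are identical for ASCII input):

```python
import hashlib
from collections import defaultdict

def hashToBuckets(rawFeats, numBuckets):
    """Hash each (featureID, value) pair into one of numBuckets buckets and
    count how many pairs land in each bucket."""
    counts = defaultdict(float)
    for ind, category in rawFeats:
        featureString = category + str(ind)
        digest = hashlib.md5(featureString.encode('utf-8')).hexdigest()
        counts[int(digest, 16) % numBuckets] += 1.0
    return dict(counts)

sampleOne = [(0, 'mouse'), (1, 'black')]
fourBuckets = hashToBuckets(sampleOne, 4)
```

With very few buckets, distinct features inevitably collide into the same bucket — that collision is the price paid for the reduced dimensionality.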
# TODO: Replace <FILL IN> with appropriate code
def parseHashPoint(point, numBuckets):
"""Create a LabeledPoint for this observation using hashing.
Args:
point (str): A comma separated string where the first value is the label and the rest are
features.
numBuckets: The number of buckets to hash to.
Returns:
LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed
features.
"""
<FILL IN>
numBucketsCTR = 2 ** 15
hashTrainData = <FILL IN>
hashTrainData.cache()
hashValidationData = <FILL IN>
hashValidationData.cache()
hashTestData = <FILL IN>
hashTestData.cache()
print hashTrainData.take(1)
# TEST Creating hashed features (5b)
hashTrainDataFeatureSum = sum(hashTrainData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTrainDataLabelSum = sum(hashTrainData
.map(lambda lp: lp.label)
.take(100))
hashValidationDataFeatureSum = sum(hashValidationData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashValidationDataLabelSum = sum(hashValidationData
.map(lambda lp: lp.label)
.take(100))
hashTestDataFeatureSum = sum(hashTestData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTestDataLabelSum = sum(hashTestData
.map(lambda lp: lp.label)
.take(100))
Test.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData')
Test.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData')
Test.assertEquals(hashValidationDataFeatureSum, 776,
'incorrect number of features in hashValidationData')
Test.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData')
Test.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData')
Test.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData')
"""
Explanation: (5b) Creating hashed features
Next we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = $ \scriptsize 2^{15} \approx 33K $ to create a LabeledPoint with hashed features stored as a SparseVector. Then use this function to create new training, validation and test datasets with hashed features. Hint: parseHashPoint is similar to parseOHEPoint from Part (3d).
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
def computeSparsity(data, d, n):
"""Calculates the average sparsity for the features in an RDD of LabeledPoints.
Args:
data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.
d (int): The total number of features.
n (int): The number of observations in the RDD.
Returns:
float: The average of the ratio of features in a point to total features.
"""
<FILL IN>
averageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)
averageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)
print 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)
print 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)
# TEST Sparsity (5c)
Test.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),
'incorrect value for averageSparsityOHE')
Test.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),
'incorrect value for averageSparsityHash')
"""
Explanation: (5c) Sparsity
Since we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets.
Note that if you have a SparseVector named sparse, calling len(sparse) returns the total number of features, not the number of features with nonzero entries. SparseVector objects have the attributes indices and values that contain information about which features are nonzero. Continuing with our example, these can be accessed using sparse.indices and sparse.values, respectively.
End of explanation
"""
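The sparsity computation itself is just an average of per-point nonzero fractions, which can be sketched without Spark (averageSparsity here takes each observation as a list of its nonzero indices, a simplification of the SparseVector.indices attribute):

```python
def averageSparsity(points, d):
    """Average fraction of nonzero entries per observation, where each
    observation is represented by the list of its nonzero feature indices."""
    fractions = [len(indices) / float(d) for indices in points]
    return sum(fractions) / float(len(fractions))

# Three observations over 100 total features, each with a few nonzeros
sparsity = averageSparsity([[0, 5, 9], [2], [1, 3]], 100)
```

With d fixed at 33K hashed buckets versus 233K OHE columns, the same handful of nonzeros per point yields a roughly 7x larger fraction — matching the ratio between the two averages printed above.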
numIters = 500
regType = 'l2'
includeIntercept = True
# Initialize variables using values from initial model training
bestModel = None
bestLogLoss = 1e10
# TODO: Replace <FILL IN> with appropriate code
stepSizes = <FILL IN>
regParams = <FILL IN>
for stepSize in stepSizes:
for regParam in regParams:
model = (LogisticRegressionWithSGD
.train(hashTrainData, numIters, stepSize, regParam=regParam, regType=regType,
intercept=includeIntercept))
logLossVa = evaluateResults(model, hashValidationData)
print ('\tstepSize = {0:.1f}, regParam = {1:.0e}: logloss = {2:.3f}'
.format(stepSize, regParam, logLossVa))
if (logLossVa < bestLogLoss):
bestModel = model
bestLogLoss = logLossVa
print ('Hashed Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, bestLogLoss))
# TEST Logistic model with hashed features (5d)
Test.assertTrue(np.allclose(bestLogLoss, 0.4481683608), 'incorrect value for bestLogLoss')
"""
Explanation: (5d) Logistic model with hashed features
Now let's train a logistic regression model using the hashed features. Run a grid search to find suitable hyperparameters for the hashed features, evaluating via log loss on the validation data. Note: This may take a few minutes to run. Use 1 and 10 for stepSizes and 1e-6 and 1e-3 for regParams.
End of explanation
"""
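The grid-search loop in (5d) follows a generic pattern worth isolating (gridSearch is a made-up helper shown with toy stand-ins for training and evaluation, not MLlib calls):

```python
def gridSearch(trainFn, evalFn, stepSizes, regParams):
    """Try every (stepSize, regParam) combination, train a model for each,
    and keep the one with the lowest validation loss."""
    bestModel, bestLoss = None, float('inf')
    for stepSize in stepSizes:
        for regParam in regParams:
            model = trainFn(stepSize, regParam)
            loss = evalFn(model)
            if loss < bestLoss:
                bestModel, bestLoss = model, loss
    return bestModel, bestLoss

# Toy stand-ins: a "model" is just its hyperparameter pair, and the
# pretend validation loss is smallest near stepSize 6 with light regularization.
model, loss = gridSearch(lambda s, r: (s, r),
                         lambda m: abs(m[0] - 6) + m[1],
                         [1, 6, 10], [0.1, 0.5])
```

In the heat map that follows, this exhaustive search is simply run over six values of each hyperparameter, giving the 6x6 grid of log-loss values.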
from matplotlib.colors import LinearSegmentedColormap
# Saved parameters and results. Eliminate the time required to run 36 models
stepSizes = [3, 6, 9, 12, 15, 18]
regParams = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2]
logLoss = np.array([[ 0.45808431, 0.45808493, 0.45809113, 0.45815333, 0.45879221, 0.46556321],
[ 0.45188196, 0.45188306, 0.4518941, 0.4520051, 0.45316284, 0.46396068],
[ 0.44886478, 0.44886613, 0.44887974, 0.44902096, 0.4505614, 0.46371153],
[ 0.44706645, 0.4470698, 0.44708102, 0.44724251, 0.44905525, 0.46366507],
[ 0.44588848, 0.44589365, 0.44590568, 0.44606631, 0.44807106, 0.46365589],
[ 0.44508948, 0.44509474, 0.44510274, 0.44525007, 0.44738317, 0.46365405]])
numRows, numCols = len(stepSizes), len(regParams)
logLoss = np.array(logLoss)
logLoss.shape = (numRows, numCols)
fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7),
hideLabels=True, gridWidth=0.)
ax.set_xticklabels(regParams), ax.set_yticklabels(stepSizes)
ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Step Size')
colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)
image = plt.imshow(logLoss,interpolation='nearest', aspect='auto',
cmap = colors)
pass
"""
Explanation: Visualization 3: Hyperparameter heat map
We will now perform a visualization of an extensive hyperparameter search. Specifically, we will create a heat map where the brighter colors correspond to lower values of logLoss.
The search was run using six step sizes and six values for regularization, which required the training of thirty-six separate models. We have included the results below, but omitted the actual search to save time.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
# Log loss for the best model from (5d)
logLossTest = <FILL IN>
# Log loss for the baseline model
logLossTestBaseline = <FILL IN>
print ('Hashed Features Test Log Loss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTestBaseline, logLossTest))
# TEST Evaluate on the test set (5e)
Test.assertTrue(np.allclose(logLossTestBaseline, 0.537438),
'incorrect value for logLossTestBaseline')
Test.assertTrue(np.allclose(logLossTest, 0.455616931), 'incorrect value for logLossTest')
"""
Explanation: (5e) Evaluate on the test set
Finally, evaluate the best model from Part (5d) on the test set. Compare the resulting log loss with the baseline log loss on the test set, which can be computed in the same way that the validation log loss was computed in Part (4f).
End of explanation
"""
|
seinberg/deep-learning | image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 14
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images belonging to one of the following classes:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
"""
return x.astype(np.float32, copy=False) / 255.0
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
from sklearn import preprocessing
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
one_hot_binarizer = preprocessing.LabelBinarizer()
one_hot_binarizer.fit(range(0, 10))
return one_hot_binarizer.transform(x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
return tf.placeholder(tf.float32,
[None, image_shape[0], image_shape[1], image_shape[2]],
name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
return tf.placeholder(tf.float32,
[None, n_classes],
name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
return tf.placeholder(tf.float32, name='keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
If you're finding it hard to dedicate enough time a week to this course, we've provided a small shortcut for this part of the project. In the next couple of problems, you'll have the option to use TensorFlow Layers or TensorFlow Layers (contrib) to build each layer, except the "Convolutional & Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
If you would like to get the most out of this course, try to solve all the problems without TF Layers. Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for convolution
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
height = conv_ksize[0]
width = conv_ksize[1]
input_depth = x_tensor.get_shape().as_list()[3]
output_depth = conv_num_outputs
filter_weights = tf.Variable(tf.random_normal([height, width, input_depth, output_depth], mean=0.0, stddev=0.05))
filter_bias = tf.Variable(tf.random_normal([output_depth]))
# the stride for each dimension (batch_size, height, width, depth)
conv_strides_dims = [1, conv_strides[0], conv_strides[1], 1]
padding = 'SAME'
#print("neural net is being created...")
# https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#conv2d
# `tf.nn.conv2d` does not include the bias computation so we have to add it ourselves after.
convolution = tf.nn.conv2d(x_tensor, filter_weights, conv_strides_dims, padding) + filter_bias
# batch normalization on convolution
convolution = tf.contrib.layers.batch_norm(convolution, center=True, scale=True)
#convolution = tf.nn.batch_normalization(convolution, mean=0.0, variance=1.0, offset=0.0, scale)
# non-linear activation function
convolution = tf.nn.elu(convolution)
# the ksize (filter size) for each dimension (batch_size, height, width, depth)
ksize = [1, pool_ksize[0], pool_ksize[1], 1]
# the stride for each dimension (batch_size, height, width, depth)
pool_strides_dims = [1, pool_strides[0], pool_strides[1], 1]
# https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#max_pool
return tf.nn.max_pool(convolution, ksize, pool_strides_dims, padding)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. You're free to use any TensorFlow package for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
return tf.contrib.layers.flatten(x_tensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=tf.nn.elu)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
#batch_size = x_tensor.get_shape().as_list()[1]
#weight = tf.Variable(tf.random_normal([batch_size, num_outputs], mean=0.0, stddev=0.03))
#bias = tf.Variable(tf.zeros(num_outputs))
#output_layer = tf.add(tf.matmul(x_tensor, weight), bias)
#return output_layer
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
Note: Activation, softmax, or cross entropy shouldn't be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs1 = 32
conv_num_outputs2 = 128
conv_num_outputs3 = 512
conv_ksize = (4, 4)
conv_strides = (1, 1)
pool_ksize = (4, 4)
pool_strides = (2, 2)
conv_layer1 = conv2d_maxpool(x, conv_num_outputs1, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_layer1 = tf.nn.dropout(conv_layer1, tf.to_float(keep_prob))
conv_layer2 = conv2d_maxpool(conv_layer1, conv_num_outputs2, conv_ksize, conv_strides, pool_ksize, pool_strides)
#conv_layer2 = tf.nn.dropout(conv_layer2, tf.to_float(keep_prob))
conv_layer3 = conv2d_maxpool(conv_layer2, conv_num_outputs3, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (4, 4), (1, 1), pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (4, 4), (1, 1), pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (5, 5), (1, 1), pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (5, 5), (1, 1), pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer3, conv_num_outputs3, (5, 5), (1, 1), pool_ksize, pool_strides)
conv_layer3 = tf.nn.dropout(conv_layer3, tf.to_float(keep_prob))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flattened = flatten(conv_layer3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
# num_outputs can be arbitrary in size
num_outputs = 1024
fully_conn_layer1 = fully_conn(flattened, 512)
fully_conn_layer1 = tf.nn.dropout(fully_conn_layer1, tf.to_float(keep_prob))
fully_conn_layer2 = fully_conn(fully_conn_layer1, 512)
#fully_conn_layer3 = fully_conn(fully_conn_layer2, 128)
#fully_conn_layer4 = fully_conn(fully_conn_layer3, 64)
#fully_conn_layer3 = tf.nn.dropout(fully_conn_layer3, tf.to_float(keep_prob))
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fully_conn_layer2, 10)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
return session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss: {} - Validation Accuracy: {}'.format(loss, valid_accuracy))
return float(valid_accuracy)
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 40
batch_size = 128
keep_probability = 0.7
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set it to a common size:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
valid_acc = print_stats(sess, batch_features, batch_labels, cost, accuracy)
print('Accuracy: {}'.format(valid_acc))
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
best_valid_accuracy = 0.0
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
valid_acc = print_stats(sess, batch_features, batch_labels, cost, accuracy)
if (valid_acc > best_valid_accuracy):
print('best validation accuracy ({} > {}); saving model'.format(valid_acc, best_valid_accuracy))
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
best_valid_accuracy = valid_acc
"""
Explanation: Fully Train the Model
Now that you've gotten good accuracy with a single CIFAR-10 batch, try training with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
psas/lv3.0-recovery | Useful_Misc/Drop_Test_Calculations.ipynb | gpl-3.0 | import math
import sympy
from sympy import Symbol, solve
from scipy.integrate import odeint
from types import SimpleNamespace
import numpy as np
import matplotlib.pyplot as plt
sympy.init_printing()
%matplotlib inline
"""
Explanation: LV3 Recovery Test
This notebook will encompass all calculations regarding the LV3 Recovery/eNSR Drop Test.
Resources
[http://www.usma.edu/math/Military%20Math%20Modeling/C5.pdf]
[http://www.the-rocketman.com/drogue-decent-rate.html]
[http://wind.willyweather.com/or/crook-county/prineville-reservoir.html]
* wind = NW 5.8mph
setup
imports
End of explanation
"""
# P = speed of plane [m/s]
P = 38
# wind speed [m/s]
w = 2.59283
# wind bearing, measured from east to north [degrees]
theta_w_deg= 360-45
# aircraft bearing, measured from east to north [degrees]
theta_a_deg= 90+45
# safety distance above the ground that the main chute should deploy at [m]
mainSafetyDist= 304.8 # 1000 ft = 304.8 m
"""
Explanation: input parameters
flight plan
End of explanation
"""
# worst case wind speed [m/s]
# w_worst= 12.86
# mass of payload [kg]
m = 28
# g = acceleration due to gravity [m/s^2]
g = 9.81
# density of air [kg/m^3]
rho = 1.2
# terminal velocity of drogue [m/s]
vt_d= 18.5 # according to rocketman
# radius of drogue [m]
R_d= 0.762
# static line length [m]
sl = 2
# drogue line length [m]
dl= 50
# terminal velocity of main chute [m/s]
vt_m= 4.83108 # according to rocketman
# radius of main chute [m]
R_m= 5.4864
"""
Explanation: physical parameters
End of explanation
"""
# wind speed in the direction of flight
theta_a= theta_a_deg*2*np.pi/360
theta_w= theta_w_deg*2*np.pi/360
wx= w*np.cos(theta_w-theta_a)
# cross-wind speed from left to right (pilot's perspective)
wz= -w*np.sin(theta_w-theta_a)
"""
Explanation: Calculations
Convert wind directions into aircraft coordinates
End of explanation
"""
va = 0
## vertical distance gained
## since the static line is 2m, assuming it falls only in the vertical direction:
Lva = sl
# horizontal distance gained
## speed of plane times time to drop 2m static line
## 1/2*g*ta**2 = sl
ta = math.sqrt(2*sl/g)
Lha = P*ta
print('step a (from drop to static line disconnect):')
print('time to free fall fall 2 m = ', round(ta,4), ' s')
print('vertical length gained = ', round(Lva,4), ' m')
print('horizontal length gained = ', round(Lha,4), ' m')
"""
Explanation: Step a - static line extending
Assumptions:
no drag
static line is approximately 2m long
plane is flying at approximately 85 mph = 38 m/s
Variables
va = vertical velocity at instant the system is pushed from the plane [m/s]
sl = static line length [m]
Lva = vertical length gained from step a [m]
Lha = horizontal length gained from step a [m]
ta = time for step a to be complete [s]
calculations
End of explanation
"""
# vertical velocity at end of static line, beginning of timer
vb = va + (g*ta)
# since the deployment is controlled by a 2 sec timer:
tb = 2
# vertical length gained
Lvb = (vb*tb) + (0.5*g*(tb**2))
# horizontal length gained
Lhb = P*tb
print('step b (from static line disconnect to timer runout):')
print('vertical velocity at beginning of step b = ', round(vb,4), ' m/s')
print('vertical length gained = ', round(Lvb,4), ' m')
print('horizontal length gained = ', round(Lhb,4), ' m')
"""
Explanation: Step b - deployment timer running
deployment timer is a 2 sec timer
Assumptions
neglecting drag force
Variables
P = speed of plane
vb = velocity after 2m static line has extended (aka instant static line 'snaps')
g = acceleration due to gravity
Lvb = vertical length gained from step b
Lhb = horizontal length gained from step b
tb = time for step b to be complete
calculations
End of explanation
"""
# velocity at time of ring separation, end of timer
vc = vb + g*tb
Lhc = 0
Lvc = 0
print('vertical velocity at ring separation = ', round(vc,4), ' m/s')
"""
Explanation: Step c - eNSR ring separation
Assumptions:
This step only lasts for an instant; i.e. has no duration
drogue timer begins as ring separation occurs
Variables
P = speed of plane
vc = velocity at time of ring separation
g = acceleration due to gravity
Lvc = vertical length gained from step c
Lhc = horizontal length gained from step c
tc = time for step c to be complete
calculations
End of explanation
"""
Ps, vds, gs, Lvds, Lhds, tds, vcs = sympy.symbols('Ps vds gs Lvds Lhds tds vcs')
Dparms= {Ps: P, gs: g, vcs: vc}
tdEqn= (Ps*tds)**2 + (vcs*tds + 0.5*gs*tds**2)**2 - dl**2
tdSolns= sympy.solve(tdEqn.subs(Dparms))
print('possible solutions:', tdSolns)
for soln in [complex(x) for x in tdSolns]:
if (soln.imag != 0) or (soln.real <= 0):
pass
else:
print(soln, 'seems fine')
td= soln.real
# now go back and calculate x and y
Lhd = P*td
Lvd = vc*td + 0.5*g*(td**2)
# vertical velocity gained after the 50' drop
vd = vc + g*td
print()
print('time to pull out drogue:', round(td,4), 's')
print('horizontal distance gained = ', round(Lhd,4), 'm')
print('vertical distance gained = ', round(Lvd,4), 'm')
print('vertical velocity at instant line becomes taught = ', round(vd,4), 'm/s')
print('horizontal velocity: ', P, 'm/s')
"""
Explanation: Step d - drogue line is being pulled out
Assumptions
no drag force considered for the payload for horizon. and vert. decent until drogue is fully unfurled
just accounting for the 50' shock chord, therefore not including the lines coming directly from the 'chute
the drogue pulls out at an angle due to a small amount of drag on the drogue slowing it down horizontally
Variables
P = speed of plane
vd = velocity after 50' shock chord is drawn out
g = acceleration due to gravity
Lvd = vertical distance gained from step d
Lhd = horizontal distance gained from step d
td = time for step d to be complete
dl = drogue line length
the 50' chord as the hypotenuse
$dl = \sqrt{x^2 + y^2}$
vertical length gained from step d
$L_{vd} = v_c t_d + 0.5 g t_d^2$
horizontal length gained from step d
$L_{hd} = P t_d$
calculate $t_d$ by substituting x and y into the equation above
$dl^2 = (P t_d)^2 + (v_c t_d + 0.5 g t_d^2)^2$
calculations
End of explanation
"""
# make a function that translates our equtions into odeint() format
def dragFunc(y, t0, p):
# map the positions and velocities to convenient names:
r_x= y[0]
r_y= y[1]
r_z= y[2]
rdot_x= y[3]
rdot_y= y[4]
rdot_z= y[5]
# calculate the accelerations:
rddot_x= 1/p.m*(-1/2*p.rho*p.A*p.Cd*np.sqrt((rdot_x-p.wx)**2+rdot_y**2+(rdot_z-p.wz)**2)*(rdot_x-p.wx))
rddot_y= 1/p.m*(-1/2*p.rho*p.A*p.Cd*np.sqrt((rdot_x-p.wx)**2+rdot_y**2+(rdot_z-p.wz)**2)*rdot_y -p.m*p.g)
rddot_z= 1/p.m*(-1/2*p.rho*p.A*p.Cd*np.sqrt((rdot_x-p.wx)**2+rdot_y**2+(rdot_z-p.wz)**2)*(rdot_z-p.wz))
# return the velocities and accelerations:
return([rdot_x, rdot_y, rdot_z, rddot_x, rddot_y, rddot_z])
D_d = m*g # drag force on drogue at terminal velocity [N]
A_d = math.pi*(R_d**2) # frontal area of drogue [m^2]
cd_d = (2*D_d)/(rho*A_d*vt_d**2) # drag coeff. of drogue []
# bundle up the parameters needed by dragFunc():
pd = SimpleNamespace()
pd.rho = rho
pd.A = A_d
pd.Cd = cd_d
pd.m = m
pd.g = g
pd.wx = wx
pd.wz = wz
# set the boundary conditions for the solver:
y0 = [0,0,0, P, -vd, 0]
t_step = 0.001
t_start = 0
t_final = 10
times_d = np.linspace(t_start, t_final, int((t_final - t_start) / t_step))
# run the simulation:
soln_d = odeint(func= dragFunc, y0= y0, t= times_d, args= (pd,))
# find the time when it's okay to deploy the main chute:
# for i in range(0, len(soln)):
# if (soln_d_xddot[i] < 0.01*soln_d_xddot[0]) and (soln_d_yddot[i] < 0.01*soln_d_yddot[0]):
# print('At time', round(times_d[i],4), 'x and y acceleration are below 1% their original values.')
# tcr_d= times_d[i]
# break
# chop off the stuff after the critical time:
#soln= soln[range(0,i)]
#times= times[range(0,i)]
"""
Explanation: Step e - drogue is fully deployed
Assumptions
drag force in full effect
skipping impulse and time to steady state
Variables
cd = coeff. of drag [unitless]
D = drag force = mass of payload*g [N]
rho = density of air [kg/m^3]
A = area of parachute [m^2]
v = approx. steady state velocity of drogue [m/s]
m = mass of payload [kg]
Rd = drogue radius [m]
w = wind speed [m/s]
governing equations
Just start with Newton's 2nd law. The $-\frac{1}{2}\rho$ stuff is the drag force. It's negative because it opposes the motion. The biz with the $|\dot{\vec r}|\dot{\vec r}$ is to get a vector that has the magnitude of $\dot r^2$ and the direction of $\dot{\vec r}$.
$
m\ddot{\vec r} = -\frac{1}{2}\rho A_d C_d |\dot{\vec r}|\dot{\vec r} + m\vec g
$
Break it out into components. (This is where we see that it's an ugly coupled diffeq.)
$
m \ddot r_x = -\frac{1}{2}\rho A_d C_d \sqrt{\dot r_x^2 + \dot r_y^2}\,\dot r_x \\
m \ddot r_y = -\frac{1}{2}\rho A_d C_d \sqrt{\dot r_x^2 + \dot r_y^2}\,\dot r_y - m g
$
numerical solution
End of explanation
"""
# break out the solutions into convenient names:
soln_d_x= [s[0] for s in soln_d]
soln_d_y= [s[1] for s in soln_d]
soln_d_z= [s[2] for s in soln_d]
soln_d_xdot= [s[3] for s in soln_d]
soln_d_ydot= [s[4] for s in soln_d]
soln_d_zdot= [s[5] for s in soln_d]
soln_d_xddot= np.diff(soln_d_xdot) # x acceleration
soln_d_yddot= np.diff(soln_d_ydot) # y acceleration
soln_d_zddot= np.diff(soln_d_zdot) # z acceleration
# plot da shiz:
plt.figure(1)
plt.plot(soln_d_x, soln_d_y)
plt.axis('equal')
plt.xlabel('horizontal range (m)')
plt.ylabel('vertical range (m)')
plt.figure(2)
plt.plot(times_d, soln_d_xdot)
plt.xlabel('time (s)')
plt.ylabel('horizontal velocity (m/s)')
plt.figure(3)
plt.plot(times_d, soln_d_ydot)
plt.xlabel('time (s)')
plt.ylabel('vertical velocity (m/s)')
plt.figure(4)
plt.plot(times_d[range(0, len(soln_d_xddot))], soln_d_xddot)
plt.xlabel('time (s)')
plt.ylabel('horizontal acceleration (m/s^2)')
plt.figure(5)
plt.plot(times_d[range(0, len(soln_d_yddot))], soln_d_yddot)
plt.xlabel('time (s)')
plt.ylabel('vertical acceleration (m/s^2)')
"""
Explanation: plots
End of explanation
"""
Lhe = soln_d_x[-1]
Lve = -soln_d_y[-1]
Lle = soln_d_z[-1]
te = times_d[-1]
print('horizontal distance travelled in step e:', Lhe)
print('vertical distance travelled in step e:', Lve)
print('lateral distance travelled in step e:', Lle)
print('time taken in step e:', te)
"""
Explanation: results
End of explanation
"""
# x-direction calculations
##########################
# from usma:
# mx" + cd*x' = cd*w
####### need python help here #######
# ugh, I have to go learn how to use scipy... 1 sec -- Joe
# mx" + cd*x
## homogeneous equation mx" + rho*x' = 0
## characteristic equation for the homogeneous differential equation is:
## mr^2 + rho*r = 0
## where the roots are:
## r1 = 0, r2 = -(rho/m)
## complementary solution:
## xc = C1*e^0 + C2* e^(-(rho*t/m))
## non-homogeneous equation mx" + rho*x' = rho*w
## complete solution x = C1 + C2*e^(-(rho*t/m)) + wt
## solving for C1 and C2 using results from step d as initial conditions
## except time = 0 since we are making calculations just for this step
## i.e. x(0) = x_curr_tot and vx(0) = P
## therefore C1 = and C2 =
# x_0 = Lha + Lhb + Lhc + Lhd
# t = 0
# vx_0 = P
# C1 = Symbol('C1')
# C2 = Symbol('C2')
# C_1 = solve(C1 + C2*math.exp(-(rho*t/m)) + w*t - x_0, C1)
# C_1
# print(C_1)
# C_2 = solve(C2*(-(rho/m)) + w - vx_0, C2)
# print(C_2)
# ## NEEEED HELLLPPP should be using piecewise to solve this
# ## copying C_1 output from just above with the C_2 value
# calc_C1 = 147.560492558936 + 586.6
# print(calc_C1)
#
# ## therefore the complete solution is:
# ## x = 734.1605 - 586.6*exp(-(rho*t/m)) + w*t
#
# ## if the drogue falls for 3 seconds, then
# t = 3
# Lhe = 734.1605 - 586.6*math.exp(-(rho*t/m)) + w*t
#
# print('horizontal distance gained = ', round(Lhe,4), 'm')
# print(' ')
#
# # y-direction calculations
# ##########################
#
# ## from usma
# ## characteristic equation:
# ## m*r^2 + rho*r = 0
# ## where the roots are r = 0 and r = (-b/m)
#
# ## complete solution:
# ## y = C1 + C2*exp(-(rho*t)/m)
# ## solving for C1 and C2 using results from step d as initial conditions
# ## except time = 0 since we are making calculations just for this step
#
# y_0 = Lva + Lvb + Lvc + Lvd
# print('y_0 = ', y_0)
# vy_0 = vd
# print('vy_0 = ',vy_0)
# t_0 = 0
# C1 = Symbol('C1')
# C2 = Symbol('C2')
# ## NEEEED HELLLPPP should be using piecewise to solve this
# # C1 equation
# C_1 = solve(C1 + C2*math.exp(-(rho*t_0/m)) - y_0, C1)
# print('C1 equation: ', C_1)
# # C2 equation/value
# C_2 = solve(C2*(-(rho/m)*math.exp(-(rho*t_0/m))) - vy_0, C2)
# print('C2 = ', C_2)
# ## copying C_1 output from just above with the C_2 value
# calc_C1 = 793.253769802079 + 62.2619406518579 #62.2619406518579 + (0.879350749407306*793.253769802079)
# print('C1 = ', calc_C1)
#
# # NEED HELP: need to make C_2 a number (int, float)
# ## if the drogue falls for 3 seconds, then
# t = 3
# Lve = calc_C1 + (-793.253769802079*math.exp(-(rho/m)*t))
#
# print('vertical distance gained = ', Lve, 'm')
#
# ## Maayybbbeee
#
# vert_length = v*t
# print(vert_length)
"""
Explanation: old calculations
End of explanation
"""
Lvf= mainSafetyDist
# step f time = vertical distance / main terminal velocity
tf= Lvf/vt_m
# horizontal distance= wind speed * step f time
Lhf= wx*tf
Llf= wz*tf
"""
Explanation: Step f - main 'chute fully deployed
If you want to justify to yourself that the main chute hits terminal velocity [almost] instantly, you can mess with the inputs for the numerical solution in step e.
Assumptions
drag force in full effect
skipping impulse and time to steady state
main 'chute is a full 18' in dia.
after the payload has gone through the drogue descent, the horizontal velocity is the same as the wind speed
Variables
cd = coeff. of drag [unitless]
D = drag force = mass of payload*g [N]
rho = density of air [kg/m^3]
A = area of parachute [m^2]
v_main = approx. steady state velocity of main 'chute [m/s]
m = mass of payload [kg]
w = wind speed [m/s]
calculations
End of explanation
"""
print('horizontal distance travelled in step f:', Lhf, 'm')
print('vertical distance travelled in step f:', Lvf, 'm')
print('time taken in step f:', tf, 's')
"""
Explanation: results
End of explanation
"""
# TOTAL HORIZONTAL DISTANCE TRAVELED
X_TOT = Lha + Lhb + Lhc + Lhd + Lhe + Lhf
X_TOT_ft = X_TOT*3.28084
print('TOTAL HORIZONTAL DISTANCE TRAVELED = ', round(X_TOT,2), 'm ', ' = ', round(X_TOT_ft,2), 'ft')
# TOTAL VERTICAL DISTANCE DESCENDED
Y_TOT = Lva + Lvb + Lvc + Lvd + Lve + Lvf
Y_TOT_ft = Y_TOT*3.28084
print('TOTAL VERTICAL DISTANCE DESCENDED = ', round(Y_TOT,2), 'm ', ' = ', round(Y_TOT_ft,2), 'ft')
# TOTAL TIME FOR DESCENT
T_TOT = ta + tb + td + te + tf
# in minutes
t_tot_min = T_TOT/60
print('TOTAL TIME FOR DESCENT', round(T_TOT,2), 's = ', round(t_tot_min,2), 'min')
"""
Explanation: Results
totals
End of explanation
"""
delta_xs= np.array([0, Lha, Lhb, Lhc, Lhd, Lhe, Lhf])
delta_ys= -np.array([0, Lva, Lvb, Lvc, Lvd, Lve, Lvf])
delta_zs= np.array([0, 0, 0, 0, 0, Lle, Llf])
xs= np.cumsum(delta_xs)
ys= np.cumsum(delta_ys)
zs= np.cumsum(delta_zs)
plt.close('all')
plt.figure(1)
plt.plot(xs,ys)
_= plt.axis('equal')
plt.grid()
plt.title('down-range trajectory')
plt.xlabel('down-range distance from drop (m)')
plt.ylabel('altitude relative to drop (m)')
plt.figure(2)
plt.plot(zs, ys)
_= plt.axis('equal')
plt.grid()
plt.title('lateral trajectory')
plt.xlabel('lateral (left to right) distance from drop (m)')
plt.ylabel('altitude relative to drop (m)')
print('xs:', xs)
print('ys:', ys)
print('zs:', zs)
print('note that Y is up and Z is to the right of the aircraft... because I don\'t want to change my code.')
"""
Explanation: trajectories relative to drop point (aircraft coordinates)
End of explanation
"""
Es= xs*np.cos(theta_a) +zs*np.sin(theta_a)
Ns= xs*np.sin(theta_a) -zs*np.cos(theta_a)
plt.figure(2)
plt.plot(Es,ys)
_= plt.axis('equal')
plt.grid()
plt.title('east trajectory')
plt.xlabel('eastern distance from drop (m)')
plt.ylabel('altitude relative to drop (m)')
plt.figure(3)
plt.plot(Ns, ys)
_= plt.axis('equal')
plt.grid()
plt.title('north trajectory')
plt.xlabel('northern distance from drop (m)')
plt.ylabel('altitude relative to drop (m)')
print('Es:', Es)
print('ys:', ys)
print('Ns:', Ns)
"""
Explanation: trajectories relative to drop point (East-North coordinates)
End of explanation
"""
|
tommyod/abelian | docs/notebooks/fourier_series.ipynb | gpl-3.0 | # Imports related to plotting and LaTeX
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import display, Math
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')
def show(arg):
return display(Math(arg.to_latex()))
# Imports related to mathematics
import numpy as np
from abelian import LCA, HomLCA, LCAFunc
from sympy import Rational, pi
"""
Explanation: Tutorial: Fourier series
This is an interactive tutorial written with real code.
We start by setting up $\LaTeX$ printing and importing some classes.
End of explanation
"""
def identity(arg_list):
return sum(arg_list)
# Create the domain T and a function on it
T = LCA(orders = [1], discrete = [False])
function = LCAFunc(identity, T)
show(function)
"""
Explanation: Overview: $f(x) = x$ defined on $T = \mathbb{R}/\mathbb{Z}$
In this example we compute the Fourier series coefficients for
$f(x) = x$ with domain $T = \mathbb{R}/\mathbb{Z}$.
We will proceed as follows:
Define a function $f(x) = x$ on $T$.
Sample using pullback along $\phi_\text{sample}: \mathbb{Z}_n \to T$. Specifically, we will use $\phi(k) = k/n$ to sample uniformly.
Compute the DFT of the sampled function using the dft method.
Use a transversal rule to move the DFT from $\mathbb{Z}_n$ to $\widehat{T} = \mathbb{Z}$.
Plot the result and compare with the analytical solution, which can be obtained by computing the complex Fourier coefficients of the Fourier integral by hand.
We start by defining the function on the domain.
Defining the function
End of explanation
"""
# Set up the number of sample points
n = 8
# Create the source of the monomorphism
Z_n = LCA([n])
phi_sample = HomLCA([Rational(1, n)],T, Z_n)
show(phi_sample)
"""
Explanation: We now create a monomorphism $\phi_\text{sample}$ to sample the function, where we make use of the Rational class to avoid numerical errors.
Sampling using pullback
End of explanation
"""
# Pullback along phi_sample
function_sampled = function.pullback(phi_sample)
"""
Explanation: We sample the function using the pullback.
End of explanation
"""
# Take the DFT (a multidimensional FFT is used)
function_sampled_dual = function_sampled.dft()
"""
Explanation: Then we compute the DFT (discrete Fourier transform). The DFT is available on functions defined on $\mathbb{Z}_\mathbf{p}$ with $p_i \geq 1$, i.e. on FGAs with finite orders.
The DFT
End of explanation
"""
# Set up a transversal rule
def transversal_rule(arg_list):
x = arg_list[0] # First element of vector/list
if x < n/2:
return [x]
else:
return [x - n]
# Calculate the Fourier series coefficients
phi_d = phi_sample.dual()
rule = transversal_rule
coeffs = function_sampled_dual.transversal(phi_d, rule)
show(coeffs)
"""
Explanation: Transversal
We use a transversal rule, along with $\widehat{\phi}_\text{sample}$, to push the function to $\widehat{T} = \mathbb{Z}$.
End of explanation
"""
# Set up a function for the analytical solution
def analytical(k):
if k == 0:
return 1/2
return complex(0, 1)/(2*pi*k)
# Sample the analytical and computed functions
sample_values = list(range(-int(1.5*n), int(1.5*n)+1))
analytical_sampled = list(map(analytical, sample_values))
computed_sampled = coeffs.sample(sample_values)
# Because the forward DFT does not scale, we scale manually
computed_sampled = [k/n for k in computed_sampled]
"""
Explanation: Comparing with analytical solution
Let us compare this result with the analytical solution, which is
$$c_k =
\begin{cases}
1/2 & \text{if } k = 0 \\
-1/(2 \pi i k) & \text{else}.
\end{cases}$$
End of explanation
"""
# Since we are working with complex numbers
# and we wish to plot them, we convert
# to absolute values first
length = lambda x: float(abs(x))
analytical_abs = list(map(length, analytical_sampled))
computed_abs = list(map(length, computed_sampled))
# Plot it
plt.figure(figsize = (8,3))
plt.title('Absolute value of Fourier coefficients')
plt.plot(sample_values, analytical_abs, label = 'Analytical')
plt.plot(sample_values, computed_abs, label = 'Computed')
plt.grid(True)
plt.legend(loc = 'best')
plt.show()
"""
Explanation: Finally, we create the plot comparing the computed coefficients with the ones obtained analytically. Notice how the computed values drop to zero outside of the transversal region.
End of explanation
"""
|
tritemio/multispot_paper | out_notebooks/Multi-spot vs usALEX FRET histogram comparison-out-7d.ipynb | mit | data_id = '17d'
ph_sel_name = "None"
data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 22:24:08 2017
Duration: 10 seconds.
End of explanation
"""
from fretbursts import *
sns = init_notebook()
import os
import pandas as pd
from IPython.display import display, Math
import lmfit
print('lmfit version:', lmfit.__version__)
figure_size = (5, 4)
default_figure = lambda: plt.subplots(figsize=figure_size)
save_figures = True
def savefig(filename, **kwargs):
if not save_figures:
return
import os
dir_ = 'figures/'
kwargs_ = dict(dpi=300, bbox_inches='tight')
#frameon=True, facecolor='white', transparent=False)
kwargs_.update(kwargs)
plt.savefig(dir_ + filename, **kwargs_)
print('Saved: %s' % (dir_ + filename))
"""
Explanation: Multi-spot vs usALEX FRET histogram comparison
Load FRETBursts software
End of explanation
"""
PLOT_DIR = './figure/'
import matplotlib as mpl
from cycler import cycler
bmap = sns.color_palette("Set1", 9)
colors = np.array(bmap)[(1,0,2,3,4,8,6,7), :]
mpl.rcParams['axes.prop_cycle'] = cycler('color', colors)
colors_labels = ['blue', 'red', 'green', 'violet', 'orange', 'gray', 'brown', 'pink', ]
for c, cl in zip(colors, colors_labels):
locals()[cl] = tuple(c) # assign variables with color names
sns.palplot(colors)
"""
Explanation: 8-spot paper plot style
End of explanation
"""
data_dir = './data/multispot/'
"""
Explanation: Data files
Data folder:
End of explanation
"""
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Check that the folder exists:
End of explanation
"""
from glob import glob
file_list = sorted(glob(data_dir + '*_?.hdf5'))
labels = ['12d', '7d', '17d', '22d', '27d', 'DO']
files_dict = {lab: fname for lab, fname in zip(sorted(labels), file_list)}
files_dict
"""
Explanation: List of data files in data_dir:
End of explanation
"""
_fname = 'results/Multi-spot - leakage coefficient KDE wmean DexDem.csv'
leakageM = np.loadtxt(_fname, ndmin=1)
print('Leakage coefficient:', leakageM)
"""
Explanation: Correction parameters
Multispot
Load the multispot leakage coefficient from disk (computed in Multi-spot 5-Samples analyis - Leakage coefficient fit):
End of explanation
"""
_fname = 'results/usALEX - direct excitation coefficient dir_ex_t beta.csv'
dir_ex_tM = np.loadtxt(_fname, ndmin=1)
print('Direct excitation coefficient (dir_ex_t):', dir_ex_tM)
"""
Explanation: Load the multispot direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter):
End of explanation
"""
_fname = 'results/Multi-spot - gamma factor.csv'
gammaM = np.loadtxt(_fname, ndmin=1)
print('Multispot gamma coefficient:', gammaM)
"""
Explanation: Load the multispot gamma ($\gamma_M$) coefficient (computed in Multi-spot Gamma Fitting):
End of explanation
"""
_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakageA = np.loadtxt(_fname)
print('usALEX Leakage coefficient:', leakageA)
"""
Explanation: usALEX
Load the usALEX leakage coefficient from disk (computed in usALEX - Corrections - Leakage fit):
End of explanation
"""
_fname = 'results/usALEX - gamma factor - all-ph.csv'
gammaA = np.loadtxt(_fname)
print('usALEX Gamma-factor:', gammaA)
"""
Explanation: Load the usALEX gamma coefficient (computed in usALEX - Corrections - Gamma factor fit):
End of explanation
"""
_fname = 'results/usALEX - beta factor - all-ph.csv'
betaA = np.loadtxt(_fname)
print('usALEX Gamma-factor:', betaA)
"""
Explanation: Load the usALEX beta coefficient (computed in usALEX - Corrections - Gamma factor fit):
End of explanation
"""
_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(_fname)
print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
"""
Explanation: Load the usALEX direct-excitation coefficient ($d_{exAA}$) (computed in usALEX - Corrections - Direct excitation fit):
End of explanation
"""
dir_ex_tA = betaA * dir_ex_aa
dir_ex_tA
"""
Explanation: Compute usALEX direct-excitation coefficient ($d_{exT}$) (see usALEX - Corrections - Direct excitation physical parameter):
End of explanation
"""
donor_ref = False # False -> gamma correction is: g*nd + na
# True -> gamma correction is: nd + na/g
hist_weights = 'size'
## Background fit parameters
bg_kwargs_auto = dict(fun=bg.exp_fit,
time_s = 30,
tail_min_us = 'auto',
F_bg=1.7,
)
## Burst search
F=6
dither = False
size_th = 30 # Burst size threshold (selection on corrected burst sizes)
## FRET fit parameters
bandwidth = 0.03 # KDE bandwidth
E_range = {'7d': (0.7, 1.0), '12d': (0.4, 0.8), '17d': (0.2, 0.4),
'22d': (0.0, 0.1), '27d': (0.0, 0.1), 'DO': (0.0, 0.1)}
E_axis_kde = np.arange(-0.2, 1.2, 0.0002)
"""
Explanation: Parameters
Analysis parameters:
End of explanation
"""
def print_fit_report(E_pr, gamma=1, leakage=0, dir_ex_t=0, math=True):
"""Print fit and standard deviation for both corrected and uncorrected E
Returns d.E_fit.
"""
E_corr = fretmath.correct_E_gamma_leak_dir(E_pr, gamma=gamma, leakage=leakage, dir_ex_t=dir_ex_t)
E_pr_mean = E_pr.mean()*100
E_pr_delta = (E_pr.max() - E_pr.min())*100
E_corr_mean = E_corr.mean()*100
E_corr_delta = (E_corr.max() - E_corr.min())*100
if math:
display(Math(r'\text{Pre}\;\gamma\quad\langle{E}_{fit}\rangle = %.1f\%% \qquad'
'\Delta E_{fit} = %.2f \%%' % \
(E_pr_mean, E_pr_delta)))
display(Math(r'\text{Post}\;\gamma\quad\langle{E}_{fit}\rangle = %.1f\%% \qquad'
'\Delta E_{fit} = %.2f \%%' % \
(E_corr_mean, E_corr_delta)))
else:
print('Pre-gamma E (delta, mean): %.2f %.2f' % (E_pr_mean, E_pr_delta))
print('Post-gamma E (delta, mean): %.2f %.2f' % (E_corr_mean, E_corr_delta))
"""
Explanation: Utility functions
End of explanation
"""
d = loader.photon_hdf5(files_dict[data_id])
d.calc_bg(**bg_kwargs_auto)
d.burst_search(m=10, F=F, dither=dither)
d.time_max
ds = Sel(d, select_bursts.size, th1=30, gamma=gammaM, donor_ref=donor_ref)
ds.num_bursts
# fitter = bext.bursts_fitter(ds)
# fitter.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
# fitter.model = mfit.factory_two_gaussians(add_bridge=False, p2_center=0.4)
# fitter.fit_histogram()
# display(fitter.params['p2_center'])
# print_fit_report(fitter.params['p2_center'], gamma=gammaM, leakage=leakageM, dir_ex_t=dir_ex_tM)
dplot(ds, hist_fret);
#show_model=True, show_fit_stats=True, fit_from='p2_center', show_fit_value=True);
d_all = ds.collapse()
d_all_chunk = Sel(d_all, select_bursts.time, time_s2=600/8)
dplot(d_all_chunk, hist_fret)
Eraw = d_all_chunk.E[0]
E = fretmath.correct_E_gamma_leak_dir(Eraw, gamma=gammaM, leakage=leakageM, dir_ex_t=dir_ex_tM)
sns.set_style('whitegrid')
%config InlineBackend.figure_format='retina' # for hi-dpi displays
plt.hist(E, bins=np.arange(-0.2, 1.2, 0.025) + 0.5*0.025);
"""
Explanation: Multispot analysis
End of explanation
"""
bursts_usalex = pd.read_csv('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, ph_sel='Dex', m=10, th=30, F=7), index_col=0)
bursts_usalex
Eraw_alex = bursts_usalex.E
E_alex = fretmath.correct_E_gamma_leak_dir(Eraw_alex, gamma=gammaA, leakage=leakageA, dir_ex_t=dir_ex_tA)
kws = dict(bins=np.arange(-0.2, 1.2, 0.025) + 0.5*0.025, histtype='step', lw=1.8)
plt.hist(E, label='Multispot', **kws)
plt.hist(E_alex, label='μs-ALEX', **kws)
plt.legend(loc=2)
plt.title('Sample %s: Multispot vs μs-ALEX comparison' % data_id)
plt.xlabel('FRET Efficiency')
plt.ylabel('# Bursts');
savefig('Multispot vs usALEX FRET hist comp sample %s' % data_id)
kws = dict(bins=np.arange(-0.2, 1.2, 0.025) + 0.5*0.025, histtype='step', lw=1.8, normed=True)
plt.hist(E, label='Multispot', **kws)
plt.hist(E_alex, label='μs-ALEX', **kws)
plt.legend(loc=2)
plt.title('Sample %s: Multispot vs μs-ALEX comparison' % data_id)
plt.xlabel('FRET Efficiency')
plt.ylabel('Probabiltity');
savefig('Multispot vs usALEX FRET hist comp sample %s normed' % data_id)
"""
Explanation: Comparison with usALEX
End of explanation
"""
|
anandha2017/udacity | nd101 Deep Learning Nanodegree Foundation/DockerImages/17_Weight_Initialisation/notebooks/weight-initialization/weight_initialization.ipynb | mit | %matplotlib inline
import tensorflow as tf
import helper
from tensorflow.examples.tutorials.mnist import input_data
print('Getting MNIST Dataset...')
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print('Data Extracted.')
"""
Explanation: Weight Initialization
In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker.
Testing Weights
Dataset
To see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.
We'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.
End of explanation
"""
# Save the shapes of weights for each layer
layer_1_weight_shape = (mnist.train.images.shape[1], 256)
layer_2_weight_shape = (256, 128)
layer_3_weight_shape = (128, mnist.train.labels.shape[1])
"""
Explanation: Neural Network
<img style="float: left" src="images/neural_network.png"/>
For the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers.
End of explanation
"""
all_zero_weights = [
tf.Variable(tf.zeros(layer_1_weight_shape)),
tf.Variable(tf.zeros(layer_2_weight_shape)),
tf.Variable(tf.zeros(layer_3_weight_shape))
]
all_one_weights = [
tf.Variable(tf.ones(layer_1_weight_shape)),
tf.Variable(tf.ones(layer_2_weight_shape)),
tf.Variable(tf.ones(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'All Zeros vs All Ones',
[
(all_zero_weights, 'All Zeros'),
(all_one_weights, 'All Ones')])
"""
Explanation: Initialize Weights
Let's start looking at some initial weights.
All Zeros or Ones
If you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.
With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.
Let's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.
Run the cell below to see the difference between weights of all zeros against all ones.
End of explanation
"""
helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3))
"""
Explanation: As you can see, the accuracy is close to guessing for both zeros and ones, around 10%.
The neural network is having a hard time determining which weights need to be changed, since the neurons in each layer produce the same output. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum on each run.
A good solution for getting these random weights is to sample from a uniform distribution.
Uniform Distribution
A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has an equal probability of picking any number from a set of numbers. We'll be picking from a continuous distribution, so the chance of picking the same number is low. We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution.
tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)
Outputs random values from a uniform distribution.
The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
minval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.
maxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.
dtype: The type of the output: float32, float64, int32, or int64.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
We can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3.
End of explanation
"""
# Default for tf.random_uniform is minval=0 and maxval=1
basline_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape)),
tf.Variable(tf.random_uniform(layer_2_weight_shape)),
tf.Variable(tf.random_uniform(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'Baseline',
[(basline_weights, 'tf.random_uniform [0, 1)')])
"""
Explanation: The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.
Now that you understand the tf.random_uniform function, let's apply it to some initial weights.
Baseline
Let's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0.
End of explanation
"""
uniform_neg1to1_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))
]
helper.compare_init_weights(
mnist,
'[0, 1) vs [-1, 1)',
[
(basline_weights, 'tf.random_uniform [0, 1)'),
(uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])
"""
Explanation: The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.
General rule for setting weights
The general rule for setting the weights in a neural network is to be close to zero without being too small. A good practice is to start your weights in the range of $[-y, y]$ where
$y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron).
Let's see if this holds true. First, let's center our range over zero. This will give us the range [-1, 1).
End of explanation
"""
uniform_neg01to01_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))
]
uniform_neg001to001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))
]
uniform_neg0001to0001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))
]
helper.compare_init_weights(
mnist,
'[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',
[
(uniform_neg1to1_weights, '[-1, 1)'),
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(uniform_neg001to001_weights, '[-0.01, 0.01)'),
(uniform_neg0001to0001_weights, '[-0.001, 0.001)')],
plot_n_batches=None)
helper.compare_init_weights(
mnist,
'[-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',
[
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(uniform_neg001to001_weights, '[-0.01, 0.01)'),
(uniform_neg0001to0001_weights, '[-0.001, 0.001)')],
plot_n_batches=None)
"""
Explanation: We're going in the right direction: the accuracy and loss are better with [-1, 1). We still want smaller weights. How far can we go before it's too small?
Too small
Let's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set plot_n_batches=None to show all the batches in the plot.
End of explanation
"""
import numpy as np
general_rule_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))
]
helper.compare_init_weights(
mnist,
'[-0.1, 0.1) vs General Rule',
[
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(general_rule_weights, 'General Rule')],
plot_n_batches=None)
"""
Explanation: Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$.
End of explanation
"""
helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000]))
"""
Explanation: The range we found and $y=1/\sqrt{n}$ are really close.
Since the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0? Let's look at the normal distribution.
Normal Distribution
Unlike the uniform distribution, the normal distribution has a higher likelihood of picking numbers close to its mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a normal distribution.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
"""
normal_01_weights = [
tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Uniform [-0.1, 0.1) vs Normal stddev 0.1',
[
(uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),
(normal_01_weights, 'Normal stddev 0.1')])
"""
Explanation: Let's compare the normal distribution against the previous uniform distribution.
End of explanation
"""
helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))
"""
Explanation: The normal distribution gave a slight improvement in accuracy and loss. Let's move closer to 0 and drop picked numbers that are more than a set number of standard deviations away. This distribution is called the Truncated Normal Distribution.
Truncated Normal Distribution
tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
"""
trunc_normal_01_weights = [
tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Normal vs Truncated Normal',
[
(normal_01_weights, 'Normal'),
(trunc_normal_01_weights, 'Truncated Normal')])
"""
Explanation: Again, let's compare these results against the previous distribution.
End of explanation
"""
helper.compare_init_weights(
mnist,
'Baseline vs Truncated Normal',
[
(basline_weights, 'Baseline'),
(trunc_normal_01_weights, 'Truncated Normal')])
"""
Explanation: There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will draw more points from the normal distribution, increasing the likelihood that its choices are more than 2 standard deviations from the mean.
We've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.
End of explanation
"""
|
apryor6/apryor6.github.io | visualizations/seaborn/notebooks/jointplot.ipynb | mit | %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
plt.rcParams['figure.figsize'] = (20.0, 10.0)
plt.rcParams['font.family'] = "serif"
"""
Explanation: seaborn.jointplot
Seaborn's jointplot displays a relationship between 2 variables (bivariate) as well as 1D profiles (univariate) in the margins. This plot is a convenience class that wraps JointGrid.
End of explanation
"""
# Generate some random multivariate data
x, y = np.random.RandomState(8).multivariate_normal([0, 0], [(1, 0), (0, 1)], 1000).T
df = pd.DataFrame({"x":x,"y":y})
"""
Explanation: The multivariate normal distribution is a nice tool to demonstrate this type of plot as it is sampling from a multidimensional Gaussian and there is natural clustering. I'll set the covariance matrix equal to the identity so that the X and Y variables are uncorrelated -- meaning we will just get a blob
End of explanation
"""
p = sns.jointplot(data=df,x='x', y='y')
"""
Explanation: Default plot
End of explanation
"""
p = sns.jointplot(data=df,x='x', y='y',kind='scatter')
"""
Explanation: Currently, jointplot wraps JointGrid with the following options for kind:
- scatter
- reg
- resid
- kde
- hex
Scatter is the default, so passing kind='scatter' is equivalent to the default plot above
End of explanation
"""
p = sns.jointplot(data=df,x='x', y='y',kind='reg')
"""
Explanation: 'reg' plots a linear regression line. Here the line is close to flat because we chose our variables to be uncorrelated
End of explanation
"""
x2, y2 = np.random.RandomState(9).multivariate_normal([0, 0], [(1, 0), (0, 1)], len(x)).T
df2 = pd.DataFrame({"x":x,"y":y2})
p = sns.jointplot(data=df,x='x', y='y',kind='resid')
"""
Explanation: 'resid' plots the residuals of the data relative to the regression line -- which is not very useful for this specific example because our regression line is almost flat and thus the residuals are almost the same as the data.
End of explanation
"""
p = sns.jointplot(data=df,x='x', y='y',kind='kde')
"""
Explanation: kde plots a kernel density estimate in the margins and converts the interior into a shaded contour plot
End of explanation
"""
p = sns.jointplot(data=df,x='x', y='y',kind='hex')
"""
Explanation: 'hex' bins the data into hexagons with histograms in the margins. At this point you probably see the "pre-cooked" nature of jointplot. It provides nice defaults, but if you wanted, for example, a KDE on the margin of this hexplot you will need to use JointGrid.
End of explanation
"""
from scipy.stats import tmin
p = sns.jointplot(data=df, x='x', y='y',kind='kde',stat_func=tmin)
# tmin is computing roughly the equivalent of the following
print(df.loc[df.x>df.y,'x'].min())
"""
Explanation: stat_func can be used to provide a function for computing a summary statistic from the data. The full x, y data vectors are passed in, so the function must provide one value or a tuple from many. As an example, I'll provide tmin, which when used in this way will return the smallest value of x that was greater than its corresponding value of y.
End of explanation
"""
p = sns.jointplot(data=df,
x='x',
y='y',
kind='kde',
color="#99ffff")
p = sns.jointplot(data=df,
x='x',
y='y',
kind='kde',
ratio=1)
"""
Explanation: Change the color with the color argument; the second plot uses ratio, which controls the relative size of the joint axes to the marginal axes
End of explanation
"""
p = sns.jointplot(data=df,
x='x',
y='y',
kind='kde',
space=2)
"""
Explanation: Create separation between 2D plot and marginal plots with space
End of explanation
"""
p = sns.jointplot(data=df,
x='x',
y='y',
kind='kde',
xlim=(-15,15),
ylim=(-15,15))
"""
Explanation: xlim and ylim can be used to adjust the field of view
End of explanation
"""
p = sns.jointplot(data=df,
x='x',
y='y',
kind='kde',
marginal_kws={'lw':5,
'color':'red'})
"""
Explanation: Pass additional parameters to the marginal plots with marginal_kws. You can pass similar options to joint_kws and annot_kws
End of explanation
"""
sns.set(rc={'axes.labelsize':30,
'figure.figsize':(20.0, 10.0),
'xtick.labelsize':25,
'ytick.labelsize':20})
from itertools import chain
p = sns.jointplot(data=df,
x='x',
y='y',
kind='kde',
xlim=(-3,3),
ylim=(-3,3),
space=0,
stat_func=None,
marginal_kws={'lw':3,
'bw':0.2}).set_axis_labels('X','Y')
p.ax_marg_x.set_facecolor('#ccffccaa')
p.ax_marg_y.set_facecolor('#ccffccaa')
for l in chain(p.ax_marg_x.axes.lines,p.ax_marg_y.axes.lines):
l.set_linestyle('--')
l.set_color('black')
plt.text(-1.7,-2.7, "Joint Plot", fontsize = 55, color='Black', fontstyle='italic')
fig, ax = plt.subplots(1,1)
sns.set(rc={'axes.labelsize':30,
'figure.figsize':(20.0, 10.0),
'xtick.labelsize':25,
'ytick.labelsize':20})
from itertools import chain
p = sns.jointplot(data=df,
x='x',
y='y',
kind='kde',
xlim=(-3,3),
ylim=(-3,3),
space=0,
stat_func=None,
ax=ax,
marginal_kws={'lw':3,
'bw':0.2}).set_axis_labels('X','Y')
p.ax_marg_x.set_facecolor('#ccffccaa')
p.ax_marg_y.set_facecolor('#ccffccaa')
for l in chain(p.ax_marg_x.axes.lines,p.ax_marg_y.axes.lines):
l.set_linestyle('--')
l.set_color('black')
plt.text(-1.7,-2.7, "Joint Plot", fontsize = 55, color='Black', fontstyle='italic')
# p = sns.jointplot(data=df,
# x='x',
# y='y',
# kind='kde',
# xlim=(-3,3),
# ylim=(-3,3),
# space=0,
# stat_func=None,
# ax=ax[1],
# marginal_kws={'lw':3,
# 'bw':0.2}).set_axis_labels('X','Y')
# p.ax_marg_x.set_facecolor('#ccffccaa')
# p.ax_marg_y.set_facecolor('#ccffccaa')
# for l in chain(p.ax_marg_x.axes.lines,p.ax_marg_y.axes.lines):
# l.set_linestyle('--')
# l.set_color('black')
p.savefig('../../figures/jointplot.png')
"""
Explanation: Finalize
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.12/_downloads/plot_tf_dics.ipynb | bsd-3-clause | # Author: Roman Goj <roman.goj@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.event import make_fixed_length_events
from mne.datasets import sample
from mne.time_frequency import compute_epochs_csd
from mne.beamformer import tf_dics
from mne.viz import plot_source_spectrogram
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
noise_fname = data_path + '/MEG/sample/ernoise_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
"""
Explanation: Time-frequency beamforming using DICS
Compute DICS source power in a grid of time-frequency windows and display
results.
The original reference is:
Dalal et al. Five-dimensional neuroimaging: Localization of the time-frequency
dynamics of cortical activity. NeuroImage (2008) vol. 40 (4) pp. 1686-1700
End of explanation
"""
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Pick a selection of magnetometer channels. A subset of all channels was used
# to speed up the example. For a solution based on all MEG channels use
# meg=True, selection=None and add mag=4e-12 to the reject dictionary.
left_temporal_channels = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads',
selection=left_temporal_channels)
raw.pick_channels([raw.ch_names[pick] for pick in picks])
reject = dict(mag=4e-12)
# Re-normalize our empty-room projectors, which should be fine after
# subselection
raw.info.normalize_proj()
# Setting time windows. Note that tmin and tmax are set so that time-frequency
# beamforming will be performed for a wider range of time points than will
# later be displayed on the final spectrogram. This ensures that all time bins
# displayed represent an average of an equal number of time windows.
tmin, tmax, tstep = -0.55, 0.75, 0.05 # s
tmin_plot, tmax_plot = -0.3, 0.5 # s
# Read epochs
event_id = 1
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=None, preload=True, proj=True, reject=reject)
# Read empty room noise raw data
raw_noise = mne.io.read_raw_fif(noise_fname, preload=True)
raw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
raw_noise.pick_channels([raw_noise.ch_names[pick] for pick in picks])
raw_noise.info.normalize_proj()
# Create noise epochs and make sure the number of noise epochs corresponds to
# the number of data epochs
events_noise = make_fixed_length_events(raw_noise, event_id)
epochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin_plot,
tmax_plot, baseline=None, preload=True, proj=True,
reject=reject)
epochs_noise.info.normalize_proj()
epochs_noise.apply_proj()
# then make sure the number of epochs is the same
epochs_noise = epochs_noise[:len(epochs.events)]
# Read forward operator
forward = mne.read_forward_solution(fname_fwd, surf_ori=True)
# Read label
label = mne.read_label(fname_label)
"""
Explanation: Read raw data
End of explanation
"""
# Setting frequency bins as in Dalal et al. 2008
freq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz
win_lengths = [0.3, 0.2, 0.15, 0.1] # s
# Then set the FFT length for each frequency range.
# Should be a power of 2 to be faster.
n_ffts = [256, 128, 128, 128]
# Subtract evoked response prior to computation?
subtract_evoked = False
# Calculating noise cross-spectral density from empty room noise for each
# frequency bin and the corresponding time window length. To calculate noise
# from the baseline period in the data, change epochs_noise to epochs
noise_csds = []
for freq_bin, win_length, n_fft in zip(freq_bins, win_lengths, n_ffts):
noise_csd = compute_epochs_csd(epochs_noise, mode='fourier',
fmin=freq_bin[0], fmax=freq_bin[1],
fsum=True, tmin=-win_length, tmax=0,
n_fft=n_fft)
noise_csds.append(noise_csd)
# Computing DICS solutions for time-frequency windows in a label in source
# space for faster computation, use label=None for full solution
stcs = tf_dics(epochs, forward, noise_csds, tmin, tmax, tstep, win_lengths,
freq_bins=freq_bins, subtract_evoked=subtract_evoked,
n_ffts=n_ffts, reg=0.001, label=label)
# Plotting source spectrogram for source with maximum activity
# Note that tmin and tmax are set to display a time range that is smaller than
# the one for which beamforming estimates were calculated. This ensures that
# all time bins shown are a result of smoothing across an identical number of
# time windows.
plot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,
source_index=None, colorbar=True)
"""
Explanation: Time-frequency beamforming based on DICS
End of explanation
"""
|
jseabold/statsmodels | examples/notebooks/regression_plots.ipynb | bsd-3-clause | %matplotlib inline
from statsmodels.compat import lzip
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
"""
Explanation: Regression Plots
End of explanation
"""
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
"""
Explanation: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="https://vincentarelbundock.github.io/Rdatasets/">Rdatasets package</a>.
End of explanation
"""
fig = sm.graphics.influence_plot(prestige_model, criterion="cooks")
fig.tight_layout(pad=1.0)
"""
Explanation: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{j \neq i}^{n}\hat{\epsilon}^2_j$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
End of explanation
"""
fig = sm.graphics.plot_partregress("prestige", "income", ["income", "education"], data=prestige)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige)
fig.tight_layout(pad=1.0)
"""
Explanation: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots (Duncan)
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
linearity.
End of explanation
"""
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols("prestige ~ income + education", data=prestige, subset=subset).fit()
print(prestige_model2.summary())
"""
Explanation: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
End of explanation
"""
fig = sm.graphics.plot_partregress_grid(prestige_model)
fig.tight_layout(pad=1.0)
"""
Explanation: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
End of explanation
"""
fig = sm.graphics.plot_ccpr(prestige_model, "education")
fig.tight_layout(pad=1.0)
"""
Explanation: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
End of explanation
"""
fig = sm.graphics.plot_ccpr_grid(prestige_model)
fig.tight_layout(pad=1.0)
"""
Explanation: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
End of explanation
"""
fig = sm.graphics.plot_regress_exog(prestige_model, "education")
fig.tight_layout(pad=1.0)
"""
Explanation: Single Variable Regression Diagnostics
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
End of explanation
"""
fig = sm.graphics.plot_fit(prestige_model, "education")
fig.tight_layout(pad=1.0)
"""
Explanation: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
End of explanation
"""
#dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
#dta = dta.set_index("State", inplace=True).dropna()
#dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
#crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
"""
Explanation: Statewide Crime 2009 Dataset
Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm
Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below.
End of explanation
"""
fig = sm.graphics.plot_partregress_grid(crime_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("murder", "hs_grad", ["urban", "poverty", "single"], data=dta)
fig.tight_layout(pad=1.0)
"""
Explanation: Partial Regression Plots (Crime Data)
End of explanation
"""
fig = sm.graphics.plot_leverage_resid2(crime_model)
fig.tight_layout(pad=1.0)
"""
Explanation: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
End of explanation
"""
fig = sm.graphics.influence_plot(crime_model)
fig.tight_layout(pad=1.0)
"""
Explanation: Influence Plot
End of explanation
"""
from statsmodels.formula.api import rlm
rob_crime_model = rlm("murder ~ urban + poverty + hs_grad + single", data=dta,
M=sm.robust.norms.TukeyBiweight(3)).fit(conv="weights")
print(rob_crime_model.summary())
#rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
#print(rob_crime_model.summary())
"""
Explanation: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this example.
End of explanation
"""
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww*(X*np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid**2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(16,8))
ax.plot(resid2[idx], hat_matrix_diag, 'o')
ax = utils.annotate_axes(range(nobs), labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5,5)]*nobs,
size="large", ax=ax)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0,0)
"""
Explanation: There is not yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888)
End of explanation
"""
|
hershaw/data-science-101 | course/class1/correlation/examples/01 - correlation matrix and heatmap.ipynb | mit | df = x_plus_noise(randomness=0)
sns.heatmap(df.corr(), vmin=0, vmax=1)
df.corr()
"""
Explanation: Correlation Matrix
Calling df.corr() on a full pandas DataFrame returns a square matrix containing all pairs of correlations.
By plotting them as a heatmap, you can visualize many correlations more efficiently.
Correlation matrix with two perfectly correlated features
End of explanation
"""
df = x_plus_noise(randomness=0.5)
sns.heatmap(df.corr(), vmin=0, vmax=1)
df.corr()
"""
Explanation: Correlation matrix with mildly-correlated features
End of explanation
"""
df = x_plus_noise(randomness=1)
sns.heatmap(df.corr(), vmin=0, vmax=1)
df.corr()
"""
Explanation: Correlation matrix with not-very-correlated features
End of explanation
"""
|
BDannowitz/polymath-progression-blog | jlab-hackathon/notebooks/04-Multiclass-Classifier.ipynb | gpl-2.0 | %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import os
import sys
import numpy as np
import math
"""
Explanation: Multi-Class Classifier on Particle Track Data
End of explanation
"""
track_params = pd.read_csv('../TRAIN/track_parms.csv')
track_params.tail()
"""
Explanation: Load the track parameters (we'll derive class labels from the phi angles)
End of explanation
"""
# Bin the phi values to get multi-class labels
track_params['phi_binned'], phi_bins = pd.cut(track_params.phi,
bins=range(-10, 12, 2),
retbins=True)
track_params['phi_binned'] = track_params['phi_binned'].astype(str)
track_params.head()
"""
Explanation: Create our simple classification target
End of explanation
"""
from tensorflow.keras.preprocessing.image import ImageDataGenerator
DATAGEN = ImageDataGenerator(rescale=1./255.,
validation_split=0.25)
height = 100
width = 36
def create_generator(target, subset, class_mode,
idg=DATAGEN, df=track_params, N=1000):
return idg.flow_from_dataframe(
dataframe=track_params.head(N),
directory="../TRAIN",
x_col="filename",
y_col=target,
subset=subset,
target_size=(height, width),
batch_size=32,
seed=314,
shuffle=True,
class_mode=class_mode,
)
"""
Explanation: Create an image generator from this dataframe
End of explanation
"""
from tensorflow.keras import Sequential, Model
from tensorflow.keras.layers import (
Conv2D, Activation, MaxPooling2D,
Flatten, Dense, Dropout, Input
)
"""
Explanation: Create a very simple convolutional model from scratch
End of explanation
"""
mc_train_generator = create_generator(
target="phi_binned",
subset="training",
class_mode="categorical",
N=10000
)
mc_val_generator = create_generator(
target="phi_binned",
subset="validation",
class_mode="categorical",
N=10000
)
"""
Explanation: Okay, maybe that was too easy
I mean, if any pixels are lit up on the top half / bottom half, it's a smoking gun.
Let's make it harder with binned measurements and treat it as categorical.
End of explanation
"""
width = 36
height = 100
channels = 3
def multiclass_classifier():
model = Sequential()
# Convoluional Layer
model.add(Conv2D(32, (3, 3), input_shape=(height, width, channels)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Dense, Classification Layer
model.add(Flatten())
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
STEP_SIZE_TRAIN = mc_train_generator.n//mc_train_generator.batch_size
STEP_SIZE_VAL = mc_val_generator.n//mc_val_generator.batch_size
mc_model = multiclass_classifier()
mc_history = mc_model.fit_generator(
generator=mc_train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=mc_val_generator,
validation_steps=STEP_SIZE_VAL,
epochs=10
)
plt.plot(mc_history.history['accuracy'], label="Train Accuracy")
plt.plot(mc_history.history['val_accuracy'], label="Validation Accuracy")
plt.legend()
plt.show()
"""
Explanation: Similar model, with some tweaks
End of explanation
"""
holdout_track_params = pd.read_csv('../VALIDATION/track_parms.csv')
holdout_track_params['phi_binned'] = pd.cut(
holdout_track_params['phi'],
bins=phi_bins
)
holdout_track_params['phi_binned'] = (
holdout_track_params['phi_binned'].astype(str)
)
mc_holdout_generator = DATAGEN.flow_from_dataframe(
dataframe=holdout_track_params,
directory="../VALIDATION",
x_col="filename",
y_col="phi_binned",
subset=None,
target_size=(height, width),
batch_size=32,
seed=314,
shuffle=False,
class_mode="categorical",
)
holdout_track_params['y_pred'] = mc_model.predict_classes(mc_holdout_generator)
holdout_track_params['y_true'] = mc_holdout_generator.classes
import numpy as np
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
"""
given a sklearn confusion matrix (cm), make a nice plot
Arguments
---------
cm: confusion matrix from sklearn.metrics.confusion_matrix
target_names: given classification classes such as [0, 1, 2]
the class names, for example: ['high', 'medium', 'low']
title: the text to display at the top of the matrix
cmap: the gradient of the values displayed from matplotlib.pyplot.cm
see http://matplotlib.org/examples/color/colormaps_reference.html
plt.get_cmap('jet') or plt.cm.Blues
normalize: If False, plot the raw numbers
If True, plot the proportions
Usage
-----
plot_confusion_matrix(cm = cm, # confusion matrix created by
# sklearn.metrics.confusion_matrix
normalize = True, # show proportions
target_names = y_labels_vals, # list of names of the classes
title = best_estimator_name) # title of graph
Citation
---------
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
"""
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(10, 8))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
y_pred = mc_model.predict_classes(mc_holdout_generator)
y_true = mc_holdout_generator.labels
label_list = ['(-10.0, -8.0]', '(-8.0, -6.0]', '(-6.0, -4.0]', '(-4.0, -2.0]',
'(-2.0, 0.0]', '(0.0, 2.0]', '(2.0, 4.0]', '(4.0, 6.0]', '(6.0, 8.0]',
'(8.0, 10.0]']
plot_confusion_matrix(confusion_matrix(y_true, y_pred),
target_names=label_list,
normalize=False)
"""
Explanation: Check out predictions on Holdout data
End of explanation
"""
|
tiagoft/curso_audio | classificador_regras.ipynb | mit | %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
"""
Explanation: Classification by Pre-Defined Rules
The problem we will deal with is that of automatically classifying the elements of a set based on their measurable characteristics. It is, therefore, the problem of observing elements and, from those observations, inferring which class each element belongs to. In this notebook, we will use an inference process based on pre-defined rules.
Objectives
By the end of this iteration, the student will be able to:
* Understand the relevance of suitable features in datasets
* Analyze the relevance of data features using scatter plots and histograms
* Understand the concept of a decision boundary
* Build classification rules from manual analysis of data
* Optimize rule parameters using exhaustive search
End of explanation
"""
import csv
with open("biometria.csv", 'rb') as f:
dados = list(csv.reader(f))
for d in dados:
print d
"""
Explanation: Dataset
In our case study, we will check whether it is possible to identify the sport a player practices by observing only their physical characteristics. For that, we will use real height and weight data from the players of the Brazilian national football and volleyball teams. The data is in a CSV file, which can be loaded into a variable for our simulation.
End of explanation
"""
# Split the data into numpy arrays (use == for string comparison, not `is`)
rotulos_volei = [d[0] for d in dados[1:-1] if d[0] == 'V']
rotulos_futebol = [d[0] for d in dados[1:-1] if d[0] == 'F']
altura_volei = np.array([float(d[1]) for d in dados[1:-1] if d[0] == 'V'])
altura_futebol = np.array([float(d[1]) for d in dados[1:-1] if d[0] == 'F'])
peso_volei = np.array([float(d[2]) for d in dados[1:-1] if d[0] == 'V'])
peso_futebol = np.array([float(d[2]) for d in dados[1:-1] if d[0] == 'F'])
plt.figure();
plt.scatter(peso_volei, altura_volei, color='red');
plt.scatter(peso_futebol, altura_futebol, color='blue');
plt.ylabel('Altura (m)');
plt.xlabel('Peso (kg)');
plt.xlim([60, 120]);
plt.ylim([1.6, 2.2]);
plt.legend(['V', 'F'], loc=4);
"""
Explanation: Visualizing the data
Each element of the dataset is characterized by three values: the sport played (Football or Volleyball), the player's height, and their weight. Visualizing all of this data as a table, however, is clearly impractical. We can imagine how even larger datasets would behave - a table with the football and volleyball players of every country in the world championship, for example, would obviously be too large to analyze as raw numbers.
A very common form of data visualization is the scatter plot. It is a kind of figure in which the points of a set are drawn on a plane. We will use colors to identify the sport associated with each data point.
End of explanation
"""
def classificador_limiar(limiar, dados, rotulos=('V', 'F')):
ans = []
for i in xrange(len(dados)):
if dados[i] > limiar:
ans.append(rotulos[0])
else:
ans.append(rotulos[1])
return ans
print "Exemplo: ", classificador_limiar(1.9, [1.99, 1.9, 1.89, 1.3, 2.1])
"""
Explanation: The scatter plot lets us check the relevance of each measured characteristic for the classification problem at hand. Looking at the distribution of the data along the vertical axis, we see that volleyball players are almost always taller than football players. Looking at the distribution along the horizontal axis, we see that football players tend to be lighter than volleyball players, but the split is not as clear as it is for height.
This suggests we could choose a height threshold above which a player is classified as a volleyball player and, consequently, below which as a football player. I implemented the classifier as a function that receives a threshold value and a dataset as input and returns the labels to be assigned to each point in that set. The function applies the threshold rule to each element of the input data vector.
End of explanation
"""
plt.figure();
plt.scatter(peso_volei, altura_volei, color='red');
plt.scatter(peso_futebol, altura_futebol, color='blue');
plt.plot([60, 120], [1.9, 1.9], color='green', lw=1)
plt.ylabel('Altura (m)');
plt.xlabel('Peso (kg)');
plt.xlim([60, 120]);
plt.ylim([1.6, 2.2]);
plt.legend(['Limiar', 'V', 'F'], loc=4);
"""
Explanation: Choosing a classification threshold can be interpreted as splitting the space defined by the observed features into partitions, each corresponding to one class. If we choose a decision threshold of 1.90 m, we observe the following partitioning:
End of explanation
"""
def comparar_resultados(resultado, gabarito):
    # Count hits and misses against the ground truth
    acertos = 0
    erros = 0
    for i in range(len(resultado)):
        if resultado[i] == gabarito[i]:
            acertos += 1
        else:
            erros += 1
    return acertos, erros
# Run the classification
classificacao_volei = classificador_limiar(1.9, altura_volei)
classificacao_futebol = classificador_limiar(1.9, altura_futebol)
# Compare the results with the ground truth
resultados_volei = comparar_resultados(classificacao_volei, rotulos_volei)
resultados_futebol = comparar_resultados(classificacao_futebol, rotulos_futebol)
# Show the results
print("Volei:  ", resultados_volei)
print("Futebol:", resultados_futebol)
"""
Explanation: Applying the decision rule
It is now time to actually apply the decision rule to the data in our set. After doing so, we can compare the result of the automatic classification with the ground truth, which lets us count hits and misses. In particular, we are interested in counting hits and misses separately for each class of players.
End of explanation
"""
import numpy as np  # np is used below; assumed to be imported earlier in the full notebook
plt.figure();
plt.scatter(peso_volei + 2*np.random.random(peso_volei.shape), altura_volei, color='red');
plt.scatter(peso_futebol + 2*np.random.random(peso_futebol.shape), altura_futebol, color='blue');
plt.plot([60, 120], [1.9, 1.9], color='green', lw=1)
plt.ylabel('Altura (m)');
plt.xlabel('Peso (kg)');
plt.xlim([60, 120]);
plt.ylim([1.6, 2.2]);
plt.legend(['Limiar', 'V', 'F'], loc=4);
"""
Explanation: A rather interesting result of this run is that, although the scatter plots showed only four volleyball players close to the decision boundary (and therefore prone to errors), the evaluation reported five classification errors. This happened because some points were drawn on top of one another in the figure. One possible way around this problem is to add a small random jitter to the position of each point, revealing the hidden elements.
End of explanation
"""
plt.figure();
plt.hist([altura_volei, altura_futebol], 10, density=False, histtype='bar',
color=['red', 'blue'],
label=['V', 'F']);
plt.xlabel('Altura (m)');
plt.ylabel('Quantidade de jogadores');
plt.legend(loc=1);
plt.figure();
plt.hist([peso_volei, peso_futebol], 10, density=False, histtype='bar',
color=['red', 'blue'],
label=['V', 'F']);
plt.xlabel('Peso (kg)');
plt.ylabel('Quantidade de jogadores');
plt.legend(loc=1);
"""
Explanation: This procedure reveals characteristics that could remain hidden in the data set. If applied in excess, however, it can make the representation less accurate. A data analysis tool that lets us check exactly how many points fall at each position is the histogram.
End of explanation
"""
limiar = 1.5
# Run the classification
classificacao_volei = classificador_limiar(limiar, altura_volei)
classificacao_futebol = classificador_limiar(limiar, altura_futebol)
# Compare the results with the ground truth
resultados_volei = comparar_resultados(classificacao_volei, rotulos_volei)
resultados_futebol = comparar_resultados(classificacao_futebol, rotulos_futebol)
# Show the results and the classification threshold
plt.figure();
plt.scatter(peso_volei + 2*np.random.random(peso_volei.shape), altura_volei, color='red');
plt.scatter(peso_futebol + 2*np.random.random(peso_futebol.shape), altura_futebol, color='blue');
plt.plot([60, 120], [limiar, limiar], color='green', lw=1)
plt.ylabel('Altura (m)');
plt.xlabel('Peso (kg)');
plt.xlim([60, 120]);
plt.ylim([1.6, 2.2]);
plt.legend(['Limiar', 'V', 'F'], loc=4);
print("Total de acertos:", resultados_volei[0] + resultados_futebol[0])
"""
Explanation: The histogram gives a clearer picture of the behavior of the data, showing the frequency of occurrence of each range of values in each dimension. At the same time, it does not reveal correlations between variables. Either way, it is an important tool for checking which features are relevant to the classification process.
Optimizing the classification process
At this point we have no reason to believe that our initial threshold of 1.9 m is the best possible one (that is, optimal) for the automatic classification we set out to do. In the code snippet below, you can vary the threshold value and then visualize the decision boundary and the total number of hits of the classification process. Before moving on, try a few threshold values and attempt to maximize the system's number of hits.
End of explanation
"""
limiares = []  # candidate thresholds
respostas = []
# Thresholds to be tested
inicial = 1.6
passo = 0.001
final = 2.2
i = inicial
melhor_limiar = inicial
melhor_classificacao = 0
while i <= final:
    # Run the classification
    classificacao_volei = classificador_limiar(i, altura_volei)
    classificacao_futebol = classificador_limiar(i, altura_futebol)
    # Compare the results with the ground truth
    resultados_volei = comparar_resultados(classificacao_volei, rotulos_volei)
    resultados_futebol = comparar_resultados(classificacao_futebol, rotulos_futebol)
    # Compute the total number of hits and store the result
    res = resultados_volei[0] + resultados_futebol[0]
    respostas.append(res)
    limiares.append(i)
    # Check whether this classification is the best so far
    if res > melhor_classificacao:
        melhor_classificacao = res
        melhor_limiar = i
    # Take one more step
    i += passo
# Show the results and the classification threshold
plt.figure();
plt.plot(limiares, respostas);
plt.ylabel('Acertos');
plt.xlabel('Limiar');
print("Melhor limiar:", melhor_limiar, " Acertos:", melhor_classificacao)
"""
Explanation: It should be obvious that optimizing the threshold by varying it manually quickly becomes laborious. Although some answers are clearly worse than others, several answers within a very small interval look good, and we have no way to guarantee that one of them is necessarily optimal. We can, however, improve our chances of finding an optimal value by automating an exhaustive search.
The code below runs the exhaustive search, varying the threshold between two limits (inicial and final) with steps of a known size. At each step, it checks whether the result found is better than the best result stored so far and, if so, stores the new result. Check what happens to the result as you make the step progressively finer.
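The loop above can also be written in vectorized form with numpy broadcasting. The sketch below reproduces the same exhaustive search; note that `h_volei` and `h_futebol` are synthetic stand-ins for the notebook's `altura_volei` and `altura_futebol` arrays, which are loaded elsewhere.

```python
import numpy as np

# Synthetic stand-ins for the real height arrays used in this notebook
rng = np.random.default_rng(0)
h_volei = rng.normal(1.95, 0.05, 50)    # volleyball heights (m)
h_futebol = rng.normal(1.78, 0.06, 50)  # soccer heights (m)

limiares = np.arange(1.6, 2.2, 0.001)
# For each threshold: volleyball players are correct when above it,
# soccer players are correct when at or below it.
acertos = ((h_volei[None, :] > limiares[:, None]).sum(axis=1) +
           (h_futebol[None, :] <= limiares[:, None]).sum(axis=1))
melhor = limiares[np.argmax(acertos)]
print("Melhor limiar:", round(melhor, 3), " Acertos:", acertos.max())
```

The broadcasting builds a (thresholds x samples) boolean matrix in one step, so the whole search runs without an explicit Python loop.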
End of explanation
"""
tyamamot/h29iro | codes/3_Evaluation.ipynb | mit | !pyNTCIREVAL
"""
Explanation: 第3回 情報検索の評価
この演習ページでは,既存のツールを使って各種評価指標を計算する方法について説明します.
参考文献
- 情報アクセス評価方法論 -検索エンジンの進歩のために-, 酒井哲也, コロナ社, 2015.
ライブラリ
この演習では,情報検索におけるさまざまな評価指標を計算するためのツールキットである NTCIREVAL のPython版である pyNTCIREVAL を使用します.
pyNTCIREVAL by 京都大学 加藤 誠 先生
NTCIREVAL by 早稲田大学 酒井 哲也 先生
NTCIREVALの説明を上記ページから引用します.
```
NTCIREVALは、様々な検索評価指標を計算するためのツールキットです。
NTCIRやTRECのad hoc文書検索タスクの他、diversified search resultsの評価やNTCIR-8コミュニティQAタスクの評価などにも利用できます。
NTCIREVALは例えば以下のような指標を算出できます:
-Average Precision
-Q-measure
-nDCG
-Expected Reciprocal Rank (ERR)
-Graded Average Precision (GAP)
-Rank-Biased Precision (RBP)
-Normalised Cumulative Utility (NCU)
-上記各指標の短縮リスト版
-Bpref
-D-measures and D#-measures (多様性評価用)
-Intent-Aware (IA) metrics (多様性評価用)
```
ライブラリのインストール
pipというPythonライブラリ管理ツールを使用してインストールします. ターミナル上で h29iroのフォルダに移動し,下記コマンドで pyNTCIREVAL をインストールしてください.
pip install git+https://github.com/mpkato/pyNTCIREVAL.git
正しくインストールできれば, notebook上で
!pyNTCIREVAL
と実行すれば,以下の様なメッセージが出力されます.
```
Usage: pyNTCIREVAL [OPTIONS] COMMAND [ARGS]...
Options:
-h, --help Show this message and exit.
Commands:
compute
label
```
End of explanation
"""
!cat ../data/eval/q1.rel
"""
Explanation: Note that, in a notebook, a string following $!$ is interpreted as a command for the shell (terminal), and the shell's output is printed in the notebook.
1. Preparing the evaluation data
NTCIREVAL and pyNTCIREVAL compute evaluation scores from evaluation text files passed to the program.
Sample data is provided in ../data/eval/.
Basically, to compute evaluation metrics for a given method on a given search topic, you need to prepare the following two files:
A relevance assessment file (*.rel)
A search result file (*.res)
Relevance assessment file
A relevance assessment file is a text file containing the relevance assessments of the collection for one search topic. A sample is provided at ../data/eval/q1.rel. The file name indicates that this is the relevance assessment file for topic $q_1$ (NTCIREVAL imposes no file-naming convention; I, Yamamoto, simply chose this name for readability).
The contents of q1.rel look like this:
End of explanation
"""
!cat ../data/eval/method1.q1.res
"""
Explanation: Each line of this file has the form
document ID   relevance label
The document IDs are assigned by the creator of the evaluation data. By convention, the relevance labels are written as follows:
L0 denotes non-relevant, and L1, L2, ... denote increasing degrees (grades) of relevance. Here we use three relevance grades (${0,1,2}$), so the labels are ${L0,L1,L2}$. With four relevance grades, the labels ${L0,L1,L2,L3}$ would be used.
For example, the third line of q1.rel,
d3 L2
states that document $d_3$ has relevance grade $2$.
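As a quick illustration (this is not pyNTCIREVAL code), a .rel file of this form can be parsed with a few lines of Python; the helper name read_rel and the in-memory line list are hypothetical:

```python
# Minimal sketch: parse "docid Lx" lines into a {docid: grade} dict.
def read_rel(lines):
    rel = {}
    for line in lines:
        docid, label = line.split()
        rel[docid] = int(label.lstrip("L"))  # "L2" -> grade 2
    return rel

print(read_rel(["d1 L1", "d2 L0", "d3 L2"]))  # {'d1': 1, 'd2': 0, 'd3': 2}
```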
Search result file
A search result file is a text file representing the search result (that is, the ranked set of documents) of a given method for a given topic. A sample is provided at ../data/eval/method1.q1.res.
The contents of method1.q1.res look like this:
End of explanation
"""
!pyNTCIREVAL label -r ../data/eval/q1.rel < ../data/eval/method1.q1.res
"""
Explanation: As you can see, a search result file simply represents the ranking as a list of document IDs. For example, this file says that for topic $q_1$ the documents were ranked in the order $d_1, d_2, d_3$.
2. Creating a labeled search result file
Once the relevance assessment file and the search result file are ready, the next step is to create a search result file annotated with relevance labels. This file can be created with pyNTCIREVAL (you can also write your own program to produce it). To create it with pyNTCIREVAL, run the following command:
End of explanation
"""
!pyNTCIREVAL label -r ../data/eval/q1.rel < ../data/eval/method1.q1.res > ../data/eval/method1.q1.rel
"""
Explanation: This command uses a shell pipe, so it may be hard to read if you are not familiar with shells, but:
pyNTCIREVAL label
is the command that creates a labeled search result file,
-r ../data/eval/q1.rel
specifies the location of the relevance assessment file, and
< ../data/eval/method1.q1.res
passes pyNTCIREVAL the search result to be labeled.
Running the command above produces
d1 L1
d2 L0
d3 L2
In other words, the program attaches to each document ID in the search result file the relevance label of the corresponding document ID in the relevance assessment file.
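The labeling step can be sketched in a few lines of Python. This is a hypothetical re-implementation, not pyNTCIREVAL's actual code, and defaulting unjudged documents to L0 is an assumption:

```python
# Sketch: attach each ranked document's relevance label to the run.
def label_run(rel_lines, run_lines, default="L0"):
    rel = dict(line.split() for line in rel_lines)  # {"d1": "L1", ...}
    return ["%s %s" % (d, rel.get(d, default)) for d in run_lines]

labeled = label_run(["d1 L1", "d2 L0", "d3 L2"], ["d1", "d2", "d3"])
print("\n".join(labeled))  # d1 L1 / d2 L0 / d3 L2, one per line
```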
Note that the command above only prints the labeled search result to the screen. To save the contents to a file, you can do, for example, the following:
End of explanation
"""
!cat ../data/eval/method1.q1.rel
"""
Explanation: ```
> ../data/eval/method1.q1.rel
```
This is also shell syntax: it redirects the output into method1.q1.rel.
End of explanation
"""
!pyNTCIREVAL compute -r ../data/eval/q1.rel -g 1:3 --cutoffs=1,3 < ../data/eval/method1.q1.rel
"""
Explanation: 3. Computing the evaluation metrics
Once the relevance assessment file and the labeled search result file are ready, pass them to pyNTCIREVAL to compute the various evaluation metrics.
End of explanation
"""
!pyNTCIREVAL compute -r ../data/eval/q1.rel -g 1:3 --cutoffs=1,3 < ../data/eval/method1.q1.rel > ../data/eval/method1.q1.eval
"""
Explanation: Explanation of the command
pyNTCIREVAL compute
is the command that computes the evaluation metrics.
-g 1:3
specifies the gains for documents with relevance grades $L1$ and $L2$. These values are used, for example, when computing nDCG. Here we use the gain function $g(i) = 2^{{\rm rel}_i} -1$, so we specify the gains $L1 = 1, L2 = 3$.
--cutoffs=1,3
specifies how many of the top-ranked results are considered (@$k$) when computing the metrics. In this case, the metrics over the top 1 and top 3 results are each reported.
Reading the results
The value of each metric is printed. For example, nERR@0003 is the value of nERR when the top $3$ results are considered (that is, nERR@$3$). Metrics without an @$k$ suffix are computed over the entire given ranking.
<span style="color:red">Note that the nDCG defined in this lecture is the variant known as MSnDCG. The metric corresponding to this lecture's nDCG@$3$ is therefore MSnDCG@0003.</span>
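To make the MSnDCG note concrete, here is a small sketch of MSnDCG@$k$ with the gain $g({\rm rel}) = 2^{\rm rel} - 1$ used above (L1 gives gain 1, L2 gives gain 3). This is the textbook formula, not pyNTCIREVAL's actual implementation, and the example relevance grades are made up:

```python
import math

def ms_ndcg(ranked_rels, all_rels, k):
    # DCG over the top k items, with gain 2**rel - 1 and log2 discount
    def dcg(rels):
        return sum((2**r - 1) / math.log2(i + 2)  # i = 0 is rank 1
                   for i, r in enumerate(rels[:k]))
    ideal = sorted(all_rels, reverse=True)  # best possible ordering
    return dcg(ranked_rels) / dcg(ideal)

# A run ranking grades 1, 0, 2 over a topic with judged grades {2, 1, 0, 0}
print(round(ms_ndcg([1, 0, 2], [2, 1, 0, 0], 3), 4))
```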
Finally, to save the evaluation results to a file, redirect the output into a file as before.
End of explanation
"""