ceholden/ceholden.github.io | _drafts/2016-09-09-Landsat-Metadata-Dask.ipynb | mit | import dask.dataframe as ddf
columns = {
'sceneID': str,
'sensor': str,
'path': int,
'row': int,
'acquisitionDate': str,
'cloudCover': float,
'cloudCoverFull': float,
'sunElevation': float,
'sunAzimuth': float,
'DATA_TYPE_L1': str,
'GEOMETRIC_RMSE_MODEL': float,
'GEOMETRIC_RMSE_MODEL_X': float,
'GEOMETRIC_RMSE_MODEL_Y': float,
'satelliteNumber': float
}
df = ddf.read_csv('LANDSAT*.csv',
usecols=columns.keys(),
dtype=columns,
parse_dates=['acquisitionDate'],
blocksize=int(20e6))
df = df.assign(year=df.acquisitionDate.dt.year)
df.columns
"""
Explanation: Landsat Metadata Analysis with Dask
Understanding the global distribution of Landsat observations over the satellite's 40+ year record can help answer many questions including:
How viable is a particular analytical method given the observation frequency and quality in my study site?
What is the distribution of cloud cover across the planet as observed by Landsat?
How does solar and sensor geometry change in the Landsat record across time and the planet?
In order to mine this information, we can use the Landsat Bulk Metadata dataset from the USGS which provides rich metadata about every observation in the Landsat record across the history of the program. Unfortunately, these files are gigantic and would be very difficult to process using an average computer.
Luckily for those using Python, the Dask library can provide both multiprocessing and out-of-core computation capabilities while keeping to the same function calls that you might be familiar with from Numpy and Pandas. Of interest to us is the dask.dataframe collection which allows us to easily process the Landsat metadata CSV files.
To begin, first download the Landsat metadata files and unzip them. While Pandas can read from compressed CSV files, we will want to break these CSV files apart into many pieces for processing and they need to be uncompressed to split them.
For this tutorial, we will only be using the Landsat 8 and Landsat 7 metadata:
bash
wget http://landsat.usgs.gov/metadata_service/bulk_metadata_files/LANDSAT_8.csv.gz
gunzip LANDSAT_8.csv.gz
wget http://landsat.usgs.gov/metadata_service/bulk_metadata_files/LANDSAT_ETM_SLC_OFF.csv.gz
gunzip LANDSAT_ETM_SLC_OFF.csv.gz
wget http://landsat.usgs.gov/metadata_service/bulk_metadata_files/LANDSAT_ETM.csv.gz
gunzip LANDSAT_ETM.csv.gz
TODO: a transition
End of explanation
"""
df.groupby('sensor').sensor.count().compute()
"""
Explanation: Question: How many observations are there?
End of explanation
"""
result = df.groupby(['path', 'row', 'year']).cloudCoverFull.mean().compute()
result.loc[12, 31, :]
"""
Explanation: Question: What is the mean cloud cover for every path/row across the years?
End of explanation
"""
result = df.groupby(['DATA_TYPE_L1']).cloudCoverFull.mean().compute()
result
"""
Explanation: Question: What is the cloud cover difference between a L1T and L1GT product?
End of explanation
"""
df = df.assign(DATA_TYPE_L1=df.DATA_TYPE_L1.apply(lambda x: x if x != 'L1Gt' else 'L1GT'))
result = df.groupby(['DATA_TYPE_L1']).cloudCoverFull.mean().compute()
result
"""
Explanation: Looks like there is a labeling issue due to a capitalization difference in L1GT versus L1Gt. We can correct this as well:
End of explanation
"""
df.groupby(['DATA_TYPE_L1', 'sensor'])[['GEOMETRIC_RMSE_MODEL_X', 'GEOMETRIC_RMSE_MODEL_Y']].mean().compute()
"""
Explanation: Question: How do the various levels of preprocessing affect the estimated geometric accuracy? Are more recent Landsat sensors more accurate?
End of explanation
"""
from dask.dot import dot_graph
dot_graph(result.dask)
import dask
"""
Explanation: Unfortunately it looks like the Landsat 8 observations do not record the estimated geometric accuracy unless a systematic terrain correction using Ground Control Points is successful.
Behind the scenes
End of explanation
"""
import pandas as pd
_df = pd.read_csv('LANDSAT_8.csv', parse_dates=['acquisitionDate'], nrows=100)
s = _df['DATA_TYPE_L1']
"""
Explanation: Getting started with Dask
Because Dask DataFrame implements much of the Pandas API, we can avoid the added complications of parallel computing and of large datasets by first working out our analysis on a small subset using Pandas.
If we stick to the subset of the Pandas API that Dask supports, leveraging the power of Dask is as simple as changing the data ingest or creation call and adding a .compute() to the computation.
End of explanation
"""
|
Jackporter415/phys202-2015-work | assignments/assignment03/NumpyEx03.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
"""
Explanation: Numpy Exercise 3
Imports
End of explanation
"""
def brownian(maxt, n):
"""Return one realization of a Brownian (Wiener) process with n steps and a max time of t."""
t = np.linspace(0.0,maxt,n)
h = t[1]-t[0]
Z = np.random.normal(0.0,1.0,n-1)
dW = np.sqrt(h)*Z
W = np.zeros(n)
W[1:] = dW.cumsum()
return t, W
"""
Explanation: Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.
End of explanation
"""
t, W = brownian(1.0, 1000)
assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype==np.dtype(float)
assert W.dtype==np.dtype(float)
assert len(t)==len(W)==1000
"""
Explanation: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
End of explanation
"""
plt.plot(t, W)
plt.xlabel('t'); plt.ylabel('W(t)')
assert True # this is for grading
"""
Explanation: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
End of explanation
"""
#Standard numpy functions
dW = np.diff(W)  # stepwise increments of the Wiener process
dW.mean()
dW.var(), dW.std()
assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float)
"""
Explanation: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
End of explanation
"""
def geo_brownian(t, W, X0, mu, sigma):
"""Return X(t) for geometric brownian motion with drift mu, volatility sigma."""
#Plug into the equation above; note the factor is (mu - sigma**2/2)*t, not **t
X = X0 * np.exp((mu - sigma**2/2)*t + sigma * W)
return X
assert True # leave this for grading
"""
Explanation: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:
$$
X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))}
$$
Use Numpy ufuncs and no loops in your function.
End of explanation
"""
#Standard Plot
plt.plot(t, geo_brownian(t, W, 1.0, 0.5, 0.3))
plt.xlabel('t'); plt.ylabel('X(t)')
assert True # leave this for grading
"""
Explanation: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.
End of explanation
"""
|
pierre-rouanet/aupyom | examples/Live modification of the pitch and time-scale of sounds.ipynb | gpl-3.0 | from aupyom import Sampler, Sound
from aupyom.util import example_audio_file
sampler = Sampler()
audio_file = example_audio_file()
s1 = Sound.from_file(audio_file)
"""
Explanation: Live modification of the pitch and time-scale of sounds
Aupyom was designed above all to make it really easy to modify the pitch and time-scale of sounds. It was created in the Poppy-project context, where robots are used as educational tools, so interactivity was a key feature.
In this notebook, you will see:
how to change the pitch and time-scale of sounds
how interactive widgets can be used to modify those features live
Load sounds into the sampler
First, we load sounds into the sampler (please see this notebook for details).
End of explanation
"""
sampler.play(s1)
"""
Explanation: Start playing sound:
End of explanation
"""
s1.pitch_shift = 3
"""
Explanation: Shift the pitch of sounds
You can directly change the pitch of any sound directly via a property:
End of explanation
"""
s1.pitch_shift = -2
"""
Explanation: Note that you can modify the pitch while the sound is playing.
You can also decrease the pitch:
End of explanation
"""
s1.pitch_shift = 0
"""
Explanation: You can "reset" the pitch to its base value:
End of explanation
"""
s1.stretch_factor = 2.0
"""
Explanation: Modify the time-scale: without pitch modification
You can also speed up or slow down a sound using the stretch_factor property.
For instance, to double the play speed:
End of explanation
"""
s1.stretch_factor = 0.5
"""
Explanation: You can also slow it down:
End of explanation
"""
s1.stretch_factor = 1.0
"""
Explanation: And to reset to its initial play speed:
End of explanation
"""
def modify_sound(pitch, stretch):
s1.pitch_shift = pitch
s1.stretch_factor = stretch
from ipywidgets import interact, FloatSlider
sampler.play(s1)
interact(modify_sound,
pitch=FloatSlider(min=-10, max=10, value=0, step=0.5),
stretch=FloatSlider(min=0.1, max=10, value=1.0, step=0.1));
"""
Explanation: Using widgets to lively modify sounds
You can use the notebook widgets to create an interface and easily modify sounds:
End of explanation
"""
|
karlstroetmann/Formal-Languages | ANTLR4-Python/Earley-Parser/Earley-Parser.ipynb | gpl-2.0 | !type simple.g
!cat simple.g
"""
Explanation: Implementing an Earley Parser
A Grammar for Grammars
Earley's algorithm has two inputs:
- a grammar $G$ and
- a string $s$.
It then checks whether the string $s$ can be parsed with the given grammar.
In order to input the grammar in a natural way, we first have to develop a parser for grammars.
An example grammar that we want to parse is stored in the file simple.g.
End of explanation
"""
!type Pure.g4
!cat Pure.g4
"""
Explanation: We use <span style="font-variant:small-caps;">Antlr</span> to develop a parser for this Grammar.
The pure grammar to parse this type of grammar is stored in
the file Pure.g4.
End of explanation
"""
!type Grammar.g4
!cat -n Grammar.g4
"""
Explanation: The annotated grammar is stored in the file Grammar.g4.
End of explanation
"""
!antlr4 -Dlanguage=Python3 Grammar.g4
from GrammarLexer import GrammarLexer
from GrammarParser import GrammarParser
import antlr4
"""
Explanation: We start by generating both scanner and parser.
End of explanation
"""
def parse_grammar(filename):
input_stream = antlr4.FileStream(filename)
lexer = GrammarLexer(input_stream)
token_stream = antlr4.CommonTokenStream(lexer)
parser = GrammarParser(token_stream)
grammar = parser.start()
return grammar.g
parse_grammar('simple.g')
"""
Explanation: The function parse_grammar takes a filename as its argument and returns the grammar that is stored in the given file. The grammar is represented as a list of rules. Each rule is represented as a tuple. The example below will clarify this structure.
End of explanation
"""
class EarleyItem():
def __init__(self, variable, alpha, beta, index):
self.mVariable = variable
self.mAlpha = alpha
self.mBeta = beta
self.mIndex = index
def __eq__(self, other):
return isinstance(other, EarleyItem) and \
self.mVariable == other.mVariable and \
self.mAlpha == other.mAlpha and \
self.mBeta == other.mBeta and \
self.mIndex == other.mIndex
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.__repr__())
def __repr__(self):
alphaStr = ' '.join(self.mAlpha)
betaStr = ' '.join(self.mBeta)
return f'<{self.mVariable} → {alphaStr} • {betaStr}, {self.mIndex}>'
"""
Explanation: Earley's Algorithm
Given a context-free grammar $G = \langle V, \Sigma, R, S \rangle$ and a string $s = x_1x_2 \cdots x_n \in \Sigma^*$ of length $n$,
an *Earley item* is a pair of the form
$$\langle A \rightarrow \alpha \bullet \beta, k \rangle$$
such that
- $(A \rightarrow \alpha \beta) \in R\quad$ and
- $k \in {0,1,\cdots,n}$.
The class EarleyItem represents a single Earley item.
- mVariable is the variable $A$,
- mAlpha is $\alpha$,
- mBeta is $\beta$, and
- mIndex is $k$.
Since we later have to store objects of class EarleyItem in sets, we have to implement the functions
- __eq__,
- __ne__,
- __hash__.
It is easiest to implement __hash__ by first converting the object into a string. Hence we also
implement the function __repr__, that converts an EarleyItem into a string.
End of explanation
"""
def isComplete(self):
return self.mBeta == ()
EarleyItem.isComplete = isComplete
del isComplete
"""
Explanation: Given an Earley item self, the function isComplete checks whether the Earley item self has the form
$$\langle A \rightarrow \alpha \bullet, k \rangle,$$
i.e. whether the $\bullet$ is at the end of the grammar rule.
End of explanation
"""
def sameVar(self, C):
return len(self.mBeta) > 0 and self.mBeta[0] == C
EarleyItem.sameVar = sameVar
del sameVar
"""
Explanation: The function sameVar(self, C) checks whether the item following the dot is the same as the variable
given as argument, i.e. sameVar(self, C) returns True if self is an Earley item of the form
$$\langle A \rightarrow \alpha \bullet C\beta, k \rangle.$$
End of explanation
"""
def scan(self, t):
if len(self.mBeta) > 0:
return self.mBeta[0] == t or self.mBeta[0] == "'" + t + "'"
return False
EarleyItem.scan = scan
del scan
"""
Explanation: The function scan(self, t) checks whether the item following the dot matches the token t,
i.e. scan(self, t) returns True if self is an Earley item of the form
$$\langle A \rightarrow \alpha \bullet t\beta, k \rangle.$$
The argument $t$ can either be the name of a token or a literal.
End of explanation
"""
def nextVar(self):
if len(self.mBeta) > 0:
var = self.mBeta[0]
if var[0] != "'" and var.islower():
return var
return None
EarleyItem.nextVar = nextVar
del nextVar
"""
Explanation: Given an Earley item, this function returns the name of the variable following the dot. If there is no variable following the dot, the function returns None. The function can distinguish variables from token names because variable names consist only of lower case letters.
End of explanation
"""
def moveDot(self):
return EarleyItem(self.mVariable,
self.mAlpha + (self.mBeta[0],),
self.mBeta[1:],
self.mIndex)
EarleyItem.moveDot = moveDot
del moveDot
"""
Explanation: The function moveDot(self) moves the $\bullet$ in the Earley item self, where self has the form
$$\langle A \rightarrow \alpha \bullet \beta, k \rangle$$
over the next variable, token, or literal in $\beta$. It assumes that $\beta$ is not empty.
End of explanation
"""
class Grammar():
def __init__(self, Rules):
self.mRules = Rules
"""
Explanation: The class Grammar represents a context free grammar. It stores a list of the rules of the grammar.
Each grammar rule of the form
$$ a \rightarrow \beta $$
is stored as the tuple $(a,) + \beta$. The start symbol is assumed to be the variable on the left hand side of
the first rule. To distinguish syntactical variables from tokens, variables contain only lower case letters,
while tokens either contain only upper case letters or they start and end with a single quote character "'".
End of explanation
"""
def startItem(self):
return EarleyItem('Start', (), (self.startVar(),), 0)
Grammar.startItem = startItem
del startItem
"""
Explanation: The function startItem returns the Earley item
$$ \langle\hat{S} \rightarrow \bullet S, 0\rangle $$
where $S$ is the start variable of the given grammar and $\hat{S}$ is a new variable.
End of explanation
"""
def finishItem(self):
return EarleyItem('Start', (self.startVar(),), (), 0)
Grammar.finishItem = finishItem
del finishItem
"""
Explanation: The function finishItem returns the Earley item
$$ \langle\hat{S} \rightarrow S \bullet, 0\rangle $$
where $S$ is the start variable of the given grammar and $\hat{S}$ is a new variable.
End of explanation
"""
def startVar(self):
return self.mRules[0][0]
Grammar.startVar = startVar
del startVar
"""
Explanation: The function startVar returns the start variable of the grammar. It is assumed that
the first rule grammar starts with the start variable of the grammar.
End of explanation
"""
def toString(self):
result = ''
for head, *body in self.mRules:
result += f"{head}: {' '.join(body)};\n"
return result
Grammar.__str__ = toString
del toString
"""
Explanation: The function toString creates a readable presentation of the grammar rules.
End of explanation
"""
class EarleyParser():
def __init__(self, grammar, TokenList):
self.mGrammar = grammar
self.mString = [None] + TokenList # hack so mString[1] is the first token
self.mStateList = [set() for i in range(len(TokenList)+1)]
print('Grammar:\n')
print(self.mGrammar)
print(f'Input: {self.mString}\n')
self.mStateList[0] = { self.mGrammar.startItem() }
"""
Explanation: The class EarleyParser implements the parsing algorithm of Jay Earley.
The class maintains the following member variables:
- mGrammar is the grammar that is used to parse the given token string.
- mString is the list of tokens and literals that has to be parsed.
As a hack, the first element of this list in None.
Therefore, mString[i] is the $i^\textrm{th}$ token.
- mStateList is a list of sets of Earley items. If $n$ is the length of the given token string
(excluding the first element None), then $Q_i = \texttt{mStateList}[i]$.
The idea is that the set $Q_i$ is the set of those Earley items that the parser could be in
when it has read the tokens mString[1], $\cdots$, mString[i]. $Q_0$ is initialized as follows:
$$ Q_0 = \bigl{\langle\hat{S} \rightarrow \bullet S, 0\rangle\bigr}. $$
The Earley items are interpreted as follows: If we have
$$ \langle C \rightarrow \alpha \bullet \beta, k\rangle \in Q_i, $$
then we know the following:
- After having read the tokens mString[:k+1] the parser tries to parse the variable $C$
in the token string mString[k+1:].
- After having read the token string mString[k+1:i+1] the parser has already recognized $\alpha$
and now needs to recognize $\beta$ in the token string mString[i+1:] in order to parse the variable $C$.
End of explanation
"""
def parse(self):
"run Earley's algorithm"
n = len(self.mString) - 1 # mString[0] = None
for i in range(0, n+1):
if i + 1 <= n:
next_token = self.mString[i+1]
else:
next_token = 'EOF'
print('_' * 80)
print(f'next token = {next_token}')
print('_' * 80)
change = True
while change:
change = self.complete(i)
change = self.predict(i) or change
self.scan(i)
# print states
print(f'\nQ{i}:')
Qi = self.mStateList[i]
for item in Qi:
print(item)
if i + 1 <= n:
print(f'\nQ{i+1}:')
Qip1 = self.mStateList[i+1]
for item in Qip1:
print(item)
if self.mGrammar.finishItem() in self.mStateList[-1]:
print('Parsing successful!')
else:
print('Parsing failed!')
EarleyParser.parse = parse
del parse
"""
Explanation: The method parse implements Earley's algorithm. For all states
$Q_1$, $\cdots$, $Q_n$ we proceed as follows:
- We apply the completion operation followed by the prediction operation.
This is done until no more states are added to $Q_i$.
(The inner while loop is not necessary if the grammar does not contain $\varepsilon$-rules.)
- Finally, the scanning operation is applied to $Q_i$. This operation adds
items to the set $Q_{i+1}$.
After $Q_i$ has been computed, we proceed to process $Q_{i+1}$.
Parsing is successful iff
$$ \langle\hat{S} \rightarrow S \bullet, 0\rangle \in Q_n $$
End of explanation
"""
def complete(self, i):
change = False
added = True
Qi = self.mStateList[i]
while added:
added = False
newQi = set()
for item in Qi:
if item.isComplete():
C = item.mVariable
j = item.mIndex
Qj = self.mStateList[j]
for newItem in Qj:
if newItem.sameVar(C):
moved = newItem.moveDot()
newQi.add(moved)
if not (newQi <= Qi):
change = True
added = True
print("completion:")
for newItem in newQi:
if newItem not in Qi:
print(f'{newItem} added to Q{i}')
self.mStateList[i] |= newQi
Qi = self.mStateList[i]
return change
EarleyParser.complete = complete
del complete
"""
Explanation: The method complete(self, i) applies the completion operation to the state $Q_i$:
If we have
- $\langle C \rightarrow \gamma \bullet, j\rangle \in Q_i$ and
- $\langle A \rightarrow \beta \bullet C \delta, k\rangle \in Q_j$,
then the parser tried to parse the variable $C$ after having read mString[:j+1]
and we know that
$$ C \Rightarrow^* \texttt{mString[j+1:i+1]}, $$
i.e. the parser has recognized $C$ after having read mString[j+1:i+1].
Therefore the parser should proceed to recognize $\delta$ in state $Q_i$.
Therefore we add the *Earley item* $\langle A \rightarrow \beta C \bullet \delta, k\rangle$ to the set $Q_i$:
$$\langle C \rightarrow \gamma \bullet, j\rangle \in Q_i \wedge
\langle A \rightarrow \beta \bullet C \delta, k\rangle \in Q_j \;\rightarrow\;
Q_i := Q_i \cup \bigl{ \langle A \rightarrow \beta C \bullet \delta, k\rangle \bigr}
$$
End of explanation
"""
def predict(self, i):
change = False
added = True
Qi = self.mStateList[i]
while added:
added = False
newQi = set()
for item in Qi:
c = item.nextVar()
if c is not None:
for rule in self.mGrammar.mRules:
if c == rule[0]:
newQi.add(EarleyItem(c, (), rule[1:], i))
if not (newQi <= Qi):
change = True
added = True
print("prediction:")
for newItem in newQi:
if newItem not in Qi:
print(f'{newItem} added to Q{i}')
self.mStateList[i] |= newQi
Qi = self.mStateList[i]
return change
EarleyParser.predict = predict
del predict
"""
Explanation: The method self.predict(i) applies the prediction operation to the state $Q_i$:
If $\langle A \rightarrow \beta \bullet C \delta, k \rangle \in Q_j$, then
the parser tries to recognize $C\delta$ after having read mString[:j+1]. To this end
it has to parse $C$ in the string mString[j+1:].
Therefore, if $C \rightarrow \gamma$ is a rule of our grammar,
we add the Earley item $\langle C \rightarrow \bullet \gamma, j\rangle$ to the set $Q_j$:
$$ \langle A \rightarrow \beta \bullet C \delta, k\rangle \in Q_j
\wedge (C \rightarrow \gamma) \in R
\;\rightarrow\;
Q_j := Q_j \cup\bigl{ \langle C \rightarrow \bullet\gamma, j\rangle\bigr}.
$$
As the right hand side $\gamma$ might start with a variable, the function uses a fix point iteration
until no more Earley items are added to $Q_j$.
End of explanation
"""
def scan(self, i):
Qi = self.mStateList[i]
n = len(self.mString) - 1 # remember mString[0] == None
if i + 1 <= n:
a = self.mString[i+1]
for item in Qi:
if item.scan(a):
self.mStateList[i+1].add(item.moveDot())
print('scanning:')
print(f'{item.moveDot()} added to Q{i+1}')
EarleyParser.scan = scan
del scan
import re
"""
Explanation: The function self.scan(i) applies the scanning operation to the state $Q_i$.
If $\langle A \rightarrow \beta \bullet a \gamma, k\rangle \in Q_i$ and $a$ is a token,
then the parser tries to recognize the right hand side of the grammar rule
$$ A \rightarrow \beta a \gamma$$
and after having read mString[k+1:i+1] it has already recognized $\beta$.
If we now have mString[i+1] == a, then the parser still has to recognize $\gamma$ in mString[i+2:].
Therefore, the Earley item $\langle A \rightarrow \beta a \bullet \gamma, k\rangle$ is added to
the set $Q_{i+1}$:
$$\langle A \rightarrow \beta \bullet a \gamma, k\rangle \in Q_i \wedge x_{i+1} = a
\;\rightarrow\;
Q_{i+1} := Q_{i+1} \cup \bigl{ \langle A \rightarrow \beta a \bullet \gamma, k\rangle \bigr}
$$
End of explanation
"""
def tokenize(s):
'''Transform the string s into a list of tokens. The string s
is supposed to represent an arithmetic expression.
'''
lexSpec = r'''([ \t]+) | # blanks and tabs
([1-9][0-9]*|0) | # number
([()]) | # parentheses
([-+*/]) | # arithmetical operators
(.) # unrecognized character
'''
tokenList = re.findall(lexSpec, s, re.VERBOSE)
result = []
for ws, number, parenthesis, operator, error in tokenList:
if ws: # skip blanks and tabs
continue
elif number:
result += [ 'NUMBER' ]
elif parenthesis:
result += [ parenthesis ]
elif operator:
result += [ operator ]
else:
result += [ f'ERROR({error})']
return result
tokenize('1 + 2 * 3')
"""
Explanation: The function tokenize transforms the string s into a list of tokens. See below for an example.
End of explanation
"""
def test(file, word):
Rules = parse_grammar(file)
grammar = Grammar(Rules)
TokenList = tokenize(word)
ep = EarleyParser(grammar, TokenList)
ep.parse()
test('simple.g', '1 * 2 + 3')
"""
Explanation: The function test takes two arguments.
- file is the name of a file containing a grammar,
- word is a string that should be parsed.
word is first tokenized. Then the resulting token list is parsed using Earley's algorithm.
End of explanation
"""
!del GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp
!rmdir /S /Q __pycache__
!dir /B
!rm GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp
!rm -r __pycache__
!ls
"""
Explanation: The command below cleans the directory.
End of explanation
"""
|
nholtz/structural-analysis | Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone1.ipynb | cc0-1.0 | from __future__ import print_function
import salib as sl
sl.import_notebooks()
from Tables import Table
from Nodes import Node
from Members import Member
from LoadSets import LoadSet, LoadCombination
from NodeLoads import makeNodeLoad
from FixedEndForces import makeMemberLoad
from collections import OrderedDict, defaultdict
class Object(object):
pass
class Frame2D(object):
def __init__(self,dsname=None):
self.dsname = dsname
self.rawdata = Object()
self.nodes = OrderedDict()
self.members = OrderedDict()
self.nodeloads = LoadSet()
self.memberloads = LoadSet()
self.loadcombinations = LoadCombination()
#self.dofdesc = []
#self.nodeloads = defaultdict(list)
#self.membloads = defaultdict(list)
self.ndof = 0
self.nfree = 0
self.ncons = 0
self.R = None
self.D = None
self.PDF = None # P-Delta forces
COLUMNS_xxx = [] # list of column names for table 'xxx'
def get_table(self,tablename,extrasok=False):
columns = getattr(self,'COLUMNS_'+tablename)
t = Table(tablename,columns=columns)
t.read()
reqdl= columns
reqd = set(reqdl)
prov = set(t.columns)
if reqd-prov:
raise Exception('Columns missing {} for table "{}". Required columns are: {}'\
.format(list(reqd-prov),tablename,reqdl))
if not extrasok:
if prov-reqd:
raise Exception('Extra columns {} for table "{}". Required columns are: {}'\
.format(list(prov-reqd),tablename,reqdl))
return t
"""
Explanation: Milestone 1 - input of nodes, members, loads and load combos.
2-Dimensional Frame Analysis - Version 04
This program performs an elastic analysis of 2-dimensional structural frames. It has the following features:
1. Input is provided by a set of CSV files (and cell-magics exist so you can specify the CSV data
in a notebook cell). See the example below for an, er, example.
1. Handles concentrated forces on nodes, and concentrated forces, concentrated moments, and linearly varying distributed loads applied transversely anywhere along the member (i.e., there is as yet no way to handle longitudinal
load components).
1. It handles fixed, pinned, roller supports and member end moment releases (internal pins). The former are
handled by assigning free or fixed global degrees of freedom, and the latter are handled by adjusting the
member stiffness matrix.
1. It has the ability to handle named sets of loads with factored combinations of these.
1. The DOF #'s are assigned by the program, with the fixed DOF #'s assigned after the non-fixed. The equilibrium
equation is then partitioned for solution. Among other advantages, this means that support settlement could be
easily added (there is no UI for that, yet).
1. A non-linear analysis can be performed using the P-Delta method (fake shears are computed at column ends due to the vertical load acting through horizontal displacement differences, and these shears are applied as extra loads
to the nodes).
1. A full non-linear (2nd order) elastic analysis will soon be available by forming the equilibrium equations
on the deformed structure. This is very easy to add, but it hasn't been done yet. Shouldn't be too long.
1. There is little to no documentation below, but that will improve, slowly.
End of explanation
"""
%%Table nodes
NODEID,X,Y,Z
A,0,0,5000
B,0,4000,5000
C,8000,4000,5000
D,8000,0,5000
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_nodes = ('NODEID','X','Y')
def install_nodes(self):
node_table = self.get_table('nodes')
for ix,r in node_table.data.iterrows():
if r.NODEID in self.nodes:
raise Exception('Multiply defined node: {}'.format(r.NODEID))
n = Node(r.NODEID,r.X,r.Y)
self.nodes[n.id] = n
self.rawdata.nodes = node_table
def get_node(self,id):
try:
return self.nodes[id]
except KeyError:
raise Exception('Node not defined: {}'.format(id))
##test:
f = Frame2D()
##test:
f.install_nodes()
##test:
f.nodes
##test:
f.get_node('C')
"""
Explanation: Nodes
End of explanation
"""
%%Table members
MEMBERID,NODEJ,NODEK
AB,A,B
BC,B,C
DC,D,C
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_members = ('MEMBERID','NODEJ','NODEK')
def install_members(self):
table = self.get_table('members')
for ix,m in table.data.iterrows():
if m.MEMBERID in self.members:
raise Exception('Multiply defined member: {}'.format(m.MEMBERID))
memb = Member(m.MEMBERID,self.get_node(m.NODEJ),self.get_node(m.NODEK))
self.members[memb.id] = memb
self.rawdata.members = table
def get_member(self,id):
try:
return self.members[id]
except KeyError:
raise Exception('Member not defined: {}'.format(id))
##test:
f.install_members()
f.members
##test:
m = f.get_member('BC')
m.id, m.L, m.dcx, m.dcy
"""
Explanation: Members
End of explanation
"""
%%Table node_loads
LOAD,NODEID,DIRN,F
Wind,B,FX,-200000.
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_node_loads = ('LOAD','NODEID','DIRN','F')
def install_node_loads(self):
table = self.get_table('node_loads')
dirns = ['FX','FY','FZ']
for ix,row in table.data.iterrows():
n = self.get_node(row.NODEID)
if row.DIRN not in dirns:
raise ValueError("Invalid node load direction: {} for load {}, node {}; must be one of '{}'"
.format(row.DIRN, row.LOAD, row.NODEID, ', '.join(dirns)))
l = makeNodeLoad({row.DIRN:row.F})
self.nodeloads.append(row.LOAD,n,l)
self.rawdata.node_loads = table
##test:
f.install_node_loads()
##test:
for o,l,fact in f.nodeloads.iterloads('Wind'):
print(o,l,fact,l*fact)
"""
Explanation: Node Loads
End of explanation
"""
%%Table member_loads
LOAD,MEMBERID,TYPE,W1,W2,A,B,C
Live,BC,UDL,-50,,,,
Live,BC,PL,-200000,,5000,,
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_member_loads = ('LOAD','MEMBERID','TYPE','W1','W2','A','B','C')
def install_member_loads(self):
table = self.get_table('member_loads')
for ix,row in table.data.iterrows():
m = self.get_member(row.MEMBERID)
l = makeMemberLoad(m.L,row)
self.memberloads.append(row.LOAD,m,l)
self.rawdata.member_loads = table
##test:
f.install_member_loads()
##test:
for o,l,fact in f.memberloads.iterloads('Live'):
print(o.id,l,fact,l.fefs()*fact)
"""
Explanation: Member Loads
End of explanation
"""
%%Table load_combinations
COMBO,LOAD,FACTOR
One,Live,1.5
One,Wind,1.75
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_load_combinations = ('COMBO','LOAD','FACTOR')
def install_load_combinations(self):
table = self.get_table('load_combinations')
for ix,row in table.data.iterrows():
self.loadcombinations.append(row.COMBO,row.LOAD,row.FACTOR)
self.rawdata.load_combinations = table
##test:
f.install_load_combinations()
##test:
for o,l,fact in f.loadcombinations.iterloads('One',f.nodeloads):
print(o.id,l,fact)
for o,l,fact in f.loadcombinations.iterloads('One',f.memberloads):
print(o.id,l,fact,l.fefs()*fact)
"""
Explanation: Load Combinations
End of explanation
"""
@sl.extend(Frame2D)
class Frame2D:
def iter_nodeloads(self,comboname):
for o,l,f in self.loadcombinations.iterloads(comboname,self.nodeloads):
yield o,l,f
def iter_memberloads(self,comboname):
for o,l,f in self.loadcombinations.iterloads(comboname,self.memberloads):
yield o,l,f
##test:
for o,l,fact in f.iter_nodeloads('One'):
print(o.id,l,fact)
for o,l,fact in f.iter_memberloads('One'):
print(o.id,l,fact,l.fefs()*fact)
"""
Explanation: Load Iterators
End of explanation
"""
##test:
Table.CELLDATA
"""
Explanation: Accumulated Cell Data
End of explanation
"""
|
Olsthoorn/TransientGroundwaterFlow | exercises_notebooks/Sudden_head_ change_section_54.ipynb | gpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
from scipy.special import erfc # scipy.special has numerous special mathematical functions
"""
Explanation: Sudden head change at $x=0$
IHE, Delft, Transient Groundwater
Exercises in class, 2012-01-07
@T.N.Olsthoorn
Loading modules
End of explanation
"""
x = np.linspace(0, 20, 201) # generate an array with 201 numbers between and including 0 and 20
y = np.sin(x) # computed the sin of these x-values
plt.plot(x, y, label='john') # plot a line and give the line the label "john"
plt.plot(x, y**2, label='squared') # plot another line and give this line the label "squared"
plt.title('My title') # put a title on the top of the axes
plt.xlabel('x [m]') # put a label at the x-axis
plt.ylabel('y [m3/d]') # put a label at the y-axis
plt.grid() # toggle the gridlines on (plt.grid(True) sets it, plt.grid(False) removes the lines)
plt.xscale('log') # use logarithmic scale for the x-axis
plt.legend() # add a legend, which uses the labels specified for the lines
z = np.zeros_like(x) # generate an array of zeros, the array has exactly the same size as the x-array
x.shape # show the size of the x-array (should be (201,) as it has no orientation and contains 201 values)
"""
Explanation: Some exercises in naive plotting, but it works well
End of explanation
"""
A, B, C = 123.45, 356789.1234, np.pi * 4 # assign values to the new variables A, B and C
print('This is the old way of specifying the variables to insert in a string and to format them:')
print(A, B, C) # print them, without any formatting
print('A = {} m2/d, B = {} l/h, c = {} [-]'.format(A, B, C)) # print them within a string, replacing {}
# this is done by applying the method 'format(...)' to the string object. This method takes the variables and
# puts them in the string in the place of the successive braces {}. To format these variables in any desired
# way, you have to specify the formatting within the braces as is done below
print('A = {:.4g} m2/d, B = {:.2f} l/h, c = {:.3e} [-]'.format(A, B, C))
# Here each variable will be formatted as specified within each of the braces sets. .4g is general format with
# four significant digits, :.2f is floating point with two decimals and .3e is scientific
# with 3 significant digits.
print('A = {:.4g} m2/d, B = {:20.2f} l/h, c = {:.3e} [-]'.format(A, B, C))
# The number in front of the precision digit is the field width. Here, the second variable will be
# printed in a field of 20 blanks as a floating point number with 2 decimals
print('A = {0:.4g} m2/d, B = {2:20.2f} l/h, c = {1:.3e} [-]'.format(A, B, C))
# A value in front of the colon (:) is the index of the variable. In this case 0 is the first
# variable (A), 2 is the third (C) and 1 is the second (B). So using these indices allows the
# order in which the variables are inserted into the string to differ from the order in the format string.
print('A = {0:.4g} m2/d, C = {2:20.2f} l/h, B = {1:.3e} A = {0:.3f}, C={2:.3g} [-]'.format(A, B, C))
# Here the index is used more than once, this is also allowed if you want to use a variable
# more than once in a string.
# Now the most modern way of inserting variables in a string and formatting them.
# Instead of the format(...) method, you tell the string that insertions are to be made.
# You do that by placing an f (formatted string) in front of the string like so f'str {}'
n = 4
print(f'\nWhat now follows are {n} fstrings, this is the most modern way:') # \n generates a new line, see result
print(f'A = {C} m2/d, B = {A} l/h, c = {B} [-]')
print(f'A = {A:.4g} m2/d, B = {B:.2f} l/h, c = {C:.3e} [-]')
# With f-strings, you can put the variables right within the braces in front of the colon or
# even without a colon, if you just want to print them without specific formatting.
print(f'A = {C:.4g} m2/d, B = {A:.2f} l/h, c = {B:.3e} [-]')
# As you can see, the formatting works exactly the same as with the .format(...) method. The only
# difference is that the variables themselves are now directly placed within the { }
"""
Explanation: Exercise with some formatted printing
End of explanation
"""
u = np.linspace(0, 5, 501)[1:]
# generates 501 values between 0 and 5 and selects from index 1 (= the second value) till the last.
# So this yields 500 values. The first one is left out because we don't want the 0 for later.
# basic indexing of array
#u(2:end) # the matlab way. Matlab arrays start at index 1, and indexing is with parentheses ()
#u[1:] # the python way, with []. Python arrays start at 0, so from 1 (= the second) till the end
#u[:-1] # The python way to select all values except the last
#u[20:30] # select values with index 20-30 that is 20, 21, .... 29 (not 30). u[20] is the 21 value (because 0 is the first)
#u[10:100:5] # Select values of u with index 10:100 with steps of 5, so 10, 15, 20, 25 ... 95 (100 is not included)
#u[:10] # Select u-values 0, 1, 2, ..., 9 (10 is not included)
#u = np.arange(0, 10) # this generates "a range" with values 0, 1, ... 8, 9 (not 10)
#u = np.arange(0, 10, 3) # likewise generates a range with step 3, i.e. values 0, 3, 6, 9
#u[0] # first u-value
#u[2] # third u-value
"""
Explanation: First some basic array indexing to select array values
End of explanation
"""
y = erfc(u) # we already have the u-values, now compute the corresponding erfc(u) values
plt.plot(u, y) # plot y vs u
plt.xlabel('u') # add label to the x-axis
plt.ylabel('erfc(u)')
plt.title('Plot of erfc(u) vs u')
plt.show() # finish and show the plot
plt.plot(1/u, y) # now plot y vs 1/u
plt.xlabel('1/u') # add label to x-axis
plt.ylabel('erfc(u)')
plt.grid() # Toggles gridlines on
plt.title('Plot of erfc(u) vs 1/u')
plt.show() # finish and show the plot
plt.plot(1/u, y) # again, plot y vs 1/u
plt.xlabel('1/u') # put xlabel
plt.ylabel('erfc(u)')
plt.xscale('log') # make x-axis logarithmic
plt.title('Plot of erfc(u) vs 1/u, but 1/u on logarithmic axis')
plt.grid(True) # add grid lines
plt.show() # finish and show the plot
"""
Explanation: The complementary error function
imported from scipy.special (see the import at the top: from scipy.special import erfc)
End of explanation
"""
4 == 4 # this is an expression, its outcome is True (True is an object, as is False)
4 > 4 # this is obviously False
# If y is an array, then y > 0.6 is a boolean array, i.e. an array with the same size as y but with only
# values True or False
# We already have an array y
# So here we make all the values of y that are larger than 0.6 equal to 0.6
y[y > 0.6] = 0.6
plt.plot(1/u, y)
plt.xscale('log')
plt.grid()
plt.ylim((0, 1))
"""
Explanation: Booleans (True or False) and logical indexing to select values from an array
End of explanation
"""
# The aquifer properties
kD = 600 # m2/d
S = 0.1 # [-]
x = 200. # m, the value of x for which we want the graph
# The array of times at which we want the head to be computed
t = np.linspace(0, 100, 10001)
# The list of change times (you can make it an array if you wish but it's not necessary here)
tc = [3, 5, 7, 8.1, 11.3, 17, 19] # d, change times
# The river stage values. Each value is valid from its change time till the next change time
A = np.array([2, 1.5, 1.3, 2.6, 3.1, 0.5, 2.0]) # m, stage of the river (it must be an array)
# Compute the changes of the river stage from the stage itself
#dA = np.hstack((0, A)) # take the array A and put a 0 in front of it
#dA = dA[1:] - dA[:-1] # take value A[i+1] - A[i]
dA = np.diff(np.hstack((0, A))) # this is the most efficient and proper way to compute the stage changes
print('A =', A) # show the array of stage values
print('dA =', dA) # show the array of stage-change values
# Now we're all set to do the superposition
s = np.zeros_like(t) # initialize the heads with all zeros (size of s is the same as size of t)
# put dA and tc next to each other (using the zip function)
# Then loop over them, while yielding the next value of dA and tc from the arrays
# and call them dAi and tci respectively
for dAi, tci in zip(dA, tc):
# Compute the u for this change time tci.
    # Use logical indexing t > tci to select only the times that are larger than the current change time
ui = np.sqrt(x ** 2 * S / (4 * kD * (t[t > tci] - tci)))
# Compute the head change due to this stage change dAi and change time tci only
ds = dAi * erfc(ui)
#s[t > tci] = s[t > tci] + ds # This is also good, but using += below (add to yourself) is more elegant
s[t > tci] += ds # Add this to the total s-array (this is the superposition) (only at locations were t > tci)
plt.plot(t[t > tci], ds) # plot contribution of this head change from this tci
# After the loop, when all change times have been processed and results of each individual head change
# was added to the head-change array s, plot it, to see the overall result
plt.plot(t, s, 'k', lw=2, label='total')
plt.legend()
plt.show() # finish and show graph (you don't strictly need it, but it's the proper way to finish a graph)
# If you comment out the plt.plot(t[t > tci], ds) line inside the loop (by placing # in front), you only get the total end result
"""
Explanation: Superposition
The effect of a sudden change by an amount A [m] of the river stage at $x=0$ and $t=0$ is given by
$$s(x, t) = A \, \mbox{erfc}(u), \,\,\, \mbox{where} \,\, u = \sqrt{\frac{x^2 S}{4 kD t}}$$
Given a series of change times $t_{ci}$ and corresponding sudden changes $\delta A_i$, we can superimpose the effects of the individual changes to get the total effect of the varying river stage on the groundwater head (and discharge)
$$ s(x, t) = \sum_{i=0}^n \left( \delta A_i \, \mbox{erfc} \sqrt{\frac{x^2 S}{4 kD (t - t_{ci})}}\,\right),\,\,\,\, t > t_{ci}$$
This is demonstrated below.
End of explanation
"""
|
rlopc/datcom-labs | ugr-datcom-ncc_ni-labs/ugr-datcom-ncc_ni-lab_00/ex_06-numerical_integration_hh_squid_axon.ipynb | gpl-3.0 | %matplotlib inline
import brian2 as b2
import matplotlib.pyplot as plt
from neurodynex.hodgkin_huxley import HH
from neurodynex.tools import input_factory
HH.getting_started()
"""
Explanation: The Hodgkin-Huxley Model
End of explanation
"""
I_min = 2.30
current = input_factory.get_step_current(5, 100, b2.ms, I_min *b2.uA)
state_monitor = HH.simulate_HH_neuron(current, 120 * b2.ms)
HH.plot_data(state_monitor, title="HH Neuron, minimal current")
"""
Explanation: 6.1. Exercise: step current response
We study the response of a Hodgkin-Huxley neuron to different input currents. Have a look at the documentation of the functions HH.simulate_HH_neuron() and HH.plot_data() and the module neurodynex.tools.input_factory.
6.1.1. Question
What is the lowest step current amplitude I_min for generating at least one spike? Determine the value by trying different input amplitudes in the code fragment:
End of explanation
"""
I_min = 6.32
current = input_factory.get_step_current(5, 400, b2.ms, I_min *b2.uA)
state_monitor = HH.simulate_HH_neuron(current, 500 * b2.ms)
HH.plot_data(state_monitor, title="HH Neuron, minimal current")
"""
Explanation: 6.1.2. Question
What is the lowest step current amplitude to generate repetitive firing?
End of explanation
"""
I_min_slow = 12.21
slow_ramp_current = input_factory.get_ramp_current(5, 50, b2.ms, 0.*b2.uA, I_min_slow *b2.uA)
state_monitor = HH.simulate_HH_neuron(slow_ramp_current, 100 * b2.ms)
HH.plot_data(state_monitor, title="HH Neuron, minimal current")
"""
Explanation: 6.2. Exercise: slow and fast ramp current
The minimal current to elicit a spike does not just depend on the amplitude I or on the total charge Q of the current, but on the “shape” of the current. Let’s see why:
6.2.1. Question
Inject a slow ramp current into a HH neuron. The current has amplitude 0A at t in [0, 5] ms and linearly increases to an amplitude I_min_slow at t=50ms. At t>50ms, the current is set to 0A. What is the minimal amplitude I_min_slow to trigger one spike (vm>50mV)?
End of explanation
"""
I_min_fast = 12.21
fast_ramp_current = input_factory.get_ramp_current(50, 100, 0.1*b2.ms, 0.*b2.uA, I_min_fast * b2.uA)
state_monitor = HH.simulate_HH_neuron(fast_ramp_current, 100 * b2.ms)
HH.plot_data(state_monitor, title="HH Neuron, minimal current")
"""
Explanation: 6.2.2. Question
Now inject a fast ramp current into a HH neuron. The current has amplitude 0 at t in [0, 5] ms and linearly increases to an amplitude I_min_fast at t=10ms. At t>10ms, the current is set to 0A. What is the minimal amplitude I_min_fast to trigger one spike? Note: Technically the input current is implemented using a TimedArray. For a short, steep ramp, the one millisecond discretization for the current is not fine enough. You can create a finer resolution:
End of explanation
"""
|
harpolea/r3d2 | docs/states.ipynb | mit | from r3d2 import eos_defns, State
eos = eos_defns.eos_gamma_law(5.0/3.0)
U = State(1.0, 0.1, 0.0, 2.0, eos)
"""
Explanation: States
A Riemann Problem is specified by the state of the material to the left and right of the interface. In this hydrodynamic problem, the state is fully determined by an equation of state and the variables
$$
{\bf U} = \begin{pmatrix} \rho_0 \\ v_x \\ v_t \\ \epsilon \end{pmatrix},
$$
where $\rho_0$ is the rest-mass density, $v_x$ the velocity normal to the interface, $v_t$ the velocity tangential to the interface, and $\epsilon$ the specific internal energy.
Defining a state
In r3d2 we define a state from an equation of state and the values of the key variables:
End of explanation
"""
U
"""
Explanation: Inside the notebook, the state will automatically display the values of the key variables:
End of explanation
"""
U2 = State(10.0, -0.3, 0.1, 5.0, eos, label="L")
U2
"""
Explanation: Adding a label to the state for output purposes requires an extra keyword:
End of explanation
"""
q_available = 0.1
t_ignition = 10.0
Cv = 1.0
eos_reactive = eos_defns.eos_gamma_law_react(5.0/3.0, q_available, Cv, t_ignition, eos)
U_reactive = State(5.0, 0.1, 0.1, 2.0, eos_reactive, label="Reactive")
U_reactive
"""
Explanation: Reactive states
If the state has energy available for reactions, that information is built into the equation of state. The definition of the equation of state changes: the definition of the state itself does not:
End of explanation
"""
print("Left wavespeed of first state is {}".format(U.wavespeed(0)))
print("Middle wavespeed of second state is {}".format(U2.wavespeed(1)))
print("Right wavespeed of reactive state is {}".format(U_reactive.wavespeed(2)))
"""
Explanation: Additional functions
A state knows its own wavespeeds. Given a wavenumber (the left acoustic wave is 0, the middle contact or advective wave is 1, and the right acoustic wave is 2), we have:
End of explanation
"""
print("Primitive variables of first state are {}".format(U.prim()))
"""
Explanation: A state will return the key primitive variables ($\rho, v_x, v_t, \epsilon$):
End of explanation
"""
print("All variables of the first state are {}".format(U.state()))
"""
Explanation: A state will return all the variables it computes, which is $\rho, v_x, v_t, \epsilon, p, W, h, c_s$: the primitive variables as above, the pressure $p$, Lorentz factor $W$, specific enthalpy $h$, and speed of sound $c_s$:
End of explanation
"""
|
zzsza/Datascience_School | 30. 딥러닝/07. RNN 기본 구조와 Keras를 사용한 RNN 구현.ipynb | mit | s = np.sin(2 * np.pi * 0.125 * np.arange(20))
plt.plot(s, 'ro-')
plt.xlim(-0.5, 20.5)
plt.ylim(-1.1, 1.1)
plt.show()
"""
Explanation: Basic RNN Structure and RNN Implementation with Keras
When using a neural network to predict a sequence such as a sentence or time-series data, making the predicted value depend on data further in the past requires increasing the size of the vector that represents the sequence. For example, in a language model with a vocabulary of 10,000 words, predicting the output for a sequence of the past 100 words requires a 1,000,000-dimensional input vector.
An RNN (Recurrent Neural Network) is a network architecture that stores the state of its neurons and feeds it back as input at the next step, which makes predictions possible even for long sequences. Here we look at the basic structure of RNNs and how to implement them with the Keras Python package.
Basic RNN structure
In an ordinary feedforward network, the output vector $y$ is obtained by applying an activation function to the product of the input vector $x$ and the network weight matrix $U$:
$$ y = \sigma ( U x ) $$
Here $\sigma$ denotes the activation function.
An MLP (Multi-Layer Perceptron) with a single hidden layer can be written as
$$ h = \sigma(U x) $$
$$ o = \sigma(V h) $$
where $h$ is the hidden-layer vector, $o$ the output vector, $U$ the weight matrix from the input to the hidden layer, and $V$ the weight matrix from the hidden layer to the output.
An RNN produces a state vector $s$ in addition to the output vector $o$. The state vector resembles a hidden-layer vector, but it depends not only on the input $x$ but also on the state vector of the immediately preceding step. The output vector depends on the state vector:
$$ s_t = \sigma(Ux_t + Ws_{t-1}) $$
$$ o_t = \sigma(Vs_t) $$
The subscript $t$ denotes the position in the sequence. Unrolled over its time steps, an RNN behaves much like an MLP with an unbounded number of hidden layers. This is illustrated in the figure below.
<img src="http://d3kbpzbmcynnmx.cloudfront.net/wp-content/uploads/2015/09/rnn.jpg" style="width: 80%;">
<small>Image source: http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/</small>
Unlike an MLP, however, the state-transition matrix is fixed across time steps.
Sequence prediction
What distinguishes an RNN most from conventional networks is that it can process sequences. Feeding the input sequence $x_1, x_2, \ldots, x_n$ into the RNN generates the state sequence $s_1, s_2, \ldots, s_n$ internally, and the output sequence $o_1, o_2, \ldots, o_n$ comes out.
If the goal is for the output sequence $o_1, o_2, \ldots, o_n$ to match a target sequence $y_1, y_2, \ldots, y_n$, we have the special sequence-to-sequence (many-to-many) prediction problem in which the input and output sequences have the same length; if the goal is only that the final output $o_n$ matches $y_n$, it is a simple sequence-to-value (many-to-one) problem.
<img src="https://deeplearning4j.org/img/rnn_masking_1.png" style="width: 100%;">
<small>Image source: https://deeplearning4j.org/usingrnns</small>
Back-Propagation Through Time (BPTT)
Since an RNN unrolled over time has a structure similar to an MLP, its gradients can be computed with back-propagation. Because the layers exist along the time dimension rather than as multiple actual hidden layers, the method is called Back-Propagation Through Time (BPTT).
RNN implementation with Keras
Let us implement an RNN using Keras, a neural-network package for Python. Keras is a high-level library that builds neural networks on top of Theano or TensorFlow.
Keras provides various network structures as building blocks, including RNN structures such as SimpleRNN, LSTM, and GRU. For details on the RNNs provided by Keras, see:
https://keras.io/layers/recurrent/
Time-series prediction problem
The problem to solve is a simple time-series prediction: given the following sinusoidal time series as input, predict the output at the next step.
End of explanation
"""
from scipy.linalg import toeplitz
S = np.fliplr(toeplitz(np.r_[s[-1], np.zeros(s.shape[0] - 2)], s[::-1]))
S[:5, :3]
X_train = S[:-1, :3][:, :, np.newaxis]
Y_train = S[:-1, 3]
X_train.shape, Y_train.shape
X_train[:2]
Y_train[:2]
plt.subplot(211)
plt.plot([0, 1, 2], X_train[0].flatten(), 'bo-', label="input sequence")
plt.plot([3], Y_train[0], 'ro', label="target")
plt.xlim(-0.5, 4.5)
plt.ylim(-1.1, 1.1)
plt.legend()
plt.title("First sample sequence")
plt.subplot(212)
plt.plot([1, 2, 3], X_train[1].flatten(), 'bo-', label="input sequence")
plt.plot([4], Y_train[1], 'ro', label="target")
plt.xlim(-0.5, 4.5)
plt.ylim(-1.1, 1.1)
plt.legend()
plt.title("Second sample sequence")
plt.tight_layout()
plt.show()
"""
Explanation: From this time series we construct the input sequences and the target values. Each input sequence is 3 steps long, and the target is the value at the next time step. In other words, we solve a sequence-to-value (many-to-one) problem: after feeding in a 3-step sequence, the final output should match the target.
To use an RNN in Keras, the input data must be a 3-dimensional tensor (ndim=3) of shape (nb_samples, timesteps, input_dim):
nb_samples: number of samples
timesteps: length of the sequence
input_dim: size of the x vector
Here we have a single time series, so input_dim = 1; we use 3-step sequences, so timesteps = 3; and there are 18 samples.
As in the following code, the original time-series vector is rearranged into a Toeplitz-matrix form to build the 3-dimensional tensor.
End of explanation
"""
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense
np.random.seed(0)
model = Sequential()
model.add(SimpleRNN(10, input_dim=1, input_length=3))
model.add(Dense(1))
model.compile(loss='mse', optimizer='sgd')
"""
Explanation: Keras's SimpleRNN class
In Keras, a neural-network model is built in the following order:
Create the model as a Sequential class object.
Add the various layers with the add method.
Specify the loss function and optimization method with the compile method.
Compute the weights with the fit method.
The code below shows how to use the SimpleRNN class, the simplest RNN structure.
Here a SimpleRNN class object creates an RNN layer with 10 neurons. The first argument is the number of neurons, the input_dim argument is the size of the input vector, and the input_length argument is the length of the sequence.
Next, a Dense class object is added to combine the 10 outputs coming out of the SimpleRNN object into a single real-valued output.
Mean-squared-error is used as the loss function and plain stochastic gradient descent as the optimization method.
End of explanation
"""
plt.plot(Y_train, 'ro-', label="target")
plt.plot(model.predict(X_train[:,:,:]), 'bs-', label="output")
plt.xlim(-0.5, 20.5)
plt.ylim(-1.1, 1.1)
plt.legend()
plt.title("Before training")
plt.show()
"""
Explanation: First, let us look at the output before any training has been done.
End of explanation
"""
history = model.fit(X_train, Y_train, nb_epoch=100, verbose=0)
plt.plot(history.history["loss"])
plt.title("Loss")
plt.show()
"""
Explanation: Train the model with the fit method.
End of explanation
"""
plt.plot(Y_train, 'ro-', label="target")
plt.plot(model.predict(X_train[:,:,:]), 'bs-', label="output")
plt.xlim(-0.5, 20.5)
plt.ylim(-1.1, 1.1)
plt.legend()
plt.title("After training")
plt.show()
"""
Explanation: The output after training is as follows.
End of explanation
"""
from keras.layers import TimeDistributed
model2 = Sequential()
model2.add(SimpleRNN(20, input_dim=1, input_length=3, return_sequences=True))
model2.add(TimeDistributed(Dense(1)))
model2.compile(loss='mse', optimizer='sgd')
"""
Explanation: If the return_sequences argument is set to True when creating the SimpleRNN class, the whole output sequence is returned as a 3-dimensional tensor instead of only the last value of the sequence, so the problem can be solved as sequence-to-sequence. Note that the input and output sequences must then have the same length.
In that case, the Dense class object that follows must be extended with the TimeDistributed wrapper so that it can accept 3-dimensional tensor input.
End of explanation
"""
X_train2 = S[:-3, 0:3][:, :, np.newaxis]
Y_train2 = S[:-3, 3:6][:, :, np.newaxis]
X_train2.shape, Y_train2.shape
X_train2[:2]
Y_train2[:2]
plt.subplot(211)
plt.plot([0, 1, 2], X_train2[0].flatten(), 'bo-', label="input sequence")
plt.plot([3, 4, 5], Y_train2[0].flatten(), 'ro-', label="target sequence")
plt.xlim(-0.5, 6.5)
plt.ylim(-1.1, 1.1)
plt.legend()
plt.title("First sample sequence")
plt.subplot(212)
plt.plot([1, 2, 3], X_train2[1].flatten(), 'bo-', label="input sequence")
plt.plot([4, 5, 6], Y_train2[1].flatten(), 'ro-', label="target sequence")
plt.xlim(-0.5, 6.5)
plt.ylim(-1.1, 1.1)
plt.legend()
plt.title("Second sample sequence")
plt.tight_layout()
plt.show()
history2 = model2.fit(X_train2, Y_train2, nb_epoch=100, verbose=0)
plt.plot(history2.history["loss"])
plt.title("Loss")
plt.show()
"""
Explanation: This time the output is also a 3-step sequence.
End of explanation
"""
plt.subplot(211)
plt.plot([1, 2, 3], X_train2[1].flatten(), 'bo-', label="input sequence")
plt.plot([4, 5, 6], Y_train2[1].flatten(), 'ro-', label="target sequence")
plt.plot([4, 5, 6], model2.predict(X_train2[1:2,:,:]).flatten(), 'gs-', label="output sequence")
plt.xlim(-0.5, 7.5)
plt.ylim(-1.1, 1.1)
plt.legend()
plt.title("Second sample sequence")
plt.subplot(212)
plt.plot([2, 3, 4], X_train2[2].flatten(), 'bo-', label="input sequence")
plt.plot([5, 6, 7], Y_train2[2].flatten(), 'ro-', label="target sequence")
plt.plot([5, 6, 7], model2.predict(X_train2[2:3,:,:]).flatten(), 'gs-', label="output sequence")
plt.xlim(-0.5, 7.5)
plt.ylim(-1.1, 1.1)
plt.legend()
plt.title("Third sample sequence")
plt.tight_layout()
plt.show()
"""
Explanation: The training results are as follows.
End of explanation
"""
|
plipp/informatica-pfr-2017 | nbs/2/2-Numerical-Data-Pandas-Self-Employment-Rates-DF-Exercise.ipynb | mit | countries = ['AUS', 'AUT', 'BEL', 'CAN', 'CZE', 'FIN', 'DEU', 'GRC', 'HUN', 'ISL', 'IRL', 'ITA', 'JPN',
'KOR', 'MEX', 'NLD', 'NZL', 'NOR', 'POL', 'PRT', 'SVK', 'ESP', 'SWE', 'CHE', 'TUR', 'GBR',
'USA', 'CHL', 'COL', 'EST', 'ISR', 'RUS', 'SVN', 'EU28', 'EA19', 'LVA']
male_selfemployment_rates = [12.13246, 15.39631, 18.74896, 9.18314, 20.97991, 18.87097,
13.46109, 39.34802, 13.3356, 16.83681, 25.35344, 29.27118,
12.06516, 27.53898, 31.6945, 19.81751, 17.68489, 9.13669,
24.15699, 22.95656, 19.00245, 21.16428, 13.93171, 8.73181,
30.73483, 19.11255, 7.48383, 25.92752, 52.27145, 12.05042,
15.8517, 8.10048, 19.02411, 19.59021, 19.1384, 14.75558]
female_selfemployment_rates = [8.18631, 10.38607, 11.07756, 8.0069, 12.78461,
9.42761, 7.75637, 29.56566, 8.00408, 7.6802, 8.2774, 18.33204,
9.7313, 23.56431, 32.81488, 13.36444, 11.50045, 4.57464,
17.63891, 13.92678, 10.32846, 12.82925, 6.22453, 9.28793,
38.32216, 10.21743, 5.2896, 25.24502, 49.98448, 6.624,
9.0243, 6.26909, 13.46641, 11.99529, 11.34129, 8.88987]
countries_by_continent = {'AUS':'AUS', 'AUT':'EUR', 'BEL':'EUR', 'CAN':'AM',
'CZE':'EUR', 'FIN':'EUR', 'DEU':'EUR', 'GRC':'EUR',
'HUN':'EUR', 'ISL':'EUR', 'IRL':'EUR', 'ITA':'EUR',
'JPN':'AS', 'KOR':'AS', 'MEX':'AM', 'NLD':'EUR',
'NZL':'AUS', 'NOR':'EUR', 'POL':'EUR', 'PRT':'EUR',
'SVK':'EUR', 'ESP':'EUR', 'SWE':'EUR', 'CHE':'EUR',
'TUR':'EUR', 'GBR':'EUR', 'USA':'AM' , 'CHL':'AM',
'COL':'AM' , 'EST':'EUR', 'ISR':'AS', 'RUS':'EUR',
'SVN':'EUR', 'EU28':'EUR','EA19':'AS', 'LVA':'EUR'}
import pandas as pd
df_male_selfemployment_rates = pd.DataFrame({'countries':countries, 'selfemployment_rates':male_selfemployment_rates})
df_male_selfemployment_rates.head()
df_female_selfemployment_rates = pd.DataFrame({'countries':countries, 'selfemployment_rates':female_selfemployment_rates})
df_female_selfemployment_rates.head()
df_country_continent = pd.DataFrame(list(countries_by_continent.items()), columns=['country','continent'])
df_country_continent.head()
"""
Explanation: Self Employment Data 2015
from OECD
End of explanation
"""
# TODO
"""
Explanation: Solutions with Pandas
Basic Calculations and Statistics
Exercise 1
Calculate for each country the overall selfemployment_rate:<br>
df_selfemployment_rate := (male_selfemployment_rates + female_selfemployment_rates)/2
(assumes that #women ~#men)
End of explanation
"""
# TODO
"""
Explanation: Exercise 2
Calculate
- maximum
- minimum
- sum
- mean
- standard deviation
for/of all selfemployment_rates.
End of explanation
"""
# TODO
"""
Explanation: Exercise 3
Find the Country with the highest selfemployment_rate.
End of explanation
"""
# TODO
"""
Explanation: Exercise 4
Find the sum of all selfemployment_rates which are between 10 and 15.
End of explanation
"""
# TODO
"""
Explanation: Exercise 5
a) Plot a barchart of the selfemployment_rates by country (as in [Basic-Plotting]. Use pandas plotting facilities).
Use Pandas reindex to get the labeling in place.
End of explanation
"""
# TODO
"""
Explanation: b) Plot a barchart of the male vs. female selfemployment_rates by country (as in Basic-Plotting, but using pandas plotting facilities).
Use Pandas reindex to get the labeling in place.
End of explanation
"""
# TODO group by Continent
"""
Explanation: Aggregations
Exercise 6
Calculate the mean of the selfemployment-rates per continent.
End of explanation
"""
|
Housebeer/Natural-Gas-Model | backup/Matching Market v1.ipynb | mit | import random as rnd
class Supplier():
def __init__(self):
self.wta = []
# the supplier has n quantities that they can sell
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self,n,l,u):
for i in range(n):
p = rnd.uniform(l,u)
self.wta.append(p)
# return the dictionary of willingness to ask
def get_ask(self):
return self.wta
class Buyer():
def __init__(self):
self.wtp = []
    # the buyer has n quantities that they can buy
    # they may be willing to buy this quantity at anywhere from a lower price of l
# to a higher price of u
def set_quantity(self,n,l,u):
for i in range(n):
p = rnd.uniform(l,u)
self.wtp.append(p)
# return list of willingness to pay
def get_bid(self):
return self.wtp
class Market():
count = 0
last_price = ''
b = []
s = []
def __init__(self,b,s):
# buyer list sorted in descending order
self.b = sorted(b, reverse=True)
# seller list sorted in ascending order
self.s = sorted(s, reverse=False)
# return the price at which the market clears
# assume equal numbers of sincere buyers and sellers
def get_clearing_price(self):
# buyer makes a bid, starting with the buyer which wants it most
for i in range(len(self.b)):
if (self.b[i] > self.s[i]):
self.count +=1
self.last_price = self.b[i]
return self.last_price
def get_units_cleared(self):
return self.count
"""
Explanation: Matching Market
This simple model consists of a buyer, a supplier, and a market.
The buyer represents a group of customers whose willingness to pay for a single unit of the good is captured by a vector of prices wta. You can initiate the buyer with a set_quantity function which randomly assigns the willingness to pay according to your specifications. You may ask for these willingness to pay quantities with a get_bid function.
The supplier is similar, but instead the supplier is willing to be paid to sell a unit of technology. The supplier for instance may have non-zero variable costs that make them unwilling to produce the good unless they receive a specified price. Similarly the supplier has a get_ask function which returns a list of desired prices.
The willingness to pay or sell are set randomly using uniform random distributions. The resultant lists of bids are effectively a demand curve. Likewise the list of asks is effectively a supply curve. A more complex determination of bids and asks is possible, for instance using time of year to vary the quantities being demanded.
Microeconomic Foundations
The market assumes the presence of an auctioneer which will create a book, which seeks to match the bids and the asks as much as possible. If the auctioneer is neutral, then it is incentive compatible for the buyer and the supplier to truthfully announce their bids and asks. The auctioneer will find a single price which clears as much of the market as possible. Clearing the market means that as many willing swaps happen as possible. You may ask the market object at what price the market clears with the get_clearing_price function. You may also ask the market how many units were exchanged with the get_units_cleared function.
Agent-Based Objects
The following section presents three objects which can be used to make an agent-based model of an efficient, two-sided market.
End of explanation
"""
# make a supplier and get the asks
supplier = Supplier()
supplier.set_quantity(60,10,30)
ask = supplier.get_ask()
# make a buyer and get the bids (n,l,u)
buyer = Buyer()
buyer.set_quantity(60,10,30)
bid = buyer.get_bid()
# make a market where the buyers and suppliers can meet
# the bids and asks are a list of prices
market = Market(bid,ask)
price = market.get_clearing_price()
quantity = market.get_units_cleared()
# output the results of the market
print("Goods cleared for a price of ",price)
print("Units sold are ", quantity)
"""
Explanation: Example Market
In the following code example we use the buyer and supplier objects to create a market. At the market a single price is announced which causes as many units of goods to be swapped as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue.
End of explanation
"""
|
bjornaa/ladim | examples/line/holoviews.ipynb | mit | import numpy as np
from netCDF4 import Dataset
import holoviews as hv
from postladim import ParticleFile
hv.extension('bokeh')
"""
Explanation: Plotting particle distributions with holoviews
End of explanation
"""
# Read bathymetry and land mask
with Dataset('../data/ocean_avg_0014.nc') as ncid:
H = ncid.variables['h'][:, :]
M = ncid.variables['mask_rho'][:, :]
jmax, imax = M.shape
# Select sea and land features
H = np.where(M > 0, H, np.nan) # valid at sea
M = np.where(M < 1, M, np.nan) # valid on land
# Make land image
ds_land = hv.Dataset((np.arange(imax), np.arange(jmax), M), ['x', 'y'], 'Land mask')
im_land = ds_land.to(hv.Image, kdims=['x', 'y'], group='land')
# Make bathymetry image
ds_bathy = hv.Dataset((np.arange(imax), np.arange(jmax), -np.log10(H)),
['x', 'y'], 'Bathymetry')
im_bathy = ds_bathy.to(hv.Image, kdims=['x', 'y'])
background = im_bathy * im_land
"""
Explanation: Background map
Make a background bathymetric map.
A simple land representation is given by colouring the land cells in the
ROMS file. Take the logarithm of the bathymetry to enhance topographic details
in the shallow North Sea.
End of explanation
"""
pf = ParticleFile('line.nc')
def pplot(timestep):
"""Scatter plot of particle distibution at a given time step"""
X, Y = pf.position(timestep)
return background * hv.Scatter((X, Y))
"""
Explanation: Particle plot function
Open the particle file and make a function to make a
Scatter element of the particle distribution at a given time step.
End of explanation
"""
%%opts Image (cmap='blues_r' alpha=0.7)
%%opts Image.land (cmap=['#AABBAA'])
%%opts Scatter (color='red')
pplot(0) + pplot(pf.num_times-1) # Final particle distribution
"""
Explanation: Still images
Set a greyish colour on land and use shades of blue at sea. Show initial
and final particle distributions.
End of explanation
"""
%%output size=150
%%opts Scatter (color='red')
dmap = hv.DynamicMap(pplot, kdims=['timestep'])
dmap.redim.range(timestep=(0, pf.num_times-1))
"""
Explanation: Dynamic map
Make a DynamicMap of all the particle distributions.
End of explanation
"""
|
ComputationalModeling/spring-2017-danielak | past-semesters/spring_2016/day-by-day/day08-modeling-viral-load-2/Day_8_Pre_Class_Notebook_SOLUTIONS.ipynb | agpl-3.0 | # Imports the functionality that we need to display YouTube videos in a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("8_wSb927nH0",width=640,height=360) # Complex 'if' statements
"""
Explanation: Day 8 - pre-class assignment SOLUTIONS
Goals for today's pre-class assignment
Use complex if statements and loops to make decisions in a computer program
Assignment instructions
Watch the videos below, read through Sections 4.1, 4.4, and 4.5 of the Python Tutorial, and complete the programming problems assigned below.
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 8. Submission instructions can be found at the end of the notebook.
End of explanation
"""
# put your code here.
import numpy as np
my_array = np.arange(1,11)
for val in my_array:
if val%2 == 0:
print(val, "is even")
else:
print(val, "is odd")
if val%3 == 0:
print(val, "is divisible by 3")
elif val%5 == 0:
print(val, "is divisible by 5")
else:
print(val, "wow, that's disappointing")
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("MzZCeHB0CbE",width=640,height=360) # Complex 'if' statements
"""
Explanation: Question 1: In the cell below, use numpy's 'arange' method to create an array filled with all of the integers between 1 and 10 (inclusive). Loop through the array, and use if/elif/else to:
Print out if the number is even or odd.
Print out if the number is divisible by 3.
Print out if the number is divisible by 5.
If the number is not divisible by either 3 or 5, print out "wow, that's disappointing."
Note 1: You may need more than one if/elif/else statement to do this!
Note 2: If you have a numpy array named my_numpy_array, you don't necessarily have to use the numpy nditer method. You can loop using the standard python syntax as well. In other words:
for val in my_numpy_array:
print(val)
will work just fine.
End of explanation
"""
# put your code here.
my_list = [1,3,17,23,9,-4,2,2,11,4,-7]
sum = 0
for val in my_list:
if val < 0:
break
print(val)
sum += val
print("the sum after the loop is:", sum)
"""
Explanation: Question 2: In the space below, loop through the given array, breaking when you get to the first negative number. Print out the value you're examining after you check for negative numbers. Create a variable and set it to zero before the loop, and add each number in the list to it after the check for negative numbers. What is that variable equal to after the loop?
End of explanation
"""
# put your code here.
my_list = [1,3,17,23,9,-4,2,2,11,4,-7]
sum = 0
for val in my_list:
if val % 2 == 0:
continue
print(val)
sum += val
print("the sum after the loop is:", sum)
"""
Explanation: Question 3: In the space below, loop through the array given above, skipping every even number with the continue statement. Print out the value you're examining after you check for even numbers. Create a variable and set it to zero before the loop, and add each number in the list to it after the check for even numbers. What is that variable equal to after the loop?
End of explanation
"""
# put your code here!
my_list = [1,3,17,23,9,-4,2,2,11,4,-7]
sum = 0
for val in my_list:
if val > 99: # should never be called, because the values are too small!
break
print(val)
sum += val
else:
print("yay, success!")
print("the sum after the loop is:", sum)
"""
Explanation: Question 4: Copy and paste your code from question #2 above and change it in two ways:
Modify the numbers in the array so the if/break statement is never called.
There is an else clause after the end of the loop (not the end of the if statement!) that prints out "yay, success!" if the loop completes successfully, but not if it breaks.
Verify that if you use the original array, the print statement in the else clause doesn't work!
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/building_production_ml_systems/solutions/0_export_data_from_bq_to_gcs.ipynb | apache-2.0 | from google import api_core
from google.cloud import bigquery
"""
Explanation: Exporting data from BigQuery to Google Cloud Storage
In this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data.
End of explanation
"""
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
OUTDIR = f"gs://{BUCKET}/taxifare/data"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env OUTDIR=$OUTDIR
"""
Explanation: Change the following cell as necessary:
End of explanation
"""
bq = bigquery.Client(project=PROJECT)
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset)
print("Dataset created")
except api_core.exceptions.Conflict:
print("Dataset already exists")
"""
Explanation: Create BigQuery tables
If you have not already created a BigQuery dataset for our data, run the following cell:
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
Explanation: Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
Explanation: Make the validation dataset 1/10 the size of the training dataset.
End of explanation
"""
%%bash
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2
"""
Explanation: Export the tables as CSV files
End of explanation
"""
|
chesters99/ghpages | content/loan-default-prediction.ipynb | gpl-3.0 | %%time
print('Reading: loan_stat542.csv into loans dataframe...')
loans = pd.read_csv('loan_stat542.csv')
print('Loans dataframe:', loans.shape)
test_ids = pd.read_csv('Project3_test_id.csv', dtype={'test1':int,'test2':int, 'test3':int,})
print('ids dataframe:', test_ids.shape)
trains = []
tests = []
labels = []
for i, col in enumerate(test_ids.columns):
trains.append(loans.loc[~loans.id.isin(test_ids[col]),:])
tests.append( loans.loc[ loans.id.isin(test_ids[col]), loans.columns!='loan_status'])
labels.append(loans.loc[ loans.id.isin(test_ids[col]), ['id','loan_status']])
labels[i]["y"] = (labels[i].loan_status != 'Fully Paid').astype(int)
labels[i].drop('loan_status', axis=1, inplace=True)
print('Fold', i, trains[i].shape, tests[i].shape, labels[i].shape)
print('Writing train, test, labels csv files...')
# fold=0
# _ = trains[fold].to_csv('train.csv', index=False)
# _ = tests [fold].to_csv('test.csv', index=False)
# _ = labels[fold].to_csv('label.csv', index=False)
print('Done!')
"""
Explanation: Read the loans CSV and create train/test CSV files
End of explanation
"""
def process(data):
data['emp_length'] = data.emp_length.fillna('Unknown').str.replace('<','LT')
data['dti'] = data.dti.fillna(0)
data['revol_util'] = data.revol_util.fillna(0)
data['mort_acc'] = data.mort_acc.fillna(0)
data['pub_rec_bankruptcies'] = data.pub_rec_bankruptcies.fillna(0)
temp = pd.to_datetime(data.earliest_cr_line)
data['earliest_cr_line'] = temp.dt.year*12 - 1950*12 + temp.dt.month
data.drop(['emp_title','title','zip_code','grade','fico_range_high'], axis=1, inplace=True)
return data
def logloss(y, p):
loglosses = np.where(y==1, -np.log(p+1e-15), -np.log(1-p+1e-15))
return np.mean(loglosses)
def prep_train_test(train, test):
train = process(train)
X_train = train.drop(['loan_status'], axis=1)
X_train = pd.get_dummies(X_train) # create dataframe with dummy variables replacing categoricals
X_train = X_train.reindex(sorted(X_train.columns), axis=1) # sort columns to be in same sequence as test
y_train = (train.loan_status!='Fully Paid').astype(int)
test = process(test)
X_test = pd.get_dummies(test) # create dataframe with dummy variables replacing categoricals
all_columns = X_train.columns.union(X_test.columns) # add columns to test that are in train but not test
X_test = X_test.reindex(columns=all_columns).fillna(0)
X_test = X_test.reindex(sorted(X_train.columns), axis=1) # sort columns to be in same sequence at train
return X_train, y_train, X_test
%%time
import time
seed=42
models = [
LogisticRegression(penalty='l1',C=1, random_state=seed),
# GradientBoostingClassifier(max_features='sqrt', learning_rate=0.055, n_estimators=780, max_depth=7,
# min_samples_leaf=2, subsample=0.9, min_samples_split=4,
# min_weight_fraction_leaf=0, random_state=seed),
xgb.XGBClassifier(learning_rate=0.037, n_estimators=860, min_child_weight=8, max_depth=7, gamma=0.3,
subsample=0.52, colsample_bytree=0.92, reg_lambda=0.67, reg_alpha=0.03,
objective= 'binary:logistic', n_jobs=-1, random_state=seed, eval_metric='logloss'),
]
num_models, num_folds = len(models), len(test_ids.columns)
errors = np.zeros([num_models, num_folds])
for fold in range(num_folds):
np.random.seed(seed=seed)
train = trains[fold].copy()
test = tests [fold].copy()
label = labels[fold].copy()
fraction = 1
if fraction < 1:
train = train.sample(frac=fraction, random_state=seed)
test = test.sample(frac=fraction*4, random_state=seed)
print()
X_train, y_train, X_test = prep_train_test(train, test)
# print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
for i, model in enumerate(models):
start_time = time.time()
_ = model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:,1]
y_test = pd.merge(tests[fold][['id']], label, how='left', on='id')
errors[i, fold] = logloss(y_test.y, probs)
print('{:26.26} Fold={}, Runime={:8.2f} seconds, logloss={:8.5f}'.format(
type(model).__name__, fold, round(time.time()-start_time,2), errors[i,fold]))
# df = pd.DataFrame({'id': test.id, 'prob': probs.round(5)})
# df.to_csv('mysubmission'+str(i+1)+'.txt', index=False)
# print('Created mysubmission'+str(i+1)+'.txt, rows=', df.shape[0])
print("\nSUMMARY:")
for i, m in enumerate(models):
print('{:26.26} mean logloss={:8.5f}'.format(type(m).__name__, errors[i,:].mean()))
"""
Explanation: Read and process train and test dataframes
End of explanation
"""
from skopt import gp_minimize, gbrt_minimize
from skopt.plots import plot_convergence
import datetime, warnings
def objective(values):
index = str(values)
if index in cache:
print('GET FROM CACHE:', index, round(cache[index],4))
return cache[index]
if model_type == 'LogisticRegression':
params = {'penalty': values[0], 'C': values[1],}
model = LogisticRegression(**params, random_state=seed, n_jobs=-1)
# if model_type == 'RandomForestClassifier':
# params = {'n_estimators': values[0], 'max_features': values[1], 'max_depth': values[2],}
# model = RandomForestClassifier(**params, n_jobs=-1)
if model_type == 'GradientBoostingClassifier':
params = {'learning_rate': values[0], 'n_estimators': values[1], 'max_depth': values[2],
'min_samples_split': values[3], 'min_samples_leaf': values[4],
'min_weight_fraction_leaf' : values[5], 'subsample': values[6], 'max_features': values[7] }
model = GradientBoostingClassifier(**params, random_state=seed)
if model_type == 'XGBClassifier':
params = {'learning_rate': values[0], 'n_estimators': int(values[1]), 'min_child_weight': int(values[2]),
'max_depth': int(values[3]), 'gamma': values[4], 'subsample': values[5],
'colsample_bytree': values[6], 'lambda': values[7], 'alpha': values[8], 'eval_metric':'logloss'}
model = xgb.XGBClassifier(**params, random_state=seed, nthread=-1, n_jobs=-1,silent=1)
print(datetime.datetime.now().time().replace(microsecond=0), ', Params',params)
# scores = -cross_val_score(model, X_train, y_train, scoring="neg_log_loss", cv=5, n_jobs=-1)
_ = model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:,1]
y_test = pd.merge(test[['id']], label, how='left', on='id')
cache[index] = np.mean( logloss(y_test.y, probs) )
return cache[index]
# %%time
import warnings
np.random.seed(seed)
warnings.filterwarnings("ignore", category=UserWarning) # turn off already evaluated errors
params={'LogisticRegression': [
['l1',],
(1e-1,1e+1,'uniform'),
],
'GradientBoostingClassifier': [
(0.04, 0.10, 'uniform'), # learning rate
(500, 900), # n_estimators
(3, 7), # max_depth
(2, 5), # min_samples_split
(2, 5), # min_samples_leaf
(0, 0.3), # min_weight_fraction_leaf
(0.8, 1.0,'uniform'), # subsample
('sqrt',), # max_features
],
'XGBClassifier': [
(0.01, 0.05, 'uniform'), # learning_rate 0.05, 0.3,
(300, 700), # n_estimators
(5, 9), # min_child_weight
(4, 8), # max_depth 3-10
(0, 0.5, 'uniform'), # gamma 0-0.4
(0.4, 1.0, 'uniform'), # subsample 0.5 - 0.99
(0.8, 1.0, 'uniform'), # colsample_bytree 0.5 - 0.99
(0.8, 1.0, 'uniform'), # reg_lambda
(0.0, 0.5, 'uniform'), # reg_alpha
],}
train = trains[0].copy()
test = tests[0].copy()
label = labels[0].copy()
fraction = 1
if fraction < 1:
train = train.sample(frac=fraction, random_state=seed)
test = test.sample(frac=fraction, random_state=seed)
print(train.shape)
X_train, y_train, X_test = prep_train_test(train, test)
print(X_train.shape, y_train.shape, X_test.shape)
model_types = params.keys()
model_types = ['GradientBoostingClassifier']
for model_type in model_types:
cache = {}
space = params[model_type]
result = gbrt_minimize(objective,space,n_random_starts=15, n_calls=300, random_state=seed,verbose=True,n_jobs=-1)
print('\n', model_type, ', Best Params=', result.x, ' Best Score=', round(result.fun,6),'\n')
_ = plt.figure(figsize=(15,8))
_ = plot_convergence(result, yscale='log')
warnings.filterwarnings("default", category=UserWarning) # turn on already evaluated errors
"""
Explanation: Model Tuning with skopt
End of explanation
"""
sorted_d = sorted(cache.items(), key=lambda x: x[1])
temp = []
for i in range(len(sorted_d)):
temp.append((sorted_d[i][0], round(sorted_d[i][1],5)))
print('{} {}'.format(round(sorted_d[i][1],5), sorted_d[i][0]))
"""
Explanation: GBM best results - sorted
End of explanation
"""
sorted_d = sorted(cache.items(), key=lambda x: x[1])
temp = []
for i in range(len(sorted_d)):
temp.append((sorted_d[i][0], round(sorted_d[i][1],5)))
print('{} {}'.format(round(sorted_d[i][1],5), sorted_d[i][0]))
"""
Explanation: XGB best results - sorted
End of explanation
"""
%%time
XGB_params = {
'learning_rate': np.linspace(0.05, 1, 1), # 0.03 to 0.2, tuned to 0.05
'min_child_weight': np.linspace(5, 10, 1, dtype=int), # 1 to 6, tuned to 2
'max_depth': np.linspace(4, 8, 1, dtype=int), # 3 to 10, tuned to 3
'gamma': np.linspace(0.2, 0.4, 1), # 0 to 0.4, tuned to 0
'subsample': np.linspace(1, 1, 1), # 0.6 to 1, tuned to 1.0
'colsample_bytree': np.linspace(0.75, 1, 1), # 0.6 to 1, tuned to 0.6
'reg_lambda': np.linspace(0.25, 0.6, 1), # 0 to 1, tuned to 1.0
'reg_alpha': np.linspace(0.15, 0.5, 1), # 0 to 1, tuned to 0.5
'silent': [1,],
}
train = trains[0].copy()
test = tests[0].copy()
fraction = 0.05
if fraction < 1:
train = train.sample(frac=fraction, random_state=seed)
test = test.sample(frac=fraction*4, random_state=seed)
print(train.shape, test.shape)
X_train, y_train, X_test= prep_train_test(train, test)
print(X_train.shape, y_train.shape, X_test.shape)
tune_cv=False # perform CV just to get number of boost rounds/estimators before using GridSearchCV
if tune_cv:
xgtrain = xgb.DMatrix(X_train, label=y_train)
    cv_params = {'objective': 'binary:logistic', 'learning_rate': 0.05, 'min_child_weight': 5,
                 'max_depth': 4, 'gamma': 0.2, 'subsample': 1.0, 'colsample_bytree': 0.75,
                 'reg_lambda': 0.25, 'reg_alpha': 0.15, 'silent': 1}
    cvresult = xgb.cv(cv_params, xgtrain, num_boost_round=1000, early_stopping_rounds=50,
                      nfold=10, metrics='logloss', verbose_eval=10, show_stdv=False)
else:
XGBbase = xgb.XGBClassifier(n_estimators=150, objective= 'binary:logistic', random_state=seed, n_jobs=-1)
XGBmodel = GridSearchCV(XGBbase, cv=10, n_jobs=-1, param_grid=XGB_params, scoring='neg_log_loss' ,verbose=5)
_ = XGBmodel.fit(X_train, y_train)
print(XGBmodel.best_estimator_)
print(XGBmodel.best_params_)
print(-round(XGBmodel.best_score_,6))
# fig, ax = plt.subplots(1,1,figsize=(16,26))
# _ = xgb.plot_importance(model, ax=ax)
"""
Explanation: Tune XGBoost Model manually with CV
End of explanation
"""
# rejected models
# KNeighborsClassifier(500), # tuned on 5% of data
# SVC(gamma='auto', C=1, probability=True, random_state=seed), # quite slow
# DecisionTreeClassifier(max_depth=2, random_state=seed),
# RandomForestClassifier(n_estimators=300, random_state=seed),
# AdaBoostClassifier(random_state=seed),
# GaussianNB(),
# LinearDiscriminantAnalysis(),
temp = X_train.corr()
temp = temp[temp>0.50]
np.fill_diagonal(temp.values, np.NaN)  # mask self-correlations on the diagonal
temp = temp.dropna(how='all', axis=0)
temp = temp.dropna(how='all', axis=1)
temp
m1 = models[-1]
df = pd.DataFrame({'id': test.id, 'prob': probs.round(11)})
df.to_csv('mysubmission1.txt', index=False)
print('Created mysubmission1.txt, rows=', df.shape[0], ', Model=', type(m1).__name__)
m2 = models[-1]
df = pd.DataFrame({'id': test.id, 'prob': probs.round(11)})
df.to_csv('mysubmission2.txt', index=False)
print('Created mysubmission2.txt, rows=', df.shape[0], ', Model=', type(m2).__name__)
m3 = models[-1]
df = pd.DataFrame({'id': test.id, 'prob': probs.round(11)})
df.to_csv('mysubmission3.txt', index=False)
print('Created mysubmission3.txt, rows=', df.shape[0], ', Model=', type(m3).__name__)
"""
Explanation: BELOW HERE IS MISC STUFF
End of explanation
"""
%%time
loans['emp_title'] = loans.emp_title.fillna('_unknown').str.lower()
loans['emp_title1'] = categorise_emp_title(loans.emp_title)
loans['default'] = (loans.loan_status!='Fully Paid').astype(int)
g1 = loans.groupby(['emp_title','emp_title1'])['default'].agg(['sum','count'])
g1.columns = ['sum1','count1']
g1['rate1'] = g1['sum1'] / g1['count1']
g1 = g1.sort_values("rate1", ascending=False)
"""
Explanation: Consolidation check: compare default rates across raw and consolidated employment titles
End of explanation
"""
features = X_train.columns
importances = model.feature_importances_
indices = np.argsort(importances*-1)
num_features = 40
indices = indices[0:num_features]
_ = plt.figure(figsize=(16,7))
_ = plt.title('XGBoost: Most Important Features')
_ = plt.bar(range(len(indices)), importances[indices], color='steelblue')
_ = plt.xticks(range(len(indices)), [features[i] for i in indices], rotation=85, fontsize = 14)
_ = plt.ylabel('Relative Importance')
_ = plt.xlim(-1,num_features-1)
_ = plt.tight_layout()
_ = plt.savefig('features.png',dpi=100)
np.random.seed(seed)
# TRAIN ELASTICNET MODEL AND GENERATE SUBMISSION FILE
model = ElasticNet(alpha=0.001, l1_ratio=0.4, max_iter=5000, tol=0.0001, random_state=seed)
_ = model.fit(X_train, y_train)
XGBpreds = model.predict(X_test)
XGB_df = pd.DataFrame({'PID': y_test.index, 'Sale_Price': np.expm1(XGBpreds).round(1)})
XGB_df.to_csv('mysubmission1.txt', index=False)
print('Created mysubmission1.txt, rows=', XGB_df.shape[0],
', Model=', type(model).__name__, ', RMSElogPrice =', round( rmse(y_test, XGBpreds),6 ))
# TRAIN GBM MODEL AND GENERATE SUBMISSION FILE
model = GradientBoostingRegressor(learning_rate=0.03, n_estimators=550, max_depth=5, min_samples_split=4,
min_samples_leaf=3, min_weight_fraction_leaf=0, subsample=0.64, max_features='sqrt', random_state=seed)
_ = model.fit(X_train, y_train)
ENet_preds = model.predict(X_test)
ENet_df = pd.DataFrame({'PID': y_test.index, 'Sale_Price': np.expm1(ENet_preds).round(1)})
ENet_df.to_csv('mysubmission2.txt', index=False)
print('Created mysubmission2.txt, rows=', XGB_df.shape[0],
', Model=', type(model).__name__, ', RMSElogPrice =', round( rmse(y_test, ENet_preds),6 ))
# RE-READ SUBMISSION FILES AND CHECK FOR CORRECTNESS
temp = pd.read_csv('mysubmission1.txt')
print('\nChecking mysubmission1 file, RMSE=', round(rmse(np.log1p(temp.Sale_Price), y_test.values),6) )
temp = pd.read_csv('mysubmission2.txt')
print('Checking mysubmission2 file, RMSE=', round(rmse(np.log1p(temp.Sale_Price), y_test.values),6) )
"""
Explanation: Create two prediction files from test.csv
End of explanation
"""
|
SylvainCorlay/bqplot | examples/Applications/Visualizing the US Elections.ipynb | apache-2.0 | from __future__ import print_function
import pandas as pd
import numpy as np
from ipywidgets import VBox, HBox
import os
codes = pd.read_csv(os.path.abspath('../data_files/state_codes.csv'))
try:
from pollster import Pollster
except ImportError:
print('Pollster not found. Installing Pollster..')
try:
import subprocess
subprocess.check_call(['pip', 'install', 'pollster==0.1.6'])
except:
print("The pip installation failed. Please manually install Pollster and re-run this notebook.")
def get_candidate_data(question):
clinton, trump, undecided, other = 0., 0., 0., 0.
for candidate in question['subpopulations'][0]['responses']:
if candidate['last_name'] == 'Clinton':
clinton = candidate['value']
elif candidate['last_name'] == 'Trump':
trump = candidate['value']
elif candidate['choice'] == 'Undecided':
undecided = candidate['value']
else:
other = candidate['value']
return clinton, trump, other, undecided
def get_row(question, partisan='Nonpartisan', end_date='2016-06-21'):
# if question['topic'] != '2016-president':
if ('2016' in question['topic']) and ('Presidential' in question['topic']):
hillary, donald, other, undecided = get_candidate_data(question)
return [{'Name': question['name'], 'Partisan': partisan, 'State': question['state'],
'Date': np.datetime64(end_date), 'Trump': donald, 'Clinton': hillary, 'Other': other,
'Undecided': undecided}]
else:
return
def analyze_polls(polls):
global data
for poll in polls:
for question in poll.questions:
resp = get_row(question, partisan=poll.partisan, end_date=poll.end_date)
if resp is not None:
data = data.append(resp)
return
try:
from pollster import Pollster
pollster = Pollster()
# Getting data from Pollster. This might take a second.
raw_data = pollster.charts(topic='2016-president')
data = pd.DataFrame(columns=['Name', 'Partisan', 'State', 'Date', 'Trump', 'Clinton', 'Other',
'Undecided'])
for i in raw_data:
analyze_polls(i.polls())
except:
raise ValueError('Please install Pollster and run the functions above')
def get_state_party(code):
state = codes[codes['FIPS']==code]['USPS'].values[0]
if data[data['State']==state].shape[0] == 0:
return None
polls = data[(data['State']==state) & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
if polls.shape[0] == 0:
return None
if (polls.tail(1)['Trump'] > polls.tail(1)['Clinton']).values[0]:
return 'Republican'
else:
return 'Democrat'
def get_color_data():
color_data = {}
for i in codes['FIPS']:
color_data[i] = get_state_party(i)
return color_data
def get_state_data(code):
state = codes[codes['FIPS']==code]['USPS'].values[0]
if data[data['State']==state].shape[0] == 0:
return None
polls = data[(data['State']==state) & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
return polls
from bqplot import *
from ipywidgets import Layout
dt_x = DateScale()
sc_y = LinearScale()
time_series = Lines(scales={'x': dt_x, 'y': sc_y}, colors=['#E91D0E', '#2aa1ec'], marker='circle')
ax_x = Axis(scale=dt_x, label='Date')
ax_y = Axis(scale=sc_y, orientation='vertical', label='Percentage')
ts_fig = Figure(marks=[time_series], axes=[ax_x, ax_y], title='General Election - State Polls',
layout=Layout(min_width='650px', min_height='400px'))
sc_geo = AlbersUSA()
sc_c1 = OrdinalColorScale(domain=['Democrat', 'Republican'], colors=['#2aa1ec', '#E91D0E'])
color_data = get_color_data()
map_styles = {'color': color_data,
'scales': {'projection': sc_geo, 'color': sc_c1}, 'colors': {'default_color': 'Grey'}}
axis = ColorAxis(scale=sc_c1)
states_map = Map(map_data=topo_load('map_data/USStatesMap.json'), tooltip=ts_fig, **map_styles)
map_fig = Figure(marks=[states_map], axes=[axis],title='General Election Polls - State Wise')
def hover_callback(name, value):
polls = get_state_data(value['data']['id'])
if polls is None or polls.shape[0] == 0:
time_series.y = [0.]
return
time_series.x, time_series.y = polls['Date'].values.astype(np.datetime64), [polls['Trump'].values, polls['Clinton'].values]
ts_fig.title = str(codes[codes['FIPS']==value['data']['id']]['Name'].values[0]) + ' Polls - Presidential Election'
states_map.on_hover(hover_callback)
national = data[(data['State']=='US') & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
dt_x = DateScale()
sc_y = LinearScale()
clinton_scatter = Scatter(x=national['Date'].values.astype(np.datetime64), y=national['Clinton'],
scales={'x': dt_x, 'y': sc_y},
colors=['#2aa1ec'])
trump_scatter = Scatter(x=national['Date'].values.astype(np.datetime64), y=national['Trump'],
scales={'x': dt_x, 'y': sc_y},
colors=['#E91D0E'])
ax_x = Axis(scale=dt_x, label='Date', tick_format='%b-%Y', num_ticks=8)
ax_y = Axis(scale=sc_y, orientation='vertical', label='Percentage')
scat_fig = Figure(marks=[clinton_scatter, trump_scatter], axes=[ax_x, ax_y], title='General Election - National Polls')
"""
Explanation: Visualizing the 2016 General Election Polls
End of explanation
"""
VBox([map_fig, scat_fig])
"""
Explanation: Hover on the map to visualize the poll data for that state.
End of explanation
"""
county_data = pd.read_csv(os.path.abspath('../data_files/2008-election-results.csv'))
winner = np.array(['McCain'] * county_data.shape[0])
winner[(county_data['Obama'] > county_data['McCain']).values] = 'Obama'
sc_geo_county = AlbersUSA()
sc_c1_county = OrdinalColorScale(domain=['McCain', 'Obama'], colors=['Red', 'DeepSkyBlue'])
color_data_county = dict(zip(county_data['FIPS'].values.astype(int), list(winner)))
map_styles_county = {'color': color_data_county,
'scales': {'projection': sc_geo_county, 'color': sc_c1_county}, 'colors': {'default_color': 'Grey'}}
axis_county = ColorAxis(scale=sc_c1_county)
county_map = Map(map_data=topo_load('map_data/USCountiesMap.json'), **map_styles_county)
county_fig = Figure(marks=[county_map], axes=[axis_county],title='US Elections 2008 - Example',
layout=Layout(min_width='800px', min_height='550px'))
names_sc = OrdinalScale(domain=['Obama', 'McCain'])
vote_sc_y = LinearScale(min=0, max=100.)
names_ax = Axis(scale=names_sc, label='Candidate')
vote_ax = Axis(scale=vote_sc_y, orientation='vertical', label='Percentage')
vote_bars = Bars(scales={'x': names_sc, 'y': vote_sc_y}, colors=['#2aa1ec', '#E91D0E'])
bar_fig = Figure(marks=[vote_bars], axes=[names_ax, vote_ax], title='Vote Margin',
layout=Layout(min_width='600px', min_height='400px'))
def county_hover(name, value):
if (county_data['FIPS'] == value['data']['id']).sum() == 0:
bar_fig.title = ''
vote_bars.y = [0., 0.]
return
votes = county_data[county_data['FIPS'] == value['data']['id']]
dem_vote = float(votes['Obama %'].values[0])
rep_vote = float(votes['McCain %'].values[0])
vote_bars.x, vote_bars.y = ['Obama', 'McCain'], [dem_vote, rep_vote]
bar_fig.title = 'Vote % - ' + value['data']['name']
county_map.on_hover(county_hover)
county_map.tooltip = bar_fig
"""
Explanation: Visualizing the County Results of the 2008 Elections
End of explanation
"""
county_fig
"""
Explanation: Hover on the map to visualize the voting percentage for each candidate in that county
End of explanation
"""
|
metpy/MetPy | v0.8/_downloads/Skew-T_Layout.ipynb | bsd-3-clause | import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, Hodograph, SkewT
from metpy.units import units
"""
Explanation: Skew-T with Complex Layout
Combine a Skew-T and a hodograph using Matplotlib's GridSpec layout capability.
End of explanation
"""
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']
df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),
skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)
df['u_wind'], df['v_wind'] = mpcalc.get_wind_components(df['speed'],
np.deg2rad(df['direction']))
# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed',
'u_wind', 'v_wind'), how='all').reset_index(drop=True)
"""
Explanation: Upper air data can be obtained using the siphon package, but for this example we will use
some of MetPy's sample data.
End of explanation
"""
p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.get_wind_components(wind_speed, wind_dir)
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(9, 9))
add_metpy_logo(fig, 630, 80, size='large')
# Grid for plots
gs = gridspec.GridSpec(3, 3)
skew = SkewT(fig, rotation=45, subplot=gs[:, :2])
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Good bounds for aspect ratio
skew.ax.set_xlim(-30, 40)
# Create a hodograph
ax = fig.add_subplot(gs[0, -1])
h = Hodograph(ax, component_range=60.)
h.add_grid(increment=20)
h.plot(u, v)
# Show the plot
plt.show()
"""
Explanation: We will pull the data out of the example dataset into individual variables and
assign units.
End of explanation
"""
|
4DGenome/Chromosomal-Conformation-Course | Notebooks/A5-Modeling_-_analysis_of_3D_models.ipynb | gpl-3.0 | from pytadbit import load_structuralmodels
models_t0 = load_structuralmodels('T0.models')
model= models_t0[0]
"""
Explanation: Descriptive statistics on a single model
End of explanation
"""
print(model.radius_of_gyration())
"""
Explanation: Calculate the radius of gyration of a model (median distance of all particles from the center of mass of the model):
End of explanation
"""
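The definition quoted above can be sketched directly from an N x 3 array of particle coordinates (a sketch assuming equal particle masses; TADbit's own implementation may differ in detail):

```python
import numpy as np

def radius_of_gyration(xyz):
    # median distance of all particles from the model's center of mass
    com = xyz.mean(axis=0)
    distances = np.linalg.norm(xyz - com, axis=1)
    return np.median(distances)
```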
print(model.contour())
"""
Explanation: Calculate the length of a model, the sum of all distances between consecutive particles:
End of explanation
"""
print(model.longest_axe())
print(model.shortest_axe())
"""
Explanation: Calculate the minimum rectangular space in which to fit the model:
End of explanation
"""
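Conceptually, these two numbers are the extreme edge lengths of a box bounding the model; an axis-aligned sketch from raw coordinates (TADbit may use a rotated, principal-axis box instead):

```python
import numpy as np

def bounding_box_axes(xyz):
    # Edge lengths of the axis-aligned bounding box of an N x 3 coordinate
    # array: per-axis extent is max minus min along each dimension.
    extents = xyz.max(axis=0) - xyz.min(axis=0)
    return extents.max(), extents.min()
```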
print(model.cube_side())
print(model.cube_volume())
"""
Explanation: The minimum cube:
End of explanation
"""
|
drericstrong/Blog | 20161220_Dice Advantage and Disadvantage.ipynb | agpl-3.0 | import numpy as np
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
#Remember that Python is zero-indexed, and the range function will return
#up to one value less than the second parameter
roll1poss = list(range(1,21))
roll2poss = list(range(1,21))
#This next line might look scary, but we are finding the maximum of (each
#first roll out of all possible first rolls) and (each second roll out of
#all possible second rolls), using a listcomp
maxRoll = [max(roll1,roll2) for roll1 in roll1poss for roll2 in roll2poss]
#Now, we want to reshape the array so that it is 20x20 instead of 1x400
maxRollReshape = np.reshape(maxRoll,[20,20])
#Finally, let's plot the results. I wrote it as a function so it can be
#re-used for Case 2, as well
def plotHeatmap(rollResults, titleLab):
f, ax = plt.subplots(figsize=(8, 6))
cmap = sns.cubehelix_palette(light=1, as_cmap=True)
sns.heatmap(rollResults, cmap=cmap, vmax=20, square=True,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax,
xticklabels=roll1poss,yticklabels=roll2poss);
ax.set_xlabel('First Roll')
ax.set_ylabel('Second Roll')
ax.set_title(titleLab)
plotHeatmap(maxRollReshape,'Case 1: Maximum of Two 20-Sided Dice');
"""
Explanation: The goal of this post is to answer the following questions related to rolling two 20-sided dice:
Case 1 (Advantage): If I roll two 20-sided dice and take the greater of the two results, how much higher is the expected value than if I had simply rolled one 20-sided die?
Case 2 (Disadvantage): If I roll two 20-sided dice and take the lesser of the two results, how much smaller is the expected value than if I had simply rolled one 20-sided die?
Visual Solution
Starting data analysis with visualization is always a good choice, as it might reveal features of the dataset which aren't immediately obvious from the problem description. To get an intuitive feel of what's happening behind the scenes, we can visualize every possible result of the two dice throws by plotting the result of the first dice roll on the x-axis and the result of the second dice roll on the y-axis.
The following code will plot Case 1, the maximum of two 20-sided dice (advantage), for all possible combinations:
End of explanation
"""
#Using the same reasoning as in the previous code
minRoll = [min(roll1,roll2) for roll1 in roll1poss for roll2 in roll2poss]
minRollReshape = np.reshape(minRoll,[20,20])
plotHeatmap(minRollReshape,'Case 2: Minimum of Two 20-Sided Dice');
"""
Explanation: Notice that the graph is much darker (high numbers) on the bottom and right edges. Even if the first roll is a 1, as long as the second roll is 20, then the maximum will be 20, so we should expect to see a heavier concentration of darker colors in this figure compared to the second case.
The same process can be repeated for Case 2, the minimum of two 20-sided dice (disadvantage), for all possible combinations:
End of explanation
"""
#Sum all the possible results, then divide by the number of elements
maxExpect = sum(maxRoll)/len(maxRoll)
minExpect = sum(minRoll)/len(minRoll)
normExpect = sum(roll1poss)/len(roll1poss)
stdExpect = np.std(roll1poss)
print("The expected value of Case 1 (Advantage) is:",maxExpect)
print("The expected value of Case 2 (Disadvantage) is:",minExpect)
print("The expected value of a normal 20-sided dice is:",normExpect)
"""
Explanation: In this figure, the opposite effect occurred- we see lighter colors on the left and upper edges, and the concentration of colors is much lighter compared to Case 1.
This problem can actually be solved very simply by adding up all the possible results and dividing by the total number of states, like so:
End of explanation
"""
#Set up a side-by-side plot with two axes
f,((ax1,ax2)) = plt.subplots(1,2,figsize=(8,6))
#Define this as a function so I don't have to repeat myself for both plots
def plotHistogram(curAx, listVal, titleLab, scaling):
curAx.axis([1,20,0,scaling])
values = np.array(listVal)
sns.distplot(values,ax=curAx, kde=False, bins=20)
props = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
textStr = '$\mu=%.3f$\n$\sigma=%.3f$' % (values.mean(), values.std())
curAx.text(0.05, 0.95, textStr, transform=curAx.transAxes, fontsize=14,
verticalalignment='top', bbox=props)
curAx.set_title('Histogram of {}'.format(titleLab))
    curAx.set_ylabel('# Occurrences')
curAx.set_xlabel('{} Value'.format(titleLab))
plotHistogram(ax1, maxRoll, 'Case 1 (Maximum)',50)
plotHistogram(ax2, minRoll, 'Case 2 (Minimum)',50)
"""
Explanation: Hence, Case 1 (Advantage) is effectively adding 3.325 to a normal 20-sided dice roll, while Case 2 (Disadvantage) is effectively subtracting 3.325.
However, there is an added benefit to rolling under advantage (Case 1) and a detriment to rolling under disadvantage (Case 2), as the standard deviation will be different for these two distributions, compared to a normal dice roll. You can think of the standard deviation as the "consistency" of the dice roll. As the standard deviation decreases, the dice will tend to roll more consistently high (Case 1) or low (Case 2).
This can be visualized by looking at the histograms of possible values for each case:
End of explanation
"""
from random import randint
#The numHist variable specifies how many times we will be rolling
#the dice. More is better, but slower.
numHist = 1000000
#Case 1- Maximum of 2 dice rolls
resultListMax = []
for ii in range(numHist):
roll1 = randint(1,20)
roll2 = randint(1,20)
resultListMax.append(max(roll1,roll2))
#Case 2- Minimium of 2 dice rolls
resultListMin = []
for ii in range(numHist):
roll1 = randint(1,20)
roll2 = randint(1,20)
resultListMin.append(min(roll1,roll2))
#Visualize the results
f,((ax1,ax2)) = plt.subplots(1,2,figsize=(8,6))
plotHistogram(ax1, resultListMax, 'Case 1 (Maximum)',150000)
plotHistogram(ax2, resultListMin, 'Case 2 (Minimum)',150000)
"""
Explanation: For comparison, the standard deviation of a normal 20-sided dice roll is 5.77, so the difference in standard deviation is significant.
Statistical Solution
Next, let's try to reason out this problem using statistics. Take a look at the histograms above. For the maximum of two 20-sided dice (Case 1), the number of occurrences for each value has a pattern:
1, 3, 5, 7, 9, 11, ... , 39
This pattern could also be described as the following, with k equal to the result:
2(k-1)+1
To find the expected value, we want to sum up all the values that the dice roll could take (k = 1 to 20) and divide by the total number of possibilities (400):
$E[K]=\frac{1}{400}\sum_{k=1}^{20} (2(k-1)+1)k=\frac{1}{400}\sum_{k=1}^{20} 2k^{2}-k=\frac{5530}{400} = 13.825$
In other words, we get exactly the same solution as before.
Similarly, the minimum of two 20-sided dice (Case 2) has the opposite pattern:
39, 37, 35, 33, 31, 29, ... , 1
This pattern could be described as:
2(20-k)+1
Again, we sum up all the values that the dice roll could take (k = 1 to 20) and divide by the total number of possibilities (400). Unsurprisingly, we get the same result as from the visual solution above, since they are equivalent:
$E[K]=\frac{1}{400}\sum_{k=1}^{20} (2(20-k)+1)k=\frac{1}{400}\sum_{k=1}^{20} -2k^{2}+41k=\frac{2870}{400} = 7.175$
This solution generalizes very well for an n-sided dice. The maximum of two n-sided dice will be:
$\frac{1}{n^{2}} \sum_{k=1}^{n}(2k^{2}-k)$
While the minimum of two n-sided dice will be:
$\frac{1}{n^{2}} \sum_{k=1}^{n}(-2k^{2}+(2n+1)k)$
Monte Carlo
The final way we will solve this problem is by sampling the distributions using a Monte Carlo method. Monte Carlo methods use repeated random simulation (sampling) of a distribution to achieve an approximate result. In layman's terms, we're going to actually roll the two dice, repeat this process one million (or more) times, and then we'll look at the results.
Monte Carlo methods are so attractive because in the simplest cases, they do not require much thought. If you know how to program, and you can describe the situation logically, you can write a Monte Carlo algorithm. However, my doctorate in nuclear engineering, for which Monte Carlo methods are very common, compels me to mention the huge caveat that Monte Carlo methods can become extremely complex in many practical situations, requiring variance reduction methods and other messy techniques.
First, let's try it for Case 1 (Advantage):
End of explanation
"""
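As a sanity check, the general n-sided formulas from the statistical solution can be evaluated numerically and compared against the expected values computed earlier:

```python
import numpy as np

def expected_max_two_dice(n):
    # (1/n^2) * sum_{k=1}^{n} (2k^2 - k)
    k = np.arange(1, n + 1)
    return np.sum(2 * k**2 - k) / n**2

def expected_min_two_dice(n):
    # (1/n^2) * sum_{k=1}^{n} (-2k^2 + (2n+1)k)
    k = np.arange(1, n + 1)
    return np.sum(-2 * k**2 + (2 * n + 1) * k) / n**2

print(expected_max_two_dice(20))  # 13.825
print(expected_min_two_dice(20))  # 7.175
```

Note that the two results sum to 21, as they must: for any pair of rolls, max + min equals the sum of the two dice, whose expectation is 2 x 10.5.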
|
ilogue/pyrsa | demos/exercise_all.ipynb | lgpl-3.0 | import numpy as np
from scipy import io
import matplotlib.pyplot as plt
import pyrsa
"""
Explanation: Getting started exercise for RSA3.0
Introduction
In these three exercises you will get an introduction to the functionality of the new pyRSA-toolbox for inferring the underlying model representation based on measured data. Generally we assume that there is a true underlying representation, which is captured by our model. The measurement process like fMRI will lead to a distorted view of that representation, which we may or may not include into our analysis as an explicit measurement model.
For illustration these exercises use simulated RDMs from the paper "Inferring brain-computational mechanisms with models of activity measurements" by Kriegeskorte & Diedrichsen (2016). Ground truth RDMs are here simulated based on the layers of Alexnet--the deep neural network model, which sparked the interest in deep learning. Simulated data rdms were generated as follows: First, voxel responses were generated by randomly selecting locations within the layer and modelling their response as a local average of the feature values. Then, noise was added to those voxel responses and RDMs were computed from these noisy responses. As model predictions to compare to, we use noise-free RDMs generated for each layer, by applying different amounts of smoothing and averaging to the layer representation.
Our overall aim in this setting is to infer which representation the data rdms were based on, i.e. which layer was used for generating the data. Towards this aim we will make three steps:
In Exercise 1, we will load the data, convert them into the formats used in the toolbox and have a first exploratory look at the data.
In Exercise 2, we will compare the RDMs based on the undistorted representations to the simulated data RDMs. This is the classical and simplest approach and already allows us to perform model comparisons and the general evaluation of model-RDMs. This approach uses fixed models, i.e. each model predicts a single fixed RDM. We will see that this does not allow us to correctly infer the underlying representation though, because the measurement process distorts the RDMs too much.
In Exercise 3, we will apply flexible models. This means that each model predicts a distribution of RDMs. In the present context this means that the model is flexible in which measurement model is applied to explain the data. To evaluate such flexible models additional cross-validation is necessary, which we also discuss in this exercise.
Exercise 1: Data and RDM handling
End of explanation
"""
matlab_data = io.matlab.loadmat('rdms_inferring/modelRDMs_A2020.mat')
matlab_data = matlab_data['modelRDMs']
n_models = len(matlab_data[0])
model_names = [matlab_data[0][i][0][0] for i in range(n_models)]
measurement_model = [matlab_data[0][i][1][0] for i in range(n_models)]
rdms_array = np.array([matlab_data[0][i][3][0] for i in range(n_models)])
"""
Explanation: Load model RDMs
Here the models are different layers of Alexnet.
For each layer, different models of how the fMRI voxels sample the neurons are being considered.
The simulated data were generated in Matlab (Kriegeskorte & Diedrichsen 2016). Thus, we load the Matlab files in .mat format.
For each model-RDM, we obtain the RDM itself, a model name, and a measurement model name. The model name specifies the layer used to generate the RDM. The measurement model name specifies the applied distortions.
End of explanation
"""
model_rdms = pyrsa.rdm.RDMs(rdms_array,
rdm_descriptors={'brain_computational_model':model_names,
'measurement_model':measurement_model},
dissimilarity_measure='Euclidean'
)
"""
Explanation: These steps are not specific to the toolbox, but to the format the RDMs were originally saved in.
To load other data, simply transform them such that they are numpy arrays of either the whole RDM or vector format of the upper triangular part of the matrix.
Store the model RDMs as a pyRSA object
We place the RDMs in a pyRSA object which can contain additional descriptors for the RDMs and the experimental conditions.
Here we label each RDM with the name of the brain-computational model (AlexNet layer) and the name of the measurement model.
End of explanation
"""
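For reference, converting a square RDM matrix to the condensed upper-triangular vector form mentioned above can be done with NumPy (row-major ordering, which matches scipy.spatial.distance.squareform):

```python
import numpy as np

def rdm_to_vector(rdm):
    # Off-diagonal upper triangle of a square (symmetric) RDM, row-major
    rows, cols = np.triu_indices(rdm.shape[0], k=1)
    return rdm[rows, cols]
```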
conv1_rdms = model_rdms.subset('brain_computational_model','conv1')
plt.figure(figsize=(10,10))
pyrsa.vis.show_rdm(conv1_rdms, do_rank_transform=True, rdm_descriptor='measurement_model')
"""
Explanation: The variable model_rdms is now a custom object, which contains all the RDMs from the .mat file with the additional information.
It also has a few methods for forming subsets of the data, saving and loading, etc.
Show the RDMs from AlexNet layer conv1
As a simple example, select the RDMs that correspond to the first convolutional layer. These can then be plotted using the function pyrsa.vis.show_rdm.
End of explanation
"""
conv1_rdms = model_rdms.subset('brain_computational_model','conv1')
print(conv1_rdms)
"""
Explanation: These are the RDMs which were generated from convolutional layer 1 by different measurement models. Each RDM is labeled with the name of the measurement model. Also in the lower right corner the average RDM is plotted.
Print information about a set of RDMs
The pyRSA objects can simply be passed to the print function to obtain a short description of their content.
End of explanation
"""
matlab_data = io.matlab.loadmat('rdms_inferring/noisyModelRDMs_A2020.mat')
repr_names_matlab = matlab_data['reprNames']
fwhms_matlab = matlab_data['FWHMs']
noise_std_matlab = matlab_data['relNoiseStds']
rdms_matlab = matlab_data['noisyModelRDMs']
repr_names = [repr_names_matlab[i][0][0] for i in range(repr_names_matlab.shape[0])]
fwhms = fwhms_matlab.squeeze().astype('float')
noise_std = noise_std_matlab.squeeze().astype('float')
rdms_matrix = rdms_matlab.squeeze().astype('float')
"""
Explanation: Questions
Of course, you can also show all RDMs or select any other subset. Have a look at the different RDMs!
How many RDMs are there for each layer?
Generate a plot which shows all RDMs with the 'complete' measurement model.
How different do the different measurement models look to you and how different do the different layers look?
Exercise 2: Fixed model inference
Load data RDMs
Here we use simulated data to demonstrate RSA inference.
Since we know the true data-generating model in each case, we can tell when inference fails or succeeds.
For each data RDM, we obtain the name of the underlying layer, a full width at half maximum (FWHM) value and a noise standard deviation. The FWHM value specifies the spatial range the simulated voxels average over. The noise standard deviation specifies how much noise was added to the voxel responses.
End of explanation
"""
# indices choosing brain-computational model, noise level, and the size of the kernel with which each voxel samples the neural activity
i_rep = 2 #np.random.randint(len(repr_names))
i_noise = 1 #np.random.randint(len(noise_std))
i_fwhm = 0 #np.random.randint(len(fwhms))
# print the chosen representation definition
repr_name = repr_names[i_rep]
print('The chosen ground truth model is:')
print(repr_name)
print('with noise level:')
print(noise_std[i_noise])
print('with averaging width (full width at half magnitude):')
print(fwhms[i_fwhm])
# put the rdms into an RDMs object and show it
rdms_data = pyrsa.rdm.RDMs(rdms_matrix[:, i_rep, i_fwhm, i_noise, :].transpose())
plt.figure(figsize=(10,10))
pyrsa.vis.show_rdm(rdms_data, do_rank_transform=True)
"""
Explanation: Choose the data RDMs for inference
Here we choose which data RDMs we use for the exercise. You can change the representation, the noise level and the amount of averaging by changing the index values at the beginning.
We then convert the chosen data RDMs into a pyRSA RDMs object and display them as we did for the model RDMs.
End of explanation
"""
models = []
for i_model in np.unique(model_names):
rdm_m = model_rdms.subset('brain_computational_model', i_model).subset('measurement_model','complete')
m = pyrsa.model.ModelFixed(i_model, rdm_m)
models.append(m)
print('created the following models:')
for i in range(len(models)):
print(models[i].name)
"""
Explanation: Define fixed models
An "RDM model" is a pyRSA object that can predict a data RDM.
For example, a flexible RDM model may contain a set of predictor RDMs, which predict the data RDM as a weighted combination.
Here we use fixed RDM models, which contain just a single RDM with no parameters to be fitted.
Models are generated by first choosing the RDM, in this case the one with the right "brain_computational_model" and the "measurement_model" "complete", which corresponds to no distortions added. This object is then passed to the function pyrsa.model.ModelFixed, which generates a fixed RDM model. These RDM models are then collected in the list models.
End of explanation
"""
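Under the hood, the 'corr' method used below amounts to averaging Pearson correlations between the model RDM and each data RDM in vector form; a minimal sketch (not pyrsa's exact code path):

```python
import numpy as np

def mean_rdm_correlation(model_vec, data_vecs):
    # Average Pearson correlation between one fixed model RDM (vectorized)
    # and each of the data RDMs
    return np.mean([np.corrcoef(model_vec, d)[0, 1] for d in data_vecs])
```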
results_1 = pyrsa.inference.eval_fixed(models, rdms_data, method='corr')
pyrsa.vis.plot_model_comparison(results_1)
#results_1 = pyrsa.inference.eval_fixed(models, rdms_data, method='spearman')
#pyrsa.vis.plot_model_comparison(results_1)
#results_1 = pyrsa.inference.eval_fixed(models, rdms_data, method='tau-a')
#pyrsa.vis.plot_model_comparison(results_1)
#results_1 = pyrsa.inference.eval_fixed(models, rdms_data, method='rho-a')
#pyrsa.vis.plot_model_comparison(results_1)
"""
Explanation: Compare model RDMs to measured RDMs
Evaluate models naively, i.e. simply compute the average correlation to the data RDMs.
End of explanation
"""
results_2a = pyrsa.inference.eval_bootstrap_rdm(models, rdms_data, method='corr')
pyrsa.vis.plot_model_comparison(results_2a)
"""
Explanation: In these plots the models do not have errorbars as we did not run any estimate of the variability.
The upper bound of the noise ceiling is computed by finding the RDM with the highest possible average similarity to the measured RDMs. This is not 1 because the RDMs for different subjects or measurements differ. The lower bound of the noise ceiling is a leave-one-out crossvalidation of this averaging procedure, i.e. we find the RDM that performs optimally on all but one of the RDMs and evaluate this average RDM on the left-out RDM. Each RDM is left out once and the correlations are averaged.
Bootstrapping
To perform statistical comparisons and estimate how uncertain we should be about the models' performance, we can perform bootstrapping:
In each plot the errorbars correspond to +/- one SEM based on the bootstrap samples.
The lines above the plot show which pairwise comparisons are significant.
Model comparison by bootstrapping the subjects
We can bootstrap resample the subjects, which estimates how variable the model performances would be if we repeted the experiment with the same stimuli but new subjects from the same population. Based on that uncertainty estimate, we can statistically compare model performances. We would like to take the many pairwise model comparisons into account in performing inference. We have a choice: We can either control the family wise error rate (FWER) or the false discovery rate (FDR). Here we use a Bonferroni correction for FWER and the Benjamini-Hochberg procedure for FDR.
End of explanation
"""
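A conceptual sketch of the two noise-ceiling bounds described above, using Pearson correlation on vectorized RDMs (pyrsa's own computation may differ in detail):

```python
import numpy as np

def noise_ceiling(rdm_vecs):
    # rdm_vecs: (n_subjects, n_pairs) array of vectorized RDMs
    grand_mean = rdm_vecs.mean(axis=0)
    # upper bound: each RDM vs. the mean of ALL RDMs (including itself)
    upper = np.mean([np.corrcoef(r, grand_mean)[0, 1] for r in rdm_vecs])
    # lower bound: leave-one-out -- each RDM vs. the mean of the OTHERS
    lower = np.mean([
        np.corrcoef(rdm_vecs[i],
                    np.delete(rdm_vecs, i, axis=0).mean(axis=0))[0, 1]
        for i in range(len(rdm_vecs))
    ])
    return lower, upper
```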
results_2b = pyrsa.inference.eval_bootstrap_pattern(models, rdms_data, method='corr')
pyrsa.vis.plot_model_comparison(results_2b)
"""
Explanation: Model comparison by bootstrapping the stimuli
We can alternatively bootstrap resample the stimuli to estimate how much model performance would vary if we repeated the experiment with the same subjects using a new sample of stimuli from the same population.
End of explanation
"""
results_2c = pyrsa.inference.eval_bootstrap(models, rdms_data, method='corr')
pyrsa.vis.plot_model_comparison(results_2c)
"""
Explanation: Model comparison by bootstrapping both stimuli and subjects
Finally, we can bootstrap resample both stimuli and subjects to estimate how variable the model performances would be if we repeated the experiment with new subjects and new stimuli from their respective populations:
End of explanation
"""
models_flex = []
for i_model in np.unique(model_names):
models_flex.append(pyrsa.model.ModelSelect(i_model,
model_rdms.subset('brain_computational_model', i_model)))
print('created the following models:')
for i in range(len(models_flex)):
print(models_flex[i].name)
"""
Explanation: Questions
Does the right model win? And do the mean estimates from bootstrapping differ from the evaluations over the whole dataset?
Compare the results for the different bootstrapping methods. Which method leads to the widest confidence intervals, which one to the smallest?
Exercise 3: Crossvalidation for flexible models
Defining flexible models
Here we use a type of flexible model called a selection model. This type of model specifies that the true RDM is one from a list of RDMs. To evaluate flexible models, they have to be fitted to data, i.e. we need to provide some data, which can be used to adjust the RDM-prediction of the model. For a selection model, the fitting process simply selects the RDM that performs best on the training data. The model will perform better on this data than on independent data. An unbiased performance estimate therefore requires independent test data. Crossvalidation is a data-efficient way of obtaining an unbiased performance estimate.
We first have to generate the selection models. This process is the same as for fixed models, but uses pyrsa.model.ModelSelect and passes multiple RDMs instead of a single one:
End of explanation
"""
train_set, test_set, ceil_set = pyrsa.inference.sets_k_fold(rdms_data, k_pattern=3, k_rdm=2)
"""
Explanation: Crossvalidation
As a first step, we split our data into training and test sets, which should not share either subjects or stimuli. To do so, we split each dimension into k groups and leave one of these groups out as a testset and use all others as training data. Models choose their parameters to maximize performance on the training set and are evaluated on the test set. Additionally a so-called ceil set is created, which contains the data from the training subjects for the test stimuli, which is necessary for calculating a noise ceiling.
The variables k_pattern and k_rdm specify how many folds should be formed over stimuli and subjects, respectively.
End of explanation
"""
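The grouping performed along each dimension by sets_k_fold can be sketched as a simple shuffled k-way split (a hypothetical helper, not pyrsa's implementation):

```python
import numpy as np

def k_fold_groups(n_items, k, seed=0):
    # Shuffle item indices, then split them into k roughly equal groups;
    # each group serves once as the held-out test set, the rest as training.
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_items), k)
```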
results_3_cv = pyrsa.inference.crossval(models_flex, rdms_data, train_set, test_set,
ceil_set=ceil_set, method='corr')
# plot results
pyrsa.vis.plot_model_comparison(results_3_cv)
"""
Explanation: With these sets we can now evaluate our models, as we did without crossvalidation, and plot the results. The performance estimates will be averaged across folds and we obtain a single performance estimate without errorbars. The variability over cross-validation folds is not indicative of the variability across independent datasets. Although training and test data are independent of each other in each fold, performance estimates are not independent across folds of crossvalidation.
End of explanation
"""
results_3_full = pyrsa.inference.bootstrap_crossval(models_flex, rdms_data, k_pattern=4, k_rdm=2, method='corr', N=100)
# plot results
pyrsa.vis.plot_model_comparison(results_3_full)
"""
Explanation: Bootstrapped Crossvalidation
We can perform bootstrapping around the crossvalidation to get uncertainty estimates for the evaluation and for model comparison.
End of explanation
"""
#[your code here]
"""
Explanation: Questions
Does the right model win?
Try some different settings for the crossvalidation: How do the results change when you make the training and test sets larger or smaller?
End of explanation
"""
|
rmenegaux/bqplot | examples/Mark Interactions.ipynb | apache-2.0 | x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(20)
y_data = np.random.randn(20)
scatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, default_colors=['dodgerblue'],
interactions={'click': 'select'},
selected_style={'opacity': 1.0, 'fill': 'DarkOrange', 'stroke': 'Red'},
unselected_style={'opacity': 0.5})
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
fig = Figure(marks=[scatter_chart], axes=[ax_x, ax_y])
display(fig)
scatter_chart.selected
"""
Explanation: Scatter Chart
Scatter Chart Selections
Click a point on the Scatter plot to select it. Now, run the cell below to check the selection. After you've done this, try holding the ctrl (or command key on Mac) and clicking another point. Clicking the background will reset the selection.
End of explanation
"""
scatter_chart.selected = [1, 2, 3]
"""
Explanation: Alternately, the selected attribute can be directly set on the Python side (try running the cell below):
End of explanation
"""
from ipywidgets import *
x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(20)
y_data = np.random.randn(20)
dd = Dropdown(options=['First', 'Second', 'Third', 'Fourth'])
scatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, default_colors=['dodgerblue'],
names=np.arange(100, 200), names_unique=False, display_names=False, display_legend=True,
labels=['Blue'])
ins = Button(icon='fa-legal')
scatter_chart.tooltip = ins
scatter_chart2 = Scatter(x=x_data, y=np.random.randn(20),
scales= {'x': x_sc, 'y': y_sc}, default_colors=['orangered'],
tooltip=dd, names=np.arange(100, 200), names_unique=False, display_names=False,
display_legend=True, labels=['Red'])
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
fig = Figure(marks=[scatter_chart, scatter_chart2], axes=[ax_x, ax_y])
display(fig)
def print_event(self, target):
print(target)
# Adding callbacks to scatter events
# print a custom message on hover and on background click of the blue scatter
scatter_chart.on_hover(print_event)
scatter_chart.on_background_click(print_event)
# print a custom message on click of an element or legend of the red scatter
scatter_chart2.on_element_click(print_event)
scatter_chart2.on_legend_click(print_event)
# Adding figure as tooltip
x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(10)
y_data = np.random.randn(10)
lc = Lines(x=x_data, y=y_data, scales={'x': x_sc, 'y':y_sc})
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
tooltip_fig = Figure(marks=[lc], axes=[ax_x, ax_y], min_height=400, min_width=400)
scatter_chart.tooltip = tooltip_fig
# Changing interaction from hover to click for tooltip
scatter_chart.interactions = {'click': 'tooltip'}
"""
Explanation: Scatter Chart Interactions and Tooltips
End of explanation
"""
# Adding default tooltip to Line Chart
x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(100)
y_data = np.random.randn(3, 100)
def_tt = Tooltip(fields=['name', 'index'], formats=['', '.2f'], labels=['id', 'line_num'])
line_chart = Lines(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc},
tooltip=def_tt, display_legend=True, labels=["line 1", "line 2", "line 3"] )
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
fig = Figure(marks=[line_chart], axes=[ax_x, ax_y])
display(fig)
# Adding a callback to print the event when the legend or the line is clicked
line_chart.on_legend_click(print_event)
line_chart.on_element_click(print_event)
"""
Explanation: Line Chart
End of explanation
"""
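The on_hover / on_element_click / on_legend_click registrations used above follow a plain observer pattern; a minimal pure-Python sketch of the idea (a hypothetical class, not bqplot's internals):

```python
class EventDispatcher:
    """Registers callbacks per event name and invokes them with (mark, target)."""

    def __init__(self):
        self._callbacks = {}

    def on(self, event, callback):
        # append so multiple callbacks can listen to the same event
        self._callbacks.setdefault(event, []).append(callback)

    def fire(self, event, mark, target):
        for cb in self._callbacks.get(event, []):
            cb(mark, target)

dispatcher = EventDispatcher()
dispatcher.on('element_click', lambda mark, target: print('clicked:', target))
dispatcher.fire('element_click', None, {'index': 3})  # prints "clicked: {'index': 3}"
```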
# Adding interaction to select bar on click for Bar Chart
x_sc = OrdinalScale()
y_sc = LinearScale()
x_data = np.arange(10)
y_data = np.random.randn(2, 10)
bar_chart = Bars(x=x_data, y=[y_data[0, :].tolist(), y_data[1, :].tolist()], scales= {'x': x_sc, 'y': y_sc},
interactions={'click': 'select'},
selected_style={'stroke': 'orange', 'fill': 'red'},
labels=['Level 1', 'Level 2'],
display_legend=True)
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
fig = Figure(marks=[bar_chart], axes=[ax_x, ax_y])
display(fig)
# Adding a tooltip on hover in addition to select on click
def_tt = Tooltip(fields=['x', 'y'], formats=['', '.2f'])
bar_chart.tooltip=def_tt
bar_chart.interactions = {
'legend_hover': 'highlight_axes',
'hover': 'tooltip',
'click': 'select',
}
# Changing tooltip to be on click
bar_chart.interactions = {'click': 'tooltip'}
# Call back on legend being clicked
bar_chart.type='grouped'
bar_chart.on_legend_click(print_event)
"""
Explanation: Bar Chart
End of explanation
"""
# Adding tooltip for Histogram
x_sc = LinearScale()
y_sc = LinearScale()
sample_data = np.random.randn(100)
def_tt = Tooltip(formats=['', '.2f'], fields=['count', 'midpoint'])
hist = Hist(sample=sample_data, scales= {'sample': x_sc, 'count': y_sc},
tooltip=def_tt, display_legend=True, labels=['Test Hist'], select_bars=True)
ax_x = Axis(scale=x_sc, tick_format='0.2f')
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
fig = Figure(marks=[hist], axes=[ax_x, ax_y])
display(fig)
# Changing tooltip to be displayed on click
hist.interactions = {'click': 'tooltip'}
# Changing tooltip to be on click of legend
hist.interactions = {'legend_click': 'tooltip'}
"""
Explanation: Histogram
End of explanation
"""
pie_data = np.abs(np.random.randn(10))
sc = ColorScale(scheme='Reds')
tooltip_widget = Tooltip(fields=['size', 'index', 'color'], formats=['0.2f', '', '0.2f'])
pie = Pie(sizes=pie_data, scales={'color': sc}, color=np.random.randn(10),
tooltip=tooltip_widget, interactions = {'click': 'tooltip'}, selected_style={'fill': 'red'})
pie.selected_style = {"opacity": "1", "stroke": "white", "stroke-width": "2"}
pie.unselected_style = {"opacity": "0.2"}
Figure(marks=[pie])
# Changing interaction to select on click and tooltip on hover
pie.interactions = {'click': 'select', 'hover': 'tooltip'}
"""
Explanation: Pie Chart
End of explanation
"""
|
abevieiramota/data-science-cookbook | 2017/07-decision-tree/decision_tree.ipynb | mit | import os
import pandas as pd
import math
import numpy as np
from sklearn.tree import DecisionTreeClassifier
headers = ["buying", "maint", "doors", "persons","lug_boot", "safety", "class"]
data = pd.read_csv("car_data.csv", header=None, names=headers)
data = data.sample(frac=1).reset_index(drop=True) # shuffle
"""
Explanation: Decision Tree
You can download the dataset at https://archive.ics.uci.edu/ml/datasets/Car+Evaluation.
End of explanation
"""
data.head()
data.dtypes
"""
Explanation: In the code above, we read the file, telling pandas that there is no header (this is required), and shuffled the data.
Column 6 (0-6) represents the class.
End of explanation
"""
for h in headers:
data[h] = data[h].astype('category')
data[h] = data[h].cat.codes
data.set_index("class", inplace=True)
data.head()
"""
Explanation: One problem is that our categorical attributes are strings, and scikit-learn's Decision Tree implementation only accepts numeric attributes. We need to convert the attributes.
Pandas has a categorical data type ("category") that simplifies this conversion.
End of explanation
"""
size = len(data)
train_size = int(math.floor(size * 0.7))
train_data = data[:train_size]
test_data = data[train_size:]
"""
Explanation: We will split the data into training and test sets
End of explanation
"""
d_tree = DecisionTreeClassifier(criterion="gini")
d_tree.fit(train_data, train_data.index)
d_tree.predict(test_data.iloc[:, 0:6])
d_tree.score(test_data, test_data.index)
# draw the tree
import graphviz
from sklearn import tree
dot_data = tree.export_graphviz(d_tree, out_file=None, feature_names=["buying", "maint", "doors", "persons","lug_boot", "safety", "class"])
graph = graphviz.Source(dot_data)
graph.render("car_dataset")
"""
Explanation: Data preparation done!
Now let's get to the interesting part...
End of explanation
"""
|
badlands-model/BayesLands | Examples/mountain/Hydrometrics.ipynb | gpl-3.0 | %matplotlib inline
from matplotlib import cm
# Import badlands grid generation toolbox
import pybadlands_companion.hydroGrid as hydr
# display plots in SVG format
%config InlineBackend.figure_format = 'svg'
"""
Explanation: Hydrometrics
In this notebook, we show how to compute several hydrometric parameters based on the stream network produced by the model. The analysis relies on the flow files (i.e. stream) found in Badlands outputs. If you are interested in looking at morphometrics and stratigraphic analysis, there are other notebooks specially designed for that in the Badlands companion repository.
Hydrometrics here refers only to the quantitative description and analysis of the water surface; we don't consider groundwater analysis. We will show how you can extract a particular catchment from a given model and compute, for this particular catchment, a series of parameters such as:
river profile evolution based on main stream elevation and distance to outlet,
Peclet number distribution, which evaluates the dominant processes shaping the landscape,
$\chi$ parameter, which characterizes river system evolution based on terrain steepness and the arrangement of tributaries,
discharge profiles
End of explanation
"""
#help(hydr.hydroGrid.__init__)
hydro1 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [40599,7656.65])
hydro2 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [33627.6,30672.9])
"""
Explanation: 1. Load catchments parameters
We first have to define the path to the Badlands outputs we want to analyse. In addition Badlands is creating several files for each processors that have been used, you need to specify this number in the ncpus variable.
We then need to provide a point coordinates (X,Y) contained in the catchment of interest. This point doesn't need to be the outlet of the catchment.
For more information regarding the function uncomment the following line.
End of explanation
"""
#help(hydro.getCatchment)
hydro1.getCatchment(timestep=200)
hydro2.getCatchment(timestep=200)
"""
Explanation: 2. Extract particular catchment dataset
We now extract the data for a particular time step (timestep) and for the catchment of interest, which contains the point specified in the previous function.
Note
If you are interested in making some hydrometric comparisons between different time steps you can create multiple instances of the hydrometrics python class each of them associated to a given time step.
End of explanation
"""
#help(hydro.viewNetwork)
hydro1.viewNetwork(markerPlot = False, linePlot = True, lineWidth = 2, markerSize = 15,
val = 'chi', width = 300, height = 500, colorMap = cm.viridis,
colorScale = 'Viridis', reverse = False,
title = '<br>Stream network graph 1')
hydro2.viewNetwork(markerPlot = False, linePlot = True, lineWidth = 2, markerSize = 15,
val = 'chi', width = 300, height = 500, colorMap = cm.viridis,
colorScale = 'Viridis', reverse = False,
title = '<br>Stream network graph 2')
hydro1.viewNetwork(markerPlot = True, linePlot = True, lineWidth = 3, markerSize = 3,
val = 'FA', width = 300, height = 500, colorMap = cm.Blues,
colorScale = 'Blues', reverse = True,
title = '<br>Stream network graph 1')
hydro2.viewNetwork(markerPlot = True, linePlot = True, lineWidth = 3, markerSize = 3,
val = 'FA', width = 300, height = 500, colorMap = cm.Blues,
colorScale = 'Blues', reverse = True,
title = '<br>Stream network graph 2')
"""
Explanation: We can visualise the stream network using the viewNetwork function. The following parameters can be displayed:
- $\chi$ parameter 'chi',
- elevation 'Z',
- discharge 'FA' (logarithmic values)
End of explanation
"""
#help(hydro.extractMainstream)
hydro1.extractMainstream()
hydro2.extractMainstream()
"""
Explanation: 3. Extract catchment main stream
We now extract the main stream for the considered catchment based on flow discharge values.
End of explanation
"""
#help(hydro.viewStream)
hydro1.viewStream(linePlot = False, lineWidth = 1, markerSize = 7,
val = 'Z', width = 300, height = 500, colorMap = cm.jet,
colorScale = 'Jet', reverse = False,
title = '<br>Stream network graph 1')
hydro2.viewStream(linePlot = True, lineWidth = 1, markerSize = 7,
val = 'Z', width = 300, height = 500, colorMap = cm.jet,
colorScale = 'Jet', reverse = False,
title = '<br>Stream network graph 2')
"""
Explanation: As for the global stream network, you can use the viewStream function to visualise the main stream dataset.
End of explanation
"""
hydro1.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=100)
hydro2.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=100)
"""
Explanation: 4. Compute main stream hydrometrics
Here, we compute the stream parameters using the distance from outlet and the Badlands simulation coefficients for the stream power law and the hillslope linear diffusion.
The formulation for the Peclet number is:
$$Pe =\frac {\kappa_{c}l^{2(m+1)-n}}{\kappa_{d}z^{1-n}}$$
where $\kappa_{c}$ is the erodibility coefficient, $\kappa_{d}$ the hillslope diffusion coefficient and m, n the exponents from the stream power law equation. Their values are defined in your model input file.
The formulation for the $\chi$ parameter follows:
$$\chi = \int_{x_b}^x \left( \frac{A_o}{A(x')} \right)^{m/n} dx' $$
where $A_o$ is an arbitrary scaling area, and the integration is performed upstream from base level to location $x$.
In addition, the function computeParams requires an extra parameter num, which is the number of samples to generate along the main stream profile for linear interpolation.
End of explanation
"""
#help(hydro1.viewPlot)
hydro1.viewPlot(lineWidth = 3, markerSize = 5, xval = 'dist', yval = 'Z',
width = 800, height = 500, colorLine = 'black', colorMarker = 'black',
opacity = 0.2, title = 'Chi vs distance to outlet')
hydro2.viewPlot(lineWidth = 3, markerSize = 5, xval = 'dist', yval = 'Z',
width = 800, height = 500, colorLine = 'orange', colorMarker = 'purple',
opacity = 0.2, title = 'Chi vs distance to outlet')
"""
Explanation: The following combination of parameters can be visualised with the viewPlot function:
- 'dist': distance from catchment outlet
- 'FA': flow discharge (logarithmic)
- 'Pe': Peclet number
- 'Chi': $\chi$ parameter
- 'Z': elevation from outlet.
End of explanation
"""
#help(hydro.timeProfiles)
hydro0 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [40599,7656.65])
timeStp = [20,40,60,80,100,120,140,160,180,200]
timeMA = map(lambda x: x * 0.25, timeStp)
print 'Profile time in Ma:',timeMA
dist = []
elev = []
for t in range(len(timeStp)):
hydro0.getCatchment(timestep=timeStp[t])
hydro0.extractMainstream()
hydro0.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=1000)
dist.append(hydro0.dist)
elev.append(hydro0.Zdata)
hydro0.timeProfiles(pData = elev, pDist = dist, width = 1000, height = 600, linesize = 3,
title = 'River profile through time')
hydro00 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [33627.6,30672.9])
timeStp = [20,40,60,80,100,120,140,160,180,200]
timeMA = map(lambda x: x * 0.25, timeStp)
print 'Profile time in Ma:',timeMA
dist = []
elev = []
for t in range(len(timeStp)):
hydro00.getCatchment(timestep=timeStp[t])
hydro00.extractMainstream()
hydro00.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=50)
dist.append(hydro00.dist)
elev.append(hydro00.Zdata)
hydro00.timeProfiles(pData = elev, pDist = dist, width = 1000, height = 600, linesize = 3,
title = 'River profile through time')
"""
Explanation: 5. River profile through time
Using the same functions as before we can now create the river profile evolution through time and plot it on a single graph.
End of explanation
"""
|
arne-cl/alt-mulig | python/pocores-vs-markus-conll-scoring.ipynb | gpl-3.0 | import sys
import glob
import os

import sh  # the `sh` package wraps external commands such as scorer.pl

# MMAXDocumentGraph, write_conll, MMAX_DIR and SCORER_PATH are assumed to be
# defined/imported in earlier cells (via the discoursegraphs package).
def has_valid_annotation(mmax_file, scorer_path, metric, verbose=False):
"""
Parameters
----------
metric : str
muc, bcub, ceafm, ceafe, blanc
verbose : bool or str
True, False or 'very'
"""
scorer = sh.Command(scorer_path)
mdg = MMAXDocumentGraph(mmax_file)
conll_fname = '/tmp/{}.conll'.format(os.path.basename(mmax_file))
write_conll(mdg, conll_fname)
try:
results = scorer(metric, conll_fname, conll_fname)
scores_str = results.stdout.splitlines()[-2]
if not scores_str.endswith('100%'):
if verbose == 'very':
sys.stderr.write("{}\n{}\n".format(conll_fname, results))
elif verbose:
sys.stderr.write("{}\n{}\n".format(conll_fname, scores_str))
return False
except sh.ErrorReturnCode as e:
if verbose:
sys.stderr.write("Error in '{}'\n{}".format(conll_fname, e))
return False
return True
def get_bad_scoring_files(mmax_dir, scorer_path, metric, verbose=False):
"""
returns filepaths of MMAX2 coreference files which don't produce perfect
results when testing them against themselves with scorer.pl
"""
bad_files = []
for mmax_file in glob.glob(os.path.join(mmax_dir, '*.mmax')):
if not has_valid_annotation(mmax_file, scorer_path, metric, verbose=verbose):
bad_files.append(mmax_file)
return bad_files
blanc_errors = get_bad_scoring_files(MMAX_DIR, SCORER_PATH, 'blanc', verbose=True)
"""
Explanation: Compare CoNLL files against themselves
We're now using the official CoNLL scorer to compare each
MAZ176 coreference annotated document against itself.
All comparisons should result in an F1 of 100%
End of explanation
"""
bad_scoring_files = {}
for metric in ('muc', 'bcub', 'ceafm', 'ceafe', 'blanc'):
bad_scoring_files[metric] = get_bad_scoring_files(MMAX_DIR, SCORER_PATH, metric, verbose=False)
for metric in bad_scoring_files:
print "number of erroneous files found by '{}': {}".format(metric, len(bad_scoring_files[metric]))
all_bad_files = set()
for metric in bad_scoring_files:
all_bad_files.update(bad_scoring_files[metric])
print "total number of erroneous files:", len(all_bad_files)
from discoursegraphs import get_pointing_chains
from discoursegraphs.readwrite.mmax2 import spanstring2text
def print_all_chains(docgraph):
"""
print a list of all pointing chains (i.e coreference chains)
contained in a document graph
"""
for chain in get_pointing_chains(docgraph):
for node_id in chain:
print node_id, spanstring2text(docgraph, docgraph.node[node_id][docgraph.ns+':span'])
print '\n'
mdg = MMAXDocumentGraph(os.path.join(MMAX_DIR, 'maz-3377.mmax'))
print_all_chains(mdg)
"""
Explanation: Do all metrics choke on the same files?
End of explanation
"""
from itertools import combinations
def get_ambiguous_markables(mmax_docgraph):
"""returns a list of markables that occur in more than one coreference chain"""
ambiguous_markables = []
chain_sets = (set(chain) for chain in get_pointing_chains(mmax_docgraph))
for chain1, chain2 in combinations(chain_sets, 2):
chain_intersect = chain1.intersection(chain2)
if chain_intersect:
ambiguous_markables.extend(chain_intersect)
return ambiguous_markables
files_with_ambigious_chains = []
for mmax_file in glob.glob(os.path.join(MMAX_DIR, '*.mmax')):
mdg = MMAXDocumentGraph(mmax_file)
if get_ambiguous_markables(mdg):
files_with_ambigious_chains.append(mmax_file)
print "# of files with ambiguous coreference chains: ", len(files_with_ambigious_chains)
print "# of files scorer.pl doesn't like: ", len(bad_scoring_files)
if len(files_with_ambigious_chains) > 0:
print "percent of files w/ ambiguous chains that scorer.pl doesn't like:", \
        len( set(files_with_ambigious_chains).intersection(set(bad_scoring_files)) ) / float(len(files_with_ambigious_chains)) * 100
"""
Explanation: Hypothesis 1: all markables occurring in more than one coreference chain produce scoring errors
End of explanation
"""
# test this with
# markable_32 auf beiden Seiten
# markable_56 Arafat Scharon
mdg = MMAXDocumentGraph(os.path.join(MMAX_DIR, 'maz-19074.mmax'))
mdg.node['markable_56']
"""
Explanation: Initially, this was true. After Markus fixed a bunch of annotations,
Hypothesis 1 could not be validated any longer.
Hypothesis 2: non-contiguous markables cause trouble
End of explanation
"""
from discoursegraphs import get_span, select_nodes_by_layer
def get_noncontiguous_markables(docgraph):
"""return all markables that don't represent adjacent tokens"""
noncontiguous_markables = []
id2index = {tok_id:i for i, tok_id in enumerate(docgraph.tokens)}
for markable in select_nodes_by_layer(docgraph, docgraph.ns+':markable'):
span_token_ids = get_span(docgraph, markable)
for span_index, tok_id in enumerate(span_token_ids[:-1]):
tok_index = id2index[tok_id]
next_tok_id = span_token_ids[span_index+1]
next_tok_index = id2index[next_tok_id]
if next_tok_index - tok_index != 1:
noncontiguous_markables.append(markable)
return noncontiguous_markables
files_with_noncontiguous_markables = []
for mmax_file in glob.glob(os.path.join(MMAX_DIR, '*.mmax')):
mdg = MMAXDocumentGraph(mmax_file)
if get_noncontiguous_markables(mdg):
files_with_noncontiguous_markables.append(mmax_file)
print "# of files with non-continuous markables: ", len(files_with_noncontiguous_markables)
print "# of files scorer.pl doesn't like: ", len(bad_scoring_files)
print "percent of files w/ non-continuous markables that scorer.pl doesn't like:", \
    len( set(files_with_noncontiguous_markables).intersection(set(bad_scoring_files)) ) / float(len(files_with_noncontiguous_markables)) * 100
"""
Explanation: potential error
markable is both primmark and secmark
End of explanation
"""
mysterious_files = [os.path.basename(fname)
for fname in set(all_bad_files).difference(set(files_with_ambigious_chains))]
len(mysterious_files)
# for fname in mysterious_files:
# mdg = MMAXDocumentGraph(os.path.join(MMAX_DIR, fname))
# print fname, '\n==============\n\n'
# try:
# print_all_chains(mdg)
# except Exception as e:
# print "\n{} FAILED: {}".format(fname, e)
"""
Explanation: Hypothesis 2 doesn't hold.
Let's check files w/out ambiguous coreference chains that scorer.pl doesn't like
End of explanation
"""
from collections import defaultdict
import networkx as nx
from discoursegraphs import get_text
def get_ambiguous_chains(mmax_docgraph, token_labels=False):
"""
Returns a list of networkx graphs that represent ambiguous
coreference chains. An ambiguous chain represents two or more
coreference chains that share at least one markable.
There should be no ambiguous coreference chains, but the
current version of our annotation guidelines allow them. // SRSLY?
"""
ambiguous_markables = get_ambiguous_markables(mmax_docgraph)
coreference_chains = get_pointing_chains(mmax_docgraph)
markable2chain = defaultdict(list)
for i, chain in enumerate(coreference_chains):
for markable in chain:
if markable in ambiguous_markables:
markable2chain[markable].append(i)
chain_graphs = []
for markable in markable2chain:
ambig_chain_ids = markable2chain[markable]
chain_graph = nx.MultiDiGraph()
chain_graph.name = mmax_docgraph.name
for chain_id in ambig_chain_ids:
ambig_chain = coreference_chains[chain_id]
for i, markable in enumerate(ambig_chain[:-1]):
chain_graph.add_edge(markable, ambig_chain[i+1])
if token_labels:
for markable in chain_graph.nodes_iter():
markable_text = get_text(mmax_docgraph, markable)
chain_graph.node[markable]['label'] = markable_text
chain_graphs.append(chain_graph)
return chain_graphs
def merge_ambiguous_chains(ambiguous_chains):
"""
Parameters
----------
ambiguous_chains : list of MultiDiGraph
a list of graphs, each representing an ambiguous coreference chain
"""
merged_chain = nx.DiGraph(nx.compose_all(ambiguous_chains))
merged_chain.add_node('name', shape='tab',
color='blue',
label=ambiguous_chains[0].name)
for node in merged_chain:
if merged_chain.in_degree(node) > 1 \
or merged_chain.out_degree(node) > 1:
merged_chain.node[node]['color'] = 'red'
return merged_chain
len(files_with_ambigious_chains) # nothing to see here, move on!
"""
Explanation: Visualizing ambiguous coreference annotations with discoursegraphs
fortunately, the current version doesn't have any
End of explanation
"""
from discoursegraphs import info
files_without_chains = []
for mmax_file in glob.glob(os.path.join(MMAX_DIR, '*.mmax')):
mdg = MMAXDocumentGraph(mmax_file)
if not get_pointing_chains(mdg):
files_without_chains.append(os.path.basename(mmax_file))
# info(mdg)
# print '\n\n'
print files_without_chains
"""
Explanation: Are there any files without chains? no!
End of explanation
"""
for fname in mysterious_files:
has_valid_annotation(os.path.join(MMAX_DIR, fname), SCORER_PATH, 'muc', verbose='very')
"""
Explanation: What's wrong with the remaining files?
files that produces an F1 of less than 100%
most of them are just off by one allegedly 'invented' entity
End of explanation
"""
def get_all_good_scoring_files(mmax_dir, scorer_path, verbose=False):
"""
returns filepaths of MMAX2 coreference files which don't produce perfect
results when testing them against themselves with scorer.pl
"""
all_mmax_files = glob.glob(os.path.join(mmax_dir, '*.mmax'))
all_bad_files = set()
metrics = ['muc', 'bcub', 'ceafm', 'ceafe', 'blanc']
for mmax_file in all_mmax_files:
for metric in metrics:
if not has_valid_annotation(mmax_file, scorer_path, metric, verbose=verbose):
all_bad_files.add(mmax_file)
break # continue with next mmax file
return set(all_mmax_files).difference(all_bad_files)
all_good_scoring_files = get_all_good_scoring_files(MMAX_DIR, SCORER_PATH)
len(all_good_scoring_files)
# for fname in all_good_scoring_files:
# mdg = dg.read_mmax2(fname)
# bname = os.path.basename(fname)
# print bname, '\n==============\n\n'
# try:
# # [the dog]_{markable_23} means that [the dog] is part of a
# # coreference chain whose first element is markable_23
# print dg.readwrite.brackets.gen_bracketed_output(mdg), '\n\n'
# except KeyError as e:
# print "Error in {}: {}".format(bname, e)
# print dg.get_text(mdg)
# try:
# print_all_chains(mdg)
# except Exception as e:
# print "\n{} FAILED: {}".format(fname, e)
"""
Explanation: Are the coreferences of the remaining files okay?
they seem okay, but contain numerous near-identity relations
End of explanation
"""
MMAX_DIR
mmax_14172 = os.path.join(MMAX_DIR, 'maz-14172.mmax')
mdg = dg.read_mmax2(mmax_14172)
print_all_chains(mdg)
"""
Explanation: TODO: check this rare key error
End of explanation
"""
|
SJSlavin/phys202-2015-work | assignments/midterm/AlgorithmsEx03.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
"""
Explanation: Algorithms Exercise 3
Imports
End of explanation
"""
def char_probs(s):
"""Find the probabilities of the unique characters in the string s.
Parameters
----------
s : str
A string of characters.
Returns
-------
probs : dict
A dictionary whose keys are the unique characters in s and whose values
are the probabilities of those characters.
"""
probs = {}
for n in range(len(s)):
if s[n] not in probs:
probs[s[n]] = 1/len(s)
else:
probs[s[n]] += 1/len(s)
return probs
char_probs("aaannn")
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
"""
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character counts by the total number of character to compute the normalized probabilties.
Return the dictionary of characters (keys) and probabilities (values).
End of explanation
"""
def entropy(d):
"""Compute the entropy of a dict d whose values are probabilities."""
chars = list(d.keys())
probs = list(d.values())
probs_array = np.asarray(probs)
#print(-np.sum((probs_array) * np.log2(probs_array)))
return -np.sum((probs_array) * np.log2(probs_array))
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
"""
Explanation: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \Sigma_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
Write a function entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
"""
# YOUR CODE HERE
def interact_charprobs(s):
print(entropy(char_probs(s)))
#return entropy(char_probs(s))
interact(interact_charprobs, s="Brian E. Granger");
assert True # use this for grading the pi digits histogram
"""
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation
"""
|
flsantos/startup_acquisition_forecast | modelling.ipynb | mit | #All imports here
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display, HTML
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn import grid_search
%matplotlib inline
startups = pd.read_csv('data/startups_pre_processed.csv', index_col=0)
startups[:3]
"""
Explanation: 1. Classification Modelling
Overview
In this phase, our dataset is already fully prepared and normalized, ready to be fed into classification algorithms. We'll first split the dataset and set aside 20% of the data, to be tested only at the end of this process, after training a few different classification models. The one with the best performance on this final test dataset will be our best model.
Load pre-processed dataset
End of explanation
"""
startups = startups[startups['status'] != 'ipo']
startups['acquired'] = startups['status'].replace({'acquired':1, 'operating':0, 'closed':0})
startups = startups.drop('status', 1)
ax = startups['acquired'].replace({0:'not acquired', 1:'acquired'}).value_counts().plot(kind='bar', title="Acquired startups", rot=0)
"""
Explanation: Define acquired status as our target variable
We are trying to forecast which startups are more likely to be acquired, so we are defining 'acquired' startups as 1 and the rest as 0.
Also, in this phase, it was decided to remove from the dataset startups with status IPO, since they can be considered successful startups, but that's not the focus of this project.
End of explanation
"""
dtrain, dtest = train_test_split(startups, test_size = 0.2, random_state=42, stratify=startups['acquired'])
"""
Explanation: Train/Test Split
We'll now split the dataset, leaving the test dataset imbalanced, as it would be observed in the real world.
End of explanation
"""
def run_classifier(parameters, classifier, df):
seed = 42
np.random.seed(seed)
X = df.iloc[:, :-1]
y = df['acquired']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=seed, stratify=y)
clf = grid_search.GridSearchCV(classifier, parameters, n_jobs=-1, scoring='roc_auc', cv=5)
clf.fit(X=X_train, y=y_train)
model = clf.best_estimator_
print ('Avg auc score: '+str(clf.best_score_), 'Best params: '+str(clf.best_params_))
print 'Auc score on Train set: {0:0.3f}'.format(roc_auc_score(y_train, model.predict(X_train)))
print 'Auc score on Validation set: {0:0.3f}'.format(roc_auc_score(y_test, model.predict(X_test)))
cm = pd.crosstab(y_test, model.predict(X_test), rownames=['True'], colnames=['Predicted'], margins=True)
print 'FP rate: {0:0.1f}%'.format(cm[1][0]/float(cm['All'][0])*100)
print 'TP rate: {0:0.1f}%'.format(cm[1][1]/float(cm['All'][1])*100)
return model
"""
Explanation: General model evaluator
Let's define a helper function to train and evaluate models using GridSearchCV
End of explanation
"""
rf_parameters = {'max_depth':range(5,12), 'n_estimators': [50], 'class_weight':['balanced']}
rf_clf = run_classifier((rf_parameters), RandomForestClassifier(random_state=0), dtrain)
"""
Explanation: Random Forest
End of explanation
"""
dtrain_numeric = dtrain.iloc[:, :-1].filter(regex=('(number_of|avg_).*|.*(funding_total_usd|funding_rounds|_at)'))
pca = PCA(n_components=30)
pca.fit(dtrain_numeric)
fig, ax = plt.subplots()
plt.plot(pca.explained_variance_ratio_, marker='o', color='b', linestyle='dashed')
ax.annotate('6 dimensions', xy=(6.2, pca.explained_variance_ratio_[6]+0.001), xytext=(8, 0.04),arrowprops=dict(facecolor='black', shrink=0.01),)
plt.title('Total variance explained')
plt.xlabel('Number of dimensions')
plt.ylabel('Variance explained by dimension')
"""
Explanation: Principal Component Analysis
End of explanation
"""
import sys
sys.path.insert(0, './exploratory_code')
import visuals as vs
pca = PCA(n_components=6)
pca.fit(dtrain_numeric)
# Generate PCA results plot
vs.pca_results(dtrain_numeric, pca)
"""
Explanation: We can notice that after the 6th dimension, the total variance explained by starts to get very low.
End of explanation
"""
pca = PCA(n_components=6)
pca.fit(dtrain.iloc[:, :-1])
dtrain_pca = pca.transform(dtrain.iloc[:, :-1])
dtrain_pca = pd.DataFrame(dtrain_pca)
dtrain_pca['acquired'] = list(dtrain['acquired'])
svm_parameters = [
#{'C': [1, 10, 100], 'kernel': ['linear'], 'class_weight':['balanced']},
{'C': [1, 10, 100], 'gamma': [0.001, 0.0001], 'kernel': ['rbf'], 'class_weight':['balanced']},
#{'C': [1, 10, 100, 1000], 'kernel': ['poly'], 'degree': [2, 5], 'coef0':[0,1], 'class_weight':['balanced']},
]
svm_clf = run_classifier(svm_parameters, SVC(random_state=0), dtrain_pca)
"""
Explanation: With PCA, we can visualize the explained variance ratio of each generated dimension. The chart above shows, for example, a PCA transformation using only the numerical variables of our dataset. We see that the first dimension explains around 30% of the variance in the data. Most of this variance is explained by the funding_rounds and venture_funding_rounds variables. For the second dimension, which explains 19% of the variance in the data, the most expressive features are last_funding_at and seed_funding_rounds.
Principal Component Analysis + Support Vector Machine
End of explanation
"""
svm_parameters = [
#{'C': [1, 10, 100], 'kernel': ['linear'], 'class_weight':['balanced']},
{'C': [100], 'gamma': [0.001], 'kernel': ['rbf'], 'class_weight':['balanced']},
#{'C': [1, 10, 100, 1000], 'kernel': ['poly'], 'degree': [2, 5], 'coef0':[0,1], 'class_weight':['balanced']},
]
svm_clf = run_classifier(svm_parameters, SVC(random_state=0), dtrain)
"""
Explanation: Only SVM
End of explanation
"""
knn_parameters = {'n_neighbors':[3, 5], 'n_jobs':[-1], 'weights':['distance', 'uniform']}
knn_clf = run_classifier(knn_parameters, KNeighborsClassifier(), dtrain)
"""
Explanation: k-Nearest Neighbors
End of explanation
"""
from imblearn.under_sampling import RandomUnderSampler
rus = RandomUnderSampler(random_state=42, return_indices=True)
X_undersampled, y_undersampled, indices = rus.fit_sample(dtrain.iloc[:, :-1], dtrain['acquired'])
dtrain_subsampled = pd.DataFrame(X_undersampled)
dtrain_subsampled['acquired'] = y_undersampled
knn_parameters = {'n_neighbors':[1, 10], 'n_jobs':[-1], 'weights':['distance', 'uniform']}
knn_subsampled_clf = run_classifier(knn_parameters, KNeighborsClassifier(), dtrain_subsampled)
"""
Explanation: k-Nearest Neighbors (subsampled)
End of explanation
"""
def model_xgboost_classifier(classifier, df, sample_weight={1:4,0:1}, useTrainCV=True, cv_folds=5, early_stopping_rounds=50):
seed = 42
np.random.seed(seed)
X = df.iloc[:, :-1]
y = df['acquired']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=seed, stratify=y)
if useTrainCV:
xgb_param = classifier.get_xgb_params()
xgtrain = xgb.DMatrix(X_train.values, label=y_train.values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=classifier.get_params()['n_estimators'], nfold=cv_folds, \
metrics='auc', early_stopping_rounds=early_stopping_rounds)
classifier.set_params(n_estimators=cvresult.shape[0])
print cvresult
classifier.fit(X_train, y_train, eval_metric='auc', sample_weight=y_train.replace(sample_weight))
print 'Auc score on Train set: {0:0.3f}'.format(roc_auc_score(y_train, classifier.predict(X_train)))
print 'Auc score on Validation set: {0:0.3f}'.format(roc_auc_score(y_test, classifier.predict(X_test)))
cm = pd.crosstab(y_test, classifier.predict(X_test), rownames=['True'], colnames=['Predicted'], margins=True)
print cm
    print 'FP rate: {0:0.1f}%'.format(cm[1][0]/float(cm['All'][0])*100)
    print 'TP rate: {0:0.1f}%'.format(cm[1][1]/float(cm['All'][1])*100)
return classifier
from xgboost import XGBClassifier
xgb1 = XGBClassifier(
learning_rate =0.1,
n_estimators=10,
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.9,
colsample_bytree=0.9,
objective= 'binary:logistic',
n_jobs=4,
scale_pos_weight=1,
random_state=0)
xgb_clf = model_xgboost_classifier(xgb1, dtrain, useTrainCV=False, cv_folds=5, early_stopping_rounds=1000)
"""
Explanation: XGBoost
End of explanation
"""
def compare_models(models, df):
for var_model in models:
model = eval(var_model)
X = df.iloc[:, :-1]
y = df['acquired']
print '--------------'+var_model + '---------------'
print 'Auc score on Test set: {0:0.3f}'.format(roc_auc_score(y, model.predict(X)))
cm = pd.crosstab(y, model.predict(X), rownames=['True'], colnames=['Predicted'], margins=True)
#print cm
print 'FP rate: {0:0.1f}%'.format(cm[1][0]/float(cm['All'][0])*100)
print 'TP rate: {0:0.1f}%'.format(cm[1][1]/float(cm['All'][1])*100)
#compare_models(['rf_clf', 'svm_clf', 'knn_clf', 'knn_subsampled_clf'], dtest)
compare_models(['rf_clf', 'svm_clf', 'knn_clf', 'knn_subsampled_clf', 'xgb_clf'], dtest)
"""
Explanation: Final test on unseen data
End of explanation
"""
X = dtest.iloc[:, :-1]
y = dtest['acquired']
cm = confusion_matrix(y, rf_clf.predict(X))
sns.heatmap(cm, annot=True, cmap='Blues', xticklabels=['no', 'yes'], yticklabels=['no', 'yes'], fmt='g')
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.title('Confusion matrix for:\n{}'.format(rf_clf.__class__.__name__));
"""
Explanation: Confusion Matrix visualization - Random Forest
End of explanation
"""
|
paris-saclay-cds/ramp-workflow | rampwf/tests/kits/titanic_no_test_old/titanic_no_test_old_starting_kit.ipynb | bsd-3-clause | %matplotlib inline
import os
import glob
import numpy as np
from scipy import io
import matplotlib.pyplot as plt
import pandas as pd
from rampwf.utils.importing import import_module_from_source
"""
Explanation: Paris Saclay Center for Data Science
Titanic RAMP: survival prediction of Titanic passengers
Benoit Playe (Institut Curie/Mines ParisTech), Chloé-Agathe Azencott (Institut Curie/Mines ParisTech), Alex Gramfort (LTCI/Télécom ParisTech), Balázs Kégl (LAL/CNRS)
Introduction
This is an initiation project to introduce RAMP and get you to know how it works.
The goal is to develop prediction models able to identify people who survived from the sinking of the Titanic, based on gender, age, and ticketing information.
The data we will manipulate is from the Titanic kaggle challenge.
Requirements
numpy>=1.10.0
matplotlib>=1.5.0
pandas>=0.19.0
scikit-learn>=0.17 (different syntaxes for v0.17 and v0.18)
seaborn>=0.7.1
End of explanation
"""
train_filename = 'data/train.csv'
data = pd.read_csv(train_filename)
y_df = data['Survived']
X_df = data.drop(['Survived', 'PassengerId'], axis=1)
X_df.head(5)
data.describe()
data.count()
"""
Explanation: Exploratory data analysis
Loading the data
End of explanation
"""
data.groupby('Survived').count()
"""
Explanation: The original training data frame has 891 rows. In the starting kit, we give you a subset of 445 rows. Some passengers have missing information: in particular Age and Cabin info can be missing. The meaning of the columns is explained on the challenge website:
Predicting survival
The goal is to predict whether a passenger has survived from other known attributes. Let us group the data according to the Survived columns:
End of explanation
"""
from pandas.plotting import scatter_matrix
scatter_matrix(data.get(['Fare', 'Pclass', 'Age']), alpha=0.2,
figsize=(8, 8), diagonal='kde');
"""
Explanation: About two thirds of the passengers perished in the event. A dummy classifier that systematically returns "0" would have an accuracy of 62%, higher than that of a random model.
Some plots
Features densities and co-evolution
A scatterplot matrix allows us to visualize:
* on the diagonal, the density estimation for each feature
* on each of the off-diagonal plots, a scatterplot between two features. Each dot represents an instance.
End of explanation
"""
data_plot = data.get(['Age', 'Survived'])
data_plot = data.assign(LogFare=lambda x : np.log(x.Fare + 10.))
scatter_matrix(data_plot.get(['Age', 'LogFare']), alpha=0.2, figsize=(8, 8), diagonal='kde');
data_plot.plot(kind='scatter', x='Age', y='LogFare', c='Survived', s=50, cmap=plt.cm.Paired);
"""
Explanation: Non-linearly transformed data
The Fare variable has a very heavy tail. We can log-transform it.
End of explanation
"""
import seaborn as sns
sns.set()
sns.set_style("whitegrid")
sns.jointplot(data_plot.Age[data_plot.Survived == 1],
data_plot.LogFare[data_plot.Survived == 1],
kind="kde", size=7, space=0, color="b");
sns.jointplot(data_plot.Age[data_plot.Survived == 0],
data_plot.LogFare[data_plot.Survived == 0],
kind="kde", size=7, space=0, color="y");
"""
Explanation: Plot the bivariate distributions and marginals of two variables
Another way of visualizing relationships between variables is to plot their bivariate distributions.
End of explanation
"""
%%file submissions/starting_kit/feature_extractor.py
import pandas as pd
class FeatureExtractor():
def __init__(self):
pass
def fit(self, X_df, y):
pass
def transform(self, X_df):
X_df_new = pd.concat(
[X_df.get(['Fare', 'Age', 'SibSp', 'Parch']),
pd.get_dummies(X_df.Sex, prefix='Sex', drop_first=True),
pd.get_dummies(X_df.Pclass, prefix='Pclass', drop_first=True),
pd.get_dummies(
X_df.Embarked, prefix='Embarked', drop_first=True)],
axis=1)
X_df_new = X_df_new.fillna(-1)
XX = X_df_new.values
return XX
"""
Explanation: The pipeline
For submitting at the RAMP site, you will have to write two classes, saved in two different files:
* the class FeatureExtractor, which will be used to extract features for classification from the dataset and produce a numpy array of size (number of samples $\times$ number of features).
* a class Classifier to predict survival
Feature extractor
The feature extractor implements a transform member function. It is saved in the file submissions/starting_kit/feature_extractor.py. It receives the pandas dataframe X_df defined at the beginning of the notebook. It should produce a numpy array representing the extracted features, which will then be used for the classification.
Note that the following code cells are not executed in the notebook. The notebook saves their contents in the file specified in the first line of the cell, so you can edit your submission before running the local test below and submitting it at the RAMP site.
End of explanation
"""
%%file submissions/starting_kit/classifier.py
from sklearn.linear_model import LogisticRegression
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.base import BaseEstimator
class Classifier(BaseEstimator):
def __init__(self):
self.clf = Pipeline([
('imputer', SimpleImputer(strategy='median')),
('classifier', LogisticRegression(C=1., solver='lbfgs'))
])
def fit(self, X, y):
self.clf.fit(X, y)
def predict_proba(self, X):
return self.clf.predict_proba(X)
"""
Explanation: Classifier
The classifier follows a classical scikit-learn classifier template. It should be saved in the file submissions/starting_kit/classifier.py. In its simplest form it takes a scikit-learn pipeline, assigns it to self.clf in __init__, then calls its fit and predict_proba functions in the corresponding member functions.
End of explanation
"""
#!ramp_test_submission
"""
Explanation: Local testing (before submission)
It is <b><span style="color:red">important that you test your submission files before submitting them</span></b>. For this we provide a unit test. Note that the test runs on your files in submissions/starting_kit, not on the classes defined in the cells of this notebook.
First pip install ramp-workflow or install it from the github repo. Make sure that the python files classifier.py and feature_extractor.py are in the submissions/starting_kit folder, and the data train.csv and test.csv are in data. Then run
ramp_test_submission
If it runs and prints training and test errors on each fold, then you can submit the code.
End of explanation
"""
problem = import_module_from_source('problem.py', 'problem')
"""
Explanation: Submitting to ramp.studio
Once you found a good feature extractor and classifier, you can submit them to ramp.studio. First, if it is your first time using RAMP, sign up, otherwise log in. Then find an open event on the particular problem, for example, the event titanic for this RAMP. Sign up for the event. Both signups are controlled by RAMP administrators, so there can be a delay between asking for signup and being able to submit.
Once your signup request is accepted, you can go to your sandbox and copy-paste (or upload) feature_extractor.py and classifier.py from submissions/starting_kit. Save it, rename it, then submit it. The submission is trained and tested on our backend in the same way as ramp_test_submission does it locally. While your submission is waiting in the queue and being trained, you can find it in the "New submissions (pending training)" table in my submissions. Once it is trained, you get a mail, and your submission shows up on the public leaderboard.
If there is an error (despite having tested your submission locally with ramp_test_submission), it will show up in the "Failed submissions" table in my submissions. You can click on the error to see part of the trace.
After submission, do not forget to give credits to the previous submissions you reused or integrated into your submission.
The data set we use at the backend is usually different from what you find in the starting kit, so the score may be different.
The usual way to work with RAMP is to explore solutions, add feature transformations, select models, perhaps do some AutoML/hyperopt, etc., locally, and checking them with ramp_test_submission. The script prints mean cross-validation scores
```
train auc = 0.85 ± 0.005
train acc = 0.81 ± 0.006
train nll = 0.45 ± 0.007
valid auc = 0.87 ± 0.023
valid acc = 0.81 ± 0.02
valid nll = 0.44 ± 0.024
test auc = 0.83 ± 0.006
test acc = 0.76 ± 0.003
test nll = 0.5 ± 0.005
```
The official score in this RAMP (the first score column after "historical contributivity" on the [leaderboard](http://www.ramp.studio/events/titanic/leaderboard)) is area under the ROC curve ("auc"), so the relevant line in the output of `ramp_test_submission` is `valid auc = 0.87 ± 0.023`. When the score is good enough, you can submit it at the RAMP.
Working in the notebook
When you are developing and debugging your submission, you may want to stay in the notebook and execute the workflow step by step. You can import problem.py and call the ingredients directly, or even deconstruct the code from ramp-workflow.
End of explanation
"""
X_train, y_train = problem.get_train_data()
"""
Explanation: Get the training data.
End of explanation
"""
train_is, test_is = list(problem.get_cv(X_train, y_train))[0]
test_is
"""
Explanation: Get the first cv fold, creating training and validation indices.
End of explanation
"""
fe, clf = problem.workflow.train_submission(
'submissions/starting_kit', X_train, y_train, train_is)
"""
Explanation: Train your starting kit.
End of explanation
"""
y_pred = problem.workflow.test_submission((fe, clf), X_train)
"""
Explanation: Get the full prediction (train and validation).
End of explanation
"""
score_function = problem.score_types[0]
"""
Explanation: Print the training and validation scores.
End of explanation
"""
score_train = score_function(y_train[train_is], y_pred[:, 1][train_is])
print(score_train)
score_valid = score_function(y_train[test_is], y_pred[:, 1][test_is])
print(score_valid)
"""
Explanation: score_function is callable, wrapping scikit-learn's roc_auc_score. It expects a 0/1 vector as ground truth (since our labels are 0 and 1, y_train can be passed as is), and a 1D vector of predicted probabilities of class '1', which means we need the second column of y_pred.
End of explanation
"""
from sklearn.metrics import roc_auc_score
print(roc_auc_score(y_train[train_is], y_pred[:, 1][train_is]))
"""
Explanation: You can check that it is just a wrapper of roc_auc_score.
End of explanation
"""
import importlib
# problem = importlib.import_module('problem', 'problem.py')
spec = importlib.util.spec_from_file_location('problem', 'titanic_no_test_old/problem.py')
spec
feature_extractor = import_module_from_source(
'submissions/starting_kit/feature_extractor.py', 'feature_extractor')
fe = feature_extractor.FeatureExtractor()
classifier = import_module_from_source(
'submissions/starting_kit/classifier.py', 'classifier')
clf = classifier.Classifier()
"""
Explanation: If you want to execute training step by step, go to the feature_extractor_classifier, feature_extractor, and classifier workflows and deconstruct them.
First load the submission files and instantiate the feature extractor and classifier objects.
End of explanation
"""
X_train_train_df = X_train.iloc[train_is]
y_train_train = y_train[train_is]
"""
Explanation: Select the training folds.
End of explanation
"""
fe.fit(X_train_train_df, y_train_train)
"""
Explanation: Fit the feature extractor.
End of explanation
"""
X_train_train_array = fe.transform(X_train_train_df)
"""
Explanation: Transform the training dataframe into numpy array.
End of explanation
"""
clf.fit(X_train_train_array, y_train_train)
"""
Explanation: Fit the classifier.
End of explanation
"""
X_train_array = fe.transform(X_train)
y_pred = clf.predict_proba(X_train_array)
"""
Explanation: Transform the whole (training + validation) dataframe into a numpy array and compute the prediction.
End of explanation
"""
score_train = score_function(y_train[train_is], y_pred[:, 1][train_is])
print(score_train)
score_valid = score_function(y_train[test_is], y_pred[:, 1][test_is])
print(score_valid)
"""
Explanation: Print the errors.
End of explanation
"""
from datetime import datetime

import numpy as np
import pandas as pd
import pymc as pm
import matplotlib.pyplot as plt

running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
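For reference, the quantity summed inside the `survival` function above is the standard right-censored Weibull log-likelihood: uncensored observations contribute the log-density, censored observations only the log-survival term:

```latex
\log L(\alpha, \beta)
  = \sum_i \left[ (1 - c_i)\left(\log\frac{\alpha}{\beta}
      + (\alpha - 1)\log\frac{t_i}{\beta}\right)
    - \left(\frac{t_i}{\beta}\right)^{\alpha} \right]
```

with $c_i = 1$ for censored observations. Note that the $-(t_i/\beta)^{\alpha}$ survival term applies to every observation, censored or not, exactly as in the code.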
"""
Explanation: The first step in any data analysis is acquiring and munging the data
Our starting data set can be found here:
http://jakecoltman.com in the pyData post
It is designed to be roughly similar to the output from DCM's path to conversion
Download the file and transform it into something with the columns:
id,lifetime,age,male,event,search,brand
where lifetime is the total time for which we observed someone without a conversion, and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints
It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
End of explanation
"""
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
def weibull_median(alpha, beta):
return beta * ((np.log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
"""
Explanation: Problems:
1 - Try to fit your data from section 1
2 - Use the results to plot the distribution of the median
Note that the median of a Weibull distribution is:
$$β(log 2)^{1/α}$$
End of explanation
"""
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 3000, thin = 20)
pm.Matplot.plot(mcmc)
#Solution to Q5
## Adjusting the priors impacts the overall result
## If we give a looser, less informative prior then we end up with a broader, shorter distribution
## If we give much more informative priors, then we get a tighter, taller distribution
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
## Note the narrowing of the prior
alpha = pm.Normal("alpha", 1.7, 10000)
beta = pm.Normal("beta", 18.5, 10000)
####Uncomment this to see the result of looser priors
## Note this ends up pretty much the same as we're already very loose
#alpha = pm.Uniform("alpha", 0, 30)
#beta = pm.Uniform("beta", 0, 30)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 5000, thin = 20)
pm.Matplot.plot(mcmc)
#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
"""
Explanation: Problems:
4 - Try adjusting the number of samples for burn-in and thinning
5 - Try adjusting the prior and see how it affects the estimate
End of explanation
"""
medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]
testing_value = 14.9
number_of_greater_samples = sum([x >= testing_value for x in medians])
100 * (number_of_greater_samples / len(medians))
"""
Explanation: Problems:
7 - Try testing whether the median is greater than different values
End of explanation
"""
#Fitting solution
cf = lifelines.CoxPHFitter()
cf.fit(df, 'lifetime', event_col = 'event')
cf.summary
"""
Explanation: If we want to look at covariates, we need a new approach.
We'll use Cox proportional hazards, a very popular regression model.
To fit in python we use the module lifelines:
http://lifelines.readthedocs.io/en/latest/
End of explanation
"""
#Solution to 1
fig, axis = plt.subplots(nrows=1, ncols=1)
cf.baseline_survival_.plot(ax = axis, title = "Baseline Survival")
regressors = np.array([[1,45,0,0]])
survival = cf.predict_survival_function(regressors)
survival.head()
#Solution to plotting multiple regressors
fig, axis = plt.subplots(nrows=1, ncols=1, sharex=True)
regressor1 = np.array([[1,45,0,1]])
regressor2 = np.array([[1,23,1,1]])
survival_1 = cf.predict_survival_function(regressor1)
survival_2 = cf.predict_survival_function(regressor2)
plt.plot(survival_1,label = "45 year old male - search")
plt.plot(survival_2,label = "45 year old male - display")
plt.legend(loc="best")
odds = survival_1 / survival_2
plt.plot(odds, c = "red")
"""
Explanation: Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the functions for a particular set of features
3 - Plot the survival function for two different set of features
4 - For your results in part 3 calculate how much more likely a death event is for one than the other for a given period of time
End of explanation
"""
from pyBMA import CoxPHFitter
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.5]*4)
bmaCox.summary
#Low probability for everything favours parsimonious models
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.1]*4)
bmaCox.summary
#Boost probability of brand
bmaCox = CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.3, 0.9, 0.001, 0.3])
print(bmaCox.summary)
"""
Explanation: Model selection
Difficult to do with classic tools (here)
Problem:
1 - Calculate the BMA coefficient values
2 - Try running with different priors
End of explanation
"""
price = 300
price**0.5
import math
math.sqrt(price)
"""
Explanation: Python Crash Course Exercises - Solutions
This is an optional exercise to test your understanding of Python Basics. The questions tend to have a financial theme to them, but don't look to deeply into these tasks themselves, many of them don't hold any significance and are meaningless. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
Task #1
Given price = 300 , use python to figure out the square root of the price.
End of explanation
"""
stock_index = "SP500"
stock_index[2:]
"""
Explanation: Task #2
Given the string:
stock_index = "SP500"
Grab '500' from the string using indexing.
End of explanation
"""
stock_index = "SP500"
price = 300
print("The {} is at {} today.".format(stock_index,price))
"""
Explanation: Task #3
Given the variables:
stock_index = "SP500"
price = 300
Use .format() to print the following string:
The SP500 is at 300 today.
End of explanation
"""
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
stock_info['sp500']['yesterday']
stock_info['info'][1][2]
"""
Explanation: Task #4
Given the variable of a nested dictionary with nested lists:
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
Use indexing and key calls to grab the following items:
Yesterday's SP500 price (250)
The number 365 nested inside a list nested inside the 'info' key.
End of explanation
"""
def source_finder(s):
return s.split('--')[-1]
source_finder("PRICE:345.324:SOURCE--QUANDL")
"""
Explanation: Task #5
Given strings with this form where the last source value is always separated by two dashes --
"PRICE:345.324:SOURCE--QUANDL"
Create a function called source_finder() that returns the source. For example, the above string passed into the function would return "QUANDL"
End of explanation
"""
def price_finder(s):
return 'price' in s.lower()
price_finder("What is the price?")
price_finder("DUDE, WHAT IS PRICE!!!")
price_finder("The price is 300")
"""
Explanation: Task #5
Create a function called price_finder that returns True if the word 'price' is in a string. Your function should work even if 'Price' is capitalized or next to punctuation ('price!')
End of explanation
"""
def count_price(s):
count = 0
for word in s.lower().split():
# Need to use in, can't use == or will get error with punctuation
if 'price' in word:
count += 1
# Note the indentation!
return count
# Simpler Alternative
def count_price(s):
return s.lower().count('price')
s = 'Wow that is a nice price, very nice Price! I said price 3 times.'
count_price(s)
"""
Explanation: Task #6
Create a function called count_price() that counts the number of times the word "price" occurs in a string. Account for capitalization and if the word price is next to punctuation.
End of explanation
"""
def avg_price(stocks):
return sum(stocks)/len(stocks) # Python 2 users should multiply numerator by 1.0
avg_price([3,4,5])
"""
Explanation: Task #7
Create a function called avg_price that takes in a list of stock price numbers and calculates the average (Sum of the numbers divided by the number of elements in the list). It should return a float.
End of explanation
"""
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
"""
Explanation: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
End of explanation
"""
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with shape [batch_size, sequence_length, 200].
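For intuition, tf.nn.embedding_lookup behaves like NumPy fancy indexing into the embedding matrix. A small sketch of that lookup semantics, independent of TensorFlow (shapes are toy values):

```python
import numpy as np

n_words, embed_size = 10, 4
embedding = np.random.uniform(-1, 1, (n_words, embed_size))  # the lookup table
inputs = np.array([[1, 3, 5],
                   [2, 0, 7]])          # (batch, seq_len) matrix of word ids
embed = embedding[inputs]               # one embedding row looked up per id

assert embed.shape == (2, 3, 4)         # (batch, seq_len, embed_size)
assert (embed[0, 0] == embedding[1]).all()
```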
End of explanation
"""
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
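One caveat worth knowing: `[drop] * lstm_layers` repeats references to the same cell object. This works in the TensorFlow version this notebook targets, but in TF 1.2 and later it can raise an error because every layer tries to reuse the same variables; building each layer separately (for example with a list comprehension over a cell-building helper) avoids that. The aliasing itself is plain Python:

```python
class Cell:  # stand-in for an RNN cell object
    pass

drop = Cell()
shared = [drop] * 3                    # three references to ONE cell object
assert shared[0] is shared[1] is shared[2]

distinct = [Cell() for _ in range(3)]  # three independent cell objects
assert distinct[0] is not distinct[1]
```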
End of explanation
"""
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], the calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
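A quick check of the truncation behavior (the generator is redefined here so the snippet is self-contained):

```python
import numpy as np

def get_batches(x, y, batch_size=100):
    n_batches = len(x) // batch_size
    x, y = x[:n_batches * batch_size], y[:n_batches * batch_size]
    for ii in range(0, len(x), batch_size):
        yield x[ii:ii + batch_size], y[ii:ii + batch_size]

x, y = np.arange(10), np.arange(10)
batches = list(get_batches(x, y, batch_size=3))

assert len(batches) == 3                       # 10 // 3 = 3 full batches
assert all(len(bx) == 3 for bx, _ in batches)  # the leftover example is dropped
```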
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
post2web/nbloader | .ipynb_checkpoints/tutorial-checkpoint.ipynb | mit | from nbloader import Notebook
loaded_notebook = Notebook('test.ipynb')
"""
Explanation: Importing Jupyter Notebooks as "Objects"
Jupyter Notebooks are great for data exploration, visualizing, documenting, prototyping and iteracting with the code, but when it comes to creating an actual program out of a notebook they fall short. I often get myself copying cells from a notebook into a script so that I can run the code with command line arguments. There is no easy way to run a notebook and return a result from its execution, passing arguments to a notebook and running individual code cells programatically. Have you ever wrapped a code cell to a function just so you want to call it in a loop with different paramethers?
I wrote a small utility tool nbloader that enables code reusing from jupyter notebooks. With it you can import a notebook as an object, pass variables to it's name space, run code cells and pull out variables from its name space.
This tutorial will show you how to make your notebooks resusable with nbloader.
Install nbloader with pip
shell
pip install nbloader --upgrade
Load a Notebook
End of explanation
"""
loaded_notebook.run_all()
"""
Explanation: The above command loads a notebook as an object. This can be done inside a Jupyter notebook or a regular Python script.
Run all cells
End of explanation
"""
loaded_notebook.ns['a']
"""
Explanation: After loaded_notebook.run_all() is called:
- The notebook is initialized with an empty starting namespace.
- All cells of the loaded notebook are executed one after another, in the order they appear in the file.
- All print statements and any other stdout from the loaded notebook are displayed.
- All warnings and errors will be raised unless caught.
- All variables from the loaded notebook's namespace will be accessible.
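Conceptually, running a notebook boils down to executing each code cell against one shared namespace dict. A toy sketch of that idea (not nbloader's actual implementation, just the underlying mechanism):

```python
# Hypothetical cell sources, executed in order against a shared namespace.
cells = ["a = 1", "b = a + 1"]
ns = {}
for source in cells:
    exec(source, ns)  # each cell sees everything earlier cells defined

assert ns["a"] == 1
assert ns["b"] == 2
```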
Here are the contents of the loaded notebook:
<img src="tests/loaded_notebook.png" width="400">
This is how you access the namespace of the loaded notebook
End of explanation
"""
loaded_notebook.ns['b']
"""
Explanation: The notebook's namespace is just a dict, so if you try to get something that's not there you will get an error.
End of explanation
"""
loaded_notebook.run_tag('add_one')
print(loaded_notebook.ns['a'])
loaded_notebook.run_tag('add_one')
print(loaded_notebook.ns['a'])
"""
Explanation: Run individual cells if they are tagged.
End of explanation
"""
loaded_notebook.ns['a'] = 0
loaded_notebook.run_tag('add_one')
print(loaded_notebook.ns['a'])
"""
Explanation: If a cell has a comment on its first line, that comment becomes the cell's tag.
This is how you mess with its namespace:
End of explanation
"""
radu941208/DeepLearning | Hyperparameter_Tuning_Regularization_Optimization/Regularization.ipynb | mit | # import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
"""
Explanation: Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that overfitting can be a serious problem, if the training dataset is not big enough. Sure it does well on the training set, but the learned network doesn't generalize to new examples that it has never seen!
You will learn to: Use regularization in your deep learning models.
Let's first import the packages you are going to use.
End of explanation
"""
train_X, train_Y, test_X, test_Y = load_2D_dataset()
"""
Explanation: Problem Statement: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> Figure 1 </u>: Football field<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
End of explanation
"""
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
"""
Explanation: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
Your goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
Analysis of the dataset: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in regularization mode -- by setting the lambd input to a non-zero value. We use "lambd" instead of "lambda" because "lambda" is a reserved keyword in Python.
- in dropout mode -- by setting the keep_prob to a value less than one
You will first try the model without any regularization. Then, you will implement:
- L2 regularization -- functions: "compute_cost_with_regularization()" and "backward_propagation_with_regularization()"
- Dropout -- functions: "forward_propagation_with_dropout()" and "backward_propagation_with_dropout()"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
End of explanation
"""
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
"""
Explanation: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
End of explanation
"""
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
"""
Explanation: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
End of explanation
"""
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m))*(np.sum(np.square(W1))+np.sum(np.square(W2))+np.sum(np.square(W3)))
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
"""
Explanation: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
Exercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
python
np.sum(np.square(Wl))
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
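A tiny numeric sanity check of the regularization term, with arbitrary values for one weight matrix:

```python
import numpy as np

W = np.array([[1., 2.],
              [3., 4.]])
assert np.sum(np.square(W)) == 30.0        # 1 + 4 + 9 + 16

lambd, m = 0.1, 10
l2_term = (lambd / (2 * m)) * np.sum(np.square(W))
assert abs(l2_term - 0.15) < 1e-12         # (0.1 / 20) * 30
```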
End of explanation
"""
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
Exercise: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
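You can convince yourself of that extra gradient term with a finite-difference check on a single weight:

```python
lambd, m = 0.7, 5.0
w = 2.0
penalty = lambda w: (lambd / (2 * m)) * w ** 2    # one weight's share of the L2 cost

eps = 1e-6
numeric = (penalty(w + eps) - penalty(w - eps)) / (2 * eps)  # central difference
analytic = (lambd / m) * w                        # d/dW of (lambda/(2m)) W^2

assert abs(numeric - analytic) < 1e-6
```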
End of explanation
"""
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The model() function will call:
- compute_cost_with_regularization instead of compute_cost
- backward_propagation_with_regularization instead of backward_propagation
End of explanation
"""
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
"""
Explanation: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
End of explanation
"""
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0],A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = (D1<keep_prob) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = (D2<keep_prob) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
"""
Explanation: Observations:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
What is L2-regularization actually doing?:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
What you should remember -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
3 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning.
It randomly shuts down some neurons in each iteration. Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should definitely be possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
!-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep_prob$ or keep it with probability $keep_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
3.1 - Forward propagation with dropout
Exercise: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
Instructions:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}]$ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (1-keep_prob) or 1 with probability (keep_prob), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: X = (X < 0.5). Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by keep_prob. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
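The four steps on a toy activation matrix (NumPy only; the shape and keep_prob value are arbitrary):

```python
import numpy as np
np.random.seed(1)

keep_prob = 0.8
A1 = np.ones((3, 4))                           # toy activations

D1 = np.random.rand(A1.shape[0], A1.shape[1])  # Step 1: random matrix in [0, 1)
D1 = (D1 < keep_prob)                          # Step 2: threshold to a 0/1 mask
A1 = A1 * D1                                   # Step 3: shut down masked neurons
A1 = A1 / keep_prob                            # Step 4: rescale (inverted dropout)

# Every surviving activation is scaled to 1/keep_prob; dropped ones are 0.
assert set(np.unique(A1)) <= {0.0, 1.0 / keep_prob}
```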
End of explanation
"""
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
3.2 - Backward propagation with dropout
Exercise: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
Instruction:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to A1. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to dA1.
2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if $A^{[1]}$ is scaled by keep_prob, then its derivative $dA^{[1]}$ is also scaled by the same keep_prob).
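On why the division by keep_prob matters in both passes: rescaling keeps the expected value of an activation unchanged under random dropping, which a quick simulation confirms (values arbitrary):

```python
import numpy as np
np.random.seed(0)

keep_prob = 0.5
a = 2.0          # some activation value
n = 100_000      # number of simulated dropout draws

kept = np.random.rand(n) < keep_prob
rescaled = kept * a / keep_prob          # 0 when dropped, a/keep_prob when kept

assert abs(rescaled.mean() - a) < 0.05   # E[value] stays approximately a
```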
End of explanation
"""
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (keep_prob = 0.86). It means at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function model() will now call:
- forward_propagation_with_dropout instead of forward_propagation.
- backward_propagation_with_dropout instead of backward_propagation.
End of explanation
"""
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
"""
Explanation: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
End of explanation
"""
|
CNS-OIST/STEPS_Example | user_manual/source/diffusion_boundary.ipynb | gpl-2.0 |
import steps.model as smodel
import steps.geom as sgeom
import steps.rng as srng
import steps.solver as solvmod
import steps.utilities.meshio as meshio
import numpy
import pylab
"""
Explanation: Diffusion Boundary
The simulation script described in this chapter is available in the STEPS_Example repository.
In some systems it may be a convenient simulation feature to be able to localize certain chemical species in one particular region of a volume without diffusion to neighboring regions even if they are not separated by a physical boundary. For example, in some biological systems certain proteins may exist only in local regions and though the structural features are simplified in a model the proteins are assumed to diffuse in a local region to meet and react with each other. So it is sometimes important, on biologically plausible grounds, to restrict the diffusional space of some proteins but not others. Similarly, it may be convenient to separate a large simulation volume into a number of compartments that are not physically separated by a membrane and so are connected to other compartments by chemical diffusion. Such an approach allows for different biochemical behavior in different regions of the volume to be specified and may simplify simulation initialization and data recording considerably. In this brief chapter we'll introduce an object termed the "Diffusion Boundary" (steps.geom.DiffBoundary) which allows for this important simulation convenience: optional chemical diffusion between connected mesh compartments.
The Diffusion Boundary of course can only be added to a mesh-based (i.e. not a well-mixed) simulation and is described by a collection of triangles. These triangles must form some or all of the connection between two (and only two) compartments, and none of the triangles may already be described as part of a "Patch" (steps.geom.TmPatch). It is not possible for two compartments to be connected in the same area by a Patch and a Diffusion Boundary since a Patch is intended to model a membrane and a Diffusion Boundary is just some internal area within a volume that may block diffusion, and it would be unrealistic to allow surface reactions and free diffusion to occur in the same area. Once a Diffusion Boundary is in place the modeler may specify which chemical species (if any) may freely diffuse through the boundary. Diffusion boundaries are currently supported in solvers steps.solver.Tetexact and steps.mpi.solver.TetOpSplit, but for this chapter we will only demonstrate usage in Tetexact. For approximate MPI simulations with TetOpSplit please see later chapters.
For the example we'll set up a simple system to introduce the steps.geom.DiffBoundary object and expand on our mesh manipulation in previous chapters through STEPS methods provided in the Python interface. The simple examples here may of course be expanded and built on for more complex mesh manipulations in detailed, realistic simulations, though greater complexity is beyond the scope of this chapter.
Modeling solution
Organisation of code
To run our simulation we'll, as usual, create a Python script, following a similar structure to previous chapters. Again, for clarity, we'll show Python code as if typed at the prompt and go through the code step by step looking at some statements in detail as we go.
To get started we import STEPS and outside packages as usual:
End of explanation
"""
def gen_model():
# Create the model container object
mdl = smodel.Model()
# Create the chemical species
X = smodel.Spec('X', mdl)
Y = smodel.Spec('Y', mdl)
# Create separate volume systems for compartments A and B
vsysA = smodel.Volsys('vsysA', mdl)
vsysB = smodel.Volsys('vsysB', mdl)
# Describe diffusion of molecules in compartments A and B
diff_X_A = smodel.Diff('diff_X_A', vsysA, X, dcst = 0.1e-9)
diff_X_B = smodel.Diff('diff_X_B', vsysB, X, dcst = 0.1e-9)
diff_Y_A = smodel.Diff('diff_Y_A', vsysA, Y, dcst = 0.1e-9)
diff_Y_B = smodel.Diff('diff_Y_B', vsysB, Y, dcst = 0.1e-9)
# Return the container object
return mdl
"""
Explanation: Model specification
We'll go straight into our function that will set up the biochemical model. Here we will create two chemical species objects, 'X' and 'Y', and describe their diffusion rules. Notice that this time we use separate volume systems for the two compartments we will create, as is our option. We intend volume system 'vsysA' to be added to a compartment 'A' and 'vsysB' to be added to compartment 'B', the reason for which will become clear as we progress:
End of explanation
"""
mesh = meshio.loadMesh('meshes/cyl_len10_diam1')[0]
"""
Explanation: Note that if our model were set up with the following code instead, diffusion would NOT be defined for species 'X' in compartment 'B' (if we add only volume system 'B' to compartment 'B' as we intend):
# Describe diffusion of molecules in compartments A and B
# NOTE: diffusion is not defined for species X in compartment B
diff_X = smodel.Diff('diff_X', vsysA, X, dcst = 0.1e-9)
diff_Y_A = smodel.Diff('diff_Y_A', vsysA, Y, dcst = 0.1e-9)
diff_Y_B = smodel.Diff('diff_Y_B', vsysB, Y, dcst = 0.1e-9)
This is an important point because if a species does not react or diffuse within a compartment (as is the case for 'X' in compartment 'B' here) it is undefined in the compartment by the solver- it does nothing in the compartment so memory and simulation time is not wasted by including the species in that compartment during simulation. For this reason if we were to later try to allow diffusion of 'X' across the diffusion boundary during our simulation in this example we would receive an error message because it may not diffuse into compartment 'B' since it is undefined there.
Geometry specification
Next we define our geometry function. Because some of the operations are new we'll look at the code in more detail.
First we import our mesh, a cylinder of axial length 10 microns (on the z-axis) which we have previously imported and saved in STEPS format (with method steps.utilities.meshio.saveMesh) in folder 'meshes' in the current directory.
Here, the object that is returned to us and stored by mesh will be a steps.geom.Tetmesh object, which is the zeroth element of the tuple returned by the function:
End of explanation
"""
ntets = mesh.countTets()
"""
Explanation: Now we'll create our two compartments. We'll split our cylinder down the middle of the z-axis creating two compartments of (approximately) equal volume. Since our cylinder is oriented on the z-axis we simply need to separate tetrahedrons by those that are below the centre point on the z-axis and those that are above.
Firstly we count the number of tetrahedrons using method steps.geom.Tetmesh.countTets:
End of explanation
"""
tets_compA = []
tets_compB = []
"""
Explanation: And create empty lists to group the tetrahedrons as those that belong to compartment 'A' and those to 'B':
End of explanation
"""
tris_compA = set()
tris_compB = set()
"""
Explanation: And similarly create empty sets to group all the triangles in compartments 'A' and 'B'. All tetrahedrons are comprised of 4 triangles, and we store all triangles belonging to all tetrahedrons in the compartment (in a set so as not to store more than once). The reason for doing so will become apparent soon:
End of explanation
"""
z_max = mesh.getBoundMax()[2]
z_min = mesh.getBoundMin()[2]
z_mid = z_min+(z_max-z_min)/2.0
"""
Explanation: Next we find the bounds of the mesh, and the mid point (on the z-axis)- the point at which we want our Diffusion Boundary to appear:
End of explanation
"""
for t in range(ntets):
# Fetch the z coordinate of the barycenter
barycz = mesh.getTetBarycenter(t)[2]
# Fetch the triangle indices of the tetrahedron, a tuple of length 4:
tris = mesh.getTetTriNeighb(t)
if barycz < z_mid:
tets_compA.append(t)
tris_compA.add(tris[0])
tris_compA.add(tris[1])
tris_compA.add(tris[2])
tris_compA.add(tris[3])
else:
tets_compB.append(t)
tris_compB.add(tris[0])
tris_compB.add(tris[1])
tris_compB.add(tris[2])
tris_compB.add(tris[3])
"""
Explanation: Now we'll run a for loop over all tetrahedrons to sort tetrahedrons and triangles into the compartments. The criterion is that tetrahedrons with their barycenter less than the mid point on the z-axis will belong to compartment A and those with their barycenter greater than the mid point will belong to compartment B:
End of explanation
"""
compA = sgeom.TmComp('compA', mesh, tets_compA)
compB = sgeom.TmComp('compB', mesh, tets_compB)
"""
Explanation: With our tetrahedrons sorted in this way we can create our mesh compartments. As we have seen in the previous chapter, the steps.geom.TmComp constructor requires, in order: a unique string identifier, a reference to the parent steps.geom.Tetmesh container object, and all the indices of the tetrahedrons that comprise the compartment in a Python sequence such as a list or tuple:
End of explanation
"""
compA.addVolsys('vsysA')
compB.addVolsys('vsysB')
"""
Explanation: And add volume system 'vsysA' to compartment 'A' and volume system 'vsysB' to compartment 'B':
End of explanation
"""
tris_DB = tris_compA.intersection(tris_compB)
"""
Explanation: Now comes our diffusion boundary as part of our geometry description and therefore the Diffusion Boundary class is to be found in module steps.geom which we have imported with name sgeom. Recall that, to create a diffusion boundary, we must have a sequence of all the triangle indices that comprise the diffusion boundary and all of the triangles must connect the same two compartments. The reason that the user has to explicitly declare which triangles to use is that the diffusion boundary between compartments may not necessarily form the whole surface between the two compartments and may comprise a smaller area. However here we will use the entire surface between the two compartments.
The way that we find the triangle indices is very straightforward- they are simply the common triangles to both compartments. We have the triangle indices of both compartments stored in Python sets, the common triangles are therefore the intersection:
End of explanation
"""
tris_DB = list(tris_DB)
"""
Explanation: If this point is not very clear, consider the simple example where two tetrahedrons are connected at a surface (triangle). Let's say tetrahedron A is comprised of triangles (0,1,2,3) and tetrahedron B is comprised of triangles (0,4,5,6). That would mean that their common triangle (0) forms their connection. The common triangle could be found by finding the intersection of two sets of the triangles, that is the intersection of (0,1,2,3) and (0,4,5,6) is (0). That is what the above code does on a larger scale where the sets contain all triangles in the entire compartment and the intersection therefore gives the entire surface connection between the two compartments.
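In code, this toy example looks like the following (made-up triangle indices, independent of the mesh):

```python
tet_A_tris = {0, 1, 2, 3}  # triangles of tetrahedron A
tet_B_tris = {0, 4, 5, 6}  # triangles of tetrahedron B

# The shared triangle forming the connection is the set intersection
shared = tet_A_tris.intersection(tet_B_tris)
print(sorted(shared))  # [0]
```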
Now we have to convert the set to a list (or other sequence such as a tuple or NumPy array) as this is what the diffusion boundary constructor requires:
End of explanation
"""
diffb = sgeom.DiffBoundary('diffb', mesh, tris_DB)
"""
Explanation: Finally we can create the diffusion boundary between compartment 'A' and compartment 'B'. The object constructor looks similar to that for a mesh compartment or patch, but with some important differences. That is the constructor expects, in order: a unique string identifier, a reference to the parent steps.geom.Tetmesh container, and a sequence of all triangles that comprise the boundary. Note that no references to the compartments that the boundary connects are required- these are found internally and checked to be common amongst all triangles in the diffusion boundary:
End of explanation
"""
def gen_geom():
mesh = meshio.loadMesh('meshes/cyl_len10_diam1')[0]
ntets = mesh.countTets()
tets_compA = []
tets_compB = []
tris_compA = set()
tris_compB = set()
z_max = mesh.getBoundMax()[2]
z_min = mesh.getBoundMin()[2]
z_mid = z_min+(z_max-z_min)/2.0
for t in range(ntets):
# Fetch the z coordinate of the barycenter
barycz = mesh.getTetBarycenter(t)[2]
# Fetch the triangle indices of the tetrahedron, which is a tuple of length 4
tris = mesh.getTetTriNeighb(t)
if (barycz < z_mid):
tets_compA.append(t)
tris_compA.add(tris[0])
tris_compA.add(tris[1])
tris_compA.add(tris[2])
tris_compA.add(tris[3])
else:
tets_compB.append(t)
tris_compB.add(tris[0])
tris_compB.add(tris[1])
tris_compB.add(tris[2])
tris_compB.add(tris[3])
compA = sgeom.TmComp('compA', mesh, tets_compA)
compB = sgeom.TmComp('compB', mesh, tets_compB)
compA.addVolsys('vsysA')
compB.addVolsys('vsysB')
tris_DB = tris_compA.intersection(tris_compB)
tris_DB = list(tris_DB)
diffb = sgeom.DiffBoundary('diffb', mesh, tris_DB)
return mesh, tets_compA, tets_compB
"""
Explanation: And that is basically all we need to do to create our diffusion boundary. As usual we should note the string identifier because that is what we will need to control the diffusion boundary during simulation. The technique for finding the common triangles between two compartments is a very useful technique that may be applied or adapted when creating diffusion boundaries in most STEPS simulations.
We return the parent steps.geom.Tetmesh object, along with the lists of tetrahedrons by compartment at the end of our function body.
Our entire function code is:
End of explanation
"""
mdl = gen_model()
"""
Explanation: Simulation with Tetexact
So now we come to our example simulation run. As in the previous chapter we will create the 3 important objects required to the solver constructor, which are: a steps.model.Model object (returned by our gen_model function), a steps.geom.Tetmesh object (for a mesh-based simulation; returned by our gen_geom function) and a steps.rng.RNG object that we will create.
We generate our steps.model.Model container object with a call to function gen_model and store it in variable mdl:
End of explanation
"""
mesh, tets_compA, tets_compB = gen_geom()
"""
Explanation: Note that, as well as the steps.geom.Tetmesh container object, the gen_geom function also returns the indices of the terahedrons for both compartments, which we will store in variables tets_compA and tets_compB:
End of explanation
"""
rng = srng.create('mt19937', 256)
rng.initialize(654)
"""
Explanation: As in previous chapters, create our random number generator and initialise with some seed value:
End of explanation
"""
sim = solvmod.Tetexact(mdl, mesh, rng)
sim.reset()
"""
Explanation: And create our solver object, using steps.solver.Tetexact for a mesh-based diffusion simulation with the usual object references to the solver constructor, which to recall are (in order): a steps.model.Model object, a steps.geom.Tetmesh object, and a steps.rng.RNG object:
End of explanation
"""
tpnts = numpy.arange(0.0, 0.101, 0.001)
ntpnts = tpnts.shape[0]
"""
Explanation: Now to create the data structures for running our simulation and storing data. There are many ways to achieve our aims here, but we will follow a pattern set by previous chapters, which is first to create a NumPy array of "time-points" at which to run the simulation, and find how many "time-points" we have:
End of explanation
"""
ntets = mesh.countTets()
resX = numpy.zeros((ntpnts, ntets))
resY = numpy.zeros((ntpnts, ntets))
"""
Explanation: And now create our structures for storing data, again NumPy arrays, but this time of a size to record data from every tetrahedron in the mesh. We record how many tetrahedrons there are by using method countTets on our mesh object (steps.geom.Tetmesh.countTets). We also separate our results arrays, one to record from compartment 'A' and one for compartment 'B':
End of explanation
"""
tetX = mesh.findTetByPoint([0, 0, -4.99e-6])
tetY = mesh.findTetByPoint([0, 0, 4.99e-6])
"""
Explanation: Next, let's assume we wish to inject our molecules at the two ends of the cylinder, that is the points at which our z-axis is at a minimum and a maximum. From creating our mesh (or finding out through methods getBoundMax and getBoundMin on our steps.geom.Tetmesh object) we know that our boundaries are at z = -5 microns and +5 microns. To find the tetrahedrons at the centre points of the two boundaries (i.e. at x=0 and y=0) we use steps.geom.Tetmesh method findTetByPoint, which will return the index of the tetrahedron that encompasses any point given in 3D cartesian coordinates as a Python sequence. We give points slightly inside the boundary so as to be sure that our point is inside the mesh (the method will return -1 if not):
End of explanation
"""
sim.setTetCount(tetX , 'X', 1000)
sim.setTetCount(tetY, 'Y', 500)
"""
Explanation: Let's set our initial conditions by injecting 1000 molecules of species 'X' into the lower Z boundary tetrahedron (which will be contained in compartment 'A') and 500 molecules of species 'Y' into the upper z boundary tetrahedron (which will be contained in compartment 'B'):
End of explanation
"""
sim.setDiffBoundaryDiffusionActive('diffb', 'Y', True)
"""
Explanation: Now for the main focus of this chapter, which is to allow diffusion between the compartments joined by a Diffusion Boundary. During our geometry construction we already created our steps.geom.DiffBoundary object (named rather unimaginatively 'diffb') which will be included in the simulation, with the default behaviour to block diffusion between compartment 'A' and 'B' completely for all molecules. We now wish to allow diffusion of species 'Y' through the boundary which we achieve with one simple solver method call. Importantly, we would be unable to activate diffusion through the boundary for species 'X'; this is because 'X' is undefined in compartment 'B' since it does not appear in any reaction or diffusion rules there.
To activate diffusion through the boundary we call the rather wordy steps.solver.Tetexact solver method setDiffBoundaryDiffusionActive (steps.solver.Tetexact.setDiffBoundaryDiffusionActive), with 3 arguments to the function; the string identifier of the diffusion boundary ('diffb'), the string identifier of the species ('Y') and a bool as to whether diffusion through the boundary is active or not (True):
End of explanation
"""
for i in range(ntpnts):
sim.run(tpnts[i])
for k in range(ntets):
resX[i, k] = sim.getTetCount(k, 'X')
resY[i, k] = sim.getTetCount(k, 'Y')
"""
Explanation: And that is all we need to do to activate diffusion of species 'Y' through the diffusion boundary 'diffb' and therefore allow diffusion of 'Y' between compartments 'A' and 'B'. To inactivate diffusion (which is incidentally the default behaviour for all species) we would call the same function with boolean False.
So now a simple for loop to run our simulation. We have already constructed our NumPy arrays for this purpose: tpnts stores the times that we run our simulation and collect our data for (we chose 1ms increments up to 100ms) and ntpnts stores how many of these 'time-points' there are, which is 101 (including time=0). At every time-point we will collect our data, here recording the number of molecules of 'X' and 'Y' in every tetrahedron in the mesh.
End of explanation
"""
from __future__ import print_function # for backward compatibility with Py2
def plot_binned(t_idx, bin_n = 100, solver='unknown'):
    if t_idx >= tpnts.size:
print("Time index is out of range.")
return
# Create structure to record z-position of tetrahedron
z_tets = numpy.zeros(ntets)
zbound_min = mesh.getBoundMin()[2]
# Now find the distance of the centre of the tets to the Z lower face
for i in range(ntets):
baryc = mesh.getTetBarycenter(i)
z = baryc[2] - zbound_min
# Convert to microns and save
z_tets[i] = z*1.0e6
# Find the maximum and minimum z of all tetrahedrons
z_max = z_tets.max()
z_min = z_tets.min()
# Set up the bin structures, recording the individual bin volumes
z_seg = (z_max-z_min)/bin_n
bin_mins = numpy.zeros(bin_n+1)
z_tets_binned = numpy.zeros(bin_n)
bin_vols = numpy.zeros(bin_n)
# Now sort the counts into bins for species 'X'
z = z_min
for b in range(bin_n + 1):
bin_mins[b] = z
if (b!=bin_n): z_tets_binned[b] = z +z_seg/2.0
z+=z_seg
bin_counts = [None]*bin_n
for i in range(bin_n): bin_counts[i] = []
for i in range((resX[t_idx].size)):
i_z = z_tets[i]
for b in range(bin_n):
if(i_z>=bin_mins[b] and i_z<bin_mins[b+1]):
bin_counts[b].append(resX[t_idx][i])
bin_vols[b]+=sim.getTetVol(i)
break
# Convert to concentration in arbitrary units
bin_concs = numpy.zeros(bin_n)
for c in range(bin_n):
        for d in range(len(bin_counts[c])):
bin_concs[c] += bin_counts[c][d]
bin_concs[c]/=(bin_vols[c]*1.0e18)
t = tpnts[t_idx]
# Plot the data
pylab.scatter(z_tets_binned, bin_concs, label = 'X', color = 'blue')
# Repeat the process for species 'Y'- separate from 'X' for clarity:
z = z_min
for b in range(bin_n + 1):
bin_mins[b] = z
if (b!=bin_n): z_tets_binned[b] = z +z_seg/2.0
z+=z_seg
bin_counts = [None]*bin_n
for i in range(bin_n): bin_counts[i] = []
for i in range((resY[t_idx].size)):
i_z = z_tets[i]
for b in range(bin_n):
if(i_z>=bin_mins[b] and i_z<bin_mins[b+1]):
bin_counts[b].append(resY[t_idx][i])
break
bin_concs = numpy.zeros(bin_n)
for c in range(bin_n):
        for d in range(len(bin_counts[c])):
bin_concs[c] += bin_counts[c][d]
bin_concs[c]/=(bin_vols[c]*1.0e18)
pylab.scatter(z_tets_binned, bin_concs, label = 'Y', color = 'red')
pylab.xlabel('Z axis (microns)', fontsize=16)
pylab.ylabel('Bin concentration (N/m^3)', fontsize=16)
pylab.ylim(0)
pylab.xlim(0, 10)
pylab.legend(numpoints=1)
pylab.title('Simulation with '+ solver)
pylab.show()
"""
Explanation: Plotting simulation output
Having run our simulation it now comes to visualizing and analyzing the output of the simulation. One way to do this is to plot the data, once again using the plotting capability from Matplotlib.
Let's assume we want a spatial plot- distance on the z-axis vs concentration- but don't want to plot every tetrahedron individually. In other words we want to split the cylinder into bins with equal width on the z-axis. Then we record counts from a tetrahedron and add it to the bin that the tetrahedron belongs to. We could, of course, have set up structures to record data from bins before and during our simulation, but instead we will use the data that we have recorded in all individual tetrahedrons (in the code above) to read and split into bins for plotting. And that is exactly what is achieved in the following function, which won't contain a detailed step-by-step explanation as it is not strictly STEPS code, but is included for the user to see how such tasks may be achieved. This function does use some structures defined outside of the function, such as tpnts, so would have to appear after the previous code in a Python script to work as it is:
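The manual binning inside plot_binned can also be sketched with numpy.digitize (a simplified illustration on made-up tetrahedron positions and counts, ignoring the per-bin volume normalization):

```python
import numpy as np

z_tets = np.array([0.5, 1.2, 1.8, 2.5, 3.3])  # hypothetical tet z-positions (microns)
counts = np.array([10, 4, 6, 2, 8])           # hypothetical per-tet molecule counts

bin_edges = np.linspace(0.0, 4.0, 5)          # 4 equal-width bins on the z-axis
bin_idx = np.digitize(z_tets, bin_edges) - 1  # bin index for each tetrahedron

binned = np.zeros(len(bin_edges) - 1)
np.add.at(binned, bin_idx, counts)            # sum the counts that fall into each bin

print(binned)  # [10. 10.  2.  8.]
```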
End of explanation
"""
pylab.figure(figsize=(10,7))
plot_binned(100, 50)
"""
Explanation: This function will plot the bin concentration of both species 'X' and 'Y' along the z-axis for any of our "time-points", with the default number of bins being 100. We ran our simulation up to 100ms, which was the 100th time-point, so let's plot that with the call:
End of explanation
"""
|
xtr33me/deep-learning | autoencoder/Simple_Autoencoder.ipynb | mit |
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_shape = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None,image_shape), name="inputs")
targets_ = tf.placeholder(tf.float32, (None,image_shape), name="targets")
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_shape)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name='decoded')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
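For intuition, the quantity that tf.nn.sigmoid_cross_entropy_with_logits computes can be written out in NumPy; the numerically stable form below agrees with the naive definition (a sketch, not the TensorFlow implementation itself):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_xent(logits, labels):
    # numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

logits = np.array([-2.0, -0.5, 0.0, 1.5])
labels = np.array([0.0, 1.0, 1.0, 1.0])

# Naive definition: -z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x))
naive = -(labels * np.log(sigmoid(logits)) +
          (1 - labels) * np.log(1 - sigmoid(logits)))
assert np.allclose(sigmoid_xent(logits, labels), naive)
```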
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
lowRISC/ot-sca | jupyter/otbn_attack_100M.ipynb | apache-2.0 |
import numpy as np
wave = np.load('waves_p256_100M_2s.npy')
#wave = np.load('waves_p256_100M_2s_12bits.npy')
#wave = np.load('waves_p256_100M_2s_12bits830.npy')
#wave = np.load('waves_p256_100M_2s_12bitsf0c.npy')
import numpy as np
import pandas as pd
from scipy import signal
def butter_highpass(cutoff, fs, order=5):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = signal.butter(order, normal_cutoff, btype='high', analog=False)
return b, a
def butter_highpass_filter(data, cutoff, fs, order=9):
b, a = butter_highpass(cutoff, fs, order=order)
y = signal.filtfilt(b, a, data)
return y
filtered_wave = butter_highpass_filter(wave, 6e6, 100e6) # for NON-streamed 100M capture
"""
Explanation: Essentially the same as otbn_find_bits.ipynb, but streamlined for 100M captures.
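As a quick sanity check of this high-pass stage (a standalone sketch on synthetic data, not the captured trace): with the 6 MHz cutoff at 100 MS/s, a DC offset should be removed while a 20 MHz component survives:

```python
import numpy as np
from scipy import signal

fs = 100e6
n = np.arange(10000)
t = n / fs
x = 0.5 + 0.1 * np.sin(2 * np.pi * 20e6 * t)  # DC offset plus a 20 MHz tone

b, a = signal.butter(5, 6e6 / (0.5 * fs), btype='high')
y = signal.filtfilt(b, a, x)

assert abs(np.mean(y)) < 0.01  # DC offset removed
assert np.std(y) > 0.05        # high-frequency content preserved
```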
End of explanation
"""
#samples = len(waves[0])
samples = 600000
base = 0
import holoviews as hv
from holoviews.operation import decimate
from holoviews.operation.datashader import datashade, shade, dynspread
hv.extension('bokeh')
wf = datashade(hv.Curve(filtered_wave[base:base+samples]), cmap=['black'])
(wf).opts(width=2000, height=600)
"""
Explanation: optional, if we need to plot to understand why we're not finding good bit times:
End of explanation
"""
def moving_average(x, w):
return np.convolve(x, np.ones(w), 'valid') / w
mfw = moving_average(np.abs(filtered_wave), 3000)
len(mfw)
samples = 600000
base = 0
mwf = datashade(hv.Curve(mfw[base:base+samples]), cmap=['black'])
mwf.opts(width=2000, height=600)
base = 0
samples = len(filtered_wave)
from scipy.signal import find_peaks
peaks, _ = find_peaks(-mfw[base:base+samples], distance=30000)
len(peaks), peaks
bit_starts3 = peaks[1:]
bit_starts3
deltas = []
good_deltas = []
good_bits = 0
for i in range(len(bit_starts3)-2):
delta = bit_starts3[i+1] - bit_starts3[i]
deltas.append(delta)
print(delta, end='')
if 32000 < delta < 32300:
good_bits += 1
good_deltas.append(delta)
print()
else:
print(' oops!')
good_bits
hv.Curve(good_deltas).opts(width=2000, height=900)
duration = int(np.average(good_deltas))
duration, np.average(good_deltas), max(good_deltas)-min(good_deltas)
bbstarts = []
for i in range(256):
bbstarts.append(42970 + i*32153)
"""
Explanation: p384 alignment method:
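The dip-finding idea (negate the smoothed trace so that minima become peaks, and use distance to suppress spurious nearby peaks) can be checked on a synthetic periodic signal (made-up period, not the real trace):

```python
import numpy as np
from scipy.signal import find_peaks

period = 100
x = 1.0 - np.cos(2 * np.pi * np.arange(1000) / period)  # dips every `period` samples

peaks, _ = find_peaks(-x, distance=period // 2)  # negate: dips become peaks
spacings = np.diff(peaks)

assert len(peaks) == 9                 # dips at 100, 200, ..., 900 (endpoints excluded)
assert np.all(spacings == period)
```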
End of explanation
"""
bit_starts = bit_starts3[:256]
#bit_starts = bbstarts
bits = []
bit_size = bit_starts[1] - bit_starts[0]
for start in bit_starts:
bits.append(filtered_wave[start:start+bit_size])
len(bits)
duration
# Can plot all the bits, but it's slow:
#numbits = len(bits)
#duration = 1000
duration = 32152
numbits = 4
import holoviews as hv
from holoviews.operation import decimate
from holoviews.operation.datashader import datashade, shade, dynspread
hv.extension('bokeh')
xrange = range(duration)
from operator import mul
from functools import reduce
curves = [hv.Curve(zip(xrange, filtered_wave[bit_starts[i]:bit_starts[i]+duration])) for i in range(numbits)]
#curves = [hv.Curve(zip(xrange, filtered_wave[bbstarts[i]:bbstarts[i]+duration])) for i in range(numbits)]
reduce(mul, curves).opts(width=2000, height=900)
"""
Explanation: Superimpose all the bits!
Plot overlayed bit traces to visualize alignment and guess at success of time extraction:
End of explanation
"""
import chipwhisperer.analyzer.preprocessing as preprocess
resync = preprocess.ResyncDTW()
import fastdtw as fastdtw
def align_traces(N, r, ref, trace, cython=True):
#try:
if cython:
# cython version can't take numpy.memmap inputs, so we convert them to arrays:
aref = np.array(list(ref))
atrace = np.array(list(trace))
dist, path = fastdtw.fastdtw(aref, atrace, radius=r, dist=None)
else:
dist, path = old_dtw(ref, trace, radius=r, dist=None)
#except:
# return None
px = [x for x, y in path]
py = [y for x, y in path]
n = [0] * N
s = [0.0] * N
for x, y in path:
s[x] += trace[y]
n[x] += 1
ret = [s[i] / n[i] for i in range(N)]
return ret
ref = bits[0]
target = filtered_wave[bit_starts[1]:bit_starts[1]+duration]
from tqdm.notebook import tnrange
realigns = [ref]
for b in tnrange(1,256):
target = bits[b]
realigns.append(np.asarray(align_traces(N=len(ref), r=3, ref=ref, trace=target)))
#numbits = len(bits)
numbits = 40
#curves = [hv.Curve(zip(xrange, realigns[i])) for i in range(numbits)]
curves = [hv.Curve(zip(xrange, realigns[i])) for i in range(128,160)]
reduce(mul, curves).opts(width=2000, height=900)
b0 = hv.Curve(ref)
b1 = hv.Curve(target)
re = hv.Curve(realigns[1])  # realigned trace for the target bit (note: 're' shadows the stdlib module)
#(b0 * b1 * re).opts(width=2000, height=900)
#(b0 * b1).opts(width=2000, height=900)
(b0 * re).opts(width=2000, height=900)
"""
Explanation: Now try resync:
End of explanation
"""
def contiguous_regions(condition):
"""Finds contiguous True regions of the boolean array "condition". Returns
a 2D array where the first column is the start index of the region and the
second column is the end index."""
    # Find the indices of changes in "condition"
d = np.diff(condition.astype(int))
idx, = d.nonzero()
# We need to start things after the change in "condition". Therefore,
# we'll shift the index by 1 to the right.
idx += 1
if condition[0]:
# If the start of condition is True prepend a 0
idx = np.r_[0, idx]
if condition[-1]:
# If the end of condition is True, append the length of the array
idx = np.r_[idx, condition.size] # Edit
# Reshape the result into two columns
idx.shape = (-1,2)
return idx
"""
Explanation: Original approach:
End of explanation
"""
# for 100M NOT streamed:
THRESHOLD = 0.015
MIN_RUN_LENGTH = 60 # default for the 128 1's / 128 0's
#MIN_RUN_LENGTH = 40
STOP=len(filtered_wave)
#STOP=360000
condition = np.abs(filtered_wave[:STOP]) < THRESHOLD
# Print the start and stop indices of each region where the absolute
# value of the wave stays below THRESHOLD
results = contiguous_regions(condition)
#print(len(results))
goods = results[np.where(results[:,1] - results[:,0] > MIN_RUN_LENGTH)]
print(len(goods))
# to help debug:
last_stop = 0
for g in goods:
start = g[0]
stop = g[1]
l = stop-start
delta = start - last_stop
if 13000 < delta < 18000:
stat = 'ok'
else:
stat = 'OOOOPS?!?'
print('%8d %8d %8d %8d %s' % (l, delta, start, stop, stat))
last_stop = stop
"""
Explanation: Find runs of samples below threshold value:
(keep only runs that are long enough)
End of explanation
"""
raw_starts = []
for i in range(1, len(goods), 2):
raw_starts.append(goods[i][1])
raw_starts[:12]
duration = raw_starts[1] - raw_starts[0]
print(duration)
"""
Explanation: Use these runs to guess at bit start times:
End of explanation
"""
wstart = 500
wend = 700
#wstart = 1550
#wend = 1620
base = np.argmax(filtered_wave[raw_starts[0]+wstart:raw_starts[0]+wend])
bit_starts = [raw_starts[0]]
for s in raw_starts[1:]:
loc = np.argmax(filtered_wave[s+wstart:s+wend])
offset = base-loc
#print(offset)
bit_starts.append(s + offset)
len(raw_starts), len(bit_starts)
for b in range(11):
delta = raw_starts[b+1] - raw_starts[b]
print(delta, end='')
if not 31000 < delta < 33000:
print(' Ooops!')
else:
print()
"""
Explanation: Now we make the bit start times more accurate by using the single isolated large peak that's about 650 samples in:
hmm, not sure if this actually improves the results...
End of explanation
"""
from bokeh.plotting import figure, show
from bokeh.resources import INLINE
from bokeh.io import output_notebook
output_notebook(INLINE)
samples = 120000
xrange = range(samples)
S = figure(width=2000, height=900)
S.line(xrange, filtered_wave[:samples], color='blue')
show(S)
#base = 45973
#base = 43257
base = 45067
#cycles = 32150 # full bit
#cycles = 32150//2 # half bit
cycles = 2000 # something short
#cycles = 80000 # *more* than one bit
refbit = filtered_wave[base:base+cycles]
from tqdm.notebook import tnrange
diffs = []
for i in tnrange(78000, 500000):
diffs.append(np.sum(abs(refbit - filtered_wave[i:i+len(refbit)])))
base + 31350
import holoviews as hv
from holoviews.operation import decimate
from holoviews.operation.datashader import datashade, shade, dynspread
hv.extension('bokeh')
datashade(hv.Curve(diffs)).opts(width=2000, height=900)
"""
Explanation: What if we use the SAD approach to find bits instead?
End of explanation
"""
duration
#starts = raw_starts
#starts = bit_starts
starts = bit_starts3[:256]
# f0c: 1111_0000_1111
avg_trace = np.zeros(duration)
avg_ones = np.zeros(duration)
avg_zeros = np.zeros(duration)
for i, start in enumerate(starts[:12]):
avg_trace += filtered_wave[start:start+duration]
#if i < 6:
if i < 4 or i > 7:
avg_ones += filtered_wave[start:start+duration]
#elif i < 12:
elif 3 < i < 8:
avg_zeros += filtered_wave[start:start+duration]
avg_trace /= 12 #len(bit_starts)
#avg_ones /= 6 #len(bit_starts)/2
#avg_zeros /= 6 #len(bit_starts)/2
avg_ones /= 8 #len(bit_starts)/2
avg_zeros /= 4 #len(bit_starts)/2
for b in range(10):
print(len(realigns[b]))
duration = 32151
avg_trace = np.zeros(duration)
avg_ones = np.zeros(duration)
avg_zeros = np.zeros(duration)
for i in range(256):
avg_trace += realigns[i]
if i < 128:
avg_ones += realigns[i]
else:
avg_zeros += realigns[i]
avg_trace /= 256
avg_ones /= 128
avg_zeros /= 128
# what if we don't realign?
duration = 32151
avg_trace = np.zeros(duration)
avg_ones = np.zeros(duration)
avg_zeros = np.zeros(duration)
for i in range(256):
avg_trace += bits[i]
if i < 128:
avg_ones += bits[i]
else:
avg_zeros += bits[i]
avg_trace /= 256
avg_ones /= 128
avg_zeros /= 128
import holoviews as hv
from holoviews.operation import decimate
from holoviews.operation.datashader import datashade, shade, dynspread
hv.extension('bokeh')
xrange = range(duration)
cavg_all = datashade(hv.Curve(avg_trace), cmap=['black'])
cavg_ones = datashade(hv.Curve(avg_ones), cmap=['blue'])
cavg_zeros = datashade(hv.Curve(avg_zeros), cmap=['green'])
cdiff = datashade(hv.Curve((avg_ones - avg_zeros)), cmap=['red'])
#(cavg_all * cavg_ones * cavg_zeros).opts(width=2000, height=900)
#(cdiff * cavg_all).opts(width=2000, height=600)
#(cavg_ones*cavg_zeros).opts(width=2000, height=600)
(cavg_zeros*cavg_ones).opts(width=2000, height=600)
(cdiff).opts(width=2000, height=600)
np.average(avg_ones), np.average(avg_zeros)
np.sum(abs(avg_ones)) / np.sum(abs(avg_zeros))
"""
Explanation: Average 'one' and 'zero'
End of explanation
"""
scores = []
#for b in bit_starts:
for b in raw_starts:
scores.append(np.sum(abs(filtered_wave[b:b+duration])))
cscores = hv.Curve(scores[:12])
(cscores).opts(width=2000, height=600)
"""
Explanation: attack using just the sum of the power trace segment:
End of explanation
"""
markers = np.where((avg_ones - avg_zeros) > 0.01)[0]
#markers = np.where(abs(avg_ones - avg_zeros) > 0.005)[0]
len(markers)
markers
scores = []
for b in starts:
score = 0
for marker in markers:
#score += abs(filtered_wave[b + marker])
score += filtered_wave[b + marker]
scores.append(score)
cscores = hv.Curve(scores)
(cscores).opts(width=2000, height=600)
scores = []
for b in range(256):
score = 0
for marker in markers:
score += abs(realigns[b][marker])
scores.append(score)
scores = []
for b in range(256):
score = 0
for marker in markers:
score += bits[b][marker]
scores.append(score)
scores = []
for b in range(256):
score = 0
for m in range(18000,19200):
score += abs(bits[b][m])
scores.append(score)
np.average(scores[:128]), np.average(scores[128:])
np.average(scores[:10])
np.average(scores[128:138])
scores[128:138]
max(scores), min(scores)
"""
Explanation: attack using markers:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/9bd293f49554a21d68d4f2a842cc6cc2/59_head_positions.ipynb | bsd-3-clause | # Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
from os import path as op
import mne
print(__doc__)
data_path = op.join(mne.datasets.testing.data_path(verbose=True), 'SSS')
fname_raw = op.join(data_path, 'test_move_anon_raw.fif')
raw = mne.io.read_raw_fif(fname_raw, allow_maxshield='yes').load_data()
raw.plot_psd()
"""
Explanation: Extracting and visualizing subject head movement
Continuous head movement can be encoded during MEG recordings by use of
HPI coils that continuously emit sinusoidal signals. These signals can then be
extracted from the recording and used to estimate head position as a function
of time. Here we show an example of how to do this, and how to visualize
the result.
HPI frequencies
First let's load a short bit of raw data where the subject intentionally moved
their head during the recording. Its power spectral density shows five peaks
(most clearly visible in the gradiometers) corresponding to the HPI coil
frequencies, plus other peaks related to power line interference (60 Hz and
harmonics).
End of explanation
"""
chpi_amplitudes = mne.chpi.compute_chpi_amplitudes(raw)
"""
Explanation: Estimating continuous head position
First, let's extract the HPI coil amplitudes as a function of time:
End of explanation
"""
chpi_locs = mne.chpi.compute_chpi_locs(raw.info, chpi_amplitudes)
"""
Explanation: Second, let's compute time-varying HPI coil locations from these:
End of explanation
"""
head_pos = mne.chpi.compute_head_pos(raw.info, chpi_locs, verbose=True)
"""
Explanation: Lastly, compute head positions from the coil locations:
End of explanation
"""
mne.viz.plot_head_positions(head_pos, mode='traces')
"""
Explanation: Note that these can then be written to disk or read from disk with
:func:mne.chpi.write_head_pos and :func:mne.chpi.read_head_pos,
respectively.
Visualizing continuous head position
We can plot as traces, which is especially useful for long recordings:
End of explanation
"""
mne.viz.plot_head_positions(head_pos, mode='field')
"""
Explanation: Or we can visualize them as a continuous field (with the vectors pointing
in the head-upward direction):
End of explanation
"""
|
brclark-usgs/flopy | examples/Notebooks/flopy3_gridgen.ipynb | bsd-3-clause | %matplotlib inline
import os
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
from flopy.utils.gridgen import Gridgen
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
"""
Explanation: FloPy Creating Layered Quadtree Grids with GRIDGEN
FloPy has a module that can be used to drive the GRIDGEN program. This notebook shows how it works.
The Flopy GRIDGEN module requires that the gridgen executable can be called using subprocess (i.e., gridgen is in your path).
End of explanation
"""
Lx = 100.
Ly = 100.
nlay = 2
nrow = 51
ncol = 51
delr = Lx / ncol
delc = Ly / nrow
h0 = 10
h1 = 5
top = h0
botm = np.zeros((nlay, nrow, ncol), dtype=np.float32)
botm[1, :, :] = -10.
ms = flopy.modflow.Modflow(rotation=-20.)
dis = flopy.modflow.ModflowDis(ms, nlay=nlay, nrow=nrow, ncol=ncol, delr=delr,
delc=delc, top=top, botm=botm)
"""
Explanation: Setup Base MODFLOW Grid
GRIDGEN works off of a base MODFLOW grid. The following information defines the base grid.
End of explanation
"""
model_ws = os.path.join('.', 'data')
g = Gridgen(dis, model_ws=model_ws)
"""
Explanation: Create the Gridgen Object
End of explanation
"""
# setup the active domain
adshp = os.path.join(model_ws, 'ad0')
adpoly = [[[(0, 0), (0, 60), (40, 80), (60, 0), (0, 0)]]]
# g.add_active_domain(adpoly, range(nlay))
"""
Explanation: Add an Optional Active Domain
Cells outside of the active domain will be clipped and not numbered as part of the final grid. If this step is not performed, then all cells will be included in the final grid.
End of explanation
"""
x = Lx * np.random.random(10)
y = Ly * np.random.random(10)
wells = list(zip(x, y))
g.add_refinement_features(wells, 'point', 3, range(nlay))
rf0shp = os.path.join(model_ws, 'rf0')
river = [[[(-20, 10), (60, 60)]]]
g.add_refinement_features(river, 'line', 3, range(nlay))
rf1shp = os.path.join(model_ws, 'rf1')
g.add_refinement_features(adpoly, 'polygon', 1, range(nlay))
rf2shp = os.path.join(model_ws, 'rf2')
"""
Explanation: Refine the Grid
End of explanation
"""
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
mm = flopy.plot.ModelMap(model=ms)
mm.plot_grid()
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', edgecolor='none')
flopy.plot.plot_shapefile(rf1shp, ax=ax, linewidth=10)
flopy.plot.plot_shapefile(rf0shp, ax=ax, facecolor='red', radius=1)
"""
Explanation: Plot the Gridgen Input
End of explanation
"""
g.build(verbose=False)
"""
Explanation: Build the Grid
End of explanation
"""
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
g.plot(ax, linewidth=0.5)
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', edgecolor='none', alpha=0.2)
flopy.plot.plot_shapefile(rf1shp, ax=ax, linewidth=10, alpha=0.2)
flopy.plot.plot_shapefile(rf0shp, ax=ax, facecolor='red', radius=1, alpha=0.2)
"""
Explanation: Plot the Grid
End of explanation
"""
mu = flopy.modflow.Modflow(model_ws=model_ws, modelname='mfusg')
disu = g.get_disu(mu)
disu.write_file()
# print(disu)
"""
Explanation: Create a Flopy ModflowDisu Object
End of explanation
"""
adpoly_intersect = g.intersect(adpoly, 'polygon', 0)
print(adpoly_intersect.dtype.names)
print(adpoly_intersect)
print(adpoly_intersect.nodenumber)
well_intersect = g.intersect(wells, 'point', 0)
print(well_intersect.dtype.names)
print(well_intersect)
print(well_intersect.nodenumber)
river_intersect = g.intersect(river, 'line', 0)
print(river_intersect.dtype.names)
# print(river_intersect)
# print(river_intersect.nodenumber)
"""
Explanation: Intersect Features with the Grid
End of explanation
"""
a = np.zeros((g.nodes), dtype=np.int)
a[adpoly_intersect.nodenumber] = 1
a[well_intersect.nodenumber] = 2
a[river_intersect.nodenumber] = 3
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
g.plot(ax, a=a, masked_values=[0], edgecolor='none', cmap='jet')
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', alpha=0.25)
"""
Explanation: Plot Intersected Features
End of explanation
"""
|
sudhanshuptl/Machine-Learning | Data Analysis learning/Data_Analysis_2(Numpy Pandas).ipynb | gpl-2.0 | import numpy as np
"""
Explanation: Basics of Numpy & Pandas
Numpy
Numpy provides arrays, whereas Pandas provides Series built on top of them <br />
End of explanation
"""
num = np.array([3,4,2,5,7,23,56,23,7,23,89,43,676,43])
num
"""
Explanation: Arrays are similar to Python lists, but all elements must be of the same data type, and they are faster than lists
End of explanation
"""
print "Mean :",num.mean()
print "sum :",num.sum()
print "max :",num.max()
print "std :",num.std()
#slicing
num[:5]
#find index of any element let say max
print "index of max :",num.argmax()
print "data Type of array :",num.dtype
"""
Explanation: Let's see some of the functionality
End of explanation
"""
a=np.array([5,6,15])
b=np.array([5,4,-5])
# Addition
print "{} + {} = {}".format(a,b,a+b)
print "{} * {} = {}".format(a,b,a*b)
print "{} / {} = {}".format(a,b,a/b)
# If the sizes mismatch, an error occurs
b=np.array([5,4,-5,5])
print "{} + {} = {}".format(a,b,a+b)
"""
Explanation: Vector Operation
End of explanation
"""
print "{} + {} = {}".format(a,3,a+3)
print "{} * {} = {}".format(a,3,a*3)
print "{} / {} = {}".format(a,3,a/3)
"""
Explanation: vector [+-*/] scalar
End of explanation
"""
num=np.array([5,6,15,65,32,656,23,435,2,45,21])
bl=np.array([False,True,True,False,True,False,True,False,True,True,False])
num[6]
"""
Explanation: vector & boolean vector
End of explanation
"""
num[bl]
"""
Explanation: num[bl]: what will it return?
<h4>It returns an array of the values for which the corresponding element of bl is True
End of explanation
"""
num[num>100]
"""
Explanation: Find all elements greater than 100 in num
End of explanation
"""
num[num<50]
"""
Explanation: <h5> All elements less than 50?
End of explanation
"""
a=np.array([5,6,15])
b=a
a += 2
print b
print "this happen becouse a and b both point to same array and += is In-place operation so it maintain that"
a=np.array([5,6,15])
b=a
a = a + 2
print b
"""
Explanation: In-place operations in numpy (difference between += and +)
End of explanation
"""
a=np.array([5,6,15])
b=a[:3]
b[0]=1000
print a,"Reason is similar as +="
"""
Explanation: <h5>This happens because a and b both initially point to the same array; the + operation creates a new array, a is rebound to it, and b remains unaffected</h5>
End of explanation
"""
import pandas as pd
num = pd.Series([3,4,2,5,7,23,56,23,7,23,89,43,676,43])
num
"""
Explanation: Pandas Series
<h4> The basics are the same as a numpy array, but a pandas Series also provides a lot of additional functionality
End of explanation
"""
num.describe()
"""
Explanation: <h6>See All basic results using describe() function
End of explanation
"""
|
jrrembert/cybernetic-organism | dato/recommendations/Analyzing product sentiment.ipynb | gpl-2.0 | import graphlab
"""
Explanation: Predicting sentiment from product reviews
Fire up GraphLab Create
End of explanation
"""
products = graphlab.SFrame('amazon_baby.gl/')
"""
Explanation: Read some product review data
Loading reviews for a set of baby products.
End of explanation
"""
products.head()
"""
Explanation: Let's explore this data together
Data includes the product name, the review text and the rating of the review.
End of explanation
"""
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
graphlab.canvas.set_target('ipynb')
products['name'].show()
"""
Explanation: Build the word count vector for each review
End of explanation
"""
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].show(view='Categorical')
"""
Explanation: Examining the reviews for most-sold product: 'Vulli Sophie the Giraffe Teether'
End of explanation
"""
products['rating'].show(view='Categorical')
"""
Explanation: Build a sentiment classifier
End of explanation
"""
#ignore all 3* reviews
products = products[products['rating'] != 3]
#positive sentiment = 4* or 5* reviews
products['sentiment'] = products['rating'] >=4
products.head()
"""
Explanation: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
End of explanation
"""
train_data,test_data = products.random_split(.8, seed=0)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
"""
Explanation: Let's train the sentiment classifier
End of explanation
"""
sentiment_model.evaluate(test_data, metric='roc_curve')
sentiment_model.show(view='Evaluation')
"""
Explanation: Evaluate the sentiment model
End of explanation
"""
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
"""
Explanation: Applying the learned model to understand sentiment for Giraffe
End of explanation
"""
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews.head()
"""
Explanation: Sort the reviews based on the predicted sentiment and explore
End of explanation
"""
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
"""
Explanation: Most positive reviews for the giraffe
End of explanation
"""
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
def make_word_counter(word):
    def word_count_fn(word_counts):
        # occurrences of `word` in this review's word-count dict
        return word_counts.get(word, 0)
    return word_count_fn
for word in selected_words:
    products[word] = products['word_count'].apply(make_word_counter(word))
word_dict = {}
for word in selected_words:
    word_dict[word] = products[word].sum()
train_data, test_data = products.random_split(.8, seed=0)
selected_words_model = graphlab.logistic_classifier.create(train_data,
                                                     target='sentiment',
                                                     features=selected_words,
validation_set=test_data)
selected_words_model['coefficients']
swm_coefficients = selected_words_model['coefficients']
swm_coefficients.sort('value')
selected_words_model.evaluate(test_data, metric='roc_curve')
baby_products = products[products['name'] == 'Baby Trend Diaper Champ']
baby_products['predicted_sentiment'] = selected_words_model.predict(baby_products, output_type='probability')
baby_products = baby_products.sort('predicted_sentiment', ascending=False)
baby_products.head()
baby_products['review'][0]
baby_products['predicted_sentiment'] = sentiment_model.predict(baby_products, output_type='probability')
baby_products = baby_products.sort('predicted_sentiment', ascending=False)
baby_products['review'][0]
baby_products.head()
"""
Explanation: Show most negative reviews for giraffe
End of explanation
"""
|
AbhilashReddyM/GeometricMultigrid | notebooks/Making_a_Preconditioner-vectorized.ipynb | mit | import numpy as np
"""
Explanation: This is functionally similar to the other notebook. All the operations here have been vectorized. This results in much faster code, but it is also much less readable. The vectorization also necessitated replacing the Gauss-Seidel smoother with under-relaxed Jacobi. That change has some effect, since Gauss-Seidel converges roughly twice as fast as Jacobi.
The Making of a Preconditioner ---Vectorized Version
This is a demonstration of a multigrid-preconditioned Krylov solver in Python 3. The code and more examples are present on github here. The problem solved is a Poisson equation on a rectangular domain with homogeneous Dirichlet boundary conditions. Finite difference with cell-centered discretization is used to get a second-order accurate solution, which is further improved to 4th order using deferred correction.
The first step is a multigrid algorithm. This is the simplest 2D geometric multigrid solver.
1. Multigrid algorithm
We need some terminology before going further.
- Approximation: the current estimate of the solution
- Residual: the amount by which the approximation fails to satisfy the discrete equation, r = f - Au
- Exact solution (of the discrete problem)
- Correction: the quantity added to the approximation to bring it closer to the exact solution
This is a geometric multigrid algorithm, where a series of nested grids are used. There are four parts to a multigrid algorithm
- Smoothing Operator (a.k.a Relaxation)
- Restriction Operator
- Interpolation Operator (a.k.a Prolongation Operator)
- Bottom solver
We will define each of these in sequence. These operators act on different quantities that are stored at the cell center. We will get to exactly what later on. To begin, import numpy.
End of explanation
"""
def Jacrelax(nx,ny,u,f,iters=1):
'''
under-relaxed Jacobi iteration
'''
dx=1.0/nx; dy=1.0/ny
Ax=1.0/dx**2; Ay=1.0/dy**2
Ap=1.0/(2.0*(Ax+Ay))
#Dirichlet BC
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
for it in range(iters):
u[1:nx+1,1:ny+1] = 0.8*Ap*(Ax*(u[2:nx+2,1:ny+1] + u[0:nx,1:ny+1])
+ Ay*(u[1:nx+1,2:ny+2] + u[1:nx+1,0:ny])
- f[1:nx+1,1:ny+1])+0.2*u[1:nx+1,1:ny+1]
#Dirichlet BC
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
res=np.zeros([nx+2,ny+2])
res[1:nx+1,1:ny+1]=f[1:nx+1,1:ny+1]-(( Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])
+ Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])
- 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1]))
return u,res
"""
Explanation: 1.1 Smoothing operator
This can be a certain number of Jacobi or Gauss-Seidel iterations. Below, a smoother is defined that performs under-relaxed Jacobi sweeps and returns the result along with the residual.
End of explanation
"""
def prolong(nx,ny,v):
'''
interpolate 'v' to the fine grid
'''
v_f=np.zeros([2*nx+2,2*ny+2])
v_f[1:2*nx:2 ,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[0:nx ,0:ny ]
v_f[2:2*nx+1:2,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[2:nx+2,0:ny ]
v_f[1:2*nx:2 ,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[0:nx ,2:ny+2]
v_f[2:2*nx+1:2,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[2:nx+2,2:ny+2]
return v_f
"""
Explanation: 1.2 Interpolation Operator
This operator takes values on a coarse grid and transfers them onto a fine grid. It is also called prolongation. The function below uses bilinear interpolation for this purpose. 'v' is on a coarse grid and we want to interpolate it on a fine grid and store it in v_f.
End of explanation
"""
def restrict(nx,ny,v):
'''
restrict 'v' to the coarser grid
'''
v_c=np.zeros([nx+2,ny+2])
v_c[1:nx+1,1:ny+1]=0.25*(v[1:2*nx:2,1:2*ny:2]+v[1:2*nx:2,2:2*ny+1:2]+v[2:2*nx+1:2,1:2*ny:2]+v[2:2*nx+1:2,2:2*ny+1:2])
return v_c
"""
Explanation: 1.3 Restriction
This is exactly the opposite of the interpolation. It takes values from the fine grid and transfers them onto the coarse grid. It is kind of an averaging process. This is fundamentally different from interpolation. Each coarse grid point is surrounded by four fine grid points. So quite simply we take the value of the coarse point to be the average of 4 fine points. Here 'v' is the fine grid quantity and 'v_c' is the coarse grid quantity
End of explanation
"""
def V_cycle(nx,ny,num_levels,u,f,level=1):
if(level==num_levels):#bottom solve
u,res=Jacrelax(nx,ny,u,f,iters=50)
return u,res
#Step 1: Relax Au=f on this grid
u,res=Jacrelax(nx,ny,u,f,iters=1)
#Step 2: Restrict residual to coarse grid
res_c=restrict(nx//2,ny//2,res)
#Step 3:Solve A e_c=res_c on the coarse grid. (Recursively)
e_c=np.zeros_like(res_c)
e_c,res_c=V_cycle(nx//2,ny//2,num_levels,e_c,res_c,level+1)
#Step 4: Interpolate(prolong) e_c to fine grid and add to u
u+=prolong(nx//2,ny//2,e_c)
#Step 5: Relax Au=f on this grid
u,res=Jacrelax(nx,ny,u,f,iters=1)
return u,res
"""
Explanation: 1.4 Bottom Solver
Note that we have looped over the coarse grid in both the cases above. It is easier to access the variables this way. The last part is the Bottom Solver. This must be something that gives us the exact/converged solution to whatever we feed it. What we feed to the bottom solver is the problem at the coarsest level. This generally has very few points (e.g. 2x2=4 in our case) and can be solved exactly by the smoother itself with a few iterations. That is what we do here, but any other direct method can also be used. 50 iterations are used here. If we coarsify down to just one point, then a single iteration will solve it exactly.
1.5 V-cycle
Now that we have all the parts, we are ready to build our multigrid algorithm. First we will look at a V-cycle. It is self explanatory. It is a recursive function, i.e., it calls itself. It takes as input an initial guess 'u', the rhs 'f', and the number of multigrid levels 'num_levels', among other things. At each level the V-cycle calls another V-cycle. At the lowest level the solve is exact.
End of explanation
"""
#analytical solution
def Uann(x,y):
return (x**3-x)*(y**3-y)
#RHS corresponding to above
def source(x,y):
return 6*x*y*(x**2+ y**2 - 2)
"""
Explanation: That's it! Now we can see it in action. We can use a problem with a known solution to test our code. The following functions set up an RHS for a problem with homogeneous Dirichlet BC on the unit square.
End of explanation
"""
#input
max_cycles = 30
nlevels = 6
NX = 2*2**(nlevels-1)
NY = 2*2**(nlevels-1)
tol = 1e-12
#the grid has one layer of ghost cellss
uann=np.zeros([NX+2,NY+2])#analytical solution
u =np.zeros([NX+2,NY+2])#approximation
f =np.zeros([NX+2,NY+2])#RHS
#calcualte the RHS and exact solution
DX=1.0/NX
DY=1.0/NY
xc=np.linspace(0.5*DX,1-0.5*DX,NX)
yc=np.linspace(0.5*DY,1-0.5*DY,NY)
XX,YY=np.meshgrid(xc,yc,indexing='ij')
uann[1:NX+1,1:NY+1]=Uann(XX,YY)
f[1:NX+1,1:NY+1] =source(XX,YY)
"""
Explanation: Let us set up the problem, discretization and solver details. The number of divisions along each dimension is given as a power of two based on the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy.
The coarsest problem is going to have a 2-by-2 grid.
End of explanation
"""
print('mgd2d.py solver:')
print('NX:',NX,', NY:',NY,', tol:',tol,'levels: ',nlevels)
for it in range(1,max_cycles+1):
u,res=V_cycle(NX,NY,nlevels,u,f)
rtol=np.max(np.max(np.abs(res)))
if(rtol<tol):
break
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print(' cycle: ',it,', L_inf(res.)= ',rtol,',L_inf(true error): ',np.max(np.max(np.abs(error))))
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))
"""
Explanation: Now we can call the solver
End of explanation
"""
def FMG(nx,ny,num_levels,f,nv=1,level=1):
if(level==num_levels):#bottom solve
u=np.zeros([nx+2,ny+2])
u,res=Jacrelax(nx,ny,u,f,iters=50)
return u,res
#Step 1: Restrict the rhs to a coarse grid
f_c=restrict(nx//2,ny//2,f)
#Step 2: Solve the coarse grid problem using FMG
u_c,_=FMG(nx//2,ny//2,num_levels,f_c,nv,level+1)
#Step 3: Interpolate u_c to the fine grid
u=prolong(nx//2,ny//2,u_c)
#step 4: Execute 'nv' V-cycles
for _ in range(nv):
u,res=V_cycle(nx,ny,num_levels-level,u,f)
return u,res
"""
Explanation: True error is the difference of the approximation with the analytical solution. It is largely the discretization error. This what would be present when we solve the discrete equation with a direct/exact method like gaussian elimination. We see that true error stops reducing at the 5th cycle. The approximation is not getting any better after this point. So we can stop after 5 cycles. But, in general we dont know the true error. In practice we use the norm of the (relative) residual as a stopping criterion. As the cycles progress the floating point round-off error limit is reached and the residual also stops decreasing.
This was the multigrid V cycle. We can use this as preconditioner to a Krylov solver. But before we get to that let's complete the multigrid introduction by looking at the Full Multi-Grid algorithm. You can skip this section safely.
1.6 Full Multi-Grid
We started with a zero initial guess for the V-cycle. Presumably, if we had a better initial guess we would get better results. So we solve a coarse problem exactly and interpolate it onto the fine grid and use that as the initial guess for the V-cycle. The result of doing this recursively is the Full Multi-Grid(FMG) Algorithm. Unlike the V-cycle which was an iterative procedure, FMG is a direct solver. There is no successive improvement of the approximation. It straight away gives us an approximation that is within the discretization error. The FMG algorithm is given below.
End of explanation
"""
print('mgd2d.py FMG solver:')
print('NX:',NX,', NY:',NY,', levels: ',nlevels)
u,res=FMG(NX,NY,nlevels,f,nv=1)
rtol=np.max(np.max(np.abs(res)))
print(' FMG L_inf(res.)= ',rtol)
error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1]
print('L_inf (true error): ',np.max(np.max(np.abs(error))))
"""
Explanation: Let's call the FMG solver for the same problem
End of explanation
"""
from scipy.sparse.linalg import LinearOperator,bicgstab,cg
def MGVP(nx,ny,num_levels):
'''
Multigrid Preconditioner. Returns a (scipy.sparse.linalg.) LinearOperator that can
be passed to Krylov solvers as a preconditioner.
'''
def pc_fn(v):
u =np.zeros([nx+2,ny+2])
f =np.zeros([nx+2,ny+2])
        f[1:nx+1,1:ny+1] =v.reshape([nx,ny]) #in practice this copying can be avoided
#perform one V cycle
u,res=V_cycle(nx,ny,num_levels,u,f)
return u[1:nx+1,1:ny+1].reshape(v.shape)
M=LinearOperator((nx*ny,nx*ny), matvec=pc_fn)
return M
"""
Explanation: It works wonderfully. The residual is large but the true error is within the discretization level. FMG is said to be scalable because the amount of work needed is linearly proportional to the the size of the problem. In big-O notation, FMG is $\mathcal{O}(N)$. Where N is the number of unknowns. Exact methods (Gaussian Elimination, LU decomposition ) are typically $\mathcal{O}(N^3)$
2. Stationary iterative methods as preconditioners
A preconditioner reduces the condition number of the coefficient matrix, thereby making the system easier to solve. We don't explicitly need a matrix, because we never access elements by index, neither for the coefficient matrix nor for the preconditioner. What we do need is the action of the matrix on a vector; that is, we need only the matrix-vector product. The coefficient matrix can thus be defined as a function that takes in a vector and returns the matrix-vector product.
Any stationary method has an iteration matrix associated with it. This is easily seen for the Jacobi or Gauss-Seidel (GS) methods. This iteration matrix can be used as a preconditioner, but we don't explicitly need it. A stationary iterative method for solving an equation can be written as a Richardson iteration. When the initial guess is set to zero and one iteration is performed, what you get is the action of the preconditioner on the RHS vector. That is, we get a preconditioner-vector product, which is what we want.
This allows us to use any black-box stationary iterative method as a preconditioner.
To repeat: if there is a stationary iterative method that you want to use as a preconditioner, set the initial guess to zero, set the RHS to the vector you want to multiply the preconditioner with, and perform one iteration of the stationary method.
We can use the multigrid V-cycle as a preconditioner this way. We can't use FMG because it is not an iterative method.
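To make the "one iteration from a zero guess" idea concrete, here is a minimal NumPy sketch (illustrative only, not part of mgd2d.py) showing that one plain Jacobi sweep started from a zero initial guess is exactly the action of the Jacobi preconditioner, $D^{-1}b$:

```python
import numpy as np

def jacobi_precondition(A, b):
    """One Jacobi iteration from a zero initial guess = M^{-1} b with M = D."""
    D = np.diag(A)
    x = np.zeros_like(b)
    # x_new = D^{-1} (b - (A - diag(D)) x); with x = 0 this reduces to D^{-1} b
    return (b - (A - np.diag(D)) @ x) / D

# small 1D Poisson matrix as a test case
n = 5
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.arange(1.0, n + 1)

print(np.allclose(jacobi_precondition(A, b), b / np.diag(A)))  # True
```

The same recipe with the V-cycle in place of the Jacobi sweep is exactly what the MGVP function above does.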
The matrix as a function can be defined using LinearOperator from scipy.sparse.linalg. It gives us an object which works like a matrix in so far as the product with a vector is concerned: it can be used like a regular 2D numpy array in multiplication with a vector. This can be passed to cg(), gmres() or bicgstab() as a preconditioner.
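Before wiring up the V-cycle, a toy sketch of LinearOperator itself (illustrative only): wrapping a plain function makes it act like a matrix in matrix-vector products:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

# a toy operator that doubles its input, defined purely by its action on a vector
M = LinearOperator((3, 3), matvec=lambda v: 2.0 * v)

v = np.array([1.0, 2.0, 3.0])
print(M * v)        # [2. 4. 6.]
print(M.matvec(v))  # [2. 4. 6.]
```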
Having a symmetric preconditioner would be nice because it will retain the symmetry if the original problem is symmetric and we can still use CG. If the preconditioner is not symmetric CG will not converge, and we would have to use a more general solver.
Below is the code for defining a V-Cycle preconditioner. The default is one V-cycle. In the V-cycle, the defaults are one pre-sweep, one post-sweep.
End of explanation
"""
def Laplace(nx,ny):
'''
Action of the Laplace matrix on a vector v
'''
def mv(v):
u =np.zeros([nx+2,ny+2])
u[1:nx+1,1:ny+1]=v.reshape([nx,ny])
dx=1.0/nx; dy=1.0/ny
Ax=1.0/dx**2; Ay=1.0/dy**2
#BCs. Needs to be generalized!
u[ 0,:] = -u[ 1,:]
u[-1,:] = -u[-2,:]
u[:, 0] = -u[:, 1]
u[:,-1] = -u[:,-2]
ut = (Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1])
+ Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny])
- 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1])
return ut.reshape(v.shape)
A = LinearOperator((nx*ny,nx*ny), matvec=mv)
return A
"""
Explanation: Let us define the Poisson matrix also as a LinearOperator
End of explanation
"""
def solve_sparse(solver,A, b,tol=1e-10,maxiter=500,M=None):
num_iters = 0
def callback(xk):
nonlocal num_iters
num_iters+=1
x,status=solver(A, b,tol=tol,maxiter=maxiter,callback=callback,M=M)
return x,status,num_iters
"""
Explanation: The nested function is required because "matvec" in LinearOperator takes only one argument: the vector. But we require the grid details and boundary condition information to create the Poisson matrix. Now we will use these to solve a problem. Unlike earlier, where we used an analytical solution and RHS, we will start with a random vector which will be our exact solution, and multiply it with the Poisson matrix to get the RHS vector for the problem. There is no analytical equation associated with the matrix equation.
The scipy sparse solve routines do not return the number of iterations performed. We can use this wrapper to get the number of iterations
End of explanation
"""
A = Laplace(NX,NY)
#Exact solution and RHS
uex=np.random.rand(NX*NY,1)
b=A*uex
#Multigrid Preconditioner
M=MGVP(NX,NY,nlevels)
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex.reshape([NX,NY])-u.reshape([NX,NY])
print('error :',np.max(np.abs(error)))
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex.reshape([NX,NY])-u.reshape([NX,NY])
print('error :',np.max(np.abs(error)))
"""
Explanation: Let's look at what happens with and without the preconditioner.
End of explanation
"""
u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex.reshape([NX,NY])-u.reshape([NX,NY])
print('error :',np.max(np.abs(error)))
u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex.reshape([NX,NY])-u.reshape([NX,NY])
print('error :',np.max(np.abs(error)))
"""
Explanation: Without the preconditioner ~150 iterations were needed, whereas with the V-cycle preconditioner the solution was obtained in far fewer iterations. Let's try with CG:
End of explanation
"""
|
walkon302/CDIPS_Recommender | notebooks/Create_Datasets_for_Evaluation.ipynb | apache-2.0 | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# get data
user_profile = pd.read_csv('../data_user_view_buy/user_profile.csv',sep='\t',header=None)
user_profile.columns = ['user_id','buy_spu','buy_sn','buy_ct3','view_spu','view_sn','view_ct3','time_interval','view_cnt','view_seconds']
user_profile.head()
spu_fea = pd.read_pickle("../data_nn_features/spu_fea.pkl") #takes forever to load
spu_fea.head()
spu_fea = spu_fea.reset_index()
"""
Explanation: Save out dataset for Evaluation: Dataset Eval 1
this saves out a smaller dataset to compare different recommendation algorithms on
it removes rows with viewed items that do not have features
it removes items viewed less than 20 minutes before buying
it then removes users with <5 viewed items before buying.
Versions of Dataset:
- v1: starting point
- v2: removing rows for second items bought by user - I only want one trajectory per user so that I don't mess things up later (calculating similarity etc).
End of explanation
"""
spu_fea['view_spu']=spu_fea['spu_id']
user_profile_w_features = user_profile.merge(spu_fea,on='view_spu',how='left')
print('before merge nrows: {0}'.format(len(user_profile)))
print('after merge nrows: {0}'.format(len(user_profile_w_features)))
user_profile_w_features.head(20)
# takes too long
# user_profile_w_features.to_csv('../../data_user_view_buy/user_profile_items_with_features.csv') # I think this takes to long to save.
"""
Explanation: Merge user data with feature data
End of explanation
"""
len(user_profile_w_features)
user_profile_w_features_nonnull = user_profile_w_features.loc[~user_profile_w_features.features.isnull(),]
len(user_profile_w_features_nonnull)
"""
Explanation: Eliminate Rows with viewed items that don't have features
this may break up some trajectories (view1,view2,view3-removed, view4,buy).
End of explanation
"""
spus_with_features =user_profile_w_features_nonnull.spu_id.unique() #
user_profile_w_features_nonnull = user_profile_w_features_nonnull[user_profile_w_features_nonnull['buy_spu'].isin(spus_with_features)]
len(user_profile_w_features_nonnull)
"""
Explanation: Eliminate Rows with bought items that don't have features
this will eliminate whole trajectories (view1,view2,view3,buy), because each of these rows is labeled with the buy id
End of explanation
"""
# remove rows <20 minutes before
user_profile_w_features_nonnull_20 = user_profile_w_features_nonnull.loc[(user_profile_w_features_nonnull.time_interval/60.0)>20.0]
len(user_profile_w_features_nonnull_20)
"""
Explanation: Eliminate Rows <20 minutes before buy
End of explanation
"""
view_counts_per_user = user_profile_w_features_nonnull_20[['user_id','view_spu']].groupby(['user_id']).agg(['count'])
view_counts_per_user.head()
user_profile_w_features_nonnull_20_5 = user_profile_w_features_nonnull_20.join(view_counts_per_user, on='user_id', rsuffix='_r')
columns = user_profile_w_features_nonnull_20_5.columns.values
columns[-1]='view_spu_count'
user_profile_w_features_nonnull_20_5.columns=columns
user_profile_w_features_nonnull_20_5.head()
user_profile_w_features_nonnull_20_5 = user_profile_w_features_nonnull_20_5.loc[user_profile_w_features_nonnull_20_5.view_spu_count>5,]
len(user_profile_w_features_nonnull_20_5)
"""
Explanation: Eliminate Users with <5 previously viewed items
End of explanation
"""
user_profile_w_features_nonnull_20_5.user_id.unique()
# (super slow way of doing it)
user_profile_w_features_nonnull_20_5['drop']=0
for user_id in user_profile_w_features_nonnull_20_5.user_id.unique():
# get bought items per user
buy_spus = user_profile_w_features_nonnull_20_5.loc[user_profile_w_features_nonnull_20_5.user_id==user_id,'buy_spu'].unique()
# eliminate second, third .. purchases
if len(buy_spus)>1:
for buy_spu in buy_spus[1::]:
user_profile_w_features_nonnull_20_5.loc[(user_profile_w_features_nonnull_20_5.user_id==user_id)&(user_profile_w_features_nonnull_20_5.buy_spu==buy_spu),'drop']=1
print(len(user_profile_w_features_nonnull_20_5))
user_profile_w_features_nonnull_20_5 = user_profile_w_features_nonnull_20_5.loc[user_profile_w_features_nonnull_20_5['drop']!=1]
print(len(user_profile_w_features_nonnull_20_5))
"""
Explanation: Only use First Buy per User
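The loop above works but iterates over users in pure Python. As a sketch (toy data, same column names as this dataset), the same filter can be done vectorized with a groupby, keeping only rows whose buy_spu equals the user's first bought item:

```python
import pandas as pd

# toy frame with the two columns the filter needs
df = pd.DataFrame({
    'user_id': [1, 1, 1, 2, 2],
    'buy_spu': [10, 10, 20, 30, 30],
})

# first bought item per user, broadcast back to every row
first_buy = df.groupby('user_id')['buy_spu'].transform('first')
kept = df[df['buy_spu'] == first_buy]
print(kept['buy_spu'].tolist())  # [10, 10, 30, 30]
```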
End of explanation
"""
user_profile_w_features_nonnull_20_5_nofeatures = user_profile_w_features_nonnull_20_5.drop('features',axis=1)
"""
Explanation: Remove Features from DF before Saving
End of explanation
"""
user_profile_w_features_nonnull_20_5_nofeatures.to_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2.pkl')
"""
Explanation: Save Out
End of explanation
"""
# sample 1000 users
np.random.seed(1000)
users_sample = np.random.choice(user_profile_w_features_nonnull_20_5_nofeatures.user_id.unique(),size=1000)
print(users_sample[0:10])
user_profile_sample = user_profile_w_features_nonnull_20_5_nofeatures.loc[user_profile_w_features_nonnull_20_5_nofeatures.user_id.isin(users_sample),]
print(len(user_profile_sample))
print(len(user_profile_sample.user_id.unique()))
user_profile_sample.to_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2_sample1000.pkl')
"""
Explanation: Sub-Sample (save out v1000)
End of explanation
"""
intersection_of_spus = set(list(user_profile_sample.view_spu.unique())+list(user_profile_sample.buy_spu.unique()))
spu_fea_sample = spu_fea.loc[spu_fea['spu_id'].isin(list(intersection_of_spus))]
len(spu_fea)
len(spu_fea_sample)
spu_fea_sample.to_pickle('../data_nn_features/spu_fea_sample1000.pkl')
"""
Explanation: Create Smaller spu_fea for subsample
End of explanation
"""
|
dereneaton/ipyrad | tests/cookbook-quartet-species-tree.ipynb | gpl-3.0 | ## conda install ipyrad -c ipyrad
## conda install toytree -c eaton-lab
import ipyrad.analysis as ipa
import ipyparallel as ipp
import toytree
"""
Explanation: Inferring species trees with tetrad
When you install ipyrad a number of analysis tools are installed as well. This includes the program tetrad, which applies the theory of phylogenetic invariants (see Lake 1987) to infer quartet trees based on a SNP alignment. It then uses the software wQMC to join the quartets into a species tree. This combined approach was first developed by Chifman and Kubatko (2015) in the software SVDQuartets.
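For intuition about the quartet decomposition mentioned above: tetrad infers a tree for every 4-taxon subset (or a sample of them for large datasets), and the number of such quartets grows as n choose 4. A standard-library sketch (toy taxon names) enumerating them:

```python
from itertools import combinations
from math import comb

taxa = ["a", "b", "c", "d", "e", "f"]
quartets = list(combinations(taxa, 4))
print(len(quartets), comb(len(taxa), 4))  # 15 15
```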
Required software
End of explanation
"""
## connect to a cluster
ipyclient = ipp.Client()
print("connected to {} cores".format(len(ipyclient)))
"""
Explanation: Connect to a cluster
End of explanation
"""
## initiate a tetrad object
tet = ipa.tetrad(
name="pedic-full",
seqfile="analysis-ipyrad/pedic-full_outfiles/pedic-full.snps.phy",
mapfile="analysis-ipyrad/pedic-full_outfiles/pedic-full.snps.map",
nboots=100,
)
## run tetrad on the cluster
tet.run(ipyclient=ipyclient)
"""
Explanation: Run tetrad
End of explanation
"""
## plot the resulting unrooted tree
import toytree
tre = toytree.tree(tet.trees.nhx)
canvas, axes = tre.draw(
    width=350,
    node_labels=tre.get_node_values("support"),
)
## save the tree as a pdf
import toyplot.pdf
toyplot.pdf.render(canvas, "analysis-tetrad/tetrad-tree.pdf")
"""
Explanation: Plot the tree
End of explanation
"""
|
kimkipyo/dss_git_kkp | Python 복습/05일차.화_디버깅,예외,예외처리,우분투,숙제_하노이의탑/5일차_디버깅,예외,예외처리,우분투.ipynb | mit | for i in range(3):
a = i * 7 #0, 7, 14
b = i + 2 #2, 3, 4
c = a * b # 0, 21, 56
    #if the range were, say, 3017 or 5033, we couldn't tell what a, b, c end up as; here is an easy way to find out
"""
Explanation: 1T_Debugging, Errors, and Exception Handling
End of explanation
"""
name = "KiPyo Kim"
age = 29
from IPython import embed
embed()
for i in range(3):
a = i * 7
b = i + 2
c = a * b
embed()
from IPython import embed; embed() #when debugging, this one-liner is usually bound to a shortcut and used a lot
for i in range(100):
from IPython import embed; embed()
    #this way you fall into an infinite loop of embeds; the only way out is to kill the kernel...
"""
Explanation: Debugging? de+bug+ging => catching bugs
Jupyter is a Python interpreter; Jupyter Notebook is that interpreter adapted so it can be used in a web environment.
Its former name was IPython.
In the plain environment (typing python at cmd) the Tab key does not work, so it is a hard environment to write code in.
Jupyter Notebook => multi-user => JupyterHub. The multi-user mode of Jupyter Notebook is called JupyterHub, an environment several people can use together.
Debugging is a feature built into IPython.
End of explanation
"""
def fibo(n):
if n <= 0:
return 0
if n == 1:
return 1
embed()
return fibo(n-1) + fibo(n-2)
fibo(5)
class Student():
def __init__(self, name, age):
self.name = name
self.age = age
def introduce(self):
from IPython import embed; embed()
return (self.name, self.age)
kkp = Student("kipoy", 29)
kkp.introduce()
def fibo(n):
if n <= 0:
return 0
if n == 1:
return 1
print("fibo({n}) = fibo({n_1}) + fibo({n-2})".format(n=n, n_1=n-1, n_2=n-2))
return fibo(n-1) + fibo(n-2)
fibo(3)
fibo(10)
"""
Explanation: If you force-stop from outside (e.g. dd on the cell) instead of leaving the inner shell with exit(),
the embed keeps running, so the next command hangs at *. You then have to shut down and restart the notebook.
embed launches another IPython inside IPython.
IPython embed is the most commonly used method when debugging.
End of explanation
"""
import logging
# logging levels -> debug, info, warning, error, critical...
logging.warning("hello world")
"""
Explanation: print is intuitive to look at, but it is a bad fit here because it affects the output.
Also, you may want to see only specific values, yet you end up having to look at everything.
(Doesn't print leave the result unaffected?) -> It does not change the return value, but it does change the output.
We still do produce output, just not directly with print.
We accumulate logs instead (Log) -> logging
End of explanation
"""
name #the commentary that follows the failure is called a Traceback
"""
Explanation: When run with python from cmd, the WARNING shows up immediately.
The reason it does not show up right here is that you have to look at the server side:
the warning is recorded in the web server / admin console where Jupyter Notebook is running.
For data analysis, using print is more convenient than logging.
For long-running work such as machine learning projects, logging is the better form; it is good to keep a record.
About errors
End of explanation
"""
if True
print("hello world")
"""
Explanation: Error
This is code that does not even get to run:
there is an error in the syntax itself, hence SyntaxError (a parsing error)
End of explanation
"""
ab
def error(a, b):
a = b + 1
print(a)
error
2 + "김기표"
{}["something"]
{}.append()
with open("./not_exist_file", "r") as f:
pass
"""
Explanation: There are two kinds of errors:
SyntaxError, and everything that is not a SyntaxError (Exceptions)
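A small self-contained sketch of the distinction: a SyntaxError is raised while the code is being parsed, before anything runs, while the other exceptions are raised during execution:

```python
# parsing fails first: a SyntaxError is raised before anything runs
try:
    compile("if True\n    print('hello world')", "<demo>", "exec")
except SyntaxError as err:
    print("parse time:", type(err).__name__)

# this parses fine, but raises a (runtime) exception when executed
try:
    exec("2 + 'text'")
except TypeError as err:
    print("run time:", type(err).__name__)
```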
End of explanation
"""
NameError
Exception?
def append_string_to_hello(string):
return "hello, " + string
append_string_to_hello("world")
append_string_to_hello(13)
"hello, " + 3
#str + int, the implicit approach
#it fails because Python does not do implicit type conversion (Ruby, for example, converts happily)
#high flexibility alone does not make a language good; use each language according to its own characteristics
"hello" + str(3)
# str + str(int), the explicit approach
awesome_list = ["world", "hello", "python", 5678, "fastcampus"]
for awesome in awesome_list:
    # place where exception handling is possible [1] => only covers the call below
print(append_string_to_hello(awesome))
def append_string_to_hello(string):
    # place where exception handling is possible (2) => covers every call of this function
    # so we will handle it here, at place (2)
return "hello, " + string
def append_string_to_hello(string):
    # place where exception handling is possible
    # exception handling => try:-except: (always done this way)
try:
return "hello, " + string
except TypeError as err:
        print("error" * 40)
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
        #TypeError is a class: each individual error is an error object built from a class
print(err)
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
return err
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
    except Exception as err: #catches every (non-system-exiting) error; a bare except: would too, but then err would be undefined
        return err
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
    except (TypeError, AttributeError) as err: #several exception types need a tuple; possible, but not used much
return err
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
        #you could also hook in alerting here, e.g. sending an SMS
print("send_sms")
raise
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
print("send_sms")
raise
except AttributeError as err:
pass
append_string_to_hello(123)
def append_string_to_hello(string):
try:
return "hello, " + string
except TypeError as err:
print("send_sms")
print(err)
# raise
except AttributeError as err:
pass
    finally: #runs whether or not an exception occurred
        print("finished either way")
append_string_to_hello(123)
awesome_list = ["world", "hello", "python", 5678, "fastcampus"]
for awesome in awesome_list:
print(append_string_to_hello(awesome))
"""
Explanation: Seeing an Exception is actually a good thing: it means something (a class, function, object, ...) is being used in a way its author did not intend.
2T_How to handle exceptions
Exception Handling
How to handle the built-in exceptions => AttributeError, FileNotFoundError, TypeError, ...
How to create our own exceptions (FibonacciShouldGetPositiveNumberError => an exception of our own)
We need to create a class for it.
End of explanation
"""
def fibo(x):
if x < 0:
# err = FibonacciShouldNotHaveNegativeNumberError()
# raise err
        #usually written as the single line below rather than the two lines above
raise FibonacciShouldNotHaveNegativeNumberError()
if x == 0:
return 0
if x == 1:
return 1
return fibo(x-1) + fibo(x-2)
#Exception Class
class FibonacciShouldNotHaveNegativeNumberError(Exception):
def __init__(self):
pass
"""
Explanation: Let's build our own exception class.
End of explanation
"""
fibo(-1)
raise FibonacciShouldNotHaveNegativeNumberError()
# other errors even come with a suggestion of what to fix
"hello, " + 5678
# the highlighted (yellow) part of the error message is that suggestion
# where can we customize it for our own exception? in def __str__(self)!
class FibonacciShouldNotHaveNegativeNumberError(Exception):
def __init__(self):
pass
def __str__(self):
        return "the Fibonacci sequence must receive a positive number as its index"
raise FibonacciShouldNotHaveNegativeNumberError()
def fibo(x):
if x < 0:
raise FibonacciShouldNotHaveNegativeNumberError(x)
if x == 0:
return 0
if x == 1:
return 1
return fibo(x-1) + fibo(x-2)
class FibonacciShouldNotHaveNegativeNumberError(Exception):
def __init__(self, n):
self.n = n
def __str__(self):
        return "this function must receive a positive index value (received: {n})".format(n=self.n) #the required error message
fibo(-13)
"""
Explanation: Inheritance order:
object -> Exception -> FibonacciShouldNotHaveNegativeNumberError
End of explanation
"""
def Factorial(n):
    if n <= 1: #also handles n == 0 (0! = 1), which would otherwise recurse forever
return 1
return n * Factorial(n-1)
Factorial(5)
def Factorial(n):
if n < 0:
raise FactorialShouldGetPositiveIndexError(n)
    if n <= 1: #0! = 1; avoids infinite recursion for n == 0
return 1
return n * Factorial(n-1)
class FactorialShouldGetPositiveIndexError(Exception):
def __init__(self, n):
self.n = n
def __str__(self):
return "factorial function should get positive index. (input: {n})".format(n=self.n)
Factorial(-3)
"""
Explanation: Implement a Factorial function, including exception handling.
n! = n * (n-1) * (n-2) * ... * 2 * 1
End of explanation
"""
|
jalabort/templatetracker | notebooks/KCF Tracker.ipynb | bsd-3-clause | video_path = '../data/video.mp4'
cam = cv2.VideoCapture(video_path)
print 'Is video capture opened?', cam.isOpened()
n_frames = 1000
resolution = (640, 360)
frames = []
for _ in range(n_frames):
# read frame
frame = cam.read()[1]
# scale down
frame = cv2.resize(frame, resolution)
# bgr to rgb
frame = frame[..., ::-1]
# pixel values from 0 to 1
frame = np.require(frame, dtype=np.double)
frame /= 255
# roll channel axis to the front
frame = np.rollaxis(frame, -1)
# build menpo image and turn it to grayscale
frame = Image(frame)
# append to frame list
frames.append(frame)
cam.release()
visualize_images(frames)
"""
Explanation: Kernelized Correlation Filter (KCF) based Tracker
This tracker is a first implementation of the ideas described in the following 3 papers on template tracking using adaptive correlation filters:
David S. Bolme, J. Ross Beveridge, Bruce A. Draper and Yui Man Lui. "Visual Object Tracking using Adaptive Correlation Filters". CVPR, 2010
Hamed Kiani Galoogahi, Terence Sim, Simon Lucey. "Multi-Channel Correlation Filters". ICCV, 2013.
J. F. Henriques, R. Caseiro, P. Martins, J. Batista. "High-Speed Tracking with Kernelized Correlation Filters". TPAMI, 2015.
Load and manipulate basketball video
Read, pre-process and store a particular number of frames of the provided basketball video.
End of explanation
"""
# first frame
frame0 = frames[0]
# manually define target centre
target_centre0 = PointCloud(np.array([168.0, 232.0])[None])
# manually define target size
target_shape = (31.0, 31.0)
# build bounding box containing the target
target_bb = generate_bounding_box(target_centre0, target_shape)
# add target centre and bounding box as frame landmarks
frame0.landmarks['target_centre'] = target_centre0
frame0.landmarks['target_bb'] = target_bb
# visualize initialization
frame0.view_widget()
"""
Explanation: Define the position and size of the target on the first frame. Note that we need to do this manually!
End of explanation
"""
# set options
# specify the kind of filters to be learned and incremented
learn_filter = learn_kcf # learn_mosse or learn_mccf
# specify image representation used for tracking
features = greyscale_hog # no_op, greyscale, greyscale_hog
tracker = KCFTracker(frame0, target_centre0, target_shape, learn_filter=learn_filter,
features=features, sigma=0.2)
"""
Explanation: Track basketball video
Create and initialize the correlation filter based tracker by giving it the first frame and the target position and size on the first frame.
End of explanation
"""
# only up to the first 5 channels are shown
n_channels = np.minimum(5, tracker.alpha.shape[0])
fig_size = (3*n_channels, 3*n_channels)
fig = plt.figure()
fig.set_size_inches(fig_size)
for j, c in enumerate(tracker.alpha[:n_channels]):
plt.subplot(1, n_channels, j+1)
plt.title('KCF in spatial domain')
plt.imshow(tracker.alpha[j])
fig = plt.figure()
fig.set_size_inches(fig_size)
for j, c in enumerate(tracker.alpha[:n_channels]):
plt.subplot(1, n_channels, j+1)
plt.title('KCF in frequency domain')
plt.imshow(np.abs(fftshift(fft2(tracker.alpha[j]))))
"""
Explanation: Visualize the learned correlation filters.
End of explanation
"""
# set options
# filter adaptive parameter; values close to 1 give more weight to filters derived from the last tracked frames,
# values close to 0 give more weight to the initial filter
nu = 0.125
# specifies a threshold on the peak to sidelobe measure below which there is too much uncertainty wrt the target
# position and consequently filters are not updated based on the current frame
psr_threshold = 100
# specifies how the next target position is obtained given the filter response
compute_peak = compute_max_peak # compute_max_peak or compute_meanshift_peak
target_centre = target_centre0
filters = []
targets = []
psrs = []
rs = []
for j, frame in enumerate(frames):
# track target
target_centre, psr, r = tracker.track(frame, target_centre, nu=nu,
psr_threshold=psr_threshold,
compute_peak=compute_peak)
# add target centre and its bounding box as landmarks
frame.landmarks['tracked_centre'] = target_centre
frame.landmarks['tracked_bb'] = generate_bounding_box(target_centre, target_shape)
# add psr to list
psrs.append(psr)
rs.append(r)
# print j
"""
Explanation: Track the previous frames.
End of explanation
"""
visualize_images(frames)
"""
Explanation: Explore tracked frames.
End of explanation
"""
plt.title('Peak to sidelobe ratio (PSR)')
plt.plot(range(len(psrs)), psrs)
"""
Explanation: Show peak to sidelobe ratio (PSR) over the entire sequence.
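For reference, the PSR plotted here is conventionally computed as in Bolme et al. (2010): the response peak minus the mean of the sidelobe (the response excluding a small window around the peak), divided by the sidelobe's standard deviation. A minimal NumPy sketch (the 11x11 exclusion window is an assumption, not necessarily what this tracker uses):

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR = (peak - mean(sidelobe)) / std(sidelobe), where the sidelobe is the
    response with a small window around the peak excluded (Bolme et al., 2010)."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones(response.shape, dtype=bool)
    r, c = peak_idx
    mask[max(r - exclude, 0):r + exclude + 1,
         max(c - exclude, 0):c + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / sidelobe.std()

# a sharp, isolated peak over low-level noise yields a large PSR
rng = np.random.default_rng(0)
resp = 0.01 * rng.standard_normal((64, 64))
resp[32, 32] = 1.0
print(peak_to_sidelobe_ratio(resp) > 20)  # True
```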
End of explanation
"""
|
yunqu/PYNQ | boards/Pynq-Z1/base/notebooks/board/asyncio_buttons.ipynb | bsd-3-clause | from pynq import PL
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
"""
Explanation: Using Interrupts and asyncio for Buttons and Switches
This notebook provides a simple example of using asyncio to interact asynchronously with multiple input devices. A task is created for each input device and coroutines are used to process the results. To demonstrate, we recreate the flashing LEDs example in the getting started notebook but use interrupts to avoid polling the GPIO devices. The aim is to have holding a button result in the corresponding LED flashing.
Initialising the Environment
First we import and instantiate all required classes to interact with the buttons, switches and LEDs, and ensure the base overlay is loaded.
End of explanation
"""
import asyncio
@asyncio.coroutine
def flash_led(num):
while True:
yield from base.buttons[num].wait_for_value_async(1)
while base.buttons[num].read():
base.leds[num].toggle()
yield from asyncio.sleep(0.1)
base.leds[num].off()
"""
Explanation: Define the flash LED task
The next step is to create a coroutine that waits for the button to be pressed and flashes the LED until the button is released. The while True loop ensures that the coroutine keeps running until cancelled so that multiple presses of the same button can be handled.
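The @asyncio.coroutine / yield from spelling used here is the pre-Python-3.5 form; with async def / await the same pattern looks as follows. This is a self-contained sketch in which an asyncio.Event stands in for the button interrupt (names are illustrative, no PYNQ hardware involved):

```python
import asyncio

async def flash_led_demo(pressed: asyncio.Event, log: list):
    # wait for the simulated 'button press' interrupt
    await pressed.wait()
    for _ in range(3):          # stands in for 'while the button is held'
        log.append("toggle")
        await asyncio.sleep(0)  # yield to the event loop, like asyncio.sleep(0.1)
    log.append("off")

async def main():
    pressed = asyncio.Event()
    log = []
    task = asyncio.ensure_future(flash_led_demo(pressed, log))
    pressed.set()               # simulate the button interrupt firing
    await task
    return log

print(asyncio.run(main()))  # ['toggle', 'toggle', 'toggle', 'off']
```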
End of explanation
"""
tasks = [asyncio.ensure_future(flash_led(i)) for i in range(4)]
"""
Explanation: Create the task
As there are four buttons we want to check, we create four tasks. The function asyncio.ensure_future is used to convert the coroutine to a task and schedule it in the event loop. The tasks are stored in an array so they can be referred to later when we want to cancel them.
End of explanation
"""
import psutil
@asyncio.coroutine
def print_cpu_usage():
# Calculate the CPU utilisation by the amount of idle time
# each CPU has had in three second intervals
last_idle = [c.idle for c in psutil.cpu_times(percpu=True)]
while True:
yield from asyncio.sleep(3)
next_idle = [c.idle for c in psutil.cpu_times(percpu=True)]
usage = [(1-(c2-c1)/3) * 100 for c1,c2 in zip(last_idle, next_idle)]
print("CPU Usage: {0:3.2f}%, {1:3.2f}%".format(*usage))
last_idle = next_idle
tasks.append(asyncio.ensure_future(print_cpu_usage()))
"""
Explanation: Monitoring the CPU Usage
One of the advantages of interrupt-based I/O is minimised CPU usage while waiting for events. To see how CPU usage is impacted by the flashing LED tasks, we create another task that prints out the current CPU utilisation every 3 seconds.
End of explanation
"""
if base.switches[0].read():
print("Please set switch 0 low before running")
else:
base.switches[0].wait_for_value(1)
"""
Explanation: Run the event loop
All of the blocking wait_for commands will run the event loop until the condition is met. All that is needed is to call the blocking wait_for_value method on the switch we are using as the termination condition.
While waiting for switch 0 to get high, users can press any push button on the board to flash the corresponding LED. While this loop is running, try opening a terminal and running top to see that python is consuming no CPU cycles while waiting for peripherals.
As this code runs until the switch 0 is high, make sure it is low before running the example.
End of explanation
"""
[t.cancel() for t in tasks]
"""
Explanation: Clean up
Even though the event loop has stopped running, the tasks are still active and will run again when the event loop is next used. To avoid this, the tasks should be cancelled when they are no longer needed.
End of explanation
"""
base.switches[0].wait_for_value(0)
"""
Explanation: Now if we re-run the event loop, nothing will happen when we press the buttons. The process will block until the switch is set back down to the low position.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ipsl/cmip6/models/sandbox-2/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculating the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
tkzeng/molecular-design-toolkit | moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb | apache-2.0 | import moldesign as mdt
import moldesign.units as u
"""
Explanation: <span style="float:right"><a href="http://moldesign.bionano.autodesk.com/" target="_blank" title="About">About</a> <a href="https://forum.bionano.autodesk.com/c/Molecular-Design-Toolkit" target="_blank" title="Forum">Forum</a> <a href="https://github.com/autodesk/molecular-design-toolkit/issues" target="_blank" title="Issues">Issues</a> <a href="http://bionano.autodesk.com/MolecularDesignToolkit/explore.html" target="_blank" title="Tutorials">Tutorials</a> <a href="http://autodesk.github.io/molecular-design-toolkit/" target="_blank" title="Documentation">Documentation</a></span>
</span>
<br>
<center><h1>Example 4: The Dynamics of HIV Protease bound to a small molecule </h1> </center>
This notebook prepares a co-crystallized protein / small molecule ligand structure from the PDB database and prepares it for molecular dynamics simulation.
Author: Aaron Virshup, Autodesk Research<br>
Created on: August 9, 2016
Tags: HIV Protease, small molecule, ligand, drug, PDB, MD
End of explanation
"""
protease = mdt.from_pdb('3AID')
protease
protease.draw()
"""
Explanation: Contents
I. The crystal structure
A. Download and visualize
B. Try assigning a forcefield
II. Parameterizing a small molecule
A. Isolate the ligand
B. Assign bond orders and hydrogens
C. Generate forcefield parameters
III. Prepping the protein
A. Strip waters
B. Histidine
IV. Prep for dynamics
A. Assign the forcefield
B. Attach and configure simulation methods
D. Equilibrate the protein
I. The crystal structure
First, we'll download and investigate the 3AID crystal structure.
A. Download and visualize
End of explanation
"""
newmol = mdt.assign_forcefield(protease)
"""
Explanation: B. Try assigning a forcefield
This structure is not ready for MD - this command will raise a ParameterizationError Exception. After running this calculation, click on the Errors/Warnings tab to see why.
End of explanation
"""
sel = mdt.widgets.ResidueSelector(protease)
sel
drugres = mdt.Molecule(sel.selected_residues[0])
drugres.draw2d(width=700, show_hydrogens=True)
"""
Explanation: You should see 3 errors:
1. The residue name ARQ not recognized
1. Atom HD1 in residue HIS69, chain A was not recognized
1. Atom HD1 in residue HIS69, chain B was not recognized
(There's also a warning about bond distances, but these can generally be fixed with an energy minimization before running dynamics.)
We'll start by tackling the small molecule "ARQ".
II. Parameterizing a small molecule
We'll use the GAFF (generalized Amber force field) to create force field parameters for the small ligand.
A. Isolate the ligand
Click on the ligand to select it, then we'll use that selection to create a new molecule.
End of explanation
"""
drugmol = mdt.add_missing_data(drugres)
drugmol.draw(width=500)
drugmol.draw2d(width=700, show_hydrogens=True)
"""
Explanation: B. Assign bond orders and hydrogens
A PDB file provides only limited information; it often doesn't indicate bond orders, hydrogen locations, or formal charges. These can be added, however, with the add_missing_data tool:
End of explanation
"""
drug_parameters = mdt.parameterize(drugmol, charges='gasteiger')
"""
Explanation: C. Generate forcefield parameters
We'll next generate forcefield parameters using this ready-to-simulate structure.
NOTE: for computational speed, we use the gasteiger charge model. This is not advisable for production work! am1-bcc or esp are far likelier to produce sensible results.
End of explanation
"""
dehydrated = mdt.Molecule([atom for atom in protease.atoms if atom.residue.type != 'water'])
"""
Explanation: III. Prepping the protein
Section II. dealt with getting forcefield parameters for an unknown small molecule. Next, we'll prep the other part of the structure.
A. Strip waters
Waters in crystal structures are usually stripped from a simulation as artifacts of the crystallization process. Here, we'll remove the waters from the protein structure.
End of explanation
"""
mdt.guess_histidine_states(dehydrated)
"""
Explanation: B. Histidine
Histidine is notoriously tricky, because it exists in no less than three different protonation states at biological pH (7.4) - the "delta-protonated" form, referred to with residue name HID; the "epsilon-protonated" form aka HIE; and the doubly-protonated form HIP, which has a +1 charge. Unfortunately, crystallography isn't usually able to resolve the difference between these three.
Luckily, these histidines are pretty far from the ligand binding site, so their protonation is unlikely to affect the dynamics. We'll therefore use the guess_histidine_states function to assign a reasonable starting guess.
End of explanation
"""
sim_mol = mdt.assign_forcefield(dehydrated, parameters=drug_parameters)
"""
Explanation: IV. Prep for dynamics
With these problems fixed, we can successfully assign a forcefield and set up the simulation.
A. Assign the forcefield
Now that we have parameters for the drug and have dealt with histidine, the forcefield assignment will succeed:
End of explanation
"""
sim_mol.set_energy_model(mdt.models.OpenMMPotential, implicit_solvent='obc', cutoff=8.0*u.angstrom)
sim_mol.set_integrator(mdt.integrators.OpenMMLangevin, timestep=2.0*u.fs)
sim_mol.configure_methods()
"""
Explanation: B. Attach and configure simulation methods
Armed with the forcefield parameters, we can connect an energy model to compute energies and forces, and an integrator to create trajectories:
End of explanation
"""
mintraj = sim_mol.minimize()
mintraj.draw()
traj = sim_mol.run(20*u.ps)
viewer = traj.draw(display=True)
viewer.autostyle()
"""
Explanation: C. Equilibrate the protein
The next series of cells first minimizes the crystal structure to remove clashes, then heats the system to 300 K.
End of explanation
"""
|
myfunprograms/machine-learning | finding_donors/finding_donors_original.ipynb | apache-2.0 | # Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data.head(n=1))
"""
Explanation: Machine Learning Engineer Nanodegree
Supervised Learning
Project: Finding Donors for CharityML
Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Please specify WHICH VERSION OF PYTHON you are using when submitting this notebook. Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
Getting Started
In this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize this algorithm to best model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features.
The dataset for this project originates from the UCI Machine Learning Repository. The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article "Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid". You can find the article by Ron Kohavi online. The data we investigate here consists of small changes to the original dataset, such as removing the 'fnlwgt' feature and records with missing or ill-formatted entries.
Exploring the Data
Run the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, 'income', will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database.
End of explanation
"""
# TODO: Total number of records
n_records = None
# TODO: Number of records where individual's income is more than $50,000
n_greater_50k = None
# TODO: Number of records where individual's income is at most $50,000
n_at_most_50k = None
# TODO: Percentage of individuals whose income is more than $50,000
greater_percent = None
# Print the results
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)
"""
Explanation: Implementation: Data Exploration
A cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, you will need to compute the following:
- The total number of records, 'n_records'
- The number of individuals making more than \$50,000 annually, 'n_greater_50k'.
- The number of individuals making at most \$50,000 annually, 'n_at_most_50k'.
- The percentage of individuals making more than \$50,000 annually, 'greater_percent'.
Hint: You may need to look at the table above to understand how the 'income' entries are formatted.
End of explanation
"""
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)
"""
Explanation: Preparing the Data
Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as preprocessing. Fortunately, for this dataset, there are no invalid or missing entries we must deal with; however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms.
Transforming Skewed Continuous Features
A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset, two features fit this description: 'capital-gain' and 'capital-loss'.
Run the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed.
End of explanation
"""
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_raw, transformed = True)
"""
Explanation: For highly-skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a <a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">logarithmic transformation</a> on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation however: The logarithm of 0 is undefined, so we must translate the values by a small amount above 0 to apply the logarithm successfully.
Run the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed.
End of explanation
"""
# Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(features_raw[numerical])
# Show an example of a record with scaling applied
display(features_raw.head(n = 1))
"""
Explanation: Normalizing Numerical Features
In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as 'capital-gain' or 'capital-loss' above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as shown in the example below.
Run the code cell below to normalize each numerical feature. We will use sklearn.preprocessing.MinMaxScaler for this.
End of explanation
"""
# TODO: One-hot encode the 'features_raw' data using pandas.get_dummies()
features = None
# TODO: Encode the 'income_raw' data to numerical values
income = None
# Print the number of features after one-hot encoding
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))
# Uncomment the following line to see the encoded feature names
#print encoded
"""
Explanation: Implementation: Data Preprocessing
From the table in Exploring the Data above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called categorical variables) be converted. One popular way to convert categorical variables is by using the one-hot encoding scheme. One-hot encoding creates a "dummy" variable for each possible category of each non-numeric feature. For example, assume someFeature has three possible entries: A, B, or C. We then encode this feature into someFeature_A, someFeature_B and someFeature_C.
| | someFeature | | someFeature_A | someFeature_B | someFeature_C |
| :-: | :-: | | :-: | :-: | :-: |
| 0 | B | | 0 | 1 | 0 |
| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |
| 2 | A | | 1 | 0 | 0 |
Additionally, as with the non-numeric features, we need to convert the non-numeric target label, 'income', to numerical values for the learning algorithm to work. Since there are only two possible categories for this label ("<=50K" and ">50K"), we can avoid using one-hot encoding and simply encode these two categories as 0 and 1, respectively. In the code cell below, you will need to implement the following:
- Use pandas.get_dummies() to perform one-hot encoding on the 'features_raw' data.
- Convert the target label 'income_raw' to numerical entries.
- Set records with "<=50K" to 0 and records with ">50K" to 1.
End of explanation
"""
# Import train_test_split
from sklearn.cross_validation import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
"""
Explanation: Shuffle and Split Data
Now all categorical variables have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.
Run the code cell below to perform this split.
End of explanation
"""
# TODO: Calculate accuracy
accuracy = None
# TODO: Calculate F-score using the formula above for beta = 0.5
fscore = None
# Print the results
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)
"""
Explanation: Evaluating Model Performance
In this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of your choice, and the fourth algorithm is known as a naive predictor.
Metrics and the Naive Predictor
CharityML, equipped with their research, knows individuals that make more than \$50,000 are most likely to donate to their charity. Because of this, CharityML is particularly interested in predicting who makes more than \$50,000 accurately. It would seem that using accuracy as a metric for evaluating a particular model's performance would be appropriate. Additionally, identifying someone that does not make more than \$50,000 as someone who does would be detrimental to CharityML, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is more important than the model's ability to recall those individuals. We can use F-beta score as a metric that considers both precision and recall:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the F$_{0.5}$ score (or F-score for simplicity).
Looking at the distribution of classes (those who make at most \$50,000, and those who make more), it's clear most individuals do not make more than \$50,000. This can greatly affect accuracy, since we could simply say "this person does not make more than \$50,000" and generally be right, without ever looking at the data! Making such a statement would be called naive, since we have not considered any information to substantiate the claim. It is always important to consider the naive prediction for your data, to help establish a benchmark for whether a model is performing well. That being said, using that prediction would be pointless: if we predicted all people made less than \$50,000, CharityML would identify no one as donors.
Question 1 - Naive Predictor Performace
If we chose a model that always predicted an individual made more than \$50,000, what would that model's accuracy and F-score be on this dataset?
Note: You must use the code cell below and assign your results to 'accuracy' and 'fscore' to be used later.
End of explanation
"""
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# TODO: Fit the learner to the training data using slicing with 'sample_size'
start = time() # Get start time
learner = None
end = time() # Get end time
# TODO: Calculate the training time
results['train_time'] = None
# TODO: Get the predictions on the test set,
# then get predictions on the first 300 training samples
start = time() # Get start time
predictions_test = None
predictions_train = None
end = time() # Get end time
# TODO: Calculate the total prediction time
results['pred_time'] = None
# TODO: Compute accuracy on the first 300 training samples
results['acc_train'] = None
# TODO: Compute accuracy on test set
results['acc_test'] = None
# TODO: Compute F-score on the the first 300 training samples
results['f_train'] = None
# TODO: Compute F-score on the test set
results['f_test'] = None
# Success
print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)
# Return the results
return results
"""
Explanation: Supervised Learning Models
The following supervised learning models are currently available in scikit-learn that you may choose from:
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression
Question 2 - Model Application
List three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen
- Describe one real-world application in industry where the model can be applied. (You may need to do research for this — give references!)
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?
Answer:
Implementation - Creating a Training and Predicting Pipeline
To properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section.
In the code block below, you will need to implement the following:
- Import fbeta_score and accuracy_score from sklearn.metrics.
- Fit the learner to the sampled training data and record the training time.
- Perform predictions on the test data X_test, and also on the first 300 training points X_train[:300].
- Record the total prediction time.
- Calculate the accuracy score for both the training subset and testing set.
- Calculate the F-score for both the training subset and testing set.
- Make sure that you set the beta parameter!
End of explanation
"""
# TODO: Import the three supervised learning models from sklearn
# TODO: Initialize the three models
clf_A = None
clf_B = None
clf_C = None
# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = None
samples_10 = None
samples_100 = None
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
"""
Explanation: Implementation: Initial Model Evaluation
In the code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in 'clf_A', 'clf_B', and 'clf_C'.
- Use a 'random_state' for each model you use, if provided.
- Note: Use the default settings for each model — you will tune one specific model in a later section.
- Calculate the number of records equal to 1%, 10%, and 100% of the training data.
- Store those values in 'samples_1', 'samples_10', and 'samples_100' respectively.
Note: Depending on which algorithms you chose, the following implementation may take some time to run!
End of explanation
"""
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
# TODO: Initialize the classifier
clf = None
# TODO: Create the parameters list you wish to tune
parameters = None
# TODO: Make an fbeta_score scoring object
scorer = None
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = None
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_fit = None
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and optimized models
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-after scores
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
"""
Explanation: Improving Results
In this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F-score.
Question 3 - Choosing the Best Model
Based on the evaluation you performed earlier, in one to two paragraphs, explain to CharityML which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \$50,000.
Hint: Your answer should include discussion of the metrics, prediction/training time, and the algorithm's suitability for the data.
Answer:
Question 4 - Describing the Model in Layman's Terms
In one to two paragraphs, explain to CharityML, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.
Answer:
Implementation: Model Tuning
Fine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import sklearn.grid_search.GridSearchCV and sklearn.metrics.make_scorer.
- Initialize the classifier you've chosen and store it in clf.
- Set a random_state if one is available to the same state you set before.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: parameters = {'parameter' : [list of values]}.
- Note: Avoid tuning the max_features parameter of your learner if that parameter is available!
- Use make_scorer to create an fbeta_score scoring object (with $\beta = 0.5$).
- Perform grid search on the classifier clf using the 'scorer', and store it in grid_obj.
- Fit the grid search object to the training data (X_train, y_train), and store it in grid_fit.
Note: Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!
End of explanation
"""
# TODO: Import a supervised learning model that has 'feature_importances_'
# TODO: Train the supervised model on the training set
model = None
# TODO: Extract the feature importances
importances = None
# Plot
vs.feature_plot(importances, X_train, y_train)
"""
Explanation: Question 5 - Final Model Evaluation
What is your optimized model's accuracy and F-score on the testing data? Are these scores better or worse than the unoptimized model? How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in Question 1?
Note: Fill in the table below with your results, and then provide discussion in the Answer box.
Results:
| Metric | Benchmark Predictor | Unoptimized Model | Optimized Model |
| :------------: | :-----------------: | :---------------: | :-------------: |
| Accuracy Score | | | |
| F-score | | | EXAMPLE |
Answer:
Feature Importance
An important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is almost always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000.
Choose a scikit-learn classifier (e.g., adaboost, random forests) that has a feature_importances_ attribute, which ranks the importance of features according to the chosen classifier. In the next Python cell, fit this classifier to the training set and use this attribute to determine the top 5 most important features for the census dataset.
Question 6 - Feature Relevance Observation
When Exploring the Data, it was shown there are thirteen available features for each individual on record in the census data.
Of these thirteen records, which five features do you believe to be most important for prediction, and in what order would you rank them and why?
Answer:
Implementation - Extracting Feature Importance
Choose a scikit-learn supervised learning algorithm that has a feature_importances_ attribute available. This attribute ranks the importance of each feature when making predictions with the chosen algorithm.
In the code cell below, you will need to implement the following:
- Import a supervised learning model from sklearn if it is different from the three used earlier.
- Train the supervised model on the entire training set.
- Extract the feature importances using '.feature_importances_'.
End of explanation
"""
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report scores from the final model using both versions of data
print "Final Model trained on full data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))
"""
Explanation: Question 7 - Extracting Feature Importance
Observe the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000.
How do these five features compare to the five features you discussed in Question 6? If you were close to the same answer, how does this visualization confirm your thoughts? If you were not close, why do you think these features are more relevant?
Answer:
Feature Selection
How does a model perform if we only use a subset of all the available features in the data? With fewer features required to train, the expectation is that training and prediction time is much lower — at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of all features present in the data. This hints that we can attempt to reduce the feature space and simplify the information required for the model to learn. The code cell below will use the same optimized model you found earlier, and train it on the same training set with only the top five important features.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/sandbox-3/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
# %matplotlib widget
import ipywidgets
from IPython.display import clear_output
import matplotlib.pyplot as plt
import gdsfactory as gf
x = ipywidgets.Textarea(rows=20, columns=480)
x.value = """
name: sample_different_factory
instances:
bl:
component: pad
tl:
component: pad
br:
component: pad
tr:
component: pad
placements:
tl:
x: 200
y: 500
br:
x: 400
y: 400
tr:
x: 400
y: 600
routes:
electrical:
settings:
separation: 20
layer: [31, 0]
width: 10
links:
tl,e3: tr,e1
bl,e3: br,e1
optical:
settings:
radius: 100
links:
bl,e4: br,e3
"""
out = ipywidgets.Output()
display(x, out)
def f(change, out=out):
try:
c = gf.read.from_yaml(change["new"])
# clear_output()
fig = c.plot()
c.show()
out.clear_output()
except Exception as e:
out.clear_output()
with out:
display(e)
x.observe(f, "value")
f({"new": x.value})
"""
Explanation: Netlist driven flow (circuits)
You have two options for working with gdsfactory:
layout driven flow: you code your layout using python functions, and then extract the YAML netlist to simulate the circuit. This is the flow that you have been doing so far.
netlist driven flow: you define your circuit (instances, placements and routes) in YAML. From the netlist you can simulate the circuit or generate the layout.
Using the netlist driven flow you can define components, circuits and masks.
YAML is a more human-readable version of JSON.
To define a Component from YAML you need to define:
instances: with each instance setting
placements: with X and Y
And optional:
routes: between instances
connections: to connect components ports
ports: define input and output circuit ports
When running this tutorial, make sure you uncomment the %matplotlib widget line so you can live-update your changes in the YAML file:
# %matplotlib widget -> %matplotlib widget
End of explanation
"""
x.value = """
name: mmis
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_long:
port: o1
x: 20
y: 10
mirror: False
"""
display(x, out)
x.value = """
name: mmi_mirror
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_long:
port: o1
x: 20
y: 10
mirror: False
"""
display(x, out)
"""
Explanation: Let's start by defining the instances and placements sections in YAML.
Let's place an mmi_long where you can place the o1 port at x=20, y=10.
End of explanation
"""
x.value = """
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_long:
port: o1
x: 20
y: 10
mirror: True
ports:
o3: mmi_long,o3
o2: mmi_long,o2
o1: mmi_long,o1
"""
display(x, out)
"""
Explanation: ports
You can expose any ports of any instance to the new Component with a ports section in YAML.
Let's expose all the ports from mmi_long into the new component.
Ports are exposed as new_port_name: instance_name, port_name.
You can see the ports in red and subports in blue.
End of explanation
"""
x.value = """
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_long:
x: 0
y: 0
mirror: o1
rotation: 0
"""
display(x, out)
"""
Explanation: You can also define a mirror placement using a port
Try mirroring with other ports o2, o3 or with a number as well as with a rotation 90, 180, 270
End of explanation
"""
x.value = """
instances:
b:
component: bend_circular
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_short:
port: o1
x: 10
y: 20
connections:
b,o1 : mmi_short,o2
mmi_long,o1: b, o2
ports:
o1: mmi_short,o1
o2: mmi_long,o2
o3: mmi_long,o3
"""
display(x, out)
"""
Explanation: connections
You can connect any two instances by defining a connections section in the YAML file.
It follows the syntax:
instance_source,port : instance_destination,port
End of explanation
"""
x.value = """
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_short:
port: o1
x: 0
y: 0
mmi_long:
port: o1
x: mmi_short,o2
y: mmi_short,o2
dx : 10
dy: -10
"""
display(x, out)
"""
Explanation: Relative port placing
You can also place a component with respect to another instance's port.
You can also define an x and y offset with dx and dy.
End of explanation
"""
x.value = """
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_long:
x: 100
y: 100
routes:
optical:
links:
mmi_short,o2: mmi_long,o1
settings:
cross_section:
cross_section: strip
settings:
layer: [2, 0]
"""
display(x, out)
"""
Explanation: routes
You can define routes between two instances by defining a routes section in YAML.
It follows the syntax:
```YAML
routes:
route_name:
links:
instance_source,port: instance_destination,port
settings: # for the route (optional)
waveguide: strip
width: 1.2
```
End of explanation
"""
x.value = """
name:
connections_2x2_problem
instances:
mmi_bottom:
component: mmi2x2
mmi_top:
component: mmi2x2
placements:
mmi_top:
x: 100
y: 100
routes:
optical:
links:
mmi_bottom,o4: mmi_top,o1
mmi_bottom,o3: mmi_top,o2
"""
display(x, out)
"""
Explanation: You can rotate an instance by specifying the angle in degrees.
You can also access the routes in the newly created component.
instances, placements, connections, ports, routes
Let's combine all you learned so far.
You can define the netlist connections of a component with a netlist in YAML format.
Note that you define the connections as instance_source.port -> instance_destination.port, so the order is important and therefore you can only change the position of the instance_destination.
You can define several routes that will be connected using gf.routing.get_bundle
End of explanation
"""
@gf.cell
def pad_new(size=(100, 100), layer=gf.LAYER.WG):
c = gf.Component()
compass = c << gf.components.compass(size=size, layer=layer)
c.ports = compass.ports
return c
gf.get_active_pdk().register_cells(pad_new=pad_new)
c = pad_new(cache=False)
f = c.plot()
x.value = """
name:
connections_2x2_problem
instances:
bot:
component: pad_new
top:
component: pad_new
placements:
top:
x: 0
y: 200
"""
display(x, out)
x.value = """
name: custom_routes
instances:
t:
component: pad_array
settings:
orientation: 270
columns: 3
b:
component: pad_array
settings:
orientation: 90
columns: 3
placements:
t:
x: 200
y: 400
routes:
electrical:
settings:
layer: [31, 0]
width: 10.
end_straight_length: 150
links:
t,e11: b,e11
t,e13: b,e13
"""
display(x, out)
"""
Explanation: You can also add custom component_factories to gf.read.from_yaml
End of explanation
"""
x.value = """
name: sample_settings
instances:
bl:
component: pad
tl:
component: pad
br:
component: pad
tr:
component: pad
placements:
tl:
x: 0
y: 200
br:
x: 400
y: 400
tr:
x: 400
y: 600
routes:
optical_r100:
settings:
radius: 100
layer: [31, 0]
width: 50
links:
tl,e2: tr,e2
optical_r200:
settings:
radius: 200
width: 10
layer: [31, 0]
links:
bl,e3: br,e3
"""
display(x, out)
x.value = """
instances:
t:
component: pad_array
settings:
orientation: 270
columns: 3
b:
component: pad_array
settings:
orientation: 90
columns: 3
placements:
t:
x: 200
y: 500
routes:
optical:
settings:
radius: 50
width: 40
layer: [31,0]
end_straight_length: 150
separation: 50
links:
t,e11: b,e11
t,e12: b,e12
t,e13: b,e13
"""
display(x, out)
x.value = """
instances:
t:
component: pad_array
settings:
orientation: 270
columns: 3
b:
component: pad_array
settings:
orientation: 90
columns: 3
placements:
t:
x: 100
y: 1000
routes:
route1:
routing_strategy: get_bundle_path_length_match
settings:
extra_length: 500
width: 2
layer: [31,0]
end_straight_length: 500
links:
t,e11: b,e11
t,e12: b,e12
"""
display(x, out)
x.value = """
instances:
t:
component: pad_array
settings:
orientation: 270
columns: 3
b:
component: pad_array
settings:
orientation: 90
columns: 3
placements:
t:
x: -250
y: 1000
routes:
route1:
routing_strategy: get_bundle_from_waypoints
settings:
waypoints:
- [0, 300]
- [400, 300]
- [400, 400]
- [-250, 400]
auto_widen: False
links:
b,e11: t,e11
b,e12: t,e12
"""
display(x, out)
"""
Explanation: Also, you can define route aliases that have different settings, specifying the route factory as a parameter as well as the settings for that particular route alias.
End of explanation
"""
mmi1x2_faba = gf.partial(gf.components.mmi1x2, length_mmi=30)
mmi2x2_faba = gf.partial(gf.components.mmi2x2, length_mmi=30)
gf.get_active_pdk().register_cells(mmi1x2_faba=mmi1x2_faba, mmi2x2_faba=mmi2x2_faba)
x.value = """
name: sample_custom_cells
instances:
mmit:
component: mmi2x2_faba
mmib:
component: mmi1x2_faba
settings:
width_mmi: 4.5
placements:
mmit:
x: 100
y: 100
routes:
route1:
links:
mmib,o2: mmit,o2
ports:
o1: mmib,o1
o2: mmit,o2
o3: mmit,o3
o4: mmit,o4
"""
display(x, out)
c = gf.components.mzi()
c
c.plot_netlist()
n = c.get_netlist()
print(c.get_netlist_dict().keys())
"""
Explanation: Note that you define the connections as instance_source.port -> instance_destination.port, so the order is important; therefore you can only change the position of the instance_destination.
Custom factories
You can leverage netlist defined components to define more complex circuits
End of explanation
"""
x.value = """
settings:
length_mmi: 10
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: ${settings.length_mmi}
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
"""
display(x, out)
"""
Explanation: variables
You can define a global variables settings in your YAML file, and use the variable in the other YAML settings by using ${settings.length_mmi}
End of explanation
"""
import io
from omegaconf import OmegaConf
import gdsfactory as gf
c = gf.components.ring_single()
c
c.plot_netlist()
netlist = c.get_netlist()
n = netlist
c.write_netlist("ring.yml")
n = OmegaConf.load("ring.yml")
i = list(n["instances"].keys())
i
instance_name0 = i[0]
n["instances"][instance_name0]["settings"]
"""
Explanation: get_netlist (Component -> YAML)
Any component can export its netlist with get_netlist, which returns an OmegaConf dict that can be easily converted into JSON and YAML.
While component_from_yaml converts YAML -> Component
get_netlist converts Component -> YAML
End of explanation
"""
import gdsfactory as gf
c = gf.components.mzi()
c
c = gf.components.mzi()
n = c.get_netlist()
print(c.get_netlist_dict().keys())
c.plot_netlist()
n.keys()
import gdsfactory as gf
yaml = """
name: mmi_with_bend
instances:
mmi1x2_12_0:
component: mmi1x2
bend_circular_R10p00_32_4:
component: bend_circular
straight_L1p00_35_11:
component: straight
settings:
length: 10
layer: [2, 0]
connections:
bend_circular_R10p00_32_4,o1: mmi1x2_12_0,o2
straight_L1p00_35_11,o1: bend_circular_R10p00_32_4,o2
"""
c = gf.read.from_yaml(yaml)
c.name = "mmi_with_bend_circular"
print(c.name)
c
n = c.get_netlist()
print(c.get_netlist_yaml())
n["connections"]
c.plot_netlist()
c = gf.components.mzi()
c
c.plot_netlist()
c = gf.components.ring_single()
c
c.plot_netlist()
c = gf.components.ring_double()
c
c.plot_netlist()
import gdsfactory as gf
c = gf.components.ring_single()
c
c.plot_netlist()
c = gf.components.ring_double()
c
c.plot_netlist()
print(c.get_netlist_yaml())
c = gf.components.mzi()
c
c.plot_netlist()
c = gf.components.mzit()
c
c.plot_netlist()
c = gf.components.mzi_lattice()
c
import gdsfactory as gf
coupler_lengths = [10, 20, 30]
coupler_gaps = [0.1, 0.2, 0.3]
delta_lengths = [10, 100]
c = gf.components.mzi_lattice(
coupler_lengths=coupler_lengths,
coupler_gaps=coupler_gaps,
delta_lengths=delta_lengths,
)
c
print(c.get_netlist_yaml())
c.plot_netlist()
coupler_lengths = [10, 20, 30, 40]
coupler_gaps = [0.1, 0.2, 0.4, 0.5]
delta_lengths = [10, 100, 200]
c = gf.components.mzi_lattice(
coupler_lengths=coupler_lengths,
coupler_gaps=coupler_gaps,
delta_lengths=delta_lengths,
)
c
n = c.get_netlist()
c.plot_netlist()
"""
Explanation: ```python
import gdsfactory as gf
from omegaconf import OmegaConf
import pathlib
c1 = gf.read.from_yaml('ring.yml')
c1
```
```python
n = c1.get_netlist(full_settings=True)
connections = n['connections']
len(connections)
```
Plot netlist
You can plot the netlist of components.
Every gdsfactory component can either be defined by its netlist or using layout friendly functions such as component sequence to define it and then get_netlist() method.
Connections are determined by extracting all the ports of a component, and assuming that ports with the same (x, y) are connected.
When you do get_netlist() for a component it will only show connections for the instances that belong to that component (it trims the netlist). So despite having a lot of connections, it will show only the meaningful connections for that component. For example, a ring has a ring_coupler, but if you want to dig deeper, the connections that make up that ring coupler are still available.
End of explanation
"""
|
maxrose61/GA_DS | FInal_Project/.ipynb_checkpoints/Quantifying_Influence_Analysis_maxrose_DSFinal-checkpoint.ipynb | gpl-3.0 | ### Import as many items as possible to have available.
### Import data from CSV
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sklearn.naive_bayes import MultinomialNB
data = pd.read_csv('data/Influence_clean.csv', header=0,encoding= 'utf-8', delimiter='|')
data['minreleasedate'] = pd.to_datetime(pd.Series(data.minreleasedate))
data['times_covered'].fillna(0, inplace=True)
data['artistid'] = data.artist.map({'The Beatles':0, 'The Rolling Stones':1})
data['artist'] = data.artist.astype('category')
data['songname'] = data.songname.astype('category')
# Add column for year of release for simpler grouping.
data['year'] = data['minreleasedate'].apply(lambda x: x.year)
# Make binary response - song has been covered or not. Far better accuracy over "times covered".
data['is_covered'] = data.times_covered.map(lambda x: 1 if x > 0 else 0)
"""
Explanation: Quantifying the Influence of The Beatles and The Rolling Stones
Using data acquired from the MusicBrainz database, cleaned and aggregated in this notebook, I have developed a set of observations around the songs recorded and released by two artists - The Beatles and The Rolling Stones. I chose these two artists for their enduring popularity and the time-tested legacy of their recorded output. The response by which I will measure their influence is other artists' recorded use of their songs, "cover versions" or "covers" in the vernacular.<br><br>
I have compiled data from the source artists comprising all songs recorded, the number of times each song was released by the artist, the number of countries for each release, and the average rating for the song or release as reported by MusicBrainz.<br>
I also acquired the lyrics for as many of the original songs as I could, using Beautiful Soup to scrape them from [lyrics.wikia.com](http://lyrics.wikia.com). I was able to acquire lyrical content for 96.5% of the original songs and applied sentiment analysis using TextBlob. The sentiment polarity was computed separately for both the song title and the lyrics themselves to create features to augment the song/release data. As I show below, the lyric sentiment is one of the more predictive measurements, second only to the year a song was released.<br><br>
To each row in my dataset - songname, artist, etc. - I then merged the number of times the song was covered ("times_covered") and the number of artists covering the song ("artist_cnt"). I then created a binary response ("is_covered") as a simpler indicator of whether the song was covered at all, rather than how many times. This proved to be more predictable than the raw count of covers. Also included is the average rating of the cover versions per song, though I suspect that data is not so useful.
End of explanation
"""
data[(data.is_cover == 0) & (data.lyrics.isnull())].workid.count().astype(float)/data[(data.is_cover == 0)].workid.count()
"""
Explanation: Check coverage of lyrics for the original songs.
96.6% have lyrics (and lyric sentiment polarity score) in the dataset.
End of explanation
"""
feature_cols = [ 'year','num_releases','lyric_sent','title_sent', 'countries', 'avg_rating']
X= data[data.is_cover == 0][feature_cols]
y = data[data.is_cover == 0].is_covered
print X.shape
print y.shape
### TODO - get null accuracy score. ????
means = []
for col in X.columns:
newmean = np.sqrt(metrics.mean_squared_error(X[col], y))
means.append(newmean.astype(float))
means = pd.Series(means)
print means.mean()
print zip(X.columns,means)
"""
Explanation: Create base set of features for fitting to models
End of explanation
"""
rfreg = RandomForestClassifier(n_estimators=175, max_features=6,oob_score=True, random_state=50)
rfreg.fit(X, y)
print rfreg.oob_score_
print cross_val_score(rfreg, X, y, cv=10, scoring='accuracy').mean()
"""
Explanation: Build model with Random Forest Classifier
End of explanation
"""
feature_range = range(1, len(feature_cols)+1)
# list to store the average RMSE for each value of max_features
Acc_scores = []
# use 10-fold cross-validation with each value of max_features (WARNING: SLOW!)
for feature in feature_range:
rfreg = RandomForestClassifier(n_estimators=500, max_features=feature, random_state=50)
acc_val_scores = cross_val_score(rfreg, X, y, cv=10, scoring='accuracy')
Acc_scores.append(acc_val_scores.mean())
"""
Explanation: At 92% and 90% respectively, both the out of bag and cross-validation scores are quite positive for the Random Forest Classifier. <br>
I chose 6 features after running a looped evaluation of the maximum features for the model using cross validation.
End of explanation
"""
# plot max_features (x-axis) versus Accuracy score (y-axis)
plt.plot(feature_range, Acc_scores)
plt.xlabel('max_features')
plt.ylabel('Accuracy (higher is better)')
"""
Explanation: To this point, my somewhat thin collection of features has not shown a very useful measure for predicting the number of covers of a song (my basic premise for measuring influence). However, the Random Forest appears to prefer a whopping 4 features for the meager measure of accuracy, per the graph below.
End of explanation
"""
treeclf = DecisionTreeClassifier(max_depth = 10, random_state=1)
treeclf.fit(X, y)
### As shown in the bar chart below, year and lyric sentiment are better predictors of whether or not a song is covered.
pd.DataFrame({'feature':feature_cols, 'importance':treeclf.feature_importances_}).sort_values('importance').plot(kind='bar',x='feature',figsize=(16,5),fontsize='14',title="Feature Importance")
"""
Explanation: Test with DecisionTreeClassifier
While seeking a less opaque model than the Random Forest, I tried the DecisionTreeClassifier, which has a very nice method to rank the importance of the features.
End of explanation
"""
### First plot is The Beatles, second The Rolling Stones. Sum of times_covered by year of original release of the song.
yticks = np.arange(100, 1000, 100)
data[data.times_covered > 0].groupby('year').times_covered.sum().plot(kind='bar',x='year',y='times_covered',figsize=(16,9))
plt.title('Total songs covered by year of original',size =28)
plt.ylabel('Times Covered(Sum)', size = 24)
plt.yticks(yticks)
bar = data.sort_values(by='year').groupby(['year', 'artist'])['times_covered'].sum().unstack('artist')
yticks = np.arange(25, 1000, 50)
bar.plot(kind='bar', stacked=True,figsize=(16,12),subplots='True')
plt.yticks(yticks)
plt.title('Total songs covered per Artist, by year of original',size =24)
plt.ylabel('Times Covered', size = 24)
"""
Explanation: Beginning to try out time series analysis below; however, this exposed problems in the release dates, either from my original queries or in the aggregation in Notebook 1. This is on the TODO list.
A quick measure of songs covered by release year for both artists.
<br> While The Beatles disbanded by 1970, The Stones continue to this day. However, their early work appears far more influential, with the greatest body of influence more or less paralleling that of the more-covered Beatles. Let's put a number on that, shall we?
End of explanation
"""
# Throw out covers recorded by each band and see what percentage of their catalogs have been covered.
data[(data.is_cover == 0) & (data.is_covered == 0)].groupby('artist').count()/data[(data.is_cover == 0)].groupby('artist').count()
data[data.is_covered > 0].groupby(['artist', 'year']).songname.count().unstack('artist').plot(kind='bar',subplots='True',figsize=(16,9))
plt.title("Original Songs Released Per Year", size=20)
"""
Explanation: It appears that a pretty large percentage of the artists' catalogs have been covered at least once.
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=101)
# Compute Null accuracy
y_null = np.zeros_like(y_test, dtype=float)
# fill the array with the mean value of y_test
y_null.fill(y_test.mean())
y_null
np.sqrt(metrics.mean_squared_error(y_test, y_null))
"""
Explanation: Test Logistic Regression Model
End of explanation
"""
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
zip(feature_cols, logreg.coef_[0])
print cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean()
y_pred_prob = logreg.predict_proba(X_test)[:,1 ]
print(y_pred_prob).mean()
"""
Explanation: Null Accuracy result is 37%.
As we will see below, when we fit the Logistic Regression estimator with our data and compute a cross validation score we improve significantly over the null test result.
End of explanation
"""
# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
# calculate AUC
print metrics.roc_auc_score(y_test, y_pred_prob)
print cross_val_score(logreg, X, y, cv=10, scoring='roc_auc').mean()
# TODO: Work on improving accuracy with Random Forest model, TIme Series Analysis
# and at least one Linear or Logistic Regression Model. Also create some basic bar charts to illustrate some basic assumptions about the data.
"""
Explanation: Compute ROC curve and AUC score.
End of explanation
"""
|
tensorflow/workshops | tfx_labs/Lab_6_Model_Analysis.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright © 2019 The TensorFlow Authors.
End of explanation
"""
!pip install -q -U \
tensorflow==2.0.0 \
tfx==0.15.0rc0
"""
Explanation: TensorFlow Model Analysis
An Example of a Key TFX Library
This example colab notebook illustrates how TensorFlow Model Analysis (TFMA) can be used to investigate and visualize the characteristics of a dataset and the performance of a model. We'll use a model that we trained previously, and now you get to play with the results!
The model we trained was for the Chicago Taxi Example, which uses the Taxi Trips dataset released by the City of Chicago.
Note: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.
Read more about the dataset in Google BigQuery. Explore the full dataset in the BigQuery UI.
Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about <a target='_blank' href='https://developers.google.com/machine-learning/fairness-overview/'>ML fairness</a>.
Key Point: In order to understand TFMA and how it works with Apache Beam, you'll need to know a little bit about Apache Beam itself. The <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/'>Beam Programming Guide</a> is a great place to start.
The columns in the dataset are:
<table>
<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
</table>
Install Jupyter Extensions
Note: If running TFMA in a local Jupyter notebook, then these Jupyter extensions must be installed in the environment before running Jupyter.
bash
jupyter nbextension enable --py widgetsnbextension
jupyter nbextension install --py --symlink tensorflow_model_analysis
jupyter nbextension enable --py tensorflow_model_analysis
Setup
First, we install the necessary packages, download data, import modules and set up paths.
Install TensorFlow, TensorFlow Model Analysis (TFMA) and TensorFlow Data Validation (TFDV)
End of explanation
"""
import csv
import io
import os
import requests
import tempfile
import zipfile
from google.protobuf import text_format
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from tensorflow_metadata.proto.v0 import schema_pb2
tf.__version__
tfma.version.VERSION_STRING
"""
Explanation: Import packages
We import necessary packages, including standard TFX component classes.
End of explanation
"""
# Download the zip file from GCP and unzip it
BASE_DIR = tempfile.mkdtemp()
TFMA_DIR = os.path.join(BASE_DIR, 'eval_saved_models-2.0')
DATA_DIR = os.path.join(TFMA_DIR, 'data')
OUTPUT_DIR = os.path.join(TFMA_DIR, 'output')
SCHEMA = os.path.join(TFMA_DIR, 'schema.pbtxt')
response = requests.get('https://storage.googleapis.com/tfx-colab-datasets/eval_saved_models-2.0.zip', stream=True)
zipfile.ZipFile(io.BytesIO(response.content)).extractall(BASE_DIR)
print("Here's what we downloaded:")
!cd {TFMA_DIR} && find .
"""
Explanation: Load The Files
We'll download a zip file that has everything we need. That includes:
Training and evaluation datasets
Data schema
Training results as EvalSavedModels
Note: We are downloading with HTTPS from a Google Cloud server.
End of explanation
"""
schema = schema_pb2.Schema()
contents = tf.io.read_file(SCHEMA).numpy()
schema = text_format.Parse(contents, schema)
tfdv.display_schema(schema)
"""
Explanation: Parse the Schema
Among the things we downloaded was a schema for our data that was created by TensorFlow Data Validation. Let's parse that now so that we can use it with TFMA.
End of explanation
"""
datafile = os.path.join(DATA_DIR, 'eval', 'data.csv')
reader = csv.DictReader(open(datafile))
examples = []
for line in reader:
example = tf.train.Example()
for feature in schema.feature:
key = feature.name
if len(line[key]) > 0:
if feature.type == schema_pb2.FLOAT:
example.features.feature[key].float_list.value[:] = [float(line[key])]
elif feature.type == schema_pb2.INT:
example.features.feature[key].int64_list.value[:] = [int(line[key])]
elif feature.type == schema_pb2.BYTES:
example.features.feature[key].bytes_list.value[:] = [line[key].encode('utf8')]
else:
if feature.type == schema_pb2.FLOAT:
example.features.feature[key].float_list.value[:] = []
elif feature.type == schema_pb2.INT:
example.features.feature[key].int64_list.value[:] = []
elif feature.type == schema_pb2.BYTES:
example.features.feature[key].bytes_list.value[:] = []
examples.append(example)
TFRecord_file = os.path.join(BASE_DIR, 'train_data.rio')
with tf.io.TFRecordWriter(TFRecord_file) as writer:
for example in examples:
writer.write(example.SerializeToString())
writer.flush()
writer.close()
!ls {TFRecord_file}
"""
Explanation: Use the Schema to Create TFRecords
We need to give TFMA access to our dataset, so let's create a TFRecords file. We can use our schema to create it, since it gives us the correct type for each feature.
End of explanation
"""
def run_and_render(eval_model=None, slice_list=None, slice_idx=0):
"""Runs the model analysis and renders the slicing metrics
Args:
eval_model: An instance of tf.saved_model saved with evaluation data
slice_list: A list of tfma.slicer.SingleSliceSpec giving the slices
slice_idx: An integer index into slice_list specifying the slice to use
Returns:
A SlicingMetricsViewer object if in Jupyter notebook; None if in Colab.
"""
eval_result = tfma.run_model_analysis(eval_shared_model=eval_model,
data_location=TFRecord_file,
file_format='tfrecords',
slice_spec=slice_list,
output_path='sample_data',
extractors=None)
return tfma.view.render_slicing_metrics(eval_result, slicing_spec=slice_list[slice_idx] if slice_list else None)
"""
Explanation: Run TFMA and Render Metrics
Now we're ready to create a function that we'll use to run TFMA and render metrics. It requires an EvalSavedModel, a list of SliceSpecs, and an index into the SliceSpec list. It will create an EvalResult using tfma.run_model_analysis, and use it to create a SlicingMetricsViewer using tfma.view.render_slicing_metrics, which will render a visualization of our dataset using the slice we created.
End of explanation
"""
# Load the TFMA results for the first training run
# This will take a minute
eval_model_base_dir_0 = os.path.join(TFMA_DIR, 'run_0', 'eval_model_dir')
eval_model_dir_0 = os.path.join(eval_model_base_dir_0,
max(os.listdir(eval_model_base_dir_0)))
eval_shared_model_0 = tfma.default_eval_shared_model(
eval_saved_model_path=eval_model_dir_0)
# Slice our data by the trip_start_hour feature
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])]
run_and_render(eval_model=eval_shared_model_0, slice_list=slices, slice_idx=0)
"""
Explanation: Slicing and Dicing
We previously trained a model, and now we've loaded the results. Let's take a look at our visualizations, starting with using TFMA to slice along particular features. But first we need to read in the EvalSavedModel from one of our previous training runs.
To define the slice you want to visualize you create a tfma.slicer.SingleSliceSpec
To use tfma.view.render_slicing_metrics you can either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec)
If neither is provided, the overview will be displayed
Plots are interactive:
Click and drag to pan
Scroll to zoom
Right click to reset the view
Simply hover over the desired data point to see more details. Select from four different types of plots using the selections at the bottom.
For example, we'll be setting slicing_column to look at the trip_start_hour feature in our SliceSpec.
End of explanation
"""
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_hour']),
tfma.slicer.SingleSliceSpec(columns=['trip_start_day']),
tfma.slicer.SingleSliceSpec(columns=['trip_start_month'])]
run_and_render(eval_model=eval_shared_model_0, slice_list=slices, slice_idx=0)
"""
Explanation: Slices Overview
The default visualization is the Slices Overview when the number of slices is small. It shows the values of metrics for each slice. Since we've selected trip_start_hour above, it's showing us metrics like accuracy and AUC for each hour, which allows us to look for issues that are specific to some hours and not others.
In the visualization above:
Try sorting the feature column, which is our trip_start_hours feature, by clicking on the column header
Try sorting by precision, and notice that the precision for some of the hours with examples is 0, which may indicate a problem
The chart also allows us to select and display different metrics in our slices.
Try selecting different metrics from the "Show" menu
Try selecting recall in the "Show" menu, and notice that the recall for some of the hours with examples is 0, which may indicate a problem
It is also possible to set a threshold to filter out slices with smaller numbers of examples, or "weights". You can type a minimum number of examples, or use the slider.
Metrics Histogram
This view also supports a Metrics Histogram as an alternative visualization, which is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Columns can be sorted by clicking on the column header. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can also be used to remove outliers in the visualization and the metrics tables. Click the gear icon to switch to a logarithmic scale instead of a linear scale.
Try selecting "Metrics Histogram" in the Visualization menu
More Slices
Let's create a whole list of SliceSpecs, which will allow us to select any of the slices in the list. We'll select the trip_start_day slice (days of the week) by setting the slice_idx to 1. Try changing the slice_idx to 0 or 2 and running again to examine different slices.
End of explanation
"""
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_hour'])]
run_and_render(eval_shared_model_0, slices, 0)
"""
Explanation: You can create feature crosses to analyze combinations of features. Let's create a SliceSpec to look at a cross of trip_start_day and trip_start_hour:
End of explanation
"""
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])]
run_and_render(eval_shared_model_0, slices, 0)
"""
Explanation: Crossing the two columns creates a lot of combinations! Let's narrow down our cross to only look at trips that start at noon. Then let's select accuracy from the visualization:
End of explanation
"""
def get_eval_result(base_dir, run_name, data_loc, slice_spec):
eval_model_base_dir = os.path.join(base_dir, run_name, "eval_model_dir")
versions = os.listdir(eval_model_base_dir)
eval_model_dir = os.path.join(eval_model_base_dir, max(versions))
output_dir = os.path.join(base_dir, "output", run_name)
eval_shared_model = tfma.default_eval_shared_model(eval_saved_model_path=eval_model_dir)
return tfma.run_model_analysis(eval_shared_model=eval_shared_model,
data_location=data_loc,
file_format='tfrecords',
slice_spec=slice_spec,
output_path=output_dir,
extractors=None)
slices = [tfma.slicer.SingleSliceSpec()]
result_ts0 = get_eval_result(TFMA_DIR, 'run_0', TFRecord_file, slices)
result_ts1 = get_eval_result(TFMA_DIR, 'run_1', TFRecord_file, slices)
result_ts2 = get_eval_result(TFMA_DIR, 'run_2', TFRecord_file, slices)
"""
Explanation: Tracking Model Performance Over Time
Your training dataset will be used for training your model, and will hopefully be representative of your test dataset and the data that will be sent to your model in production. However, while the data in inference requests may remain the same as your training data, in many cases it will start to change enough so that the performance of your model will change.
That means that you need to monitor and measure your model's performance on an ongoing basis, so that you can be aware of and react to changes. Let's take a look at how TFMA can help.
Measure Performance For New Data
We downloaded the results of three different training runs above, so let's load them now:
End of explanation
"""
output_dirs = [os.path.join(TFMA_DIR, "output", run_name)
for run_name in ("run_0", "run_1", "run_2")]
eval_results_from_disk = tfma.load_eval_results(
output_dirs[:2], tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, slices[0])
"""
Explanation: Next, let's use TFMA to see how these runs compare using render_time_series.
How does it look today?
First, we'll imagine that we've trained and deployed our model yesterday, and now we want to see how it's doing on the new data coming in today. We can specify particular slices to look at. Let's compare our training runs for trips that started at noon.
Note:
* The visualization will start by displaying accuracy. Add AUC and average loss by using the "Add metric series" menu.
* Hover over the curves to see the values.
* In the metric series charts the X axis is the model ID number of the model run that you're examining. The numbers themselves are not meaningful.
End of explanation
"""
eval_results_from_disk = tfma.load_eval_results(
output_dirs, tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, slices[0])
"""
Explanation: Now we'll imagine that another day has passed and we want to see how it's doing on the new data coming in today, compared to the previous two days. Again add AUC and average loss by using the "Add metric series" menu:
End of explanation
"""
|
dataworkshop/titanic | vladimir/src/Titanic.ipynb | mit | train_df = pd.read_csv('../input/train.csv')
test_df = pd.read_csv('../input/test.csv')
all_df = train_df.append(test_df)
all_df['is_test'] = all_df.Survived.isnull()
all_df.index = all_df.Survived
del all_df['Survived']
all_df.head()
"""
Explanation: Read data
End of explanation
"""
train_df.describe()
"""
Explanation: Target variable
End of explanation
"""
def select_features(df):
non_obj_feats = df.columns[ df.dtypes != 'object' ]
black_list = ['is_test']
return [feat for feat in non_obj_feats if feat not in black_list ]
def get_X_y(df):
feats = select_features(df)
X = df[feats].values
y = df.index.values.astype(int)
return X, y
def check_quality(model, X, y, n_folds=5, random_state=0, shuffle=False):
skf = StratifiedKFold(y, n_folds=n_folds, random_state=random_state, shuffle=shuffle)
scores = []
for train_index, test_index in skf:
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
score = accuracy_score(y_test, y_pred)
scores.append(score)
return np.mean(scores), np.std(scores)
def train_and_verify(all_df, model):
X, y = get_X_y( all_df[ all_df.is_test == False ] )
return check_quality(model, X, y)
class SingleVariableModel(BaseEstimator, ClassifierMixin):
def __init__(self, seed=1):
np.random.seed(seed)
def fit(self, X, y):
return self
def predict(self, X):
return [0] * len(X)
def __repr__(self):
return 'SingleVariableModel'
"""
Explanation: Target variable is Survived.
Quality metric
Your score is the percentage of passengers you correctly predict. That means - accuracy.
Model
One variable model
Let's build a very simple model, based on one variable.
That nobody will survived.
End of explanation
"""
train_and_verify(all_df, SingleVariableModel())
"""
Explanation: Run & evaluate single variable model
End of explanation
"""
all_df.fillna(-1, inplace=True)
train_and_verify(all_df, RandomForestClassifier())
"""
Explanation: What do you think about this result?
Let's build a more advanced model
Missing values
There are several methods for handling missing values; here, let's fill them with -1.
End of explanation
"""
|
Neuroglycerin/neukrill-net-work | notebooks/model_run_and_result_analyses/Analyse alexnet_learning_rate model.ipynb | mit | cd ..
%run check_test_score.py -v run_settings/alexnet_based_norm_global.json
"""
Explanation: This notebook investigates alexnet-based model with normalisation and a new learning rate schedule.
The changes made include: increasing the number of epochs at which the learning rate and momentum saturate to 250 (instead of the original 25); making the learning rate scaling factors smaller (shrink_amt = 0.99 instead of 0.9 and grow_amt = 1.01 instead of 1.1); and monitoring valid_y_nll. Both the best and the most recent models are saved.
Quite soon, valid_y_nll started looking pretty good. Let's check the score. But first, we want to know what to compare it to. Looking at the equivalent model with the original learning rate schedule (alexnet_based_norm_global.pkl):
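To make the schedule concrete, here is a rough sketch of the shrink/grow logic described above. This is only an approximation of pylearn2's monitor-based adjuster, not its actual implementation; the shrink_amt/grow_amt values mirror the run settings:

```python
def adjust_lr(lr, current_nll, previous_nll, shrink_amt=0.99, grow_amt=1.01):
    """Shrink the learning rate when the monitored channel worsens,
    grow it when it improves."""
    if current_nll > previous_nll:
        return lr * shrink_amt
    return lr * grow_amt

lr = 0.1
lr = adjust_lr(lr, current_nll=0.9, previous_nll=1.0)  # improved, so lr grows
```

With factors this close to 1, the learning rate drifts much more slowly than with the original 0.9/1.1 settings, which is the point of the change.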
End of explanation
"""
%run check_test_score.py -v run_settings/alexnet_learning_rate.json
"""
Explanation: Just to make sure nothing goes wrong with reads/writes (as this model takes a lot less time per epoch), get a backup of the best model so far.
Now check score of our model:
End of explanation
"""
%matplotlib inline
%run ~/Neuroglycerin/pylearn2/pylearn2/scripts/plot_monitor.py /disk/scratch/neuroglycerin/models/alexnet_based_norm_global.pkl
%run ~/Neuroglycerin/pylearn2/pylearn2/scripts/plot_monitor.py /disk/scratch/neuroglycerin/models/alexnet_learning_rate.pkl.backup
"""
Explanation: Was hoping for a better score there... Check how many epochs each model saw:
End of explanation
"""
%run ~/Neuroglycerin/pylearn2/pylearn2/scripts/plot_monitor.py /disk/scratch/neuroglycerin/models/alexnet_learning_rate.pkl
"""
Explanation: Let it run a little longer.
Check again, best file first:
End of explanation
"""
%run ~/Neuroglycerin/pylearn2/pylearn2/scripts/plot_monitor.py /disk/scratch/neuroglycerin/models/alexnet_learning_rate_recent.pkl
"""
Explanation: And the most recent now:
End of explanation
"""
%run check_test_score.py -v run_settings/alexnet_learning_rate.json
"""
Explanation: The score it gets on the holdout set is:
End of explanation
"""
%run check_test_score.py -v run_settings/alexnet_learning_rate2.json
%run check_test_score.py -v run_settings/alexnet_learning_rate3.json
"""
Explanation: It got worse!
Now check the scores of two models where only the number of saturating epochs was changed. One monitored valid_y_nll, the other valid_objective.
End of explanation
"""
|
google-research/google-research | domain_conditional_predictors/validation_experiment.ipynb | apache-2.0 | #@test {"skip": true}
!pip install dm-sonnet==2.0.0 --quiet
!pip install tensorflow_addons==0.12 --quiet
#@test {"output": "ignore"}
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_addons as tfa
try:
import sonnet.v2 as snt
except ModuleNotFoundError:
import sonnet as snt
"""
Explanation: Experiments reported in "Domain Conditional Predictors for Domain Adaptation"
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Preamble
End of explanation
"""
#@test {"skip": true}
print(" TensorFlow version: {}".format(tf.__version__))
print(" Sonnet version: {}".format(snt.__version__))
print("TensorFlow Addons version: {}".format(tfa.__version__))
"""
Explanation: Colab tested with:
TensorFlow version: 2.4.1
Sonnet version: 2.0.0
TensorFlow Addons version: 0.12.0
End of explanation
"""
#@test {"output": "ignore"}
batch_size = 100
NUM_DOMAINS = 4
def process_batch_train(images, labels):
images = tf.image.grayscale_to_rgb(images)
images = tf.cast(images, dtype=tf.float32)
images = images / 255.
domain_index_candidates = tf.convert_to_tensor(list(range(NUM_DOMAINS)))
samples = tf.random.categorical(tf.math.log([[1/NUM_DOMAINS for i in range(NUM_DOMAINS)]]), 1) # note log-prob
domain_index=domain_index_candidates[tf.cast(samples[0][0], dtype=tf.int64)]
if tf.math.equal(domain_index, tf.constant(0)):
images = tfa.image.rotate(images, np.pi/3)
elif tf.math.equal(domain_index, tf.constant(1)):
images = tfa.image.gaussian_filter2d(images, filter_shape=[8,8])
elif tf.math.equal(domain_index, tf.constant(2)):
images = tf.ones_like(images) - images
elif tf.math.equal(domain_index, tf.constant(3)):
images = tf.image.flip_left_right(images)
domain_label = tf.cast(domain_index, tf.int64)
return images, labels, domain_label
def process_batch_test(images, labels):
images = tf.image.grayscale_to_rgb(images)
images = tf.cast(images, dtype=tf.float32)
images = images / 255.
return images, labels
def mnist(split, multi_domain_test=False):
dataset = tfds.load("mnist", split=split, as_supervised=True)
if split == "train":
process_batch = process_batch_train
else:
if multi_domain_test:
process_batch = process_batch_train
else:
process_batch = process_batch_test
dataset = dataset.map(process_batch)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
dataset = dataset.cache()
return dataset
mnist_train = mnist("train").shuffle(1000)
mnist_test = mnist("test")
mnist_test_multidomain = mnist("test", multi_domain_test=True)
"""
Explanation: Data preparation
Define 4 domains by transforming the data on the fly. Current transformations are rotation, blurring, flipping colors between background and digits, and horizontal flip.
End of explanation
"""
#@test {"skip": true}
import matplotlib.pyplot as plt
images, label, domain_label = next(iter(mnist_train))
print(label[0], domain_label[0])
plt.imshow(images[0]);
"""
Explanation: Look at samples from the training domains. Domain labels are such that: Rotation >> 0, Blurring >> 1, Color flipping >> 2, Horizontal flip >> 3.
End of explanation
"""
class M_unconditional(snt.Module):
def __init__(self):
super(M_unconditional, self).__init__()
self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1")
self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2")
self.flatten = snt.Flatten()
self.logits = snt.Linear(10, name="logits")
def __call__(self, images):
output = tf.nn.relu(self.hidden1(images))
output = tf.nn.relu(self.hidden2(output))
output = self.flatten(output)
output = self.logits(output)
return output
m_unconditional = M_unconditional()
"""
Explanation: Baseline 1: Unconditional model
A baseline model is defined below and referred to as unconditional since it does not take domain labels into account in any way.
End of explanation
"""
#@test {"output": "ignore"}
opt_unconditional = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
loss_log_unconditional = []
def step(images, labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
logits_unconditional = m_unconditional(images)
loss_unconditional = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_unconditional,
labels=labels)
loss_unconditional = tf.reduce_mean(loss_unconditional)
params_unconditional = m_unconditional.trainable_variables
grads_unconditional = tape.gradient(loss_unconditional, params_unconditional)
opt_unconditional.apply(grads_unconditional, params_unconditional)
return loss_unconditional
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
loss_unconditional = step(images, labels)
loss_log_unconditional.append(loss_unconditional.numpy())
print("\n\nFinal loss: {}".format(loss_log_unconditional[-1]))
REDUCTION_FACTOR = 0.2 ## Factor in [0,1] used to check whether the training loss reduces during training
## Checks whether the training loss reduces
assert loss_log_unconditional[-1] < REDUCTION_FACTOR*loss_log_unconditional[0]
"""
Explanation: Training of the baseline:
End of explanation
"""
#@test {"skip": true}
class DANN_task(snt.Module):
def __init__(self):
super(DANN_task, self).__init__()
self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1")
self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2")
self.flatten = snt.Flatten()
self.logits = snt.Linear(10, name="logits")
def __call__(self, images):
output = tf.nn.relu(self.hidden1(images))
output = tf.nn.relu(self.hidden2(output))
z = self.flatten(output)
output = self.logits(z)
return output, z
#@test {"skip": true}
class DANN_domain(snt.Module):
def __init__(self):
super(DANN_domain, self).__init__()
self.logits = snt.Linear(NUM_DOMAINS, name="logits")
def __call__(self, z):
output = self.logits(z)
return output
#@test {"skip": true}
m_DANN_task = DANN_task()
m_DANN_domain = DANN_domain()
"""
Explanation: Baseline 2: Domain invariant representations
A DANN-like model where the domain discriminator is replaced by a domain classifier aiming to induce invariance across the training domains.
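A small numpy sketch of the invariance term used in the training loop below: the domain logits are scored with cross-entropy against the uniform distribution, so minimising this term pushes the domain posterior toward uniform (domain-uninformative) predictions.

```python
import numpy as np

NUM_DOMAINS = 4

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def xent_vs_uniform(logits):
    # Cross-entropy H(uniform, p): minimised (= log NUM_DOMAINS) exactly when
    # the predicted domain distribution p is uniform.
    p = softmax(logits)
    return -np.sum(np.log(p)) / NUM_DOMAINS

uniform_loss = xent_vs_uniform(np.zeros(NUM_DOMAINS))
peaked_loss = xent_vs_uniform(np.array([5.0, 0.0, 0.0, 0.0]))
# peaked_loss > uniform_loss: confident domain predictions are penalised.
```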
End of explanation
"""
#@test {"skip": true}
opt_task = snt.optimizers.SGD(learning_rate=0.01)
opt_domain = snt.optimizers.SGD(learning_rate=0.01)
domain_loss_weight = 0.2 ## Hyperparameter - factor to be multiplied by the domain entropy term when training the task classifier
num_epochs = 20 ## Doubled the number of epochs to train the task classifier for as many iterations as the other methods since we have alternate updates
loss_log_dann = {'task_loss':[],'domain_loss':[]}
number_of_iterations = 0
def step(images, labels, domain_labels, iteration_count):
"""Performs one optimizer step on a single mini-batch."""
if iteration_count%2==0: ## Alternate between training the class classifier and the domain classifier
with tf.GradientTape() as tape:
logits_DANN_task, z_DANN = m_DANN_task(images)
logist_DANN_domain = m_DANN_domain(z_DANN)
loss_DANN_task = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_DANN_task,
labels=labels)
loss_DANN_domain = tf.nn.softmax_cross_entropy_with_logits(logits=logist_DANN_domain,
labels=1/NUM_DOMAINS*tf.ones_like(logist_DANN_domain)) ## Negative entropy of P(Y|X) measured as the cross-entropy against the uniform dist.
loss_DANN = tf.reduce_mean(loss_DANN_task + domain_loss_weight*loss_DANN_domain)
params_DANN = m_DANN_task.trainable_variables
grads_DANN = tape.gradient(loss_DANN, params_DANN)
opt_task.apply(grads_DANN, params_DANN)
return 'task_loss', loss_DANN
else:
with tf.GradientTape() as tape:
_, z_DANN = m_DANN_task(images)
logist_DANN_domain_classifier = m_DANN_domain(z_DANN)
loss_DANN_domain_classifier = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logist_DANN_domain_classifier,
labels=domain_labels)
loss_DANN_domain_classifier = tf.reduce_mean(loss_DANN_domain_classifier)
params_DANN_domain_classifier = m_DANN_domain.trainable_variables
grads_DANN_domain_classifier = tape.gradient(loss_DANN_domain_classifier, params_DANN_domain_classifier)
opt_domain.apply(grads_DANN_domain_classifier, params_DANN_domain_classifier)
return 'domain_loss', loss_DANN_domain_classifier
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
number_of_iterations += 1
loss_tag, loss_dann = step(images, labels, domain_labels, number_of_iterations)
loss_log_dann[loss_tag].append(loss_dann.numpy())
print("\n\nFinal losses: {} - {}, {} - {}".format('task_loss', loss_log_dann['task_loss'][-1], 'domain_loss', loss_log_dann['domain_loss'][-1]))
"""
Explanation: Training of the DANN baseline
End of explanation
"""
#@test {"skip": true}
class FiLM(snt.Module):
def __init__(self, features_shape):
super(FiLM, self).__init__()
self.features_shape = features_shape
target_dimension = np.prod(features_shape)
self.hidden_W = snt.Linear(output_size=target_dimension)
self.hidden_B = snt.Linear(output_size=target_dimension)
def __call__(self, features, z):
W = snt.reshape(self.hidden_W(z), output_shape=self.features_shape)
B = snt.reshape(self.hidden_B(z), output_shape=self.features_shape)
output = W*features+B
return output
#@test {"skip": true}
class M_task(snt.Module):
def __init__(self):
super(M_task, self).__init__()
self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1")
self.film1 = FiLM(features_shape=[28,28,10])
self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2")
self.film2 = FiLM(features_shape=[28,28,20])
self.flatten = snt.Flatten()
self.logits = snt.Linear(10, name="logits")
def __call__(self, images, z):
output = tf.nn.relu(self.hidden1(images))
output = self.film1(output,z)
output = tf.nn.relu(self.hidden2(output))
output = self.film2(output,z)
output = self.flatten(output)
output = self.logits(output)
return output
#@test {"skip": true}
class M_domain(snt.Module):
def __init__(self):
super(M_domain, self).__init__()
self.hidden = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden")
self.flatten = snt.Flatten()
self.logits = snt.Linear(NUM_DOMAINS, name="logits")
def __call__(self, images):
output = tf.nn.relu(self.hidden(images))
z = self.flatten(output)
output = self.logits(z)
return output, z
#@test {"skip": true}
m_task = M_task()
m_domain = M_domain()
#@test {"skip": true}
images, labels = next(iter(mnist_test))
domain_logits, z = m_domain(images)
logits = m_task(images, z)
prediction = tf.argmax(logits[0]).numpy()
actual = labels[0].numpy()
print("Predicted class: {} actual class: {}".format(prediction, actual))
plt.imshow(images[0])
"""
Explanation: Definition of our models
The models for our proposed setting are defined below.
The FiLM layer simply projects z onto 2 tensors (independent dense layers for each projection) matching the shape of the features. Each such tensor is used for element-wise multiplication and addition with the input features.
m_domain corresponds to a domain classifier. It returns the flattened output of its convolutional layer, used as z, as well as a set of logits over the training domains.
m_task is the main classifier and it contains FiLM layers that take z as input. Its output corresponds to the set of logits over the labels.
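For intuition, a small numpy sketch of the FiLM operation, with fixed random projection matrices standing in for the learned dense layers:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(28, 28, 10))  # a conv feature map
z = rng.normal(size=(128,))               # conditioning vector from m_domain

# Two independent projections of z, reshaped to the feature shape.
proj_W = rng.normal(size=(128, 28 * 28 * 10))
proj_B = rng.normal(size=(128, 28 * 28 * 10))
W = (z @ proj_W).reshape(28, 28, 10)
B = (z @ proj_B).reshape(28, 28, 10)

output = W * features + B  # element-wise scale and shift
```

In the Sonnet modules above the same two projections are learned jointly with the rest of the network.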
End of explanation
"""
#@test {"skip": true}
from tqdm import tqdm
# MNIST training set has 60k images.
num_images = 60000
def progress_bar(generator):
return tqdm(
generator,
unit='images',
unit_scale=batch_size,
total=(num_images // batch_size) * num_epochs)
#@test {"skip": true}
opt = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
loss_log = {'total_loss':[], 'task_loss':[], 'domain_loss':[]}
def step(images, labels, domain_labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
domain_logits, z = m_domain(images)
logits = m_task(images, z)
loss_task = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
labels=labels)
loss_domain = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=domain_logits,
labels=domain_labels)
loss = loss_task + loss_domain
loss = tf.reduce_mean(loss)
loss_task = tf.reduce_mean(loss_task)
loss_domain = tf.reduce_mean(loss_domain)
params = m_task.trainable_variables + m_domain.trainable_variables
grads = tape.gradient(loss, params)
opt.apply(grads, params)
return loss, loss_task, loss_domain
for images, labels, domain_labels in progress_bar(mnist_train.repeat(num_epochs)):
loss, loss_task, loss_domain = step(images, labels, domain_labels)
loss_log['total_loss'].append(loss.numpy())
loss_log['task_loss'].append(loss_task.numpy())
loss_log['domain_loss'].append(loss_domain.numpy())
print("\n\nFinal total loss: {}".format(loss.numpy()))
print("\n\nFinal task loss: {}".format(loss_task.numpy()))
print("\n\nFinal domain loss: {}".format(loss_domain.numpy()))
"""
Explanation: Training of the proposed model
End of explanation
"""
#@test {"skip": true}
class M_learned_z(snt.Module):
def __init__(self):
super(M_learned_z, self).__init__()
self.context = snt.Embed(vocab_size=NUM_DOMAINS, embed_dim=128)
self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1")
self.film1 = FiLM(features_shape=[28,28,10])
self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2")
self.film2 = FiLM(features_shape=[28,28,20])
self.flatten = snt.Flatten()
self.logits = snt.Linear(10, name="logits")
def __call__(self, images, domain_labels):
z = self.context(domain_labels)
output = tf.nn.relu(self.hidden1(images))
output = self.film1(output,z)
output = tf.nn.relu(self.hidden2(output))
output = self.film2(output,z)
output = self.flatten(output)
output = self.logits(output)
return output
#@test {"skip": true}
m_learned_z = M_learned_z()
#@test {"skip": true}
opt_learned_z = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
loss_log_learned_z = []
def step(images, labels, domain_labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
logits_learned_z = m_learned_z(images, domain_labels)
loss_learned_z = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_learned_z,
labels=labels)
loss_learned_z = tf.reduce_mean(loss_learned_z)
params_learned_z = m_learned_z.trainable_variables
grads_learned_z = tape.gradient(loss_learned_z, params_learned_z)
opt_learned_z.apply(grads_learned_z, params_learned_z)
return loss_learned_z
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
loss_learned_z = step(images, labels, domain_labels)
loss_log_learned_z.append(loss_learned_z.numpy())
print("\n\nFinal loss: {}".format(loss_log_learned_z[-1]))
"""
Explanation: Ablation 1: Learned domain-wise context variable z
Here we consider a case where the context variables z used for conditioning are learned directly from data, and the domain predictor is discarded. This only allows for in-domain prediction though.
End of explanation
"""
#@test {"skip": true}
m_task_ablation = M_task()
m_domain_ablation = M_domain()
m_DANN_ablation = DANN_domain() ## Used for evaluating how domain dependent the representations are
#@test {"skip": true}
opt_ablation = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
loss_log_ablation = []
def step(images, labels, domain_labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
domain_logits_ablation, z = m_domain_ablation(images)
logits_ablation = m_task_ablation(images, z)
loss_ablation = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_ablation,
labels=labels)
loss_ablation = tf.reduce_mean(loss_ablation)
params_ablation = m_task_ablation.trainable_variables + m_domain_ablation.trainable_variables
grads_ablation = tape.gradient(loss_ablation, params_ablation)
opt_ablation.apply(grads_ablation, params_ablation)
return loss_ablation
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
loss_ablation = step(images, labels, domain_labels)
loss_log_ablation.append(loss_ablation.numpy())
print("\n\nFinal task loss: {}".format(loss_ablation.numpy()))
#@test {"skip": true}
opt_ablation_domain_classifier = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
log_loss_ablation_domain_classification = []
def step(images, labels, domain_labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
_, z = m_domain_ablation(images)
logits_ablation_domain_classification = m_DANN_ablation(z)
loss_ablation_domain_classification = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_ablation_domain_classification,
labels=domain_labels)
loss_ablation_domain_classification = tf.reduce_mean(loss_ablation_domain_classification)
params_ablation_domain_classification = m_DANN_ablation.trainable_variables
grads_ablation_domain_classification = tape.gradient(loss_ablation_domain_classification, params_ablation_domain_classification)
  opt_ablation_domain_classifier.apply(grads_ablation_domain_classification, params_ablation_domain_classification)  # use the domain classifier's own optimizer
return loss_ablation_domain_classification
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
loss_ablation_domain_classifier = step(images, labels, domain_labels)
log_loss_ablation_domain_classification.append(loss_ablation_domain_classifier.numpy())
print("\n\nFinal domain classification loss: {}".format(loss_ablation_domain_classifier.numpy()))
"""
Explanation: Ablation 2: Dropping the domain classification term of the loss
We consider an ablation where the same models as in our conditional predictor are used, but training is carried out with the classification loss only. This gives us a model with the same capacity as ours but no explicit mechanism to account for domain variations in train data. The goal of this ablation is to understand whether the improvement might be simply coming from the added capacity rather than the conditional modeling.
End of explanation
"""
#@test {"skip": true}
f = plt.figure(figsize=(32,8))
ax = f.add_subplot(1,3,1)
ax.plot(loss_log['total_loss'])
ax.set_title('Total Loss')
ax = f.add_subplot(1,3,2)
ax.plot(loss_log['task_loss'])
ax.set_title('Task loss')
ax = f.add_subplot(1,3,3)
ax.plot(loss_log['domain_loss'])
ax.set_title('Domain loss')
#@test {"skip": true}
f = plt.figure(figsize=(8,8))
ax = f.add_axes([1,1,1,1])
ax.plot(loss_log_unconditional)
ax.set_title('Unconditional baseline - Train Loss')
#@test {"skip": true}
f = plt.figure(figsize=(16,8))
ax = f.add_subplot(1,2,1)
ax.plot(loss_log_dann['task_loss'])
ax.set_title('Domain invariant baseline - Task loss (Class. + -Entropy)')
ax = f.add_subplot(1,2,2)
ax.plot(loss_log_dann['domain_loss'])
ax.set_title('Domain invariant baseline - Domain classification loss')
#@test {"skip": true}
f = plt.figure(figsize=(16,8))
ax = f.add_subplot(1,2,1)
ax.plot(loss_log_learned_z)
ax.set_title('Ablation 1 - Task loss')
ax = f.add_subplot(1,2,2)
ax.plot(loss_log_ablation)
ax.set_title('Ablation 2 - Task loss')
#@test {"skip": true}
f = plt.figure(figsize=(8,8))
ax = f.add_axes([1,1,1,1])
ax.plot(log_loss_ablation_domain_classification)
ax.set_title('Ablation 2: Domain classification - Train Loss')
"""
Explanation: Results
Plots of training losses
End of explanation
"""
#@test {"skip": true}
total = 0
correct = 0
correct_unconditional = 0
correct_adversarial = 0
correct_ablation2 = 0 ## The model corresponding to ablation 1 can only be used with in-domain data (with domain labels)
for images, labels in mnist_test:
domain_logits, z = m_domain(images)
logits = m_task(images, z)
logits_unconditional = m_unconditional(images)
logits_adversarial, _ = m_DANN_task(images)
domain_logits_ablation, z_ablation = m_domain_ablation(images)
logits_ablation2 = m_task_ablation(images, z_ablation)
predictions = tf.argmax(logits, axis=1)
predictions_unconditional = tf.argmax(logits_unconditional, axis=1)
predictions_adversarial = tf.argmax(logits_adversarial, axis=1)
predictions_ablation2 = tf.argmax(logits_ablation2, axis=1)
correct += tf.math.count_nonzero(tf.equal(predictions, labels))
correct_unconditional += tf.math.count_nonzero(tf.equal(predictions_unconditional, labels))
correct_adversarial += tf.math.count_nonzero(tf.equal(predictions_adversarial, labels))
correct_ablation2 += tf.math.count_nonzero(tf.equal(predictions_ablation2, labels))
total += images.shape[0]
print("Got %d/%d (%.02f%%) correct" % (correct, total, correct / total * 100.))
print("Unconditional baseline perf.: %d/%d (%.02f%%) correct" % (correct_unconditional, total, correct_unconditional / total * 100.))
print("Adversarial baseline perf.: %d/%d (%.02f%%) correct" % (correct_adversarial, total, correct_adversarial / total * 100.))
print("Ablation 2 perf.: %d/%d (%.02f%%) correct" % (correct_ablation2, total, correct_ablation2 / total * 100.))
"""
Explanation: Out-of-domain evaluations
The original test set of mnist without any transformations is considered
End of explanation
"""
#@test {"skip": true}
n_repetitions = 10 ## Going over the test set multiple times to account for multiple transformations
total = 0
correct_class = 0
correct_unconditional = 0
correct_adversarial = 0
correct_ablation1 = 0
correct_ablation2 = 0
correct_domain = 0
correct_domain_adversarial = 0
correct_domain_ablation = 0
for images, labels, domain_labels in mnist_test_multidomain.repeat(n_repetitions):
domain_logits, z = m_domain(images)
class_logits = m_task(images, z)
logits_unconditional = m_unconditional(images)
logits_adversarial, z_adversarial = m_DANN_task(images)
domain_logits_adversarial = m_DANN_domain(z_adversarial)
logits_ablation1 = m_learned_z(images, domain_labels)
_, z_ablation = m_domain_ablation(images)
domain_logits_ablation = m_DANN_ablation(z_ablation)
logits_ablation2 = m_task_ablation(images, z_ablation)
predictions_class = tf.argmax(class_logits, axis=1)
predictions_unconditional = tf.argmax(logits_unconditional, axis=1)
predictions_adversarial = tf.argmax(logits_adversarial, axis=1)
predictions_ablation1 = tf.argmax(logits_ablation1, axis=1)
predictions_ablation2 = tf.argmax(logits_ablation2, axis=1)
predictions_domain = tf.argmax(domain_logits, axis=1)
predictions_domain_adversarial = tf.argmax(domain_logits_adversarial, axis=1)
predictions_domain_ablation = tf.argmax(domain_logits_ablation, axis=1)
correct_class += tf.math.count_nonzero(tf.equal(predictions_class, labels))
correct_unconditional += tf.math.count_nonzero(tf.equal(predictions_unconditional, labels))
correct_adversarial += tf.math.count_nonzero(tf.equal(predictions_adversarial, labels))
correct_ablation1 += tf.math.count_nonzero(tf.equal(predictions_ablation1, labels))
correct_ablation2 += tf.math.count_nonzero(tf.equal(predictions_ablation2, labels))
correct_domain += tf.math.count_nonzero(tf.equal(predictions_domain, domain_labels))
correct_domain_adversarial += tf.math.count_nonzero(tf.equal(predictions_domain_adversarial, domain_labels))
correct_domain_ablation += tf.math.count_nonzero(tf.equal(predictions_domain_ablation, domain_labels))
total += images.shape[0]
print("In domain unconditional baseline perf.: %d/%d (%.02f%%) correct" % (correct_unconditional, total, correct_unconditional / total * 100.))
print("In domain adversarial baseline perf.: %d/%d (%.02f%%) correct" % (correct_adversarial, total, correct_adversarial / total * 100.))
print("In domain ablation 1: %d/%d (%.02f%%) correct" % (correct_ablation1, total, correct_ablation1 / total * 100.))
print("In domain ablation 2: %d/%d (%.02f%%) correct" % (correct_ablation2, total, correct_ablation2 / total * 100.))
print("In domain class predictions: Got %d/%d (%.02f%%) correct" % (correct_class, total, correct_class / total * 100.))
print("\n\nDomain predictions: Got %d/%d (%.02f%%) correct" % (correct_domain, total, correct_domain / total * 100.))
print("Adversarial baseline domain predictions: Got %d/%d (%.02f%%) correct" % (correct_domain_adversarial, total, correct_domain_adversarial / total * 100.))
print("Ablation 2 domain predictions: Got %d/%d (%.02f%%) correct" % (correct_domain_ablation, total, correct_domain_ablation / total * 100.))
#@test {"skip": true}
def sample(correct, rows, cols):
n = 0
f, ax = plt.subplots(rows, cols)
if rows > 1:
ax = tf.nest.flatten([tuple(ax[i]) for i in range(rows)])
f.set_figwidth(14)
f.set_figheight(4 * rows)
for images, labels in mnist_test:
domain_logits, z = m_domain(images)
logits = m_task(images, z)
predictions = tf.argmax(logits, axis=1)
eq = tf.equal(predictions, labels)
for i, x in enumerate(eq):
if x.numpy() == correct:
label = labels[i]
prediction = predictions[i]
image = images[i]
ax[n].imshow(image)
ax[n].set_title("Prediction:{}\nActual:{}".format(prediction, label))
n += 1
if n == (rows * cols):
break
if n == (rows * cols):
break
"""
Explanation: In-domain evaluations and domain prediction
The same transformations applied to the training data are applied at test time.
End of explanation
"""
#@test {"skip": true}
sample(correct=True, rows=1, cols=5)
#@test {"skip": true}
sample(correct=False, rows=2, cols=5)
"""
Explanation: Samples and corresponding predictions
End of explanation
"""
|
adukic/nd101 | intro-to-tflearn/Sentiment Analysis with TFLearn.ipynb | mit | import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
"""
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
"""
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
"""
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
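A toy version of this encoding, with a made-up 4-word vocabulary:

```python
import numpy as np

vocab = ['the', 'movie', 'great', 'bad']
word2idx = {word: i for i, word in enumerate(vocab)}

review = 'the movie was great the acting was great'
vector = np.zeros(len(vocab), dtype=np.int_)
for word in review.split(' '):
    if word in word2idx:
        vector[word2idx[word]] += 1
# vector is [2, 1, 2, 0]: 'the' twice, 'movie' once, 'great' twice, 'bad' never
```

The real vocabulary we build below works the same way, just with 10000 words.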
Read the data
Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
"""
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
"""
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
"""
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
"""
print(vocab[-1], ': ', total_counts[vocab[-1]])
"""
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
"""
word2idx = {word: i for i, word in enumerate(vocab)}
"""
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
"""
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
"""
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since not all words are in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to return a default value for words that aren't in the dictionary. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
"""
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
"""
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
"""
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
"""
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
"""
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9  # note: despite the name, this is the fraction of the data used for training
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
"""
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here; TFLearn will do that for us later.
End of explanation
"""
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated; in this example, with categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
"""
model = build_model()
"""
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
"""
Explanation: Try out your own text!
End of explanation
"""
|
shareactorIO/pipeline | source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Distributed/Distributed_Tensorflow_Training_HybridCloud.ipynb | apache-2.0 | import tensorflow as tf
!pip install version_information
%load_ext version_information
%version_information numpy, scipy, matplotlib, pandas, tensorflow, sklearn, skflow
"""
Explanation: Distributed Tensorflow
End of explanation
"""
!kubectl get pod
CLUSTER_SPEC= """
{
'ps' : ['clustered-tensorflow-ps0:2222', 'clustered-tensorflow-ps1:2222'],
'worker' : [ 'clustered-tensorflow-worker0:2222', 'clustered-tensorflow-worker1:2222']
}
"""
import ast
cluster_def = ast.literal_eval(CLUSTER_SPEC)
cluster_def
clusterSpec = tf.train.ClusterSpec(cluster_def)
clusterSpec.jobs
for job in clusterSpec.jobs:
print(job, clusterSpec.job_tasks(job))
workers = ['/job:worker/task:{}'.format(i) for i in range(len(cluster_def['worker']))]
param_servers = ['/job:ps/task:{}'.format(i) for i in range(len(cluster_def['ps']))]
workers
param_servers
"""
Explanation: Overview of Components
Cluster
To define a distributed computation in tensorflow we need to specify two kinds of jobs:
worker jobs
parameter server (ps) jobs
Each job is defined by one or more tasks. Each task is usually specified with a simple numerical index, i.e. 0, 1, 2, 3, ...
End of explanation
"""
l = tf.Variable("local_cpu")
l.device
"""
Explanation: Pinning of Variables
Each Variable is assigned to a specific device.
End of explanation
"""
for ps in param_servers:
with tf.device(ps):
v = tf.Variable("my_var")
v.device
"""
Explanation: We can enforce the assigned device using the tf.device context.
End of explanation
"""
v.to_proto()
"""
Explanation: Tensorflow Server
The server is responsible for handling the actual communication. On each of the cluster's nodes we will spawn a simple gRPC server.
python
def launch_worker(job_name, task_id, cluster_def):
server = tf.train.Server(
cluster_def,
job_name=job_name,
task_index=task_id
)
server.join()
Connecting to a Server
To connect to any server you can specify the 'target' of the session — the direct ip:port of the server — when creating a Session object.
Note that the server is generic and can assume either the role of parameter server or of worker. The cluster configuration decides the role.
The best practice is to create a single image that launches the tensorflow worker.
Environment variables then specify the exact role for the worker at run time.
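A hypothetical launcher entry point illustrating that pattern (the variable names TF_JOB_NAME and TF_TASK_INDEX are assumptions for this sketch, not a TensorFlow convention):

```python
import os

# Generic entry point: the same container image runs as "ps" or "worker",
# with the role chosen by environment variables at deploy time.
job_name = os.environ.get("TF_JOB_NAME", "worker")
task_id = int(os.environ.get("TF_TASK_INDEX", "0"))
print("starting %s task %d" % (job_name, task_id))
```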
gRPC
gRPC Is a Remote Procedure Call protocol based on Protocol Buffers.
Each object in tensorflow that has to be sent over the wire has a gRPC definition.
Client figures out what variables need to be serialized to gRPC.
Client makes the gRPC remote call to the Server and sends the values.
If the Server accepts the call, the serialized tensors are de-serialized
The Server runs the requested operation on the graph and all its dependencies
The Server serializes the result and sends it back on the same connection to the Client
The Client receives the results and deserializes.
Example of a gRPC declaration for the Variable
```javascript
syntax = "proto3";
package tensorflow;
// Protocol buffer representing a Variable.
message VariableDef {
// Name of the variable tensor.
string variable_name = 1;
// Name of the initializer op.
string initializer_name = 2;
// Name of the snapshot tensor.
string snapshot_name = 3;
}
```
Each variable can then be serialized using the to_proto method:
End of explanation
"""
batch_size = 1000
#graph = tf.Graph()
#with graph.as_default():
# with tf.device('/job:ps/task:0'):
# input_array = tf.placeholder(tf.int32, shape=[None])
# final_result = tf.Variable(0)
# divide the input across the cluster:
# all_reduce = []
# splitted = tf.split(0, len(workers), input_array)
# for idx, (portion, worker) in enumerate(zip(splitted,workers)):
# with tf.device(worker):
# print(worker)
# local_reduce = tf.reduce_sum(portion)
# local_reduce = tf.Print(portion, [local_reduce], message="portion is")
# all_reduce.append(local_reduce)
# final_result = tf.reduce_sum(tf.pack(all_reduce))
# print(final_result)
batch_size = 1000
graph = tf.Graph()
with graph.as_default():
all_reduce = []
initializer = tf.truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)
with tf.device(tf.train.replica_device_setter(cluster=clusterSpec)):
input_array = tf.placeholder(tf.int32, shape=[None])
final_result = tf.Variable(0)
# divide the input across the cluster:
splitted = tf.split(0, len(workers), input_array)
for idx, (portion, worker) in enumerate(zip(splitted,workers)):
with tf.device(worker):
print(worker)
local_reduce = tf.reduce_sum(portion)
local_reduce = tf.Print(portion, [local_reduce], message="portion is")
all_reduce.append(local_reduce)
final_result = tf.reduce_sum(tf.pack(all_reduce))
sess_config = tf.ConfigProto(
allow_soft_placement=True,
log_device_placement=True)
"""
Explanation: Simple reduce sum Example
End of explanation
"""
import numpy as np
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
#, config=sess_config
with tf.Session("grpc://clustered-tensorflow-worker0:2222", graph=graph) as session:
result = session.run(final_result, feed_dict={ input_array: np.ones([1000]) }, options=run_options)
print(result)
"""
Explanation: We can now run the graph
End of explanation
"""
final_result.device
"""
Explanation: We can also inspect any remote variable:
End of explanation
"""
|
mhdella/scipy_2015_sklearn_tutorial | notebooks/04.3 Analyzing Model Capacity.ipynb | cc0-1.0 | import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.svm import SVR
from sklearn import cross_validation
rng = np.random.RandomState(42)
n_samples = 200
kernels = ['linear', 'poly', 'rbf']
true_fun = lambda X: X ** 3
X = np.sort(5 * (rng.rand(n_samples) - .5))
y = true_fun(X) + .01 * rng.randn(n_samples)
plt.figure(figsize=(14, 5))
for i in range(len(kernels)):
ax = plt.subplot(1, len(kernels), i + 1)
plt.setp(ax, xticks=(), yticks=())
model = SVR(kernel=kernels[i], C=5)
model.fit(X[:, np.newaxis], y)
# Evaluate the models using crossvalidation
scores = cross_validation.cross_val_score(model,
X[:, np.newaxis], y, scoring="mean_squared_error", cv=10)
X_test = np.linspace(3 * -.5, 3 * .5, 100)
plt.plot(X_test, model.predict(X_test[:, np.newaxis]), label="Model")
plt.plot(X_test, true_fun(X_test), label="True function")
plt.scatter(X, y, label="Samples")
plt.xlabel("x")
plt.ylabel("y")
plt.xlim((-3 * .5, 3 * .5))
plt.ylim((-1, 1))
plt.legend(loc="best")
plt.title("Kernel {}\nMSE = {:.2e}(+/- {:.2e})".format(
kernels[i], -scores.mean(), scores.std()))
plt.show()
"""
Explanation: The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Use simpler or more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, sometimes using a
more complicated model will give worse results. Also, sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Learning Curves and Validation Curves
One way to address this issue is to use what are often called Learning Curves.
Given a particular dataset and a model we'd like to fit (e.g. using feature creation and linear regression), we'd
like to tune our value of the hyperparameter kernel to give us the best fit. We can visualize the different regimes with the following plot, modified from the sklearn examples here
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn import cross_validation
rng = np.random.RandomState(0)
n_samples = 200
true_fun = lambda X: X ** 3
X = np.sort(5 * (rng.rand(n_samples) - .5))
y = true_fun(X) + .01 * rng.randn(n_samples)
X = X[:, None]
y = y
f, axarr = plt.subplots(1, 3)
axarr[0].scatter(X[::20], y[::20])
axarr[0].set_xlim((-3 * .5, 3 * .5))
axarr[0].set_ylim((-1, 1))
axarr[1].scatter(X[::10], y[::10])
axarr[1].set_xlim((-3 * .5, 3 * .5))
axarr[1].set_ylim((-1, 1))
axarr[2].scatter(X, y)
axarr[2].set_xlim((-3 * .5, 3 * .5))
axarr[2].set_ylim((-1, 1))
plt.show()
"""
Explanation: Learning Curves
What the right model for a dataset is depends critically on how much data we have. More data allows us to be more confident about building a complex model. Lets built some intuition on why that is. Look at the following datasets:
End of explanation
"""
from sklearn.learning_curve import learning_curve
from sklearn.svm import SVR
# This is actually negative MSE!
training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='linear'), X, y, cv=10,
scoring="mean_squared_error",
train_sizes=[.6, .7, .8, .9, 1.])
# Use the negative because we want to maximize score
print(train_scores.mean(axis=1))
plt.plot(training_sizes, train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, test_scores.mean(axis=1), label="test scores")
#plt.ylim((0, 50))
plt.legend(loc='best')
"""
Explanation: They all come from the same underlying process. But if you were asked to make a prediction, you would be more likely to draw a straight line for the left-most one, as there are only very few datapoints, and no real rule is apparent. For the dataset in the middle, some structure is recognizable, though the exact shape of the true function is maybe not obvious. With even more data on the right hand side, you would probably be very comfortable with drawing a curved line with a lot of certainty.
A great way to explore how a model fit evolves with different dataset sizes is with learning curves.
A learning curve plots the validation error for a given model against different training set sizes.
But first, take a moment to think about what we're going to see:
Questions:
As the number of training samples are increased, what do you expect to see for the training error? For the validation error?
Would you expect the training error to be higher or lower than the validation error? Would you ever expect this to change?
We can run the following code to plot the learning curve for a kernel = linear model:
End of explanation
"""
from sklearn.learning_curve import learning_curve
from sklearn.svm import SVR
training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='rbf'), X, y, cv=10,
scoring="mean_squared_error",
train_sizes=[.6, .7, .8, .9, 1.])
# Use the negative because we want to minimize squared error
plt.plot(training_sizes, train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, test_scores.mean(axis=1), label="test scores")
plt.legend(loc='best')
"""
Explanation: You can see that for the model with kernel = linear, the validation score doesn't really improve as more data is given.
Notice that the validation error generally improves with a growing training set,
while the training error generally gets worse with a growing training set. From
this we can infer that as the training size increases, they will converge to a single
value.
From the above discussion, we know that kernel = linear
underfits the data. This is indicated by the fact that both the
training and validation errors are very poor. When confronted with this type of learning curve,
we can expect that adding more training data will not help matters: both
lines will converge to a relatively high error.
When the learning curves have converged to a poor error, we have an underfitting model.
An underfitting model can be improved by:
Using a more sophisticated model (i.e. in this case, increase complexity of the kernel parameter)
Gather more features for each sample.
Decrease regularization in a regularized model.
An underfitting model cannot be improved, however, by increasing the number of training
samples (do you see why?)
Now let's look at an overfit model:
End of explanation
"""
|
mrcinv/matpy | 02b_vrste.ipynb | gpl-2.0 | from sympy import *
init_printing()
n = Symbol('n')
a = lambda n: 1/(n*(n+2))
Sum(a(n),(n,1,oo))
"""
Explanation: ^ up: Introduction
Numerical series
A series is an infinite sum
$$a_0+a_1+a_2+\ldots = \sum_{n=0}^\infty a_n.$$
The sum of a series is defined via the sequence of partial sums
$$S_n=a_0+a_1+\ldots +a_n =\sum_{k=0}^n a_k$$
and
$$\sum_{n=0}^\infty a_n=\lim_{n\to \infty}S_n.$$
Claims
A series is convergent if the sequence of its partial sums is convergent;
if a series is convergent, then $\lim_{n\to\infty}a_n=0$.
Example
For the series
$$\sum_{n=1}^{\infty} \frac{1}{n \left(n + 2\right)}$$
derive a formula for the partial sums,
then sum the series.
End of explanation
"""
Sum(a(n),(n,1,oo)).doit() # sum of the series
k = Symbol('k')
Sum(a(k),(k,1,n)).doit() # sympy.Sum also computes partial sums
"""
Explanation: The sum of the series and its partial sums can be computed with the function sympy.Sum. For the upper limit of infinity $\infty$ we use the symbol sympy.oo.
End of explanation
"""
ul = apart(a(n))
ul
"""
Explanation: The procedure "by hand"
Let's look at how we would compute the sum "by hand". We decompose the term of the series
$$\frac{1}{n(n+2)}$$
into partial fractions and hope that the intermediate terms cancel out.
End of explanation
"""
args = ul.args
s = []
for i in range(5):
s=s + [arg.subs(n,i+1) for arg in args]
s = s + [arg.subs(n,n-1) for arg in args] + list(args)
s
"""
Explanation: The args() method returns a tuple of the individual fractions. If we substitute the first few integers for $n$, we can see which terms cancel.
End of explanation
"""
Sn = s[0]+s[2]+s[-3]+s[-1]
Sn
Sn.together().factor()
limit(Sn,n,oo) # the sum of the series is the limit of the partial sums
"""
Explanation: We see that the intermediate terms cancel, except for the 1st and 3rd and the last and third-from-last. The partial sum is therefore equal to
End of explanation
"""
import disqus
%reload_ext disqus
%disqus matpy
"""
Explanation: << back: sequences
End of explanation
"""
|
bmeaut/python_nlp_2017_fall | course_material/05_Generator_expressions_list_comprehension/05_OOP_Comprehension_ContextM_lab_solution.ipynb | mit | from math import gcd
class RationalNumber(object):
# TODO
r = RationalNumber(43, 2)
assert r + r == RationalNumber(43) # q = 1 in this case
assert r * 2 == r + r
r1 = RationalNumber(3, 2)
r2 = RationalNumber(4, 3)
assert r1 * r2 == RationalNumber(12, 6)
assert r1 / r2 == RationalNumber(9, 8)
assert r1 == RationalNumber(6, 4)
"""
Explanation: Laboratory 04, Week 05
1. RationalNumber class
Write a class that represents a rational number. A number is rational if it can be expressed as the quotient of two integers (p and q). Define the operators seen in the tests below.
Make sure that p and q are always relative primes (you can use math.gcd).
End of explanation
"""
r1 = RationalNumber(3)
r2 = RationalNumber(3, 1)
r3 = RationalNumber(3, 2)
d = {r1: 1, r2: 2, r3: 12}
assert(len(d) == 2)
"""
Explanation: RationalNumber advanced exercises
Make the class usable as a dictionary key.
End of explanation
"""
try:
r1.p = 3.4
except RationalNumberValueError:
print("This should happen")
else:
print("This shouldn't happen")
try:
r1.q = 3.4
except ValueError:
print("This should happen")
else:
print("This shouldn't happen")
"""
Explanation: p and q can only be integers. Raise a RationalNumberValueError if someone tries to set them to anything else.
End of explanation
"""
r = RationalNumber(3, -2)
assert r.p == -3 and r.q == 2
assert abs(r) == 1.5
"""
Explanation: Rational numbers may be negative. Make sure that q is never negative.
End of explanation
"""
r = RationalNumber(-3, 2)
assert RationalNumber.from_str("-3/2") == r
assert RationalNumber.from_str("3/-2") == r
assert RationalNumber.from_str("3 / -2") == r
"""
Explanation: Add a from_str factory method which parses the following formats:
End of explanation
"""
l = []
for i in range(-5, 10, 2):
l.append(i-2)
[i-2 for i in range(-5, 10, 2)]
l = []
for i in range(100):
if i % 10 == 4:
l.append(i)
[i for i in range(100) if i % 4 == 0]
l1 = [12, 1, 0, 13, -3, -4, 0, 2]
l2 = []
for e in l1:
if e % 2 == 1:
l2.append(e)
[e for e in l2 if e % 2 == 1]
l1 = [12, 1, 0, 13, -3, -4, 0, 2]
l2 = []
for e in l1:
if e % 2 == 1:
l2.append(True)
else:
l2.append(False)
[e % 2 == 1 for e in l1]
l1 = [3, 5, 17, 19]
l2 = [2, 4, 6, 8, 10]
products = []
for x in l1:
for y in l2:
products.append(x*y)
products
[x*y for x in l1 for y in l2]
l1 = [3, 5, 7, 19]
l2 = [2, 4, 6, 8, 10]
products = []
for x in l1:
for y in l2:
if (x + y) % 3 == 0:
products.append(x*y)
[x*y for x in l1 for y in l2 if (x+y) % 3 == 0]
fruits = ["apple", "plum", "pear", "avocado"]
mtx = []
for fruit in fruits:
row = []
for i, c in enumerate(fruit):
row.append(c*(i+1))
mtx.append(row)
mtx
[[c*(i+1) for i, c in enumerate(row)] for row in fruits]
text = "ababaacdsadb"
char_freqs = {}
for c in text:
try:
char_freqs[c] += 1
except KeyError:
char_freqs[c] = 1
char_freqs
{c: text.count(c) for c in set(text)}
d1 = {"a": 1, "b": 3, "c": 2}
d2 = {"a": 2, "b": 1}
d3 = {}
for key in set(d1.keys()) | set(d2.keys()):
max_val = max(d1.get(key, 0), d2.get(key, 0))
d3[key] = max_val
d3
{key: max(d1.get(key, 0), d2.get(key, 0)) for key in set(d1.keys()) | set(d2.keys())}
"""
Explanation: 2. Comprehension
Convert the following for loops into comprehensions:
End of explanation
"""
import os
import urllib.request
fn = 'web2-4p-9-17'
zipname = fn + '.zip'
if not os.path.exists(zipname):
print("Downloading corpus")
webcorp_url = "http://avalon.aut.bme.hu/~judit/resources/webcorp_parts/web2-4p-9-17.zip"
u = urllib.request.URLopener()
u.retrieve(webcorp_url, zipname)
if not os.path.exists(fn):
from zipfile import ZipFile
with ZipFile(zipname) as myzip:
myzip.extractall()
"""
Explanation: 3. Generators
The following piece of code downloads a small sample of the Hungarian Webcorpus. We will work on this in later exercises.
The corpus contains one word per line, and sentence boundaries are denoted by empty lines.
The file has 4 columns separated by TABs:
1. original word
2. lemma (stemmed word)
3. morphological analysis
4. morphological analysis candidates.
Take a look at the file before continuing.
End of explanation
"""
import types
def read_sentences(filename):
sent = []
with open(filename) as corp:
for line in corp:
if not line.strip():
if sent:
yield sent
sent = []
else:
sent.append(line.split('\t')[0])
if sent:
yield sent
sentence = next(read_sentences(fn))
assert(len(sentence) == 19)
assert isinstance(sentence, list)
sentences = read_sentences(fn)
assert isinstance(sentences, types.GeneratorType)
sentences = list(sentences)
assert(len(sentences) == 90764)
"""
Explanation: 3.1. Write a generator function that yields one sentence at a time as a list of tokens. Make sure to yield the very last sentence of the file as well.
End of explanation
"""
def read_long_sentences(filename, min_length=5):
sent = []
with open(filename) as corp:
for line in corp:
if not line.strip():
if sent:
if len(sent) >= min_length:
yield sent
sent = []
else:
sent.append(line.split('\t')[0])
if sent:
if len(sent) >= min_length:
yield sent
sentences = read_long_sentences(fn)
assert isinstance(sentences, types.GeneratorType)
sentences = list(sentences)
assert len(sentences) == 85163
sentences = read_long_sentences(fn, 15)
sentences = list(sentences)
assert len(sentences) == 50059
"""
Explanation: 3.2 Write a generator function that yields one sentence at a time but skips short sentences. The length limit should be a parameter of the generator which defaults to 5.
End of explanation
"""
from datetime import datetime
class Timer(object):
def __init__(self, name="unnamed"):
self.name = name
def __enter__(self):
self.start = datetime.now()
def __exit__(self, *args):
duration = datetime.now() - self.start
print("{} ran for {} seconds".format(self.name, duration.total_seconds()))
# prints "slow code ran for F seconds
# F is the total_seconds the block took to finish (float)
with Timer("slow code"):
s = sum(range(100000))
# prints "unnamed ran for F seconds
with Timer():
s = sum(range(100000))
"""
Explanation: 4. Context managers
Create a Timer context manager that measures the running time of the with block. The context manager takes an optional name argument and prints the block's name at the end too.
End of explanation
"""
|
unapiedra/BBChop | analysis/Example Run.ipynb | gpl-2.0 | with open('example_run.csv') as f: s = f.read()
N = 10
runs = [[1/N for _ in range(N)]]
for line in s.split('\n'):
line = line.strip('[]')
if len(line) > 0:
li = [float(i) for i in line.split(',')]
runs.append(li)
"""
Explanation: In this little experiment, I printed the likelihoods after each iteration.
The test case was failing with 10% probability and the history had 10 locations. And I requested a certainty for termination of 90%.
End of explanation
"""
for i, r in enumerate(runs):
plt.bar(list(range(10)), r)
plt.xlabel('Location')
plt.ylabel('Likelihood after {} iterations'.format(i))
plt.xticks(range(N))
plt.show()
fig, ax = plt.subplots()
# fig.set_tight_layout(True)
ax.set_xlim((0, 10))
ax.set_ylim((0, 1))
line, = ax.plot([], [])
x = list(range(N))
ylabel_func = lambda i: 'Likelihood after {} iterations'.format(i)
def init():
line.set_data([], [])
return (line, )
def animate(i):
y = runs[i]
line.set_data(x, y)
return (line,)
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=72, interval=40, blit=True)
# HTML(anim.to_html5_video())
rc('animation', html='html5')
anim
"""
Explanation: In the next plots you will see that at the beginning the likelihood for the fault location is evenly distributed. There was no observation made.
Then, in the next plot there was an observation ("No fault detected.") at Location 3. Therefore, after Location 3 things are more likely, and on or before it, the fault is less likely.
Scroll down the graphs 'Iteration 14'. Notice that Locations 8 and 9 have dropped to 0, and Location 7 is around 0.25. This means that there was a fault detected at Location 7. Therefore, the algorithm knows that 8 and 9 cannot be the first faulty location.
Afterwards the algorithm evaluates at Location 6 until it is certain that it will not detect the fault there. Once Location 7 reaches certainty/likelihood 90%, the algorithm terminates.
End of explanation
"""
|
chivalrousS/word2vec | examples/doc2vec.ipynb | apache-2.0 | from __future__ import unicode_literals
import os
import nltk
directories = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']
input_file = open('/Users/drodriguez/Downloads/alldata.txt', 'w')
id_ = 0
for directory in directories:
rootdir = os.path.join('/Users/drodriguez/Downloads/aclImdb', directory)
for subdir, dirs, files in os.walk(rootdir):
for file_ in files:
with open(os.path.join(subdir, file_), 'r') as f:
doc_id = '_*%i' % id_
id_ = id_ + 1
text = f.read()
text = text.decode('utf-8')
tokens = nltk.word_tokenize(text)
doc = ' '.join(tokens).lower()
doc = doc.encode('ascii', 'ignore')
input_file.write('%s %s\n' % (doc_id, doc))
input_file.close()
"""
Explanation: doc2vec
This is an experimental code developed by Tomas Mikolov found in the word2vec google group: https://groups.google.com/d/msg/word2vec-toolkit/Q49FIrNOQRo/J6KG8mUj45sJ
This is not yet available on Pypi you need the latest master branch from git.
The input format for doc2vec is still one big text document, but every line should be one document prepended with a unique id, for example:
_*0 This is sentence 1
_*1 This is sentence 2
Requirements
nltk
Download some data: http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Untar that data: tar -xvf aclImdb_v1.tar.gz
Preprocess
Merge data into one big document with an id per line and do some basic preprocessing: word tokenizer.
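The preprocessing cell shown here is Python 2 code (it decodes bytes read in text mode). A hypothetical Python 3 rewrite might look like the following; a plain whitespace split stands in for `nltk.word_tokenize` so the sketch stays dependency-free, and `build_alldata` with its arguments are names invented here, not part of the package.

```python
import os

# Merge all files under the given directories into one output document,
# one line per file, prefixed with a unique `_*<id>` marker.
def build_alldata(root, directories, out_path):
    doc_id = 0
    with open(out_path, 'w') as out:
        for directory in directories:
            for subdir, _, files in os.walk(os.path.join(root, directory)):
                for name in sorted(files):
                    path = os.path.join(subdir, name)
                    with open(path, encoding='utf-8', errors='ignore') as f:
                        tokens = f.read().split()
                    doc = ' '.join(tokens).lower()
                    # mirror the original cell's ascii filtering
                    doc = doc.encode('ascii', 'ignore').decode('ascii')
                    out.write('_*%i %s\n' % (doc_id, doc))
                    doc_id += 1
    return doc_id
```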
End of explanation
"""
%load_ext autoreload
%autoreload 2
import word2vec
word2vec.doc2vec('/Users/drodriguez/Downloads/alldata.txt', '/Users/drodriguez/Downloads/vectors.bin', cbow=0, size=100, window=10, negative=5, hs=0, sample='1e-4', threads=12, iter_=20, min_count=1, verbose=True)
"""
Explanation: doc2vec
End of explanation
"""
%load_ext autoreload
%autoreload 2
import word2vec
model = word2vec.load('/Users/drodriguez/Downloads/vectors.bin')
model.vectors.shape
"""
Explanation: Predictions
Is possible to load the vectors using the same wordvectors class as a regular word2vec binary file.
End of explanation
"""
model['_*1']
"""
Explanation: The document vectors are identified by the ids we used in the preprocessing section; for example, document 1 has the vector:
End of explanation
"""
indexes, metrics = model.cosine('_*1')
model.generate_response(indexes, metrics).tolist()
"""
Explanation: We can ask for similar words or documents relative to document 1:
End of explanation
"""
|
hagne/atm-py | examples/instruments_POPS_mie.ipynb | mit | from atmPy.aerosols.instruments.POPS import mie
%matplotlib inline
import matplotlib.pylab as plt
plt.rcParams['figure.dpi'] = 200
"""
Explanation: Introduction
This module provides tools to simulate scattering intensities detected by POPS as a function of particle size, refractive index, and some less obvious parameters. Simulations are based on Mie scattering, which gives the module its name. The only function worth mentioning is the one below; it's worth exploring all of its optional parameters.
Imports
End of explanation
"""
d,amp = mie.makeMie_diameter(noOfdiameters=1000)
f,a = plt.subplots()
a.plot(d,amp)
a.loglog()
a.set_xlim((0.1,3))
a.set_ylabel('Signal intensity (arb. u.)')
a.set_xlabel('Diameter ($\mu$m)')
"""
Explanation: standard settings
End of explanation
"""
noofpoints = 1000
d,amp405 = mie.makeMie_diameter(noOfdiameters=noofpoints, WavelengthInUm=0.405)
d,amp445 = mie.makeMie_diameter(noOfdiameters=noofpoints, WavelengthInUm=0.445)
f,a = plt.subplots()
a.plot(d, amp405, label = '405')
a.plot(d, amp445, label = '445')
a.loglog()
lim = [0.14, 3]
arglim = [abs(d - lim[0]).argmin(), abs(d - lim[1]).argmin()]
# arglim
scs_at_lim_405= amp405[arglim]
# scs_at_lim_405
# the lower detection limit will go up to
d[abs(amp445 - scs_at_lim_405[0]).argmin()]
"""
Explanation: Wavelength dependence
As an example: what would it mean to use a 445 nm laser instead of a 405 nm one?
End of explanation
"""
nop = 1000
dI,ampI = mie.makeMie_diameter(noOfdiameters=nop, IOR=1.4)
dII,ampII = mie.makeMie_diameter(noOfdiameters=nop, IOR=1.5)
dIII,ampIII = mie.makeMie_diameter(noOfdiameters=nop, IOR=1.6)
f,a = plt.subplots()
a.plot(dI,ampI)
a.plot(dII,ampII)
a.plot(dIII,ampIII)
a.loglog()
a.set_xlim((0.1,3))
a.set_ylabel('Signal intensity (arb. u.)')
a.set_xlabel('Diameter ($\mu$m)')
"""
Explanation: Refractive index dependence
End of explanation
"""
|
European-XFEL/h5tools-py | docs/parallel_example.ipynb | bsd-3-clause | from karabo_data import RunDirectory
import multiprocessing
import numpy as np
"""
Explanation: Parallel processing with a virtual dataset
This example demonstrates splitting up some data to be processed by several worker processes, and collecting the results back together.
For this example, we'll use data from an XGM, and find the average intensity of each pulse across all the trains in the run. This doesn't actually need parallel processing: we can easily do it directly in the notebook. But the same techniques should work with much more data and more complex calculations.
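The divide-and-recombine strategy used throughout this notebook rests on a simple identity: summing per-chunk sums and dividing by the total number of rows reproduces the overall mean. A tiny self-contained check with synthetic numbers (stand-ins, not XGM data):

```python
import numpy as np

# 8 "trains" x 3 "pulses" of synthetic data, split into three chunks.
data = np.arange(24, dtype=np.float64).reshape(8, 3)
chunks = [(0, 3), (3, 6), (6, 8)]
partial_sums = [data[start:end].sum(axis=0) for start, end in chunks]
# Combine per-chunk sums into the global mean.
combined_mean = np.stack(partial_sums).sum(axis=0) / data.shape[0]
```

This is exactly the recombination step used after both the multiprocessing and SLURM runs below.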
End of explanation
"""
!ls /gpfs/exfel/exp/XMPL/201750/p700000/raw/r0002/RAW-R0034-DA01-S*.h5
run = RunDirectory('/gpfs/exfel/exp/XMPL/201750/p700000/raw/r0002/')
"""
Explanation: The data that we want is separated over these seven sequence files:
End of explanation
"""
vds_filename = 'xgm_vds.h5'
xgm_vds = run.get_virtual_dataset(
'SA1_XTD2_XGM/XGM/DOOCS:output', 'data.intensityTD',
filename=vds_filename
)
xgm_vds
"""
Explanation: By making a virtual dataset, we can see the shape of it, as if it was one big numpy array:
End of explanation
"""
simple_mean = xgm_vds[:, :40].mean(axis=0, dtype=np.float64)
simple_mean.round(4)
"""
Explanation: Let's read this into memory and calculate the means directly, to check our parallel calculations against.
We can do this for this example because the calculation is simple and the data is small;
it wouldn't be practical in real situations where parallelisation is useful.
These data are recorded in 32-bit floats, but to minimise rounding errors we'll tell numpy to give the results as 64-bit floats. Try re-running this example with 32-bit floats to see how much the results change!
End of explanation
"""
N_proc = 4
cuts = [int(xgm_vds.shape[0] * i / N_proc) for i in range(N_proc + 1)]
chunks = list(zip(cuts[:-1], cuts[1:]))
chunks
"""
Explanation: Now, we're going to define chunks of the data for each of 4 worker processes.
End of explanation
"""
def sum_chunk(chunk, filename=vds_filename, ds_name=xgm_vds.name):
start, end = chunk
# Reopen the file in the worker process:
import h5py, numpy
with h5py.File(filename, 'r') as f:
ds = f[ds_name]
data = ds[start:end] # Read my chunk
return data.sum(axis=0, dtype=numpy.float64)
"""
Explanation: Using multiprocessing
This is the function we'll ask each worker process to run, adding up the data and returning a 1D numpy array.
We're using default arguments as a convenient way to copy the filename and the dataset path into the worker process.
End of explanation
"""
with multiprocessing.Pool(N_proc) as pool:
res = pool.map(sum_chunk, chunks)
"""
Explanation: Using Python's multiprocessing module, we start four workers, farm the chunks out to them, and collect the results back.
End of explanation
"""
multiproc_mean = (np.stack(res).sum(axis=0, dtype=np.float64)[:40] / xgm_vds.shape[0])
np.testing.assert_allclose(multiproc_mean, simple_mean)
multiproc_mean.round(4)
"""
Explanation: res is now a list of 4 arrays, containing the sums from each chunk. To get the mean, we'll add these up to get a grand total, and then divide by the number of trains we have data from.
End of explanation
"""
from getpass import getuser
import h5py
import subprocess
"""
Explanation: Using SLURM
What if we need more power? The example above is limited to one machine, but we can use SLURM to spread the work over multiple machines on the Maxwell cluster.
This is massive overkill for this example calculation - we'll only use one CPU core for a fraction of a second on each machine. But we could do something similar for a much bigger problem.
End of explanation
"""
%%writefile parallel_eg_worker.py
#!/gpfs/exfel/sw/software/xfel_anaconda3/1.1/bin/python
import h5py
import numpy as np
import sys
filename = sys.argv[1]
ds_name = sys.argv[2]
chunk_start = int(sys.argv[3])
chunk_end = int(sys.argv[4])
worker_idx = sys.argv[5]
with h5py.File(filename, 'r') as f:
ds = f[ds_name]
data = ds[chunk_start:chunk_end] # Read my chunk
chunk_totals = data.sum(axis=0, dtype=np.float64)
with h5py.File(f'parallel_eg_result_{worker_idx}.h5', 'w') as f:
f['chunk_totals'] = chunk_totals
"""
Explanation: We'll write a Python script for each worker to run. Like the sum_chunk function above, this reads a chunk of data from the virtual dataset and sums it along the train axis. It saves the result into another HDF5 file for us to collect.
End of explanation
"""
partition = 'upex' # External users
partition = 'exfel' # Staff
"""
Explanation: The Maxwell cluster is divided into various partitions for different groups of users. If you're running this as an external user, comment out the 'Staff' line below.
End of explanation
"""
for i, (start, end) in enumerate(chunks):
cmd = ['sbatch', '-p', partition, 'parallel_eg_worker.py', vds_filename, xgm_vds.name, str(start), str(end), str(i)]
print(subprocess.check_output(cmd))
"""
Explanation: Now we submit 4 jobs with the sbatch command:
End of explanation
"""
!squeue -u {getuser()}
"""
Explanation: We can use squeue to monitor the jobs running. Re-run this until all the jobs have disappeared, meaning they're finished.
End of explanation
"""
res = []
for i in range(N_proc):
with h5py.File(f'parallel_eg_result_{i}.h5', 'r') as f:
res.append(f['chunk_totals'][:])
"""
Explanation: Now, so long as all the workers succeeded, we can collect the results.
If any workers failed, you'll find tracebacks in slurm-*.out files in the working directory.
End of explanation
"""
slurm_mean = np.stack(res).sum(axis=0)[:40] / xgm_vds.shape[0]
np.testing.assert_allclose(slurm_mean, simple_mean)
slurm_mean.round(4)
"""
Explanation: Now res is once again a list of 1D numpy arrays, representing the totals from each chunk. So we can finish the calculation as in the previous section:
End of explanation
"""
|
uwoseis/anemoi | notebooks/Compare Solutions Homogeneous.ipynb | mit | import sys
sys.path.append('../')
import numpy as np
from anemoi import MiniZephyr, SimpleSource, AnalyticalHelmholtz
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png')
matplotlib.rcParams['savefig.dpi'] = 150 # Change this to adjust figure size
systemConfig = {
'dx': 1., # m
'dz': 1., # m
'c': 2500., # m/s
'rho': 1., # kg/m^3
'nx': 100, # count
'nz': 200, # count
'freq': 2e2, # Hz
}
nx = systemConfig['nx']
nz = systemConfig['nz']
dx = systemConfig['dx']
dz = systemConfig['dz']
MZ = MiniZephyr(systemConfig)
AH = AnalyticalHelmholtz(systemConfig)
SS = SimpleSource(systemConfig)
xs, zs = 25, 25
sloc = np.array([xs, zs]).reshape((1,2))
q = SS(sloc)
uMZ = MZ*q
uAH = AH(sloc)
clip = 0.1
plotopts = {
'vmin': -np.pi,
'vmax': np.pi,
'extent': [0., dx * nx, dz * nz, 0.],
'cmap': cm.bwr,
}
fig = plt.figure()
ax1 = fig.add_subplot(1,4,1)
plt.imshow(np.angle(uAH.reshape((nz, nx))), **plotopts)
plt.title('AH Phase')
ax2 = fig.add_subplot(1,4,2)
plt.imshow(np.angle(uMZ.reshape((nz, nx))), **plotopts)
plt.title('MZ Phase')
plotopts.update({
'vmin': -clip,
'vmax': clip,
})
ax3 = fig.add_subplot(1,4,3)
plt.imshow(uAH.reshape((nz, nx)).real, **plotopts)
plt.title('AH Real')
ax4 = fig.add_subplot(1,4,4)
plt.imshow(uMZ.reshape((nz, nx)).real, **plotopts)
plt.title('MZ Real')
fig.tight_layout()
"""
Explanation: Compare Solutions - Homogeneous
Brendan Smithyman | October 2015
This notebook shows comparisons between the responses of the different solvers.
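For reference (an assumption about the implementation, since this notebook does not state it): the point-source response of the homogeneous 2D Helmholtz equation is proportional to the 2D Green's function

```latex
u(r) \propto \frac{i}{4} H_0^{(1)}(kr), \qquad k = \frac{\omega}{c} = \frac{2\pi f}{c}
```

where $H_0^{(1)}$ is the zeroth-order Hankel function of the first kind and $r$ is the distance from the source; presumably this is what `AnalyticalHelmholtz` evaluates.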
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(1,1,1, aspect=100)
plt.plot(uAH.real.reshape((nz, nx))[:,xs], label='AnalyticalHelmholtz')
plt.plot(uMZ.real.reshape((nz, nx))[:,xs], label='MiniZephyr')
plt.legend(loc=4)
plt.title('Real part of response through xs=%d'%xs)
"""
Explanation: Error plots for MiniZephyr vs. the AnalyticalHelmholtz response
Response of the field (showing where the numerical case does not match the analytical case):
* Source region
* PML regions
End of explanation
"""
uMZr = uMZ.reshape((nz, nx))
uAHr = uAH.reshape((nz, nx))
plotopts.update({
'cmap': cm.jet,
'vmin': 0.,
'vmax': 20.,
})
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
plotopts.update({'vmax': 5.})
ax2 = fig.add_subplot(1,2,2)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
fig.tight_layout()
"""
Explanation: Relative error of the MiniZephyr solution (in %)
End of explanation
"""
|
datascienceguide/datascienceguide.github.io | tutorials/.ipynb_checkpoints/Linear-Regression-Tutorial-Copy1-checkpoint.ipynb | mit | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from math import log
from sklearn import linear_model
#comment below if not using ipython notebook
%matplotlib inline
"""
Explanation: Linear Regression Tutorial
Author: Andrew Andrade (andrew@andrewandrade.ca)
This is part one of a series of tutorials related to regression used in data science. The corresponding notes can be found here.
In this tutorial, we will first learn to fit a simple line using least squares linear regression (LSLR), then plot residuals and their distribution, take a statistical approach to linear regression, look at horizontal residuals, and end with total least squares linear regression.
Part 1: Fitting a line using LSLR
First let us import the necessary libraries and read the data file. You can follow along by downloading the dataset from here: TODO.
End of explanation
"""
#read csv
anscombe_i = pd.read_csv('../datasets/anscombe_i.csv')
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
"""
Explanation: Now lets read the first set of data, and make a simple scatter plot.
End of explanation
"""
regr_i = linear_model.LinearRegression()
#We need to reshape the data to be a matrix
# with only one column
X = anscombe_i.x.values.reshape(-1, 1)
y = anscombe_i.y.values.reshape(-1, 1)
#Fit a line
regr_i.fit(X,y)
# The coefficients
print('Coefficients: \n', regr_i.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr_i.predict(X) - y) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr_i.score(X, y))
plt.plot(X,regr_i.predict(X), color='green',
linewidth=3)
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
"""
Explanation: Luckily for us, we do not need to implement linear regression, since scikit-learn already has a very efficient implementation. The straight line can be seen in the plot below, showing how linear regression attempts to draw a straight line that minimizes the residual sum of squares between the observed responses in the dataset and the responses predicted by the linear approximation.
The coefficients, the mean squared error and the variance score are also calculated.
Note: from reading the documentation, this method computes the least squares solution using a singular value decomposition of X. If X is a matrix of size (n, p), this method has a cost of O($n p^2$), assuming that $n \geq p$. A more efficient alternative (for a large number of features) is to use Stochastic Gradient Descent or another method outlined in the linear models documentation.
If you do not know what Big-O notation is, please read the background information in the notes (or take an algorithms course).
End of explanation
"""
from pylab import *
# determine the line-fit
k, d = polyfit(anscombe_i.x, anscombe_i.y, 1)
yfit = k*anscombe_i.x+d
# plot the data
figure(1)
scatter(anscombe_i.x,y, color='black')
plot(anscombe_i.x, yfit, 'green')
#plot line from point to regression line
for ii in range(len(X)):
plot([anscombe_i.x[ii], anscombe_i.x[ii]], [yfit[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
"""
Explanation: Residuals
From the notes, we learnt that we use ordinary linear regression when y is dependent on x, since the algorithm minimizes the vertical residual (y_observed - y_predicted). The figure below outlines this using a different method for linear regression (a degree-1 polynomial fit with polyfit).
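For a single predictor, minimizing the vertical residuals has the familiar closed-form least-squares solution:

```latex
\hat{m} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2},
\qquad
\hat{b} = \bar{y} - \hat{m}\,\bar{x}
```

which is what `polyfit` with degree 1 computes.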
End of explanation
"""
import pylab as P
figure(1)
scatter(anscombe_i.x,y, color='black')
plot(anscombe_i.x, yfit, 'green')
#plot line from point to regression line
for ii in range(len(X)):
plot([anscombe_i.x[ii], anscombe_i.x[ii]], [yfit[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
residual_error= anscombe_i.y - yfit
error_mean = np.mean(residual_error)
error_sigma = np.std(residual_error)
plt.figure(2)
plt.scatter(anscombe_i.x,residual_error,label='residual error')
plt.xlabel("X")
plt.ylabel("residual error")
plt.figure(3)
n, bins, patches = plt.hist(residual_error, 10, density=True, facecolor='blue', alpha=0.75)
# pylab.normpdf was removed from matplotlib; use scipy.stats instead
from scipy.stats import norm
y_pdf = norm.pdf(bins, error_mean, error_sigma)
l = P.plot(bins, y_pdf, 'k--', linewidth=1.5)
plt.xlabel("residual error in y")
plt.title("Residual Distribution")
"""
Explanation: Now let us plot the residual (y - y predicted) vs x.
End of explanation
"""
# load statsmodels as alias ``sm``
import statsmodels.api as sm
y = anscombe_i.y
X = anscombe_i.x
# Adds a constant term to the predictor
# y = mx +b
X = sm.add_constant(X)
#fit ordinary least squares
est = sm.OLS(y, X)
est = est.fit()
est.summary()
"""
Explanation: As seen in the histogram, the residual error should be (somewhat) normally distributed and centered around zero. This post explains why.
If the residuals are not randomly distributed around zero, consider applying a transform to the data or applying non-linear regression. In addition to looking at the residuals, one could use the statsmodels library to take a statistical approach to ordinary least squares regression.
End of explanation
"""
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
X_prime = np.linspace(min(anscombe_i.x), max(anscombe_i.x), 100)[:, np.newaxis]
# add constant as we did before
X_prime = sm.add_constant(X_prime)
y_hat = est.predict(X_prime)
# Add the regression line (provides same as above)
plt.plot(X_prime[:, 1], y_hat, 'r')
"""
Explanation: The important parts of the summary are the:
R-squared (or coefficient of determination), which is the statistical measure of how well the regression line approximates the real data points.
Adj. R-squared (adjusted based on the number of observations and the degrees-of-freedom of the residuals)
P > |t| which is the p-value for the null hypothesis that the coefficient equals 0. If it is less than the significance level, often 0.05, it indicates that there is a statistically significant relationship between the term and the response.
[95.0% Conf. Interval] The lower and upper values. See here for more details
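As a quick sanity check on the first of these: R-squared is just $1 - SS_{res}/SS_{tot}$, which is easy to compute by hand. The snippet below uses a small synthetic sample (stand-in values, not the Anscombe data) purely to illustrate the formula:

```python
import numpy as np

# Hand computation of R-squared on synthetic, nearly-linear data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)       # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
r_squared = 1 - ss_res / ss_tot
```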
If these measures do not make sense to you, consider learning or revising statistics. http://onlinestatbook.com or http://stattrek.com/tutorials/ap-statistics-tutorial.aspx are great free resources which outline all the necessary background to be a great statistician and data scientist. Both http://onlinestatbook.com/2/regression/inferential.html and http://stattrek.com/regression/slope-confidence-interval.aspx?Tutorial=AP provide the specifics of confidence intervals for linear regression.
We can now plot the fitted line to the data and observe the same results as the previous two methods for linear regression.
End of explanation
"""
import seaborn as sns
#this just makes the plots pretty (in my opion)
sns.set(style="darkgrid", color_codes=True)
g = sns.jointplot("x", "y", data=anscombe_i, kind="reg",
xlim=(0, 20), ylim=(0, 12), color="r", size=7)
"""
Explanation: If we want to be even fancier, we can use the seaborn library to plot linear regression with marginal distributions, which also states the Pearson r and p-value on the plot. Using the statsmodels approach is more rigorous, but sns provides quick visualizations.
End of explanation
"""
X = anscombe_i.x.values.reshape(-1, 1)
y = anscombe_i.y.values.reshape(-1, 1)
k,d = polyfit(anscombe_i.y,anscombe_i.x,1)
xfit = k*y+d
figure(2)
# plot the data
scatter(anscombe_i.x,y, color='black')
plot(xfit, y, 'blue')
for ii in range(len(y)):
plot([xfit[ii], anscombe_i.x[ii]], [y[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
"""
Explanation: Usually we calculate the (vertical) residual, or the difference in the observed and predicted in the y. This is because "the use of the least squares method to calculate the best-fitting line through a two-dimensional scatter plot typically requires the user to assume that one of the variables depends on the other. (We calculate the difference in the y.) However, in many cases the relationship between the two variables is more complex, and it is not valid to say that one variable is independent and the other is dependent. When analysing such data researchers should consider plotting the three regression lines that can be calculated for any two-dimensional scatter plot."
Regression using Horizontal Residual
If X is dependent on y, then the regression line can be made based on horizontal residuals as shown below.
End of explanation
"""
from scipy.odr import Model, Data, ODR
from scipy.stats import linregress
import numpy as np
def orthoregress(x, y):
# get initial guess by first running linear regression
linregression = linregress(x, y)
model = Model(fit_function)
data = Data(x, y)
od = ODR(data, model, beta0=linregression[0:2])
out = od.run()
return list(out.beta)
def fit_function(p, x):
#return y = m x + b
return (p[0] * x) + p[1]
m, b = orthoregress(anscombe_i.x, anscombe_i.y)
# determine the line-fit
y_ortho_fit = m*anscombe_i.x+b
# plot the data
scatter(anscombe_i.x,anscombe_i.y, color = 'black')
plot(anscombe_i.x, y_ortho_fit, 'r')
xlabel('X')
ylabel('Y')
"""
Explanation: Total Least Squares Regression
Finally, a line of best fit can be made using total least squares regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. This is done by minimizing the errors perpendicular to the line, rather than just vertically. It is more complicated to implement than standard linear regression, but the Fortran ODRPACK library implements it efficiently and is wrapped by the scipy.odr Python module (which can be used out of the box). The details of odr are in the Scipy documentation and in even more detail in the ODRPACK guide.
The code below (inspired from here) uses an initial guess for the parameters from ordinary linear regression, and makes a fit using total least squares regression.
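As a hedged aside (not part of the original tutorial): for the unweighted straight-line case, the orthogonal-residual fit has a neat geometric characterisation. The best-fit line passes through the centroid along the principal eigenvector of the sample covariance, which should agree with what ODR finds for this simple model. The data below are synthetic:

```python
import numpy as np

# Orthogonal (total least squares) line fit via principal component analysis.
rng = np.random.RandomState(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)
pts = np.column_stack([x, y])
centred = pts - pts.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centred.T))
direction = eigvecs[:, np.argmax(eigvals)]   # dominant principal direction
slope_tls = direction[1] / direction[0]      # close to the true slope of 2
```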
End of explanation
"""
scatter(anscombe_i.x,anscombe_i.y,color = 'black')
plot(xfit, anscombe_i.y, 'b', label= "horizontal residuals")
plot(anscombe_i.x, yfit, 'g', label= "vertical residuals")
plot(anscombe_i.x, y_ortho_fit, 'r', label = "perpendicular residuals" )
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
"""
Explanation: Plotting all three regression lines gives a fuller picture of the data, and comparing their slopes provides a simple graphical assessment of the correlation coefficient. Plotting the orthogonal regression line (red) provides additional information because it makes no assumptions about the dependence or independence of the variables; as such, it appears to more accurately describe the trend in the data compared to either of the ordinary least squares regression lines.
End of explanation
"""
|
sempwn/ABCPRC | Tutorial_Ecology.ipynb | mit | %matplotlib inline
import ABCPRC as prc
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
"""
Explanation: Ecology Example
Here we use the example of migratory birds to demonstrate model fitting with this package.
End of explanation
"""
def ibm(*ps):
m0,k = ps[0],ps[1]
T0 = 0.5
#measurements in regular increments throughout the year
ms,ts = np.zeros(100),np.linspace(0,1,100)
ms = (m0/2)*(np.sin(np.pi/2 + (ts-T0)*2*np.pi)+1)
ps = k/(k+ms)
outs = np.array([stats.nbinom(n=k,p=p).rvs() if p>0 else 0 for m,p in zip(ms,ps)])
return outs
m0,k = 50.0,1.0
xs = ibm(m0,k)
plt.plot(xs,'ko')
"""
Explanation: Model
Here we use a model of migratory birds, where the rate of observation changes across time and the observed counts are negative-binomially
distributed. Associated parameters are
* peak rate of observations $m_0$
* aggregation parameter $k$
* where peak occurs in year $T_0$
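Written out, the rate and observation model implemented in `ibm` is:

```latex
m(t) = \frac{m_0}{2}\left(\sin\left(\frac{\pi}{2} + 2\pi\,(t - T_0)\right) + 1\right),
\qquad
x(t) \sim \mathrm{NegBin}\left(k,\; p = \frac{k}{k + m(t)}\right)
```

so the rate peaks at $t = T_0$ with value $m_0$, and smaller $k$ means more overdispersed counts.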
End of explanation
"""
#measurements in regular increments throughout the year
ms,ts = np.zeros(100),np.linspace(0,1,100)
ms = (m0/2)*(np.sin(np.pi/2 + (ts-0.5)*2*np.pi)+1)
plt.plot(ts,ms)
plt.ylabel('rate')
plt.xlabel('time (yrs)')
m = prc.ABC()
priors = [stats.expon(scale=100.0).rvs,stats.expon(scale=0.5).rvs]
m.setup(modelFunc=ibm,xs=xs,priors=priors)
m.fit(sample_size=100)
m.run(1000)
res = m.trace()
plt.figure()
print('Initial Distribution')
m.trace(plot=True,tol=0)
plt.figure()
print('Middle Tolerance')
m.trace(plot=True,tol=5)
plt.figure()
print('Final Distribution')
m.trace(plot=True,tol=-1)
ps = np.round(m.paramMAP(),decimals=2)
print('MAP for max rate is : {}, MAP for heterogeneity is {}'.format(*ps))
res = m.fitSummary()
"""
Explanation: Check rates
First let's check that the functional form of the rate is correct, using the values $m_0=50.0$ and $T_0=0.5$ set above.
End of explanation
"""
m.save('ecology_model_example')
"""
Explanation: It slightly underestimates the heterogeneity, but is close for the maximum rate.
End of explanation
"""
|
tingelst/pymanopt | examples/MoG_singularity_heuristic.ipynb | bsd-3-clause | import autograd.numpy as np
np.set_printoptions(precision=2)
import matplotlib.pyplot as plt
%matplotlib inline
# Number of data
N = 1000
# Dimension of data
D = 2
# Number of clusters
K = 3
pi = [0.1, 0.6, 0.3]
mu = [np.array([-4, 1]), np.array([0, 0]), np.array([2, -1])]
Sigma = [np.array([[3, 0],[0, 1]]), np.array([[1, 1.], [1, 3]]), .5 * np.eye(2)]
components = np.random.choice(K, size=N, p=pi)
samples = np.zeros((N, D))
# for each component, generate all needed samples
for k in range(K):
# indices of current component in X
indices = (k == components)
# number of those occurrences
n_k = indices.sum()
if n_k > 0:
samples[indices] = np.random.multivariate_normal(mu[k], Sigma[k], n_k)
colors = ['r', 'g', 'b', 'c', 'm']
for i in range(K):
indices = (i == components)
plt.scatter(samples[indices, 0], samples[indices, 1], alpha=.4, color=colors[i%K])
plt.axis('equal')
plt.show()
"""
Explanation: Riemannian Optimisation with Pymanopt for Inference in MoG models
The Mixture of Gaussians (MoG) model assumes that datapoints $\mathbf{x}_i\in\mathbb{R}^d$ follow a distribution described by the following probability density function:
$p(\mathbf{x}) = \sum_{m=1}^M \pi_m p_\mathcal{N}(\mathbf{x};\mathbf{\mu}m,\mathbf{\Sigma}_m)$ where $\pi_m$ is the probability that the data point belongs to the $m^\text{th}$ mixture component and $p\mathcal{N}(\mathbf{x};\mathbf{\mu}_m,\mathbf{\Sigma}_m)$ is the probability density function of a multivariate Gaussian distribution with mean $\mathbf{\mu}_m \in \mathbb{R}^d$ and psd covariance matrix $\mathbf{\Sigma}_m \in {\mathbf{M}\in\mathbb{R}^{d\times d}: \mathbf{M}\succeq 0}$.
As an example consider the mixture of three Gaussians with means
$\mathbf{\mu}_1 = \begin{bmatrix} -4 \ 1 \end{bmatrix}$,
$\mathbf{\mu}_2 = \begin{bmatrix} 0 \ 0 \end{bmatrix}$ and
$\mathbf{\mu}_3 = \begin{bmatrix} 2 \ -1 \end{bmatrix}$, covariances
$\mathbf{\Sigma}_1 = \begin{bmatrix} 3 & 0 \ 0 & 1 \end{bmatrix}$,
$\mathbf{\Sigma}_2 = \begin{bmatrix} 1 & 1 \ 1 & 3 \end{bmatrix}$ and
$\mathbf{\Sigma}_3 = \begin{bmatrix} 0.5 & 0 \ 0 & 0.5 \end{bmatrix}$
and mixture probability vector $\pi=\left[0.1, 0.6, 0.3\right]$.
Let's generate $N=1000$ samples of that MoG model and scatter plot the samples:
End of explanation
"""
class LineSearchMoG(object):
"""
Back-tracking line-search that checks for close to singular matrices.
"""
def __init__(self, contraction_factor=.5, optimism=2,
suff_decr=1e-4, maxiter=25, initial_stepsize=1):
self.contraction_factor = contraction_factor
self.optimism = optimism
self.suff_decr = suff_decr
self.maxiter = maxiter
self.initial_stepsize = initial_stepsize
self._oldf0 = None
def search(self, objective, manifold, x, d, f0, df0):
"""
Function to perform backtracking line-search.
Arguments:
- objective
objective function to optimise
- manifold
manifold to optimise over
- x
starting point on the manifold
- d
tangent vector at x (descent direction)
- df0
directional derivative at x along d
Returns:
- stepsize
norm of the vector retracted to reach newx from x
- newx
next iterate suggested by the line-search
"""
# Compute the norm of the search direction
norm_d = manifold.norm(x, d)
if self._oldf0 is not None:
# Pick initial step size based on where we were last time.
alpha = 2 * (f0 - self._oldf0) / df0
# Look a little further
alpha *= self.optimism
else:
alpha = self.initial_stepsize / norm_d
alpha = float(alpha)
# Make the chosen step and compute the cost there.
newx, newf, reset = self._newxnewf(x, alpha * d, objective, manifold)
step_count = 1
# Backtrack while the Armijo criterion is not satisfied
while (newf > f0 + self.suff_decr * alpha * df0 and
step_count <= self.maxiter and
not reset):
# Reduce the step size
alpha = self.contraction_factor * alpha
# and look closer down the line
newx, newf, reset = self._newxnewf(x, alpha * d, objective, manifold)
step_count = step_count + 1
# If we got here without obtaining a decrease, we reject the step.
if newf > f0 and not reset:
alpha = 0
newx = x
stepsize = alpha * norm_d
self._oldf0 = f0
return stepsize, newx
def _newxnewf(self, x, d, objective, manifold):
newx = manifold.retr(x, d)
try:
newf = objective(newx)
except np.linalg.LinAlgError:
replace = np.asarray([np.linalg.matrix_rank(newx[0][k, :, :]) != newx[0][0, :, :].shape[0]
for k in range(newx[0].shape[0])])
x[0][replace, :, :] = manifold.rand()[0][replace, :, :]
return x, objective(x), True
return newx, newf, False
"""
Explanation: Given a data sample the de facto standard method to infer the parameters is the expectation maximisation (EM) algorithm that, in alternating so-called E and M steps, maximises the log-likelihood of the data.
In arXiv:1506.07677 Hosseini and Sra propose Riemannian optimisation as a powerful counterpart to EM. Importantly, they introduce a reparameterisation that leaves local optima of the log-likelihood unchanged while resulting in a geodesically convex optimisation problem over a product manifold $\prod_{m=1}^M\mathcal{PD}^{(d+1)\times(d+1)}$ of manifolds of $(d+1)\times(d+1)$ positive definite matrices.
The proposed method is on par with EM and shows less variability in running times.
The reparameterised optimisation problem for augmented data points $\mathbf{y}_i=[\mathbf{x}_i\ 1]$ can be stated as follows:
$$\min_{(S_1, ..., S_m, \nu_1, ..., \nu_{m-1}) \in \prod_{m=1}^M \mathcal{PD}^{(d+1)\times(d+1)}\times\mathbb{R}^{M-1}}
-\sum_{n=1}^N\log\left(
\sum_{m=1}^M \frac{\exp(\nu_m)}{\sum_{k=1}^M\exp(\nu_k)}
q_\mathcal{N}(\mathbf{y}_n;\mathbf{S}_m)
\right)$$
where
$\mathcal{PD}^{(d+1)\times(d+1)}$ is the manifold of positive definite
$(d+1)\times(d+1)$ matrices
$\mathcal{\nu}_m = \log\left(\frac{\alpha_m}{\alpha_M}\right), \ m=1, ..., M-1$ and $\nu_M=0$
$q_\mathcal{N}(\mathbf{y}_n;\mathbf{S}_m) =
2\pi\exp\left(\frac{1}{2}\right)
|\operatorname{det}(\mathbf{S}_m)|^{-\frac{1}{2}}(2\pi)^{-\frac{d+1}{2}}
\exp\left(-\frac{1}{2}\mathbf{y}_i^\top\mathbf{S}_m^{-1}\mathbf{y}_i\right)$
Optimisation problems like this can easily be solved using Pymanopt – even without the need to differentiate the cost function manually!
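To see why this reparameterisation preserves the Gaussian likelihoods, note that with the embedding $\mathbf{S} = \begin{bmatrix} \mathbf{\Sigma}+\mathbf{\mu}\mathbf{\mu}^\top & \mathbf{\mu} \ \mathbf{\mu}^\top & 1 \end{bmatrix}$ used by Hosseini and Sra (an assumption here; it also underlies the parameter extraction at the end of this notebook), one has $\operatorname{det}(\mathbf{S}) = \operatorname{det}(\mathbf{\Sigma})$ and $\mathbf{y}^\top\mathbf{S}^{-1}\mathbf{y} = (\mathbf{x}-\mathbf{\mu})^\top\mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{\mu}) + 1$, so $q_\mathcal{N}$ reduces to a Gaussian density in $\mathbf{x}$ up to a constant factor. A quick numerical spot-check of the two identities:

```python
import numpy as np

# Random instance: build S from (mu, Sigma) and verify both identities.
rng = np.random.RandomState(0)
d = 3
mu = rng.randn(d, 1)
A = rng.randn(d, d)
Sigma = A @ A.T + d * np.eye(d)                  # safely positive definite
S = np.block([[Sigma + mu @ mu.T, mu],
              [mu.T, np.ones((1, 1))]])
x = rng.randn(d, 1)
y = np.vstack([x, np.ones((1, 1))])
lhs = float(y.T @ np.linalg.solve(S, y))
rhs = float((x - mu).T @ np.linalg.solve(Sigma, x - mu)) + 1.0
```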
A well-known problem when fitting parameters of a MoG model is that one Gaussian may collapse onto a single data point resulting in singular covariance matrices (cf. e.g. p. 434 in Bishop, C. M. "Pattern Recognition and Machine Learning." 2006). This problem can be avoided by the following heuristic: if a component's covariance matrix is close to being singular we reset its mean and covariance matrix. Using Pymanopt this can be accomplished by using an appropriate linesearch (based on LineSearchBackTracking) -- here we demonstrate this approach:
End of explanation
"""
import autograd.numpy as np
from autograd.scipy.misc import logsumexp
from pymanopt.manifolds import Product, Euclidean, PositiveDefinite
from pymanopt import Problem
from pymanopt.solvers import SteepestDescent
# (1) Instantiate the manifold
manifold = Product([PositiveDefinite(D+1, k=K), Euclidean(K-1)])
# (2) Define cost function
# The parameters must be contained in a list theta.
def cost(theta):
# Unpack parameters
nu = np.concatenate([theta[1], [0]], axis=0)
S = theta[0]
logdetS = np.expand_dims(np.linalg.slogdet(S)[1], 1)
y = np.concatenate([samples.T, np.ones((1, N))], axis=0)
# Calculate log_q
y = np.expand_dims(y, 0)
# 'Probability' of y belonging to each cluster
log_q = -0.5 * (np.sum(y * np.linalg.solve(S, y), axis=1) + logdetS)
alpha = np.exp(nu)
alpha = alpha / np.sum(alpha)
alpha = np.expand_dims(alpha, 1)
loglikvec = logsumexp(np.log(alpha) + log_q, axis=0)
return -np.sum(loglikvec)
problem = Problem(manifold=manifold, cost=cost, verbosity=1)
# (3) Instantiate a Pymanopt solver
solver = SteepestDescent(linesearch=LineSearchMoG())  # LineSearchMoG: the custom linesearch (based on LineSearchBackTracking) described above
# let Pymanopt do the rest
Xopt = solver.solve(problem)
"""
Explanation: So let's infer the parameters of our toy example by Riemannian optimisation using Pymanopt:
End of explanation
"""
mu1hat = Xopt[0][0][0:2,2:3]
Sigma1hat = Xopt[0][0][:2, :2] - mu1hat.dot(mu1hat.T)
mu2hat = Xopt[0][1][0:2,2:3]
Sigma2hat = Xopt[0][1][:2, :2] - mu2hat.dot(mu2hat.T)
mu3hat = Xopt[0][2][0:2,2:3]
Sigma3hat = Xopt[0][2][:2, :2] - mu3hat.dot(mu3hat.T)
pihat = np.exp(np.concatenate([Xopt[1], [0]], axis=0))
pihat = pihat / np.sum(pihat)
"""
Explanation: Once Pymanopt has finished the optimisation we can obtain the inferred parameters as follows:
End of explanation
"""
print(mu[0])
print(Sigma[0])
print(mu[1])
print(Sigma[1])
print(mu[2])
print(Sigma[2])
print(pi[0])
print(pi[1])
print(pi[2])
"""
Explanation: And convince ourselves that the inferred parameters are close to the ground truth parameters.
The ground truth parameters $\mathbf{\mu}_1, \mathbf{\Sigma}_1, \mathbf{\mu}_2, \mathbf{\Sigma}_2, \mathbf{\mu}_3, \mathbf{\Sigma}_3, \pi_1, \pi_2, \pi_3$:
End of explanation
"""
print(mu1hat)
print(Sigma1hat)
print(mu2hat)
print(Sigma2hat)
print(mu3hat)
print(Sigma3hat)
print(pihat[0])
print(pihat[1])
print(pihat[2])
"""
Explanation: And the inferred parameters $\hat{\mathbf{\mu}}_1, \hat{\mathbf{\Sigma}}_1, \hat{\mathbf{\mu}}_2, \hat{\mathbf{\Sigma}}_2, \hat{\mathbf{\mu}}_3, \hat{\mathbf{\Sigma}}_3, \hat{\pi}_1, \hat{\pi}_2, \hat{\pi}_3$:
End of explanation
"""
|
wmfschneider/CHE30324 | Homework/HW4-soln.ipynb | gpl-3.0 | import sympy as sy
from sympy import *
x=Symbol('x')
a=Symbol('a',positive=True)
b=Symbol('b',positive=True)
Wavefunction=a*exp(-x**2/2/b**2)
A=integrate(Wavefunction**2,(x,-oo,+oo)) # calculate the integral of (wavefunc) * (wavefunc*) from -oo to +oo
Wavefunction_normalized=Wavefunction/sqrt(A)
pprint(Wavefunction_normalized)
"""
Explanation: Chem 30324, Spring 2020, Homework 4
Due February 18, 2020
Schrödinger developed a wave equation to describe the motion (mechanics) of quantum-scale particles moving in potentials. An electron (mass $m_e$) is moving in a one-dimensional potential given by
$$V(x) = \frac{1}{2} k x^ 2, \quad -\infty < x < \infty$$
where $k$ is a real number.
1. Write down the time-independent Schrödinger equation for this system. Remember to include the domain of the equation. Indicate the parts of the equation corresponding to the kinetic, potential, and total energies of the system. (Hint: Leave your expression in terms of $m_e$ and $k$.)
\begin{equation}
- \frac{\hbar^{2}}{2m_e} \frac{d^2\psi}{dx^2}\,+ \frac{1}{2}kx^2 \psi= E \psi\quad -\infty < x < \infty
\end{equation}
Kinetic: $$ - \frac{\hbar^{2}}{2m_e} \frac{d^2\psi}{dx^2}$$
Potential: $$ \frac{1}{2}kx^2 \psi $$
Total Energy: $$ E \psi $$
2. Only one of the three following candidates could be an acceptable wavefunction for this system. Which one, and why? (In each case, $b=\left ( \hbar^2/m_ek\right )^{1/4}$ is a unit of length, and $a$ is an arbitrary normalization constant.
$$\psi(x) = a \sin(bx) \quad\quad \quad \psi(x)=a \exp\left(-\frac{x^2}{2b^2}\right) \quad\quad \quad \psi(x)= \begin{cases}1-|x|/b, & |x|\le b \\ 0, & |x|>b \end{cases}$$
The second one is the acceptable wavefunction for this system. The first one is not square integrable. The third one is not differentiable.
3. Normalize the "good" wavefunction. You can leave your answer in terms of $b$.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
x_d = np.linspace(-2.5,2.5,100) # x_d=x/b which is the dimensionless length
V=1/2*x_d**2 # in unit of kb^2
# V = 1/2*k*x^2 = 1/2*x_d^2 *k*b^2.
# Since k and b are constant, we can consider kb^2 to be the unit of energy.
Wavefunction2_normalized=Wavefunction**2/A
print(Wavefunction2_normalized)
Wavefunction2_normalized=np.exp(-x_d**2)/(sqrt(pi)) # in unit of 1/b
#normalized_wavefunction^2 = exp(-x^2/b^2)/sqrt(pi)/b = exp(-x_d**2)/(sqrt(pi))/b
#Since b is constant, we can consider b to be the unit of length.
fig, ax1 = plt.subplots()
color = 'tab:green'
ax1.set_xlabel('x (b)')
ax1.set_ylabel('V (kb^2)', color=color)
ax1.plot(x_d, V, color=color, label='V(x)')
ax1.tick_params(axis='y', labelcolor=color)
plt.legend(loc='upper left')
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
ax2.set_ylabel('Wavefunction2_normalized (1/b)') # we already handled the x-label with ax1
ax2.plot(x_d, Wavefunction2_normalized,label='Wavefunc2_norm')
plt.legend(loc='lower left')
plt.title('V(x) & Wavefunction2_normalized')
"""
Explanation: The normalized wavefunction is:
$$ \frac{e^{-\frac{x^2}{2b^2}}}{\pi^{\frac{1}{4}} b^{\frac{1}{2}}}$$
4. Plot $V(x)$ and your normalized $\tilde{\psi}^2(x)$ along the same $x$ axis. (Hint: Plot your answers in units of $b$ length and units of $k$ force constant.)
End of explanation
"""
# Analytical solution
Wavefunction2_normalized=Wavefunction**2/A
print(Wavefunction2_normalized)
#normalized_wavefunction^2 = exp(-x^2/b^2)/sqrt(pi)/b = exp(-x_d**2)/(sqrt(pi)*b)
x_d=Symbol('x_d')
b=Symbol('b', positive = True)
Prob=integrate(exp(-x_d**2)/(sqrt(pi)),(x_d,0,1))
float(Prob)
"""
Explanation: 5. If you look for this particle, what is the probability you find it in the region $0 < x <b $?
End of explanation
"""
m=Symbol('m',positive=True)
b=Symbol('b',positive=True)
h=Symbol('h',positive=True)
k=h**2/b**4/m
x = sy.Symbol('x')
der2 = Wavefunction_normalized.diff(x,2)
E1=(der2*(-h**2)/2/m+1/2*k*x**2*Wavefunction_normalized)/Wavefunction_normalized
E1= simplify(E1)
if x in E1.free_symbols: # check if E1 is a function of x or not
print('No. The normalized wavefunction is not a solution of Schrodinger equation.')
else:
print('Yes. The normalized wavefunction is a solution of Schrodinger equation.')
print('The total energy is', E1)
"""
Explanation: Prob = $$\int_{0}^{b} \frac{e^{-\frac{x^2}{b^2}}}{\pi^{\frac{1}{2}} b}\, dx
=\int_{0}^{1} \frac{e^{-x_d^2}}{\pi^{\frac{1}{2}} b}\, b\, dx_d
=\int_{0}^{1} \frac{e^{-x_d^2}}{\pi^{\frac{1}{2}}}\, dx_d$$
6. Is your normalized $\tilde{\psi}(x)$ a solution of your Schrödinger equation? If so, what is its total energy? (Hint: $\dfrac{d^2}{dx^2}e^{-ax^2} = 2 a e^{-ax^2} \left (2a x^2 -1\right)$. You can leave your answer in terms of fundamental constants.)
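The hint can be checked numerically with a central finite difference (a quick sanity check, not part of the assigned solution):

```python
import math

def gauss(x, a):
    return math.exp(-a * x * x)

def second_derivative(f, x, h=1e-4):
    # central finite-difference approximation of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

a, x0 = 0.7, 1.3
numeric = second_derivative(lambda t: gauss(t, a), x0)
analytic = 2 * a * gauss(x0, a) * (2 * a * x0 * x0 - 1)
print(abs(numeric - analytic) < 1e-6)  # True
```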
End of explanation
"""
import numpy as np
der1 = Wavefunction_normalized.diff(x,1)
Eigenvalue=complex(0,-1)*h*der1/Wavefunction_normalized
if x in Eigenvalue.free_symbols: # check if Eigenvalue is a function of x or not
print('No. It is not an eigenfunction of the linear momentum operator.')
else:
print('Yes. It is an eigenfunction of the linear momentum operator.')
"""
Explanation: 7. Is your normalized $\tilde{\psi}(x)$ an eigenfunction of the linear momentum operator? If so, what is its eigenvalue?
End of explanation
"""
# Not a eigenfunction
print('No. Because this wavefunction is not an eigenfunction of momentum.')
print('Only the eigenfunction of momentum can give us a single result.')
"""
Explanation: 8. If you were to measure the linear momentum of many electrons, all with the same wavefunction $\tilde{\psi}(x)$ , will you get the same answer every time?
End of explanation
"""
# Momentum operator
der1 = Wavefunction_normalized.diff(x,1)
P1 = complex(0,-1)*h*der1 # momentum operator applied to the wavefunction
Ep_p= integrate(P1*Wavefunction_normalized,(x,-oo,+oo))
print('Expectation value is equal to ', Ep_p)
"""
Explanation: 9. If you were to measure the linear momentum of many electrons, all with the same wavefunction $\tilde{\psi}(x)$ , what will you get on average?
End of explanation
"""
# Momentum operator^2
der2 = Wavefunction_normalized.diff(x,2)
Ep_p2= integrate(-h**2*der2*Wavefunction_normalized,(x,-oo,+oo))
dp=sqrt(Ep_p2-Ep_p**2)
print('The uncertainty in the momentum of the electron is ' ,dp)
"""
Explanation: 9. What is the uncertainty in the momentum of the electron? (Recall the uncertainty is given by $\Delta p = \sqrt{\langle p^2 \rangle - \langle p \rangle^2}$.) You can give your answer in units of $m_e$, $\hbar$, and $b$.
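As an independent cross-check of this formula, the same quantities can be evaluated numerically on a grid (units $\hbar = b = 1$, so the expected result is $\Delta p = 1/\sqrt{2}$; this is an illustrative check, not part of the assigned solution):

```python
import numpy as np

hbar, b = 1.0, 1.0
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (2 * b**2)) / (np.pi**0.25 * np.sqrt(b))

dpsi = np.gradient(psi, dx)    # d psi / dx
d2psi = np.gradient(dpsi, dx)  # d^2 psi / dx^2
p_mean = (np.sum(psi * (-1j * hbar) * dpsi) * dx).real  # <p>
p2_mean = np.sum(psi * (-hbar**2) * d2psi) * dx         # <p^2>
delta_p = np.sqrt(p2_mean - p_mean**2)
print(delta_p)  # close to 1/sqrt(2) ~ 0.7071
```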
End of explanation
"""
dx=h/2/dp
print('The maximum precision with the position of the electron is ',dx)
"""
Explanation: 10. What is the maximum precision with which you could measure the position of the electron? Give your answer in units of $b$.
End of explanation
"""
x = np.linspace(-5,5,300)
Wavefunction_normalized=Wavefunction/sqrt(A)
print(Wavefunction_normalized)
b=1
Wavefunction_normalized=np.exp(-x**2/(2*b**2))/(pi**(1/4)*sqrt(b))
plt.plot(x,Wavefunction_normalized)
plt.xlabel('x')
plt.ylabel('Wavefunction_normalized')
plt.title('Wavefunction_normalized')
print('It is non-zero all the way to infinity as long as A is a finite value. ')
"""
Explanation: 11. You probably recognize $V(x)$ as the potential for a harmonic oscillator, and you remember that a classic harmonic oscillator always oscillates within some amplitude $A$. Look at $\tilde{\psi}(x)$. Does it go to zero at some $A$? Or is it non-zero all the way to infinity?
End of explanation
"""
|
Jackie789/JupyterNotebooks | Naive+Bayes+for+Classification+of+Positive-Negative+reviews.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
import pandas as pd
import scipy
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
# Grab and process the raw data.
data_path = ("/Users/jacquelynzuker/Desktop/sentiment labelled sentences/amazon_cells_labelled.txt"
)
amazon_raw = pd.read_csv(data_path, delimiter= '\t', header=None)
amazon_raw.columns = ['message', 'satisfaction']
"""
Explanation: Using Naive Bayes for Classification of Positive/Negative Reviews
The dataset in use comes from https://archive.ics.uci.edu/ml/datasets/Sentiment+Labelled+Sentences# . We begin our analysis by inspecting Amazon reviews.
End of explanation
"""
keywords = ['must have', 'excellent', 'awesome', 'recommend', 'good',
'great', 'happy', 'love', 'satisfied', 'best', 'works',
'liked', 'easy', 'quick', 'incredible', 'perfectly',
'right', 'cool', 'joy', 'easier', 'fast', 'nice', 'family',
'incredible', 'sweetest', 'poor', 'broke', 'doesn\'t work',
'not work', 'died', 'don\'t buy', 'problem', 'useless',
'awful', 'failed', 'terrible', 'horrible', 'the', '10',
'cool']
for key in keywords:
    # Note that this matches the key as a substring anywhere in the message;
    # to match whole words only, one would add spaces around the key.
amazon_raw[str(key)] = amazon_raw.message.str.contains(
'' + str(key) + '',
case = False
)
"""
Explanation: Keywords are chosen for input into the Naive Bayes model. The keywords were chosen by browsing through the raw Amazon dataset and selecting "emotionally-charged" words that might be good predictors. The model was then run, and if the chosen keyword provided no benefit, it was discarded from the list.
End of explanation
"""
amazon_raw.satisfaction.value_counts()
myList = [0] * 1000
for column in amazon_raw.columns[2:]:
myColumn = amazon_raw[column]
for index in amazon_raw.index:
if amazon_raw[column][index] == True:
myList[index] += 1
np.sum(myList)
amazon_raw.sum()
"""
Explanation: Checking the balance of the input dataset.
End of explanation
"""
plt.figure(figsize=(15,10))
sns.heatmap(amazon_raw.corr())
plt.title("Correlation between keywords in the model")
plt.show()
data = amazon_raw[keywords]
target = amazon_raw['satisfaction']
# Our data is binary / boolean, so we're importing the Bernoulli classifier.
from sklearn.naive_bayes import BernoulliNB
# Instantiate our model and store it in a new variable.
bnb = BernoulliNB()
# Fit our model to the data.
bnb.fit(data, target)
# Classify, storing the result in a new variable.
y_pred = bnb.predict(data)
# Display our results.
print("Number of mislabeled points out of a total {} points : {}".format(
data.shape[0],
(target != y_pred).sum()
))
"""
Explanation: One disadvantage inherent in the naive Bayes model is that it does not work well with keywords that are correlated in the reviews, since their evidence is counted as if it were independent.
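A toy calculation (unrelated to the dataset above) shows why: if the same keyword is entered twice as if it were two independent features, its evidence is double-counted and the posterior becomes overconfident.

```python
def posterior_positive(n_copies):
    # Single keyword with P(keyword|positive)=0.8, P(keyword|negative)=0.2,
    # equal priors; n_copies models the keyword duplicated as "independent" features.
    pos = 0.5 * 0.8 ** n_copies
    neg = 0.5 * 0.2 ** n_copies
    return pos / (pos + neg)

print(posterior_positive(1))  # -> 0.8 (up to float rounding)
print(posterior_positive(2))  # -> ~0.94 -- duplicated evidence inflates confidence
```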
End of explanation
"""
data_path = ("/Users/jacquelynzuker/Desktop/sentiment labelled sentences/imdb_labelled.txt"
)
imdb_raw = pd.read_csv(data_path, delimiter= '\t', header=None)
imdb_raw.columns = ['message', 'satisfaction']
amazon_raw = imdb_raw
#imdb_raw
"""
Explanation: For the Amazon dataset, this model of Naive Bayes predicts the correct outcome 82.7% of the time. Let's see how it fares on the imdb dataset
IMDB
End of explanation
"""
data_path = ("/Users/jacquelynzuker/Desktop/sentiment labelled sentences/yelp_labelled.txt"
)
yelp_raw = pd.read_csv(data_path, delimiter= '\t', header=None)
yelp_raw.columns = ['message', 'satisfaction']
amazon_raw = yelp_raw
"""
Explanation: This set of keywords in the model correctly predicted the outcome 464/748 = 62.0% of the time.
Yelp
This set of keywords in the model correctly guessed the outcome 70.1% of the time.
End of explanation
"""
|
vyvojer/ploev | notebooks/Board matching.ipynb | gpl-3.0 | odds_oracle = OddsOracle()
calc = Calc(odds_oracle)
"""
Explanation: First, connect to the OddsOracle server
End of explanation
"""
# Preflop ranges (using saved PokerJuice ranges)
main_ranges = ['$FI12', '$FI20', '$FI25', '$FI30', '$FI40', '$FI50',
'$3b4i', '$3b6i', '$3b8i', '$3b10i', '$3b12i', '$3b15i',
'$3b4o', '$3b6o', '$3b8o', '$3b10o', '$3b12o', '$3b15o',]
# Поддиапазоны попадания во флоп в формате easy_range
sub_easy_ranges = [
'MS+, SD16_16+, (T2P+):(SD8_8+), (TP,MP,BP,OP3+):SD12_12+',
'T2P+, SD12_12+, (2P+):(SD8+), (TP,MP,BP,OP3+):SD12+',
'TP,OP2+, SD8_8+',
'*',
]
# Flop
flop = '9d8s2h'
# Convert the sub-ranges to PPT format using easy_range.BoardExplorer
board_explorer = BoardExplorer(Board.from_str(flop))
sub_ranges = [board_explorer.ppt(sub_easy_range) for sub_easy_range in sub_easy_ranges]
matching = []  # this list will hold the hit percentages for each range
"""
Explanation: Hitting the flop
We want to see how often (and how strongly) preflop ranges hit particular flops.
For this we need to specify:
1. The preflop ranges
2. The flop-hit sub-ranges (they will differ for different flop types), for example
* Strong
* Good
* Medium
* Weak
3. The flops themselves (this example uses just one flop)
Setting up the initial conditions
End of explanation
"""
def get_matching(board, main_range, sub_ranges):
return calc.range_distribution(main_range, sub_ranges, board, equity=False)
"""
Explanation: A function that queries OddsOracle to obtain the hit percentages
End of explanation
"""
for main_range in main_ranges:
calc_results = get_matching(flop, main_range, sub_ranges)
fractions = [round(calc_result.fraction * 100) for calc_result in calc_results]
matching.append(fractions)
"""
Explanation: In a loop, obtain the hit percentages for each preflop range
End of explanation
"""
columns = ['Strong', 'Good', 'Medium', 'Weak']
df = pd.DataFrame.from_records(matching, columns=columns, index=main_ranges)
"""
Explanation: Create a pandas.DataFrame from our list
This is purely for convenience. The DataFrame renders neatly in the notebook, makes it quick to build various plots, and is easy to export to different formats (CSV, Excel, ...), etc.
End of explanation
"""
df
"""
Explanation: Here is the result
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
df.plot(kind='bar', stacked=True, figsize=(10, 4))
df.plot(kind='bar', figsize=(15, 4))
df[:6].plot(figsize=(10, 4)) # $FI ranges only
df[6:12].plot(figsize=(10, 4)) # $3bi ranges only
"""
Explanation: A few plots
End of explanation
"""
df.to_excel("matching.xls")
"""
Explanation: Save as an Excel file
End of explanation
"""
|
kit-cel/wt | wt/vorlesung/ch7_9/weakly_stationary.ipynb | gpl-2.0 | # importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 6) )
"""
Explanation: Content and Objective
Checking for (weak) stationarity
Several stochastic processes (sine, chirp) are sampled and their empirical acf is shown
Import
End of explanation
"""
# sampling time and observation time
t_s = 0.05
t_max = 2
# number of samples for normalizing
N_samples = int( t_max / t_s )
# vector of times and delays (for acf)
t = np.arange( 0, t_max + t_s, t_s )
tau_acf = np.arange( - t_max, t_max + t_s, t_s)
# get frequency, random amplitude and random phase
f_0 = 2
A = np.random.randn()
phi = 2 * np.pi * np.random.rand()
# get realization of the process
X = A * np.sin( 2 * np.pi * f_0 * t + phi )
# get autocorrelation
acf = np.correlate( X, X , 'full')
# plotting
plt.plot( tau_acf, acf, label='$\\varphi_{XX}(\\tau)$')
plt.grid(1)
plt.xlabel('$\\tau$')
plt.legend(loc='upper right')
"""
Explanation: Random Sine Process
Get Process and Correlate in Time Domain
End of explanation
"""
# number of trials
N_trials = int( 1e3 )
# initialize acf
acf = np.zeros_like( tau_acf )
# get frequency, random amplitude and random phase
f_0 = 2
A = np.random.randn( N_trials )
phi = 2 * np.pi * np.random.rand( N_trials )
# loop for all delays and determine acf by expectation along realizations
for ind_tau, val_tau in enumerate( tau_acf ):
# get acf
corr = [ A[_n] * np.sin( 2 * np.pi * f_0 * t + phi[_n] )
* A[_n] * np.sin( 2 * np.pi * f_0 * (t-val_tau) + phi[_n] )
for _n in range(N_trials)
]
acf[ ind_tau ] = np.sum( corr) / N_trials / N_samples
# plotting
plt.plot( tau_acf, acf, label='$\\varphi_{XX}(\\tau)$')
plt.grid(1)
plt.xlabel('$\\tau$')
plt.legend(loc='upper right')
"""
Explanation: Questions:
Why are we observing a triangular-shaped acf?
Why are we observing such a "strange" value of $\varphi_{XX}(0)$?
Reason for the $\tau$ values used in the plot.
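As a hands-on hint for the first question, look at the raw output of np.correlate for a constant sequence, where the only remaining effect is the finite observation window:

```python
import numpy as np

N = 5
r = np.correlate(np.ones(N), np.ones(N), 'full')
print(r)  # [1. 2. 3. 4. 5. 4. 3. 2. 1.]
```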
Different Solution by Using Expectation instead of Time-Domain Correlation
End of explanation
"""
N_trials = int( 1e3 )
# initialize empty two-dim array (t and tau)
acf_2d = np.zeros( ( len(t), len(tau_acf) ) )
# get frequency, random amplitude and random phase
f_0 = 2
A = np.random.randn( N_trials )
phi = 2 * np.pi * np.random.rand( N_trials )
# loop for all times
for ind_t, val_t in enumerate( t ):
# loop for all delays
for ind_tau, val_tau in enumerate( tau_acf ):
# get acf at according index/time/delay
corr = [ A[_n] * np.sin( 2 * np.pi * f_0 * val_t + phi[_n] )
* A[_n] * np.sin( 2 * np.pi * f_0 * (val_t-val_tau) + phi[_n] )
for _n in range(N_trials)
]
acf_2d[ ind_t, ind_tau ] = np.sum( corr ) / N_trials
# parameters for meshing
T, Tau_acf = np.meshgrid( tau_acf, t )
# plotting
plt.contourf( T, Tau_acf , acf_2d[ : , : ] )
plt.xlabel('$\\tau$')
plt.ylabel('$t$')
plt.colorbar()
"""
Explanation: <b> Exercise:</b> Reason the value of $\varphi_{XX}(0)$.
Showing ACF in the $(t,\tau)$-Domain
<b>NOTE:</b> So, acf is given by $\varphi_{XX}(t, \tau)=E(X(t)X(t-\tau))$
End of explanation
"""
N_trials = int( 1e2 )
# initialize
acf = np.zeros_like( tau_acf )
# get frequency, random amplitude and random phase
f_0 = 2
delta_f = .2
A = np.random.randn( N_trials )
phi = 2 * np.pi * np.random.rand( N_trials )
# loop for all delays
for ind_tau, val_tau in enumerate( tau_acf ):
# get acf
acf[ ind_tau ] = np.sum( [
A[_n] * np.sin( 2 * np.pi * ( f_0 + delta_f * t ) * t + phi[_n] )
* A[_n] * np.sin( 2 * np.pi * ( f_0 + delta_f * t )* (t-val_tau) + phi[_n] )
for _n in range(N_trials)
] ) / N_trials
# plotting
plt.plot( tau_acf, acf, label='$\\varphi_{XX}(\\tau)$')
plt.grid(1)
plt.xlabel('$\\tau$')
plt.legend(loc='upper right')
"""
Explanation: Now Looking at a Chirp
<b>Note:</b> A chirp describes a sinusoid with a frequency that--in this example--increases linearly with time.
Is it stationary?
End of explanation
"""
# number of realizations
N_trials = int( 1e3 )
# initialize empty values for acf
acf_2d = np.zeros( ( len(t), len(tau_acf) ) )
# get frequency, random amplitude and random phase
f_0 = 2
A = np.random.randn( N_trials )
phi = 2 * np.pi * np.random.rand( N_trials )
# loop for all times
for ind_t, val_t in enumerate( t ):
# loop for all delays
for ind_tau, val_tau in enumerate( tau_acf ):
# get acf
acf_2d[ ind_t, ind_tau ] = np.sum( [
A[_n] * np.sin( 2 * np.pi * ( f_0 + delta_f * val_t ) * val_t + phi[_n] )
* A[_n] * np.sin( 2 * np.pi * ( f_0 + delta_f * val_t )* (val_t-val_tau) + phi[_n] )
for _n in range(N_trials)
] ) / N_trials
T, Tau_acf = np.meshgrid( tau_acf, t )
# plotting
plt.contourf( T, Tau_acf , acf_2d[ : , : ] )
plt.xlabel('$\\tau$')
plt.ylabel('$t$')
plt.colorbar()
"""
Explanation: <b>Remark:</b> ... depending on $\tau$, so everything is fine?!?
Showing ACF of the Chirp in the $(t,\tau)$-Domain
End of explanation
"""
|
tyamamot/h29iro | codes/5_Learning_to_Rank.ipynb | mit | ! ../bin/svm_rank_learn -c 0.03 ../data/svmrank_sample/train.dat ../data/svmrank_sample/model
"""
Explanation: Lecture 5: Learning to Rank (Ranking SVM)
This exercise page explains how to use SVM-rank, an implementation of Ranking SVM. The goal of this exercise is to be able to train a model with SVM-rank and rank test data with it.
This exercise uses the following tool:
- SVM-rank (by Prof. Thorsten Joachims)
- https://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html
1. Installing SVM-rank
Install SVM-rank following the instructions on the SVM-rank page.
First, download svm_rank.tar.gz:
- http://download.joachims.org/svm_rank/current/svm_rank.tar.gz
After downloading, extract the archive and compile it.
Here is one example:
$ mkdir svm_rank # create a folder for the extracted files
$ mv svm_rank.tar.gz svm_rank # move the downloaded archive into the folder just created
$ cd svm_rank
$ tar xzvf svm_rank.tar.gz # extract the files
$ make
If compilation succeeded, two binaries named svm_rank_learn and svm_rank_classify should have been generated.
Copy the generated svm_rank_learn and svm_rank_classify to a convenient location. This page assumes they were copied to h29iro/bin/.
2. Running the sample files
Sample files are provided in h29iro/data/svmrank_sample/. They are a copy of the following archive distributed on the SVM-rank page:
- http://download.joachims.org/svm_light/examples/example3.tar.gz
The sample archive contains train.dat (training data) and test.dat (test data).
2.1 Training data
The training data (../data/svmrank_sample/train.dat) looks like this:
3 qid:1 1:1 2:1 3:0 4:0.2 5:0 # 1A
2 qid:1 1:0 2:0 3:1 4:0.1 5:1 # 1B
1 qid:1 1:0 2:1 3:0 4:0.4 5:0 # 1C
1 qid:1 1:0 2:0 3:1 4:0.3 5:0 # 1D
1 qid:2 1:0 2:0 3:1 4:0.2 5:0 # 2A
2 qid:2 1:1 2:0 3:1 4:0.4 5:0 # 2B
1 qid:2 1:0 2:0 3:1 4:0.1 5:0 # 2C
1 qid:2 1:0 2:0 3:1 4:0.2 5:0 # 2D
2 qid:3 1:0 2:0 3:1 4:0.1 5:1 # 3A
3 qid:3 1:1 2:1 3:0 4:0.3 5:0 # 3B
4 qid:3 1:1 2:0 3:0 4:0.4 5:1 # 3C
1 qid:3 1:0 2:1 3:1 4:0.5 5:0 # 3D
See the SVM-rank page for the details of the format.
The number in the first column of each line is the document's relevance to query qid; SVM-rank generates a set of pairwise preferences from these values and uses it as the training data.
For example, the training data above amounts to giving the following set of pairwise preferences as training data:
1A>1B, 1A>1C, 1A>1D, 1B>1C, 1B>1D, 2B>2A, 2B>2C, 2B>2D, 3C>3A, 3C>3B, 3C>3D, 3B>3A, 3B>3D, 3A>3D
(quoted from the SVM-rank page)
Also, the x:y tokens from the third column onward are features, indicating that the value of dimension x is y.
For example, the first line says that, for query $q_1$, document 1A with feature vector $f_1 = 1.0, f_2=1.0, f_3=0.0, f_4=0.2, f_5=0.0$ has relevance 3.
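The derivation of pairwise preferences from graded labels can be sketched in a few lines of Python (doc ids follow the comments in train.dat above):

```python
def pairwise_preferences(labelled_docs):
    ''' labelled_docs: list of (qid, relevance, doc_id) tuples. '''
    prefs = []
    for qid_a, rel_a, doc_a in labelled_docs:
        for qid_b, rel_b, doc_b in labelled_docs:
            # only compare documents within the same query
            if qid_a == qid_b and rel_a > rel_b:
                prefs.append((doc_a, doc_b))
    return prefs

q1 = [(1, 3, '1A'), (1, 2, '1B'), (1, 1, '1C'), (1, 1, '1D')]
print(pairwise_preferences(q1))
# [('1A', '1B'), ('1A', '1C'), ('1A', '1D'), ('1B', '1C'), ('1B', '1D')]
```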
2.2 Training on the training data
Use svm_rank_learn to train on the data and generate a model.
End of explanation
"""
!cat ../data/svmrank_sample/model
"""
Explanation: If training succeeded, a file named ../data/svmrank_sample/model should have been generated.
End of explanation
"""
!cat ../data/svmrank_sample/test.dat
"""
Explanation: 2.3 Applying the model to test data
Use svm_rank_classify to rank the test data with the model trained above.
End of explanation
"""
! ../bin/svm_rank_classify ../data/svmrank_sample/test.dat ../data/svmrank_sample/model ../data/svmrank_sample/prediction
"""
Explanation: Note that the value in the first column of the test data is the ground-truth rank (more precisely, the relevance). It is used when computing accuracy on the test data (how many of the pairwise orderings in the test data were reproduced correctly).
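This pairwise accuracy can be sketched as follows (the labels and scores here are made up for illustration, not taken from the actual prediction file):

```python
def pairwise_accuracy(truth, scores):
    ''' Fraction of document pairs with distinct truth labels whose order
    the predicted scores reproduce (documents of a single query). '''
    correct = total = 0
    for i in range(len(truth)):
        for j in range(len(truth)):
            if truth[i] > truth[j]:
                total += 1
                if scores[i] > scores[j]:
                    correct += 1
    return correct / total

truth = [3, 2, 1, 1]            # relevance labels (first column of test.dat)
scores = [1.2, 0.4, 0.9, -0.3]  # hypothetical model scores
print(pairwise_accuracy(truth, scores))  # 0.8
```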
End of explanation
"""
!cat ../data/svmrank_sample/prediction
"""
Explanation: Check the prediction file for the actual ranking produced on the test data.
End of explanation
"""
|
tuanavu/coursera-university-of-washington | machine_learning/1_machine_learning_foundations/assignment/week4/Document retrieval.ipynb | mit | import graphlab
"""
Explanation: Document retrieval from wikipedia data
Fire up GraphLab Create
End of explanation
"""
people = graphlab.SFrame('people_wiki.gl/')
"""
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
"""
people.head()
len(people)
"""
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
"""
Explanation: Explore the dataset and check out the text it contains
Exploring the entry for president Obama
End of explanation
"""
clooney = people[people['name'] == 'George Clooney']
clooney['text']
"""
Explanation: Exploring the entry for actor George Clooney
End of explanation
"""
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
"""
Explanation: Get the word counts for Obama article
End of explanation
"""
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
"""
Explanation: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
End of explanation
"""
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
"""
Explanation: Sorting the word counts to show most common words at the top
End of explanation
"""
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
tfidf
people['tfidf'] = tfidf['docs']
"""
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
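A minimal version of the computation on a toy corpus (one common TF-IDF variant; GraphLab Create's exact weighting formula may differ):

```python
import math

def tf_idf(docs):
    ''' docs: list of token lists; returns a word -> tf*idf dict per doc. '''
    n = len(docs)
    df = {}  # document frequency of each word
    for doc in docs:
        for w in set(doc):
            df[w] = df.get(w, 0) + 1
    return [{w: doc.count(w) * math.log(n / df[w]) for w in set(doc)}
            for doc in docs]

docs = [['the', 'president', 'the'], ['the', 'actor']]
weights = tf_idf(docs)
print(weights[0]['the'])            # 0.0 -- appears in every document
print(weights[0]['president'] > 0)  # True -- rarer, hence informative
```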
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
"""
Explanation: Examine the TF-IDF for the Obama article
End of explanation
"""
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
"""
Explanation: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
"""
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
"""
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
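On the dictionary representation used in the 'tfidf' column, cosine distance can be sketched as follows (a conceptual version; graphlab.distances.cosine is the library implementation):

```python
import math

def cosine_distance(a, b):
    ''' 1 - cosine similarity for sparse word -> weight dicts. '''
    dot = sum(w * b.get(k, 0.0) for k, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return 1.0 - dot / (na * nb)

u = {'president': 2.0, 'law': 1.0}
v = {'president': 1.0, 'football': 3.0}
print(round(cosine_distance(u, u), 6))  # 0.0 -- identical vectors
print(cosine_distance(u, v) < cosine_distance(u, {'football': 1.0}))  # True
```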
End of explanation
"""
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
"""
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
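Conceptually, the model does what this brute-force sketch does, only with an efficient index behind it (the vectors and names here are toy values):

```python
def nearest_neighbors(query_vec, corpus, k=2):
    ''' Brute-force k-NN by Euclidean distance; corpus maps name -> vector. '''
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    return sorted(corpus, key=lambda name: dist(query_vec, corpus[name]))[:k]

corpus = {'obama': [1.0, 0.95], 'biden': [0.9, 1.0], 'beckham': [0.0, 0.1]}
print(nearest_neighbors([1.0, 1.0], corpus))  # ['obama', 'biden']
```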
End of explanation
"""
knn_model.query(obama)
"""
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
"""
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
"""
Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval
End of explanation
"""
|
nansencenter/nansat-lectures | notebooks/03 object oriented programming.ipynb | gpl-3.0 | class A():
pass
"""
Explanation: Introduction to Object Oriented Programming in Python
Definition of the minimal class in two lines
Define class
End of explanation
"""
a = A() # create an instance of class A
print (a)
print (type(a))
"""
Explanation: Use class
End of explanation
"""
class Human(object):
name = ''
age = 0
human1 = Human() # create instance of Human
human1.name = 'Anton' # name him (add data to this object)
human1.age = 39 # set the age (add data to this object)
print (type(human1))
print (human1.name)
print (human1.age)
"""
Explanation: Definition of a class with attributes (properties)
End of explanation
"""
class Human(object):
name = ''
age = 0
def __init__(self, name):
self.name = name
"""
Explanation: Definition of a class with constructor
End of explanation
"""
h1 = Human('Anton')
print (h1.name)
print (h1.age)
"""
Explanation: Create a Human instance and give him a name instantly
End of explanation
"""
class Human(object):
''' Human being '''
name = ''
age = 0
def __init__(self, name):
''' Create a Human '''
self.name = name
def grow(self):
''' Grow a Human by one year (in-place) '''
self.age += 1
"""
Explanation: Definition of a class with several methods
End of explanation
"""
human1 = Human('Adam')
human1.grow()
print (human1.name)
print (human1.age)
"""
Explanation: Create a Human, give him a name, grow by one year (in-place)
End of explanation
"""
class Human(object):
''' Human being '''
name = ''
age = 0
def __init__(self, name):
''' Create a Human '''
self.name = name
def grow(self):
''' Grow a Human by one year (in-place) '''
self.age += 1
def get_name(self):
''' Return name of a Human '''
return self.name
def get_age(self):
        ''' Return age of a Human '''
return self.age
h1 = Human('Eva')
print (h1.get_name())
"""
Explanation: Add get_ methods to the class
End of explanation
"""
class Teacher(Human):
''' Teacher of Python '''
def give_lecture(self):
''' Print lecture on the screen '''
print ('bla bla bla')
"""
Explanation: Create a class with Inheritance
End of explanation
"""
t1 = Teacher('Anton')
while t1.get_age() < 50:
t1.grow()
print (t1.get_name())
print (t1.get_age())
t1.give_lecture()
"""
Explanation: Create a Teacher with a name, grow him sufficiently, use him.
End of explanation
"""
# add directory scripts to PYTHONPATH (searchable path)
import sys
sys.path.append('scripts')
from human_teacher import Teacher
t1 = Teacher('Morten')
t1.give_lecture()
"""
Explanation: Import class definition from a module
Store class definition in a separate file. E.g.:
https://github.com/nansencenter/nansat-lectures/blob/master/human_teacher.py
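The module essentially holds the class definitions from the cells above, e.g. (a sketch mirroring them; the actual file in the repository may differ slightly):

```python
# scripts/human_teacher.py -- classes as defined in the cells above
class Human(object):
    ''' Human being '''
    name = ''
    age = 0
    def __init__(self, name):
        ''' Create a Human '''
        self.name = name
    def grow(self):
        ''' Grow a Human by one year (in-place) '''
        self.age += 1
    def get_name(self):
        ''' Return name of a Human '''
        return self.name
    def get_age(self):
        ''' Return age of a Human '''
        return self.age

class Teacher(Human):
    ''' Teacher of Python '''
    def give_lecture(self):
        ''' Print lecture on the screen '''
        print('bla bla bla')

t = Teacher('Morten')
print(t.get_name())  # Morten
```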
End of explanation
"""
## add scripts to the list of searchable directories
import sys
sys.path.append('scripts')
# import class definition from our module
from ts_profile import Profile
# load data
p = Profile('data/tsprofile.txt')
# work with the object
print (p.get_ts_at_level(5))
print (p.get_ts_at_depth(200))
print (p.get_mixed_layer_depth(.1))
"""
Explanation: Practical example
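The scripts/ts_profile module used above is not shown here; a sketch of what such a Profile class could look like, with synthetic data in place of parsing tsprofile.txt (a simplified, hypothetical version -- the real module takes a filename and may differ):

```python
class Profile(object):
    ''' A temperature/salinity depth profile (simplified: data passed
    directly instead of being parsed from a file). '''
    def __init__(self, depth, temp, sal):
        self.depth, self.temp, self.sal = depth, temp, sal
    def get_ts_at_level(self, i):
        ''' Temperature and salinity at the i-th measurement level '''
        return self.temp[i], self.sal[i]
    def get_ts_at_depth(self, d):
        ''' Temperature and salinity at the level closest to depth d '''
        i = min(range(len(self.depth)), key=lambda j: abs(self.depth[j] - d))
        return self.temp[i], self.sal[i]

p = Profile([0, 50, 100, 200], [20.0, 18.0, 10.0, 5.0],
            [35.0, 35.1, 35.3, 35.5])
print(p.get_ts_at_level(1))   # (18.0, 35.1)
print(p.get_ts_at_depth(90))  # (10.0, 35.3) -- closest level is 100 m
```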
End of explanation
"""
from st_profile import load_profile, get_ts_at_level, get_ts_at_depth
from st_profile import get_mixed_layer_depth, plot_ts
"""
Explanation: How would it look without OOP?
1. A lot of functions to import
End of explanation
"""
depth, temp, sal = load_profile('tsprofile.txt')
print (get_ts_at_level(depth, temp, sal))
"""
Explanation: 2. A lot of data to unpack and to pass between functions
End of explanation
"""
from nansat import Nansat
n = Nansat('satellite_filename.hdf')
"""
Explanation: 3. And imagine now we open a satellite image which has:
many matrices with data
georeference information (e.g. lon, lat of corners)
description of data (metadata)
and so on...
And here comes OOP:
End of explanation
"""
|
robertoalotufo/ia898 | 2S2018/13 Correlacao de fase.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from numpy.fft import *
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
f = mpimg.imread('../data/cameraman.tif')
# Transladando a imagem para (x,y)
x = 20
y = 30
#f_trans = ia.ptrans(f,(20,30))
f_trans = np.zeros(f.shape)
f_trans[x:,y:] = f[:-x,:-y]
plt.figure(1,(10,10))
plt.subplot(1,2,1)
plt.imshow(f, cmap='gray')
plt.title('Imagem original')
plt.subplot(1,2,2)
plt.imshow(f_trans, cmap='gray')
plt.title('Imagem transladada')
# Compute the phase correlation
g = ia.phasecorr(f,f_trans)
# Find the point of maximum correlation
i = np.argmax(g)
row,col = np.unravel_index(i,g.shape)
v = np.array(f.shape) - np.array((row,col))
print('Point of maximum correlation: ',v)
plt.figure(2,(6,6))
f[v[0]-1:v[0]+1,v[1]-1:v[1]+1] = 0
plt.imshow(f, cmap='gray')
plt.title('Point of maximum correlation (marked in black)')
"""
Explanation: Phase Correlation
Phase correlation says that if we compute the Discrete Fourier Transform of two images $f$ and $h$:
$$ F = \mathcal{F}(f); \qquad H = \mathcal{F}(h) $$
and then compute the correlation $R$ of the transforms:
$$ R = \dfrac{F H^*}{|F H^*|} $$
and finally apply the inverse transform to $R$:
$$ g = \mathcal{F}^{-1}(R) $$
then the translation between the two images can be found as:
$$ (row, col) = \arg\max\, g $$
Identifying the translation between 2 images
Compute the Fourier Transform of the 2 images to be compared;
Compute the phase correlation using the phasecorr function;
Find the point of maximum of the resulting correlation map.
End of explanation
"""
f = mpimg.imread('../data/cameraman.tif')
# Pad the image with a border of zeros to make room for the rotation
t = np.zeros(np.array(f.shape)+100,dtype=np.uint8)
t[50:f.shape[0]+50,50:f.shape[1]+50] = f
f = t
t1 = np.array([
[1,0,-f.shape[0]/2.],
[0,1,-f.shape[1]/2.],
[0,0,1]]);
t2 = np.array([
[1,0,f.shape[0]/2.],
[0,1,f.shape[1]/2.],
[0,0,1]]);
# Rotate the image by 30 degrees
theta = np.radians(30)
r1 = np.array([
[np.cos(theta),-np.sin(theta),0],
[np.sin(theta),np.cos(theta),0],
[0,0,1]]);
T = t2.dot(r1).dot(t1)
f_rot = ia.affine(f,T,0)
plt.figure(1,(10,10))
plt.subplot(1,2,1)
plt.imshow(f, cmap='gray')
plt.title('Original image')
plt.subplot(1,2,2)
plt.imshow(f_rot, cmap='gray')
plt.title('Rotated image')
W,H = f.shape
f_polar = ia.polar(f,(150,200),2*np.pi)
f_rot_polar = ia.polar(f_rot,(150,200),2*np.pi)
plt.figure(1,(10,10))
plt.subplot(1,2,1)
plt.imshow(f_polar, cmap='gray')
plt.title('Original image (polar coords)')
plt.subplot(1,2,2)
plt.imshow(f_rot_polar, cmap='gray')
plt.title('Rotated image (polar coords)')
# Compute the phase correlation
g = ia.phasecorr(f_polar,f_rot_polar)
# Find the point of maximum correlation
i = np.argmax(g)
corr = np.unravel_index(i,g.shape)
# Convert the column offset of the peak into an angle in degrees
ang = (float(corr[1])/g.shape[1])*360
print('Recovered rotation angle: ',ang)
"""
Explanation: Identifying the rotation between 2 images
Compute the Fourier Transform of the 2 images to be compared;
Convert the resulting images to polar coordinates;
Compute the phase correlation using the phasecorr function;
Find the point of maximum of the resulting correlation map.
End of explanation
"""
|
dipanjank/ml | data_analysis/abalone.ipynb | gpl-3.0 | %pylab inline
pylab.style.use('ggplot')
import pandas as pd
import numpy as np
import seaborn as sns
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data'
data_df = pd.read_csv(url, header=None)
data_df.head()
"""
Explanation: Abalone - UCI
End of explanation
"""
data_df.columns = ['Sex', 'Length', 'Diameter', 'Height',
'Whole_Weight', 'Shucked_Weight', 'Viscera_Weight', 'Shell_Weight',
'Rings']
"""
Explanation: Attribute information
Given is the attribute name, attribute type, the measurement unit and a
brief description. The number of rings is the value to predict: either
as a continuous value or as a classification problem.
Name Data Type Meas. Description
---- --------- ----- -----------
Sex nominal M, F, and I (infant)
Length continuous mm Longest shell measurement
Diameter continuous mm perpendicular to length
Height continuous mm with meat in shell
Whole weight continuous grams whole abalone
Shucked weight continuous grams weight of meat
Viscera weight continuous grams gut weight (after bleeding)
Shell weight continuous grams after being dried
Rings integer +1.5 gives the age in years
End of explanation
"""
g = sns.FacetGrid(col='Sex', data=data_df)
g = g.map(pylab.hist, 'Rings')
"""
Explanation: Variations of Rings for Different Sexes
End of explanation
"""
features = data_df.columns.drop(['Sex', 'Rings'])
_, axes = pylab.subplots(2, 4, figsize=(16, 10))
for i, fname in enumerate(features):
row, col = divmod(i, 4)
sns.regplot(data=data_df, x=fname, y='Rings', ax=axes[row][col])
"""
Explanation: Bivariate Analysis of Numerical Features with Rings
End of explanation
"""
f_corrs = data_df.loc[:, features].corrwith(data_df.loc[:, 'Rings'])
f_corrs.plot(kind='barh')
"""
Explanation: Feature Correlations with Rings
End of explanation
"""
f_corrs = data_df.loc[:, features].corr()
sns.heatmap(f_corrs, annot=True)
"""
Explanation: Feature Correlations
End of explanation
"""
import statsmodels.formula.api as sm
model = sm.ols(formula='Rings ~ Shell_Weight', data=data_df)
result = model.fit()
result.summary()
"""
Explanation: OLS Regression with the Feature with Highest Correlation
End of explanation
"""
all_features = ' + '.join(features)
formula = ' ~ '.join(['Rings', all_features])
print(formula)
model = sm.ols(formula=formula, data=data_df)
result = model.fit()
result.summary()
"""
Explanation: OLS Regression with All Numerical Features
End of explanation
"""
from sklearn.preprocessing import MultiLabelBinarizer
sorted_labels = sorted(pd.unique(data_df.Sex))
encoder = MultiLabelBinarizer(classes=sorted_labels)
encoded = encoder.fit_transform(data_df.Sex)
encoded_sex = pd.DataFrame(index=data_df.index, data=encoded, columns=['sex_{}'.format(l) for l in sorted_labels])
encoded_df = data_df.drop('Sex', axis=1).merge(encoded_sex, left_index=True, right_index=True)
encoded_df.head()
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
features = encoded_df.drop('Rings', axis=1)
target = encoded_df.Rings
model = SVR(C=1000, gamma=0.001, kernel='rbf')
prep = StandardScaler()
estimator = make_pipeline(prep, model)
scores = cross_val_score(estimator=estimator, X=features, y=target, scoring='r2', cv=10)
scores = pd.Series(scores)
scores.plot(kind='bar')
"""
Explanation: Using Support Vector Regression
End of explanation
"""
|
diegocavalca/Studies | books/deep-learning-with-python/5.2-using-convnets-with-small-datasets.ipynb | cc0-1.0 | import os, shutil
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/Users/fchollet/Downloads/kaggle_original_data'
# The directory where we will
# store our smaller dataset
base_dir = '/Users/fchollet/Downloads/cats_and_dogs_small'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
os.mkdir(train_cats_dir)
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
os.mkdir(train_dogs_dir)
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
os.mkdir(validation_cats_dir)
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
os.mkdir(validation_dogs_dir)
# Directory with our test cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
os.mkdir(test_cats_dir)
# Directory with our test dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
os.mkdir(test_dogs_dir)
# Copy first 1000 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy first 1000 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_dogs_dir, fname)
shutil.copyfile(src, dst)
"""
Explanation: 5.2 - Using convnets with small datasets
This notebook contains the code sample found in Chapter 5, Section 2 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
Training a convnet from scratch on a small dataset
Having to train an image classification model using only very little data is a common situation, which you likely encounter yourself in
practice if you ever do computer vision in a professional context.
Having "few" samples can mean anywhere from a few hundreds to a few tens of thousands of images. As a practical example, we will focus on
classifying images as "dogs" or "cats", in a dataset containing 4000 pictures of cats and dogs (2000 cats, 2000 dogs). We will use 2000
pictures for training, 1000 for validation, and finally 1000 for testing.
In this section, we will review one basic strategy to tackle this problem: training a new model from scratch on what little data we have. We
will start by naively training a small convnet on our 2000 training samples, without any regularization, to set a baseline for what can be
achieved. This will get us to a classification accuracy of 71%. At that point, our main issue will be overfitting. Then we will introduce
data augmentation, a powerful technique for mitigating overfitting in computer vision. By leveraging data augmentation, we will improve
our network to reach an accuracy of 82%.
In the next section, we will review two more essential techniques for applying deep learning to small datasets: doing feature extraction
with a pre-trained network (this will get us to an accuracy of 90% to 93%), and fine-tuning a pre-trained network (this will get us to
our final accuracy of 95%). Together, these three strategies -- training a small model from scratch, doing feature extracting using a
pre-trained model, and fine-tuning a pre-trained model -- will constitute your future toolbox for tackling the problem of doing computer
vision with small datasets.
The relevance of deep learning for small-data problems
You will sometimes hear that deep learning only works when lots of data is available. This is in part a valid point: one fundamental
characteristic of deep learning is that it is able to find interesting features in the training data on its own, without any need for manual
feature engineering, and this can only be achieved when lots of training examples are available. This is especially true for problems where
the input samples are very high-dimensional, like images.
However, what constitutes "lots" of samples is relative -- relative to the size and depth of the network you are trying to train, for
starters. It isn't possible to train a convnet to solve a complex problem with just a few tens of samples, but a few hundreds can
potentially suffice if the model is small and well-regularized and if the task is simple.
Because convnets learn local, translation-invariant features, they are very
data-efficient on perceptual problems. Training a convnet from scratch on a very small image dataset will still yield reasonable results
despite a relative lack of data, without the need for any custom feature engineering. You will see this in action in this section.
But what's more, deep learning models are by nature highly repurposable: you can take, say, an image classification or speech-to-text model
trained on a large-scale dataset then reuse it on a significantly different problem with only minor changes. Specifically, in the case of
computer vision, many pre-trained models (usually trained on the ImageNet dataset) are now publicly available for download and can be used
to bootstrap powerful vision models out of very little data. That's what we will do in the next section.
For now, let's get started by getting our hands on the data.
Downloading the data
The cats vs. dogs dataset that we will use isn't packaged with Keras. It was made available by Kaggle.com as part of a computer vision
competition in late 2013, back when convnets weren't quite mainstream. You can download the original dataset at:
https://www.kaggle.com/c/dogs-vs-cats/data (you will need to create a Kaggle account if you don't already have one -- don't worry, the
process is painless).
The pictures are medium-resolution color JPEGs. They look like this:
Unsurprisingly, the cats vs. dogs Kaggle competition in 2013 was won by entrants who used convnets. The best entries could achieve up to
95% accuracy. In our own example, we will get fairly close to this accuracy (in the next section), even though we will be training our
models on less than 10% of the data that was available to the competitors.
This original dataset contains 25,000 images of dogs and cats (12,500 from each class) and is 543MB large (compressed). After downloading
and uncompressing it, we will create a new dataset containing three subsets: a training set with 1000 samples of each class, a validation
set with 500 samples of each class, and finally a test set with 500 samples of each class.
Here are a few lines of code to do this:
End of explanation
"""
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
"""
Explanation: As a sanity check, let's count how many pictures we have in each training split (train/validation/test):
End of explanation
"""
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
"""
Explanation: So we have indeed 2000 training images, and then 1000 validation images and 1000 test images. In each split, there is the same number of
samples from each class: this is a balanced binary classification problem, which means that classification accuracy will be an appropriate
measure of success.
Building our network
We've already built a small convnet for MNIST in the previous example, so you should be familiar with them. We will reuse the same
general structure: our convnet will be a stack of alternated Conv2D (with relu activation) and MaxPooling2D layers.
However, since we are dealing with bigger images and a more complex problem, we will make our network accordingly larger: it will have one
more Conv2D + MaxPooling2D stage. This serves both to augment the capacity of the network, and to further reduce the size of the
feature maps, so that they aren't overly large when we reach the Flatten layer. Here, since we start from inputs of size 150x150 (a
somewhat arbitrary choice), we end up with feature maps of size 7x7 right before the Flatten layer.
Note that the depth of the feature maps is progressively increasing in the network (from 32 to 128), while the size of the feature maps is
decreasing (from 148x148 to 7x7). This is a pattern that you will see in almost all convnets.
Since we are attacking a binary classification problem, we are ending the network with a single unit (a Dense layer of size 1) and a
sigmoid activation. This unit will encode the probability that the network is looking at one class or the other.
End of explanation
"""
model.summary()
"""
Explanation: Let's take a look at how the dimensions of the feature maps change with every successive layer:
End of explanation
"""
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
"""
Explanation: For our compilation step, we'll go with the RMSprop optimizer as usual. Since we ended our network with a single sigmoid unit, we will
use binary crossentropy as our loss (as a reminder, check out the table in Chapter 4, section 5 for a cheatsheet on what loss function to
use in various situations).
End of explanation
"""
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
"""
Explanation: Data preprocessing
As you already know by now, data should be formatted into appropriately pre-processed floating point tensors before being fed into our
network. Currently, our data sits on a drive as JPEG files, so the steps for getting it into our network are roughly:
Read the picture files.
Decode the JPEG content to RGB grids of pixels.
Convert these into floating point tensors.
Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).
It may seem a bit daunting, but thankfully Keras has utilities to take care of these steps automatically. Keras has a module with image
processing helper tools, located at keras.preprocessing.image. In particular, it contains the class ImageDataGenerator which allows to
quickly set up Python generators that can automatically turn image files on disk into batches of pre-processed tensors. This is what we
will use here.
End of explanation
"""
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
"""
Explanation: Let's take a look at the output of one of these generators: it yields batches of 150x150 RGB images (shape (20, 150, 150, 3)) and binary
labels (shape (20,)). 20 is the number of samples in each batch (the batch size). Note that the generator yields these batches
indefinitely: it just loops endlessly over the images present in the target folder. For this reason, we need to break the iteration loop
at some point.
End of explanation
"""
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
"""
Explanation: Let's fit our model to the data using the generator. We do it using the fit_generator method, the equivalent of fit for data generators
like ours. It expects as first argument a Python generator that will yield batches of inputs and targets indefinitely, like ours does.
Because the data is being generated endlessly, the fitting process needs to know how many samples to draw from the generator before
declaring an epoch over. This is the role of the steps_per_epoch argument: after having drawn steps_per_epoch batches from the
generator, i.e. after having run for steps_per_epoch gradient descent steps, the fitting process will go to the next epoch. In our case,
batches are 20-sample large, so it will take 100 batches until we see our target of 2000 samples.
When using fit_generator, one may pass a validation_data argument, much like with the fit method. Importantly, this argument is
allowed to be a data generator itself, but it could be a tuple of Numpy arrays as well. If you pass a generator as validation_data, then
this generator is expected to yield batches of validation data endlessly, and thus you should also specify the validation_steps argument,
which tells the process how many batches to draw from the validation generator for evaluation.
End of explanation
"""
model.save('cats_and_dogs_small_1.h5')
"""
Explanation: It is good practice to always save your models after training:
End of explanation
"""
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
"""
Explanation: Let's plot the loss and accuracy of the model over the training and validation data during training:
End of explanation
"""
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
"""
Explanation: These plots are characteristic of overfitting. Our training accuracy increases linearly over time, until it reaches nearly 100%, while our
validation accuracy stalls at 70-72%. Our validation loss reaches its minimum after only five epochs then stalls, while the training loss
keeps decreasing linearly until it reaches nearly 0.
Because we only have relatively few training samples (2000), overfitting is going to be our number one concern. You already know about a
number of techniques that can help mitigate overfitting, such as dropout and weight decay (L2 regularization). We are now going to
introduce a new one, specific to computer vision, and used almost universally when processing images with deep learning models: data
augmentation.
Using data augmentation
Overfitting is caused by having too few samples to learn from, rendering us unable to train a model able to generalize to new data.
Given infinite data, our model would be exposed to every possible aspect of the data distribution at hand: we would never overfit. Data
augmentation takes the approach of generating more training data from existing training samples, by "augmenting" the samples via a number
of random transformations that yield believable-looking images. The goal is that at training time, our model would never see the exact same
picture twice. This helps the model get exposed to more aspects of the data and generalize better.
In Keras, this can be done by configuring a number of random transformations to be performed on the images read by our ImageDataGenerator
instance. Let's get started with an example:
End of explanation
"""
# This is module with image preprocessing utilities
from keras.preprocessing import image
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
# We pick one image to "augment"
img_path = fnames[3]
# Read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))
# Convert it to a Numpy array with shape (150, 150, 3)
x = image.img_to_array(img)
# Reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)
# The .flow() command below generates batches of randomly transformed images.
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
plt.figure(i)
imgplot = plt.imshow(image.array_to_img(batch[0]))
i += 1
if i % 4 == 0:
break
plt.show()
"""
Explanation: These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over what we just wrote:
rotation_range is a value in degrees (0-180), a range within which to randomly rotate pictures.
width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures
vertically or horizontally.
shear_range is for randomly applying shearing transformations.
zoom_range is for randomly zooming inside pictures.
horizontal_flip is for randomly flipping half of the images horizontally -- relevant when there are no assumptions of horizontal
asymmetry (e.g. real-world pictures).
fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
Let's take a look at our augmented images:
End of explanation
"""
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
"""
Explanation: If we train a new network using this data augmentation configuration, our network will never see twice the same input. However, the inputs
that it sees are still heavily intercorrelated, since they come from a small number of original images -- we cannot produce new information,
we can only remix existing information. As such, this might not be quite enough to completely get rid of overfitting. To further fight
overfitting, we will also add a Dropout layer to our model, right before the densely-connected classifier:
End of explanation
"""
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50)
"""
Explanation: Let's train our network using data augmentation and dropout:
End of explanation
"""
model.save('cats_and_dogs_small_2.h5')
"""
Explanation: Let's save our model -- we will be using it in the section on convnet visualization.
End of explanation
"""
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
"""
Explanation: Let's plot our results again:
End of explanation
"""
|
gbtimmon/ase16GBT | code/6/magoff2_pom3_ga.ipynb | unlicense | %matplotlib inline
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "magoff2"
class O:
"""
Basic Class which
- Helps dynamic updates
- Pretty Prints
"""
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] is not "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
"""
Explanation: Optimizing Real World Problems
In this workshop we will code up a model called POM3 and optimize it using the GA we developed in the first workshop.
POM3 is a software estimation model like XOMO for Software Engineering. It is based on Turner
and Boehm’s model of agile development. It compares traditional plan-based approaches
to agile-based approaches in requirements prioritization. It describes how a team decides which
requirements to implement next. POM3 reveals requirements incrementally in random order, with
which developers plan their work assignments. These assignments are further adjusted based on
current cost and priority of requirement. POM3 is a realistic model which takes more runtime than
standard mathematical models(2-100ms, not 0.006-0.3ms)
End of explanation
"""
# Few Utility functions
def say(*lst):
"""
Print whithout going to new line
"""
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
"""
Generate a random number between low and high.
    decimals indicates the number of decimal places
"""
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
"""
Shuffle a list
"""
random.shuffle(lst)
return lst
class Decision(O):
"""
Class indicating Decision of a problem
"""
def __init__(self, name, low, high):
"""
@param name: Name of the decision
@param low: minimum value
@param high: maximum value
"""
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
"""
Class indicating Objective of a problem
"""
def __init__(self, name, do_minimize=True, low=0, high=1):
"""
@param name: Name of the objective
@param do_minimize: Flag indicating if objective has to be minimized or maximized
"""
O.__init__(self, name=name, do_minimize=do_minimize, low=low, high=high)
def normalize(self, val):
return (val - self.low)/(self.high - self.low)
class Point(O):
"""
Represents a member of the population
"""
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions[:])
new.objectives = self.objectives[:]
return new
class Problem(O):
"""
Class representing the cone problem.
"""
def __init__(self, decisions, objectives):
"""
Initialize Problem.
:param decisions - Metadata for Decisions
:param objectives - Metadata for Objectives
"""
O.__init__(self)
self.decisions = decisions
self.objectives = objectives
@staticmethod
def evaluate(point):
assert False
return point.objectives
@staticmethod
def is_valid(point):
return True
def generate_one(self, retries = 20):
        for _ in range(retries):
            point = Point([random_value(d.low, d.high) for d in self.decisions])
            if self.is_valid(point):
                return point
        raise RuntimeError("Exceeded max retries of %d" % retries)
"""
Explanation: The Generic Problem Class
Remember the Problem class we coded up for the GA workshop. Here we abstract it further so that it can be inherited by all the future classes. Go through these utility functions and classes before you proceed further.
End of explanation
"""
class POM3(Problem):
from pom3.pom3 import pom3 as pom3_helper
helper = pom3_helper()
def __init__(self):
"""
Initialize the POM3 classes
"""
names = ["Culture", "Criticality", "Criticality Modifier", "Initial Known",
"Inter-Dependency", "Dynamism", "Size", "Plan", "Team Size"]
lows = [0.1, 0.82, 2, 0.40, 1, 1, 0, 0, 1]
highs = [0.9, 1.20, 10, 0.70, 100, 50, 4, 5, 44]
# TODO 2: Use names, lows and highs defined above to code up decision
# and objective metadata for POM3.
decisions = [Decision(n, l, h) for n, l, h in zip(names, lows, highs)]
objectives = [Objective("Cost", True, 0, 10000), Objective("Score", False, 0, 1),
Objective("Completion", False, 0, 1), Objective("Idle", True, 0, 1)]
Problem.__init__(self, decisions, objectives)
@staticmethod
def evaluate(point):
if not point.objectives:
point.objectives = POM3.helper.simulate(point.decisions)
return point.objectives
pom3 = POM3()
one = pom3.generate_one()
print(POM3.evaluate(one))
"""
Explanation: Great. Now that the class and its basic methods are defined, let's extend it for the
POM3 model.
POM3 has multiple versions, but for this workshop we will code up the POM3A model. It has 9 decisions, defined as follows:
Culture in [0.1, 0.9]
Criticality in [0.82, 1.20]
Criticality Modifier in [2, 10]
Initially Known in [0.4, 0.7]
Inter-Dependency in [1, 100]
Dynamism in [1, 50]
Size in [0, 4]
Plan in [0, 5]
Team Size in [1, 44]
<img src="pom3.png"/>
The model has 4 objectives
* Cost in [0,10000] - Minimize
* Score in [0,1] - Maximize
* Completion in [0,1] - Maximize
* Idle in [0,1] - Minimize
End of explanation
"""
def populate(problem, size):
"""
Create a Point list of length size
"""
population = []
for _ in range(size):
population.append(problem.generate_one())
return population
def crossover(mom, dad):
"""
Create a new point which contains decisions from
the first half of mom and second half of dad
"""
n = len(mom.decisions)
return Point(mom.decisions[:n//2] + dad.decisions[n//2:])
def mutate(problem, point, mutation_rate=0.01):
"""
Iterate through all the decisions in the point
    and, if a random draw is less than the mutation rate,
    change the decision (randomly reset it between its min and max).
"""
for i, decision in enumerate(problem.decisions):
if random.random() < mutation_rate:
point.decisions[i] = random_value(decision.low, decision.high)
return point
def bdom(problem, one, two):
"""
    Return True if one dominates two,
    based on binary domination.
"""
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
for i, obj in enumerate(problem.objectives):
better = lt if obj.do_minimize else gt
if better(objs_one[i], objs_two[i]):
dominates = True
elif objs_one[i] != objs_two[i]:
return False
return dominates
def fitness(problem, population, point, dom_func):
"""
    Evaluate the fitness of a point as the number of members
    of the population it dominates. For example, if a point
    dominates 5 members of the population, its fitness is 5.
"""
return len([1 for another in population if dom_func(problem, point, another)])
def elitism(problem, population, retain_size, dom_func):
"""
Sort the population with respect to the fitness
of the points and return the top 'retain_size' points of the population
"""
fitnesses = []
for point in population:
fitnesses.append((fitness(problem, population, point, dom_func), point))
    population = [tup[1] for tup in sorted(fitnesses, key=lambda tup: tup[0], reverse=True)]
return population[:retain_size]
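To make the domination test concrete, here is a tiny self-contained sketch on raw objective tuples. Note that, unlike `bdom` above (which consults each objective's `do_minimize` flag), this stand-alone `dominates` assumes every objective is to be minimized:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective
    # and strictly better in at least one (all objectives minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

d1 = dominates((1, 2), (2, 3))   # better in both objectives
d2 = dominates((1, 3), (2, 2))   # a trade-off: neither point dominates
d3 = dominates((2, 2), (1, 3))
print(d1, d2, d3)  # True False False
```

The trade-off case is why multi-objective optimizers keep a frontier of mutually non-dominated points rather than a single best solution.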
"""
Explanation: Utility functions for genetic algorithms.
End of explanation
"""
def ga(pop_size = 100, gens = 250, dom_func=bdom):
problem = POM3()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size, dom_func)
gen += 1
print("")
return initial_population, population
"""
Explanation: Putting it all together and making the GA
End of explanation
"""
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[1] for i in initial_objs]
initial_y = [i[2] for i in initial_objs]
final_x = [i[1] for i in final_objs]
final_y = [i[2] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Score")
plt.xlabel("Completion")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga(gens=50)
plot_pareto(initial, final)
"""
Explanation: Visualize
Let's plot the initial population against the final frontier.
End of explanation
"""
|
betoesquivel/comment_summarization | Lab1 Text processing with python.ipynb | mit | import sklearn
import numpy as np
import matplotlib.pyplot as plt
data = np.array([[1,2], [2,3], [3,4], [4,5], [5,6]])
x = data[:,0]
y = data[:,1]
data, x, y
"""
Explanation: Basic usage of Sklearn
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df = 1)
content = ["How to format my hard disk", " Hard disk format problems "]
# fit_transform returns array of two rows, one per 'document'.
# each row has 7 elements, each being the number of times
# the corresponding feature occurs in that document.
X = vectorizer.fit_transform(content)
vectorizer.get_feature_names(), X.toarray()
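To make those 7-element rows less mysterious, here is a rough pure-Python sketch of what `fit_transform` computes above, using the same default token pattern (two or more word characters, lowercased):

```python
# Build a sorted vocabulary, then count term occurrences per document --
# a minimal sketch of CountVectorizer's bag-of-words representation.
import re
from collections import Counter

docs = ["How to format my hard disk", " Hard disk format problems "]

def tokenize(doc):
    return re.findall(r"\b\w\w+\b", doc.lower())

vocab = sorted({tok for doc in docs for tok in tokenize(doc)})
rows = [[Counter(tokenize(doc))[term] for term in vocab] for doc in docs]
print(vocab)
print(rows)
```

This reproduces the 7 features and the two count rows returned by the vectorizer above.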
"""
Explanation: Text processing with Scikit learn
We can use CountVectorizer to extract a bag of words representation from a collection of documents, using the SciKit-Learn method fit_transform. We will use a list of strings as documents.
End of explanation
"""
X.toarray()[0]
"""
Explanation: Array vector for the first document
End of explanation
"""
X.toarray()[1][vectorizer.get_feature_names().index('hard')]
"""
Explanation: Number of times word "hard" occurs
End of explanation
"""
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian',
'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train',
categories=categories, shuffle=True,
random_state=42)
"""
Explanation: Using the 20 Newsgroups dataset
We are going to fetch just some categories so that it doesn't take that long to download the docs.
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
train_counts = vectorizer.fit_transform(twenty_train.data)
"""
Explanation: Creating a CountVectorizer object
End of explanation
"""
vectorizer.vocabulary_.get(u'algorithm')
"""
Explanation: We can now look up the word algorithm in the vocabulary extracted from the subset of the 20 Newsgroups collection we are considering. Note that vocabulary_ maps each term to its feature index, not to its frequency.
End of explanation
"""
len(vectorizer.get_feature_names())
"""
Explanation: How many terms were extracted? Use get_feature_names().
End of explanation
"""
vectorizer = CountVectorizer(stop_words='english')
sorted(vectorizer.get_stop_words())[0:20]
"""
Explanation: CountVectorizer can do more preprocessing, such as stopword removal.
End of explanation
"""
import nltk
"""
Explanation: More preprocessing
For stemming and more advanced preprocessing, supplement SciKit Learn with another Python library, NLTK. Up next.
More advanced preprocessing with NLTK
NLTK is described in detail in a book by Bird, Klein and Loper available online:
http://www.nltk.org/book_1ed/ (for Python 2.7)
About NLTK
It is not the best
It is very easy to use
You should read the book linked above to get familiar with the package and with text preprocessing.
End of explanation
"""
s = nltk.stem.SnowballStemmer('english')
s.stem("cats"), s.stem("ran"), s.stem("jumped")
"""
Explanation: Create an English stemmer
http://www.nltk.org/howto/stem.html for general intro.
http://www.nltk.org/api/nltk.stem.html for more details (including languages covered).
End of explanation
"""
from nltk.tokenize import word_tokenize
text = word_tokenize("And now for something completely different")
nltk.pos_tag(text)
"""
Explanation: NLTK for text analytics
NERs
Sentiment analysis
Extracting information from social media.
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(stop_words="english")
analyze = vectorizer.build_analyzer()
analyze("John bought carrots and potatoes")
"""
Explanation: Integrating NLTK with SciKit's vectorizer
NLTK Stemmer
The stemmer can be used to stem documents before feeding into SciKit's vectorizer, thus obtaining a more compact index.
One way to do this is to define a new class StemmedCountVectorizer extending CountVectorizer by redefining the method build_analyzer() that handles preprocessing and tokenization.
http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
build_analyzer() takes a string as input and outputs a list of tokens.
End of explanation
"""
import nltk.stem
english_stemmer = nltk.stem.SnowballStemmer('english')
class StemmedCountVectorizer(CountVectorizer):
def build_analyzer(self):
analyzer=super(StemmedCountVectorizer, self).build_analyzer()
return lambda doc:(english_stemmer.stem(w) for w in analyzer(doc))
"""
Explanation: If we modify build_analyzer() to apply the NLTK stemmer to the output of default build_analyzer(), we get a version that does stemming as well:
End of explanation
"""
stem_vectorizer = StemmedCountVectorizer(min_df=1,
stop_words='english')
stem_analyze = stem_vectorizer.build_analyzer()
Y = stem_analyze("John bought carrots and potatoes")
[tok for tok in Y]
"""
Explanation: So now we can create an instance of this class:
End of explanation
"""
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian',
'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train',
categories=categories,
shuffle=True, random_state=42)
train_counts = stem_vectorizer.fit_transform(twenty_train.data)
len(stem_vectorizer.get_feature_names())
print(train_counts[:6])
"""
Explanation: Use this vectorizer to extract features
Compare this result to around 35,000 features we obtained using the unstemmed version.
End of explanation
"""
!ipython nbconvert --to script Lab1\ Text\ processing\ with\ python.ipynb
"""
Explanation: Notes
You should always experiment and see if it is good to use stemming with your problem set. It might not be the best thing to do.
For processing larger datasets, Python and SciKit-Learn become less effective, and more industrial-strength software is required. One example of such software is Apache SOLR, an open-source indexing package available from:
http://lucene.apache.org/solr/
It produces Lucene-style indices that can be used by text analytics packages such as Mahout.
Elastic http://www.elastic.co/
End of explanation
"""
|
d-k-b/udacity-deep-learning | embeddings/Skip-Grams-Solution.ipynb | mit | import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
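A quick numeric check of the subsampling formula $P(w_i) = 1 - \sqrt{t/f(w_i)}$ used above: with $t = 10^{-5}$, a word that makes up 1% of the corpus is dropped about 97% of the time, while a word exactly at the threshold frequency is never dropped.

```python
# Numeric check of P(w_i) = 1 - sqrt(t / f(w_i)) for two frequencies.
import numpy as np

t = 1e-5
p_common = 1 - np.sqrt(t / 0.01)   # a word making up 1% of the corpus
p_threshold = 1 - np.sqrt(t / t)   # a word exactly at the threshold frequency
print(round(p_common, 3), p_threshold)  # 0.968 0.0
```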
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is that probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
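A few property checks on the windowing logic (with `get_target` re-declared here so the snippet runs on its own): the centre word is never returned, and every target lies within `window_size` of the index, even at the edges of the list.

```python
# Self-contained property check of the random-window target sampler.
import numpy as np

def get_target(words, idx, window_size=5):
    R = np.random.randint(1, window_size + 1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    return list(set(words[start:idx] + words[idx + 1:stop + 1]))

words = list(range(100))
ok = True
for idx in (0, 7, 50, 99):
    for _ in range(20):
        targets = get_target(words, idx, window_size=5)
        ok = ok and idx not in targets                        # never the centre word
        ok = ok and all(abs(t - idx) <= 5 for t in targets)   # always inside the window
print(ok)  # True
```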
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
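A self-contained sanity check of the batching scheme (both helpers re-declared so the snippet runs on its own): only full batches are produced, and each batch pairs one input entry with every sampled target word.

```python
# Sanity check: 25 words with batch_size=10 gives exactly 2 full batches,
# and the input and target lists in each batch line up one-to-one.
import numpy as np

def get_target(words, idx, window_size=5):
    R = np.random.randint(1, window_size + 1)
    start = idx - R if (idx - R) > 0 else 0
    return list(set(words[start:idx] + words[idx + 1:idx + R + 1]))

def get_batches(words, batch_size, window_size=5):
    n_batches = len(words) // batch_size
    words = words[:n_batches * batch_size]   # drop the leftover partial batch
    for i in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[i:i + batch_size]
        for ii in range(len(batch)):
            targets = get_target(batch, ii, window_size)
            y.extend(targets)
            x.extend([batch[ii]] * len(targets))
        yield x, y

batches = list(get_batches(list(range(25)), batch_size=10))
print(len(batches))  # 2
same_length = all(len(x) == len(y) for x, y in batches)
print(same_length)   # True
```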
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
brian-rose/ClimateModeling_courseware | Lectures/Lecture10 -- Who needs spectral bands.ipynb | mit | # Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
"""
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 10: Who needs spectral bands? We do. Some baby steps...
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
End of explanation
"""
# Applying the above formula
eps = 0.586
print( 'Doubling a grey gas absorber would \
change the absorptivity from {:.3} \
to {:.3}'.format(eps, 2*eps - eps**2))
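Carrying that change in absorptivity through to a warming estimate, a small sketch using the sensitivities quoted in these notes (about +2.2 W m$^{-2}$ of forcing per 0.01 increase in $\epsilon$, from Lecture 7, and a net feedback of $-1.3$ W m$^{-2}$ K$^{-1}$), which reproduces the roughly 53 W m$^{-2}$ and 41 K figures discussed below:

```python
# Implied radiative forcing and equilibrium warming if CO2 were a grey gas,
# using the sensitivities quoted in these notes: about +2.2 W/m2 of forcing
# per 0.01 increase in epsilon, and a net feedback of -1.3 W/m2/K.
eps = 0.586
eps_doubled = 2 * eps - eps**2               # hypothetical doubling of a grey absorber
forcing = (eps_doubled - eps) / 0.01 * 2.2   # W/m2
ecs = forcing / 1.3                          # equilibrium climate sensitivity, K
print('Forcing: {:.1f} W/m2, ECS: {:.0f} K'.format(forcing, ecs))
```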
"""
Explanation: Contents
What if CO$_2$ actually behaved like a Grey Gas?
Another look at observed spectra
Water vapor changes under global warming
A simple water vapor parameterization
Modeling spectral bands with the climlab.BandRCModel process
<a id='section1'></a>
1. What if CO$_2$ actually behaved like a Grey Gas?
Suppose that CO$_2$ actually behaved as a grey gas. In other words, no spectral dependence in absorptivity.
If we then double the CO2 concentration in the atmosphere, we double the number of absorbers. This should imply that we also double the absorption cross-section:
$$ \kappa^\prime = 2 ~ \kappa $$
This would imply that we double the optical thickness of every layer:
$$ \Delta \tau^\prime = 2 \left( -\frac{\kappa}{g} \Delta p \right) = 2 ~ \Delta \tau$$
And since (from Lecture 9) the absorptivity / emissivity of each layer is
$$ \epsilon = 1 - \exp\big( - \Delta \tau \big) $$
the modified absorptivity is
$$ \epsilon^\prime = 1 - \exp\big( - 2\Delta \tau \big) = 1 - \left( \exp\big( - \Delta \tau \big)\right)^2 = 1 - (1-\epsilon)^2 $$
or simply
$$ \epsilon^\prime = 2 \epsilon - \epsilon^2 $$
(Note that $\epsilon^\prime = 2 \epsilon$ for very thin layers, for which $\epsilon$ is small).
What does our 2-layer analytical model then say about the radiative forcing?
Recall that we tuned the two-layer model with
$$ \epsilon = 0.586 $$
to get the observed OLR with observed temperatures.
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
from xarray.ufuncs import cos, deg2rad, log
# Disable interactive plotting (use explicit display calls to show figures)
plt.ioff()
# Open handles to the data files
datapath = "http://thredds.atmos.albany.edu:8080/thredds/dodsC/cesm/"
ctrl = xr.open_dataset(datapath + 'som_control/som_control.cam.h0.clim.nc', decode_times=False)
co2 = xr.open_dataset(datapath + 'som_2xCO2/som_2xCO2.cam.h0.clim.nc', decode_times=False)
# Plot cross-sections of the following anomalies under 2xCO2:
# - Temperature
# - Specific humidity
# - Relative humidity
fig, axes = plt.subplots(1,3, figsize=(16,6))
ax = axes[0]
CS = ax.contourf(ctrl.lat, ctrl.lev, (co2['T'] - ctrl['T']).mean(dim=('time','lon')),
levels=np.arange(-11,12,1), cmap=plt.cm.seismic)
ax.set_title('Temperature (K)')
fig.colorbar(CS, orientation='horizontal', ax=ax)
ax = axes[1]
CS = ax.contourf(ctrl.lat, ctrl.lev, (co2['Q'] - ctrl['Q']).mean(dim=('time','lon'))*1000,
levels=np.arange(-3,3.25,0.25), cmap=plt.cm.seismic)
ax.set_title('Specific humidity (g/kg)')
fig.colorbar(CS, orientation='horizontal', ax=ax)
ax = axes[2]
CS = ax.contourf(ctrl.lat, ctrl.lev, (co2['RELHUM'] - ctrl['RELHUM']).mean(dim=('time','lon')),
levels=np.arange(-11,12,1), cmap=plt.cm.seismic)
ax.set_title('Relative humidity (%)')
fig.colorbar(CS, orientation='horizontal', ax=ax)
for ax in axes:
ax.invert_yaxis()
ax.set_xticks([-90, -60, -30, 0, 30, 60, 90]);
ax.set_xlabel('Latitude')
ax.set_ylabel('Pressure')
fig.suptitle('Anomalies for 2xCO2 in CESM slab ocean simulations', fontsize=16);
fig
"""
Explanation: If CO2 behaved like a grey gas, doubling it would cause a huge increase in the absorptivity of each layer!
Back in Lecture 7
we worked out that the radiative forcing in this model (with the observed lapse rate) is about +2.2 W m$^{-2}$ for an increase of 0.01 in $\epsilon$.
This means that our hypothetical doubling of "grey CO$_2$" should yield a radiative forcing of 53.5 W m$^{-2}$.
This is an absolutely enormous number. Assuming a net climate feedback of -1.3 W m$^{-2}$ K$^{-1}$
(consistent with the AR5 ensemble)
would then give us an equilibrium climate sensitivity of 41 K.
Conclusions:
If CO2 did behave like a grey gas, we would be toast.
The Grey Gas model is insufficient for understanding radiative forcing and feedback.
<a id='section2'></a>
2. Another look at observed spectra
It's time to move away from the Grey Gas approximation and look more carefully at the actual observed spectra of solar and terrestrial radiation.
Observed solar spectra
The following figure shows observed spectra of solar radiation at TOA and at the surface, along with the theoretical Planck function for a blackbody at 5525 K.
<img src='../images/Solar_spectrum.png'>
This figure shows the solar radiation spectrum for direct light at both the top of the Earth's atmosphere and at sea level. The sun produces light with a distribution similar to what would be expected from a 5525 K (5250 °C) blackbody, which is approximately the sun's surface temperature. As light passes through the atmosphere, some is absorbed by gases with specific absorption bands. Additional light is redistributed by Rayleigh scattering, which is responsible for the atmosphere's blue color. These curves are based on the American Society for Testing and Materials (ASTM) Terrestrial Reference Spectra, which are standards adopted by the photovoltaics industry to ensure consistent test conditions and are similar to the light that could be expected in North America. Regions for ultraviolet, visible and infrared light are indicated.
Source: http://commons.wikimedia.org/wiki/File:Solar_spectrum_en.svg
The figure shows that the incident beam at TOA has the shape of a blackbody radiator.
By the time the beam arrives at the surface, it is strongly depleted at specific wavelengths.
Absorption by O$_3$ (ozone) depletes almost the entire ultraviolet spectrum.
Weaker absorption features, mostly due to H$_2$O, deplete some parts of the near-infrared.
Note that the depletion in the visible band is mostly due to scattering, which depletes the direct beam but contributes diffuse radiation (so we can still see when it's cloudy!)
Observed terrestrial spectra
This figure shows the Planck function for Earth's surface temperature compared with the spectrum observed from space.
<img src='../images/Terrestrial_spectrum.png'>
Source: https://www.e-education.psu.edu/earth103/node/671
Careful: I'm pretty sure what is plotted here is not the total observed spectrum, but rather the part of the emissions from the surface that actually makes it out to space.
As we know, the terrestrial beam from the surface is depleted by absorption by many greenhouse gases, but principally CO$_2$ and H$_2$O.
However there is a spectral band centered on 10 $\mu$m in which the greenhouse effect is very weak. This is the so-called window region in the spectrum.
Since absorption is so strong across most of the rest of the infrared spectrum, this window region is a key determinant of the overall greenhouse effect.
One very big shortcoming of the Grey Gas model: it ignores the window region
We would therefore like to start using a model that includes enough spectral information that it represents
the mostly strong CO2 absorption outside the window region
the weak absorption inside the window region
<a id='section3'></a>
3. Water vapor changes under global warming
Another big shortcoming of the Grey Gas model is that it cannot represent the water vapor feedback.
We have seen above that H$_2$O is an important absorber in both longwave and shortwave spectra.
We also know that the water vapor load in the atmosphere increases as the climate warms. The primary reason is that the saturation vapor pressure increases strongly with temperature.
Evidence from CESM simulations
Let's take a look at changes in the mean water vapor fields in the CESM model after a doubling of CO$_2$
End of explanation
"""
import climlab
from climlab import constants as const
"""
Explanation: What do you see here?
Where does the largest warming occur?
Where does the largest moistening occur?
In fact the specific humidity anomaly has roughly the same shape as the specific humidity field itself -- it is largest where the temperature is highest. This is a consequence of the Clausius-Clapeyron relation.
The relative humidity anomaly is
overall rather small (just a few percent)
Largest in cold places where the specific humidity is very small.
The smallness of the relative humidity change is a rather remarkable result.
This is not something we can derive from first principles. It is an emergent property of the GCMs. However it is a very robust feature of global warming simulations.
<a id='section4'></a>
4. A simple water vapor parameterization
A credible climate model needs a water vapor feedback
If relative humidity is nearly constant under global warming, and water vapor is a greenhouse gas, this implies a positive feedback that will amplify the warming for a given radiative forcing.
Thus far our simple models have ignored this process, and we have not been able to use them to assess the climate sensitivity.
To proceed towards more realistic models, we have two options:
Simulate all the evaporation, condensation and transport processes that determine the time-mean water vapor field (as is done in the CESM).
Parameterize the dependence of water vapor on temperature by insisting that relative humidity stays constant as the climate changes.
We will now explore this second option, so that we can continue to think of the global energy budget under climate change as a process occurring in a single column.
Manabe's constant relative humidity parameterization
We are going to adopt a parameterization first used in a very famous paper:
Manabe, S. and Wetherald, R. T. (1967). Thermal equilibrium of the atmosphere with a given distribution of relative humidity. J. Atmos. Sci., 24(3):241–259.
This paper was the first to give a really credible calculation of climate sensitivity to a doubling of CO2 by accounting for the known spectral properties of CO2 and H2O absorption, as well as the water vapor feedback!
The parameterization is very simple:
We assume that the relative humidity $r$ is a linear function of pressure $p$:
$$ r = r_s \left( \frac{p/p_s - 0.02}{1 - 0.02} \right) $$
where $p_s = 1000$ hPa is the surface pressure, and $r_s$ is a prescribed surface value of relative humidity. Manabe and Wetherald set $r_s = 0.77$, but we should consider this a tunable parameter in our parameterization.
Since this formula gives a negative number above 20 hPa, we also assume that the specific humidity has a minimum value of $0.005$ g/kg (a typical stratospheric value).
This formula is implemented in climlab.radiation.ManabeWaterVapor()
Using this parameterization, the surface and tropospheric specific humidity will always increase as the temperature increases.
<a id='section5'></a>
5. Modeling spectral bands with the climlab.BandRCModel process
Here is a brief introduction to the climlab.BandRCModel process.
This is a model that divides the spectrum into 7 distinct bands: three shortwave and four longwave.
As we will see, the process works much like the familiar climlab.RadiativeConvectiveModel.
About the spectra
Shortwave
The shortwave is divided into three channels:
Channel 0 is the Hartley and Huggins band (extreme UV, 200 - 340 nm, 1% of total flux, strong ozone absorption)
Channel 1 is the Chappuis band (450 - 800 nm, 27% of total flux, moderate ozone absorption)
Channel 2 is remaining radiation (72% of total flux, largely in the visible range, no ozone absorption)
Longwave
The longwave is divided into four bands:
Band 0 is the window region (between 8.5 and 11 $\mu$m), 17% of total flux.
Band 1 is the CO2 absorption channel (the band of strong absorption by CO2 around 15 $\mu$m), 15% of total flux
Band 2 is a weak water vapor absorption channel, 35% of total flux
Band 3 is a strong water vapor absorption channel, 33% of total flux
The longwave decomposition is not as easily related to specific wavelengths, as in reality there is a lot of overlap between H$_2$O and CO$_2$ absorption features (as well as absorption by other greenhouse gases such as CH$_4$ and N$_2$O that we are not representing).
Example usage of the spectral model
End of explanation
"""
col1 = climlab.BandRCModel()
print( col1)
"""
Explanation: First try a model with all default parameters. Usage is very similar to the familiar RadiativeConvectiveModel.
End of explanation
"""
col1.state
"""
Explanation: Check out the list of subprocesses.
We now have a process called H2O, in addition to things we've seen before.
The state variables are still just temperatures:
End of explanation
"""
col1.q
"""
Explanation: But the model has a new input field for specific humidity:
End of explanation
"""
col1.integrate_years(2)
# Check for energy balance
col1.ASR - col1.OLR
fig, ax = plt.subplots()
ax.plot( col1.Tatm, col1.lev, 'c-', label='default' )
ax.plot( col1.Ts, climlab.constants.ps, 'co', markersize=16 )
ax.invert_yaxis()
ax.set_xlabel('Temperature (K)', fontsize=16)
ax.set_ylabel('Pressure (hPa)', fontsize=16 )
ax.set_title('Temperature profiles', fontsize = 18)
ax.grid()
fig
"""
Explanation: The H2O process sets the specific humidity field at every timestep to a specified profile, determined by air temperatures. More on that below. For now, let's compute a radiative equilibrium state.
End of explanation
"""
col1.absorber_vmr
"""
Explanation: By default this model has convective adjustment. We can set the adjusted lapse rate by passing a parameter when we create the model.
The model currently has no ozone (so there is no stratosphere). Not very realistic!
About the radiatively active gases
The Band model is aware of three different absorbing gases: O3 (ozone), CO2, and H2O (water vapor). The abundances of these gases are stored in a dictionary of arrays as follows:
End of explanation
"""
ozone = xr.open_dataset( datapath + 'som_input/ozone_1.9x2.5_L26_2000clim_c091112.nc')
# Take the area-weighted global, annual average
weight_ozone = cos(deg2rad(ozone.lat)) / cos(deg2rad(ozone.lat)).mean(dim='lat')
O3_global = (ozone.O3 * weight_ozone).mean(dim=('lat','lon','time'))
print(O3_global)
fig, ax = plt.subplots()
ax.plot( O3_global*1E6, ozone.lev)
ax.invert_yaxis()
ax.set_xlabel('Ozone (ppm)', fontsize=16)
ax.set_ylabel('Pressure (hPa)', fontsize=16 )
ax.set_title('Global, annual mean ozone concentration', fontsize = 16);
fig
"""
Explanation: Ozone and CO2 are both specified in the model. The default, as you see above, is zero ozone, and constant (well-mixed) CO2 at a volume mixing ratio of 3.8E-4 or 380 ppm.
Water vapor is handled differently: it is determined by the model at each timestep. We make the following assumptions, following a classic paper on radiative-convective equilibrium by Manabe and Wetherald (J. Atmos. Sci. 1967):
the relative humidity just above the surface is fixed at 77% (can be changed of course... see the parameter col1.relative_humidity)
water vapor drops off linearly with pressure
there is a small specified amount of water vapor in the stratosphere.
Putting in some ozone
We need to provide some ozone data to the model in order to simulate a stratosphere. We will read in some ozone data just as we did in Lecture 8.
End of explanation
"""
# Create the column with appropriate vertical coordinate, surface albedo and convective adjustment
col2 = climlab.BandRCModel(lev=ozone.lev)
print( col2)
# Set the ozone mixing ratio
col2.absorber_vmr['O3'] = O3_global.values
# Run the model out to equilibrium!
col2.integrate_years(2.)
fig, ax = plt.subplots()
ax.plot( col1.Tatm, np.log(col1.lev/1000), 'c-', label='RCE' )
ax.plot( col1.Ts, 0, 'co', markersize=16 )
ax.plot(col2.Tatm, np.log(col2.lev/1000), 'r-', label='RCE O3' )
ax.plot(col2.Ts, 0, 'ro', markersize=16 )
ax.invert_yaxis()
ax.set_xlabel('Temperature (K)', fontsize=16)
ax.set_ylabel('log(Pressure)', fontsize=16 )
ax.set_title('Temperature profiles', fontsize = 18)
ax.grid(); ax.legend()
fig
"""
Explanation: We are going to create another instance of the model, this time using the same vertical coordinates as the ozone data.
End of explanation
"""
col3 = climlab.process_like(col2)
print( col3)
# Let's double CO2.
col3.absorber_vmr['CO2'] *= 2.
col3.compute_diagnostics()
print( 'The radiative forcing for doubling CO2 is %f W/m2.' % (col2.diagnostics['OLR'] - col3.diagnostics['OLR']))
col3.integrate_years(3)
col3.ASR - col3.OLR
print( 'The Equilibrium Climate Sensitivity is %f K.' % (col3.Ts - col2.Ts))
# An example with no ozone
col4 = climlab.process_like(col1)
print( col4)
col4.absorber_vmr['CO2'] *= 2.
col4.compute_diagnostics()
print( 'The radiative forcing for doubling CO2 is %f W/m2.' % (col1.OLR - col4.OLR))
col4.integrate_years(3.)
col4.ASR - col4.OLR
print( 'The Equilibrium Climate Sensitivity is %f K.' % (col4.Ts - col1.Ts))
"""
Explanation: Once we include ozone we get a well-defined stratosphere.
Things to consider / try:
Here we used the global annual mean Q = 341.3 W m$^{-2}$. We might want to consider latitudinal or seasonal variations in Q.
We also used the global annual mean ozone profile! Ozone varies tremendously in latitude and by season. That information is all contained in the ozone data file we opened above. We might explore the effects of those variations.
We can calculate climate sensitivity in this model by doubling the CO2 concentration and re-running out to the new equilibrium. Does the amount of ozone affect the climate sensitivity? (example below)
An important shortcoming of the model: there are no clouds! (that would be the next step in the hierarchy of column models)
Clouds would act both in the shortwave (increasing the albedo, cooling the climate) and in the longwave (greenhouse effect, warming the climate). Which effect is stronger depends on the vertical structure of the clouds (high or low clouds) and their optical properties (e.g. thin cirrus clouds are nearly transparent to solar radiation but are good longwave absorbers).
End of explanation
"""
%load_ext version_information
%version_information numpy, matplotlib, xarray, climlab
"""
Explanation: Interesting that the model is MORE sensitive when ozone is set to zero.
<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
Version information
End of explanation
"""
|
yl565/statsmodels | examples/notebooks/interactions_anova.ipynb | bsd-3-clause | %matplotlib inline
from __future__ import print_function
from statsmodels.compat import urlopen
import numpy as np
np.set_printoptions(precision=4, suppress=True)
import statsmodels.api as sm
import pandas as pd
pd.set_option("display.width", 100)
import matplotlib.pyplot as plt
from statsmodels.formula.api import ols
from statsmodels.graphics.api import interaction_plot, abline_plot
from statsmodels.stats.anova import anova_lm
try:
salary_table = pd.read_csv('salary.table')
except IOError:  # local file not available; note recent pandas can read the URL directly
url = 'http://stats191.stanford.edu/data/salary.table'
fh = urlopen(url)
salary_table = pd.read_table(fh)
salary_table.to_csv('salary.table')
E = salary_table.E
M = salary_table.M
X = salary_table.X
S = salary_table.S
"""
Explanation: Interactions and ANOVA
Note: This script is based heavily on Jonathan Taylor's class notes http://www.stanford.edu/class/stats191/interactions.html
Download and format data:
End of explanation
"""
plt.figure(figsize=(6,6))
symbols = ['D', '^']
colors = ['r', 'g', 'blue']
factor_groups = salary_table.groupby(['E','M'])
for values, group in factor_groups:
i,j = values
plt.scatter(group['X'], group['S'], marker=symbols[j], color=colors[i-1],
s=144)
plt.xlabel('Experience');
plt.ylabel('Salary');
"""
Explanation: Take a look at the data:
End of explanation
"""
formula = 'S ~ C(E) + C(M) + X'
lm = ols(formula, salary_table).fit()
print(lm.summary())
"""
Explanation: Fit a linear model:
End of explanation
"""
lm.model.exog[:5]
"""
Explanation: Have a look at the created design matrix:
End of explanation
"""
lm.model.data.orig_exog[:5]
"""
Explanation: Or since we initially passed in a DataFrame, we have a DataFrame available in
End of explanation
"""
lm.model.data.frame[:5]
"""
Explanation: We keep a reference to the original untouched data in
End of explanation
"""
infl = lm.get_influence()
print(infl.summary_table())
"""
Explanation: Influence statistics
End of explanation
"""
df_infl = infl.summary_frame()
df_infl[:5]
"""
Explanation: or get a dataframe
End of explanation
"""
resid = lm.resid
plt.figure(figsize=(6,6));
for values, group in factor_groups:
i,j = values
group_num = i*2 + j - 1 # for plotting purposes
x = [group_num] * len(group)
plt.scatter(x, resid[group.index], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('Group');
plt.ylabel('Residuals');
"""
Explanation: Now plot the residuals within the groups separately:
End of explanation
"""
interX_lm = ols("S ~ C(E) * X + C(M)", salary_table).fit()
print(interX_lm.summary())
"""
Explanation: Now we will test some interactions using anova or f_test
End of explanation
"""
from statsmodels.stats.api import anova_lm
table1 = anova_lm(lm, interX_lm)
print(table1)
interM_lm = ols("S ~ X + C(E)*C(M)", data=salary_table).fit()
print(interM_lm.summary())
table2 = anova_lm(lm, interM_lm)
print(table2)
"""
Explanation: Do an ANOVA check
End of explanation
"""
interM_lm.model.data.orig_exog[:5]
"""
Explanation: The design matrix as a DataFrame
End of explanation
"""
interM_lm.model.exog
interM_lm.model.exog_names
infl = interM_lm.get_influence()
resid = infl.resid_studentized_internal
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], resid[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X');
plt.ylabel('standardized resids');
"""
Explanation: The design matrix as an ndarray
End of explanation
"""
drop_idx = abs(resid).argmax()
print(drop_idx) # zero-based index
idx = salary_table.index.drop(drop_idx)
lm32 = ols('S ~ C(E) + X + C(M)', data=salary_table, subset=idx).fit()
print(lm32.summary())
print('\n')
interX_lm32 = ols('S ~ C(E) * X + C(M)', data=salary_table, subset=idx).fit()
print(interX_lm32.summary())
print('\n')
table3 = anova_lm(lm32, interX_lm32)
print(table3)
print('\n')
interM_lm32 = ols('S ~ X + C(E) * C(M)', data=salary_table, subset=idx).fit()
table4 = anova_lm(lm32, interM_lm32)
print(table4)
print('\n')
"""
Explanation: Looks like one observation is an outlier.
End of explanation
"""
resid = interM_lm32.get_influence().summary_frame()['standard_resid']
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], resid[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X[~[32]]');
plt.ylabel('standardized resids');
"""
Explanation: Replot the residuals
End of explanation
"""
lm_final = ols('S ~ X + C(E)*C(M)', data = salary_table.drop([drop_idx])).fit()
mf = lm_final.model.data.orig_exog
lstyle = ['-','--']
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], S[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
# drop NA because there is no idx 32 in the final model
plt.plot(mf.X[idx].dropna(), lm_final.fittedvalues[idx].dropna(),
ls=lstyle[j], color=colors[i-1])
plt.xlabel('Experience');
plt.ylabel('Salary');
"""
Explanation: Plot the fitted values
End of explanation
"""
U = S - X * interX_lm32.params['X']
plt.figure(figsize=(6,6))
interaction_plot(E, M, U, colors=['red','blue'], markers=['^','D'],
markersize=10, ax=plt.gca())
"""
Explanation: From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management, M, and education, E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction_plot.
End of explanation
"""
try:
jobtest_table = pd.read_table('jobtest.table')
except IOError:  # don't have the data locally; download it
url = 'http://stats191.stanford.edu/data/jobtest.table'
jobtest_table = pd.read_table(url)
factor_group = jobtest_table.groupby(['ETHN'])
fig, ax = plt.subplots(figsize=(6,6))
colors = ['purple', 'green']
markers = ['o', 'v']
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST');
ax.set_ylabel('JPERF');
min_lm = ols('JPERF ~ TEST', data=jobtest_table).fit()
print(min_lm.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST')
ax.set_ylabel('JPERF')
fig = abline_plot(model_results = min_lm, ax=ax)
min_lm2 = ols('JPERF ~ TEST + TEST:ETHN',
data=jobtest_table).fit()
print(min_lm2.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'] + min_lm2.params['TEST:ETHN'],
ax=ax, color='green');
min_lm3 = ols('JPERF ~ TEST + ETHN', data = jobtest_table).fit()
print(min_lm3.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm3.params['Intercept'],
slope = min_lm3.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm3.params['Intercept'] + min_lm3.params['ETHN'],
slope = min_lm3.params['TEST'], ax=ax, color='green');
min_lm4 = ols('JPERF ~ TEST * ETHN', data = jobtest_table).fit()
print(min_lm4.summary())
fig, ax = plt.subplots(figsize=(8,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm4.params['Intercept'],
slope = min_lm4.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm4.params['Intercept'] + min_lm4.params['ETHN'],
slope = min_lm4.params['TEST'] + min_lm4.params['TEST:ETHN'],
ax=ax, color='green');
# is there any effect of ETHN on slope or intercept?
table5 = anova_lm(min_lm, min_lm4)
print(table5)
# is there any effect of ETHN on intercept
table6 = anova_lm(min_lm, min_lm3)
print(table6)
# is there any effect of ETHN on slope
table7 = anova_lm(min_lm, min_lm2)
print(table7)
# is it just the slope or both?
table8 = anova_lm(min_lm2, min_lm4)
print(table8)
"""
Explanation: Minority Employment Data
End of explanation
"""
try:
rehab_table = pd.read_csv('rehab.table')
except IOError:  # local file not available; download it
url = 'http://stats191.stanford.edu/data/rehab.csv'
rehab_table = pd.read_table(url, delimiter=",")
rehab_table.to_csv('rehab.table')
fig, ax = plt.subplots(figsize=(8,6))
fig = rehab_table.boxplot('Time', 'Fitness', ax=ax, grid=False)
rehab_lm = ols('Time ~ C(Fitness)', data=rehab_table).fit()
table9 = anova_lm(rehab_lm)
print(table9)
print(rehab_lm.model.data.orig_exog)
print(rehab_lm.summary())
"""
Explanation: One-way ANOVA
End of explanation
"""
try:
kidney_table = pd.read_table('./kidney.table')
except IOError:  # local file not available; download it
url = 'http://stats191.stanford.edu/data/kidney.table'
kidney_table = pd.read_csv(url, delim_whitespace=True)
"""
Explanation: Two-way ANOVA
End of explanation
"""
kidney_table.head(10)
"""
Explanation: Explore the dataset
End of explanation
"""
kt = kidney_table
plt.figure(figsize=(8,6))
fig = interaction_plot(kt['Weight'], kt['Duration'], np.log(kt['Days']+1),
colors=['red', 'blue'], markers=['D','^'], ms=10, ax=plt.gca())
"""
Explanation: Balanced panel
End of explanation
"""
kidney_lm = ols('np.log(Days+1) ~ C(Duration) * C(Weight)', data=kt).fit()
table10 = anova_lm(kidney_lm)
print(anova_lm(ols('np.log(Days+1) ~ C(Duration) + C(Weight)',
data=kt).fit(), kidney_lm))
print(anova_lm(ols('np.log(Days+1) ~ C(Duration)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
print(anova_lm(ols('np.log(Days+1) ~ C(Weight)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
"""
Explanation: Objects in the calling namespace are available in the formula evaluation namespace
End of explanation
"""
sum_lm = ols('np.log(Days+1) ~ C(Duration, Sum) * C(Weight, Sum)',
data=kt).fit()
print(anova_lm(sum_lm))
print(anova_lm(sum_lm, typ=2))
print(anova_lm(sum_lm, typ=3))
nosum_lm = ols('np.log(Days+1) ~ C(Duration, Treatment) * C(Weight, Treatment)',
data=kt).fit()
print(anova_lm(nosum_lm))
print(anova_lm(nosum_lm, typ=2))
print(anova_lm(nosum_lm, typ=3))
"""
Explanation: Sum of squares
Illustrates the use of different types of sums of squares (I, II, III)
and how the Sum contrast can be used to produce the same output between
the 3.
Types I and II are equivalent under a balanced design.
Don't use Type III with a non-orthogonal contrast (i.e., Treatment)
End of explanation
"""
|
csdms/pymt | notebooks/gipl_and_ecsimplesnow.ipynb | mit | import pymt.models
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import matplotlib.colors as mcolors
from matplotlib.colors import LinearSegmentedColormap
sns.set(style='whitegrid', font_scale= 1.2)
"""
Explanation: Coupling GIPL and ECSimpleSnow models
Before you begin, install:
conda install -c conda-forge pymt pymt_gipl pymt_ecsimplesnow seaborn
End of explanation
"""
ec = pymt.models.ECSimpleSnow()
print(ec.name)
# List input and output variable names.
print(ec.output_var_names)
print(ec.input_var_names)
"""
Explanation: Load ECSimpleSnow module from PyMT
End of explanation
"""
gipl = pymt.models.GIPL()
print(gipl.name)
# List input and output variable names.
print(gipl.output_var_names)
print(gipl.input_var_names)
"""
Explanation: Load GIPL module from PyMT
End of explanation
"""
ec_defaults = ec.setup('.')
print(ec_defaults)
gipl_defaults = gipl.setup('.')
print(gipl_defaults)
ec.initialize('snow_model.cfg')
gipl.initialize('gipl_config.cfg')
# Get soil depth: [unit: m]
depth = gipl.get_grid_z(2)
n_depth = int(len(depth))
# Get the length of forcing data:
ntime = int(gipl.end_time)
# Define a variable to store soil temperature through the time period
tsoil = np.full((n_depth, ntime), np.nan)
print('Final soil temperatures will be ', tsoil.shape)
fig = plt.figure(figsize=[12,6])
ax2 = fig.add_subplot(2,3,1)
ax2.set_title('Air Temperature (Input)')
ax3 = fig.add_subplot(2,3,2)
ax3.set_title('Precipition (Input)')
ax4 = fig.add_subplot(2,3,4)
ax4.set_title('Snow Depth (EC Output)')
ax5 = fig.add_subplot(2,3,5)
ax5.set_title('Snow Density (EC Output)')
ax1 = fig.add_subplot(2,3,(3,6))
ax1.set_ylim([15,0])
ax1.set_xlim([-20,20])
ax1.set_xlabel('Soil Temperature ($^oC$)')
ax1.set_ylabel('Depth (m)')
ax1.plot([0,0],[15,0],'k--')
for i in np.arange(365):
ec.update() # Update Snow Model Once
# Get output from snow model
tair = ec.get_value('land_surface_air__temperature')
prec = ec.get_value('precipitation_mass_flux')
snd = ec.get_value('snowpack__depth', units='m')
rsn = ec.get_value('snowpack__mass-per-volume_density', units = 'g cm-3')
# Pass value to GIPL model
gipl.set_value('land_surface_air__temperature', tair)
gipl.set_value('snowpack__depth', snd)
    # Convert snow density (g/cm3) to thermal conductivity via an empirical quadratic relation
    gipl.set_value('snow__thermal_conductivity', rsn * rsn * 2.846)
gipl.update() # Update GIPL model Once
tsoil[:,i] = gipl.get_value('soil__temperature') # Save results to a matrix
ax1.plot(tsoil[depth>=0,i], depth[depth>=0],color = [0.7,0.7,0.7], alpha = 0.1)
ax2.scatter(i, tair, c = 'k')
ax3.scatter(i, prec, c = 'k')
ax4.scatter(i, snd , c = 'k')
ax5.scatter(i, rsn , c = 'k')
ax1.plot(tsoil[depth>=0,:].max(axis=1), depth[depth>=0], 'r', linewidth = 2, label = 'Max')
ax1.plot(tsoil[depth>=0,:].min(axis=1), depth[depth>=0], 'b', linewidth = 2, label = 'Min')
ax1.plot(tsoil[depth>=0,:].mean(axis=1), depth[depth>=0], 'k', linewidth = 2, label = 'Mean')
ax1.legend()
ax1.set_title('Ground Temperatures (GIPL output)')
ax2.set_xticks([])
ax3.set_xticks([])
fig = plt.figure(figsize=[9,4])
divnorm = mcolors.TwoSlopeNorm(vmin=-25., vcenter=0., vmax=10)
plt.contourf(np.arange(ntime), depth, tsoil, np.linspace(-25,10,15),
norm = divnorm,
cmap="RdBu_r", extend = 'both')
plt.ylim([5,0])
cb = plt.colorbar()
plt.xlabel('Day')
plt.ylabel('Depth (m)')
cb.ax.set_ylabel('Soil Temperature ($^oC$)')
plt.contour(np.arange(ntime), depth, tsoil, [0])  # zero-degree (freezing) isotherm
"""
Explanation: Call the setup method on both ECSimpleSnow and GIPL to get default configuration files and data.
End of explanation
"""
|
DJCordhose/ai | notebooks/ai/Play.ipynb | mit | terrain = [
["_", "R", "_", "_"],
["H", "_", "B", "_"],
["_", "_", "B", "_"],
["B", "_", "G", "_"]
]
"""
Explanation: Robot Run
The Game
In a certain terrain a Robot (R) plays against a Human player (H)
* Both Human and Robot try to reach a goal which is at the same distance from both of them
* Blocks (B) and both players block each other
End of explanation
"""
from copy import deepcopy
from math import sqrt, pow
robot_symbol = 'R'
robot_win_symbol = '*'
goal_symbol = 'G'
human_symbol = 'H'
human_win_symbol = '#'
blank_symbol = '_'
def field_contains(state, symbol):
for row in state:
for field in row:
if field == symbol:
return True
return False
def is_robot_win(state):
return field_contains(state, robot_win_symbol)
def is_human_win(state):
return field_contains(state, human_win_symbol)
def as_string(state):
s = ''
for row in state:
row_string = ''
for field in row:
row_string += field + ' '
s += row_string + '\n'
return s
def locate(state, what):
for row_index, row in enumerate(state):
for column_index, field in enumerate(row):
if field == what:
return (row_index, column_index)
def check_position(state, position):
max_row = len(state) - 1
max_column = len(state[0]) - 1
if position[0] < 0 or position[0] > max_row or position[1] < 0 or position[1] > max_column:
return False
symbol = state[position[0]][position[1]]
if symbol != blank_symbol and symbol != goal_symbol:
return False
return True
def player_moves(state, player_symbol):
player = locate(state, player_symbol)
left = (player[0], player[1] - 1)
right = (player[0], player[1] + 1)
up = (player[0] - 1, player[1])
down = (player[0] + 1, player[1])
valid_moves = [move for move in (left, right, down, up) if check_position(state, move)]
return valid_moves
def place_player(state, player, player_symbol, player_win_symbol):
old_player = locate(state, player_symbol)
new_state = deepcopy(state)
new_state[old_player[0]][old_player[1]] = blank_symbol
if new_state[player[0]][player[1]] == goal_symbol:
new_state[player[0]][player[1]] = player_win_symbol
else:
new_state[player[0]][player[1]] = player_symbol
return new_state
def expand(state, player_symbol, player_win_symbol):
valid_moves = player_moves(state, player_symbol)
new_states = [(position, place_player(state, position, player_symbol, player_win_symbol)) for position in valid_moves]
return new_states
def expand_robot(state):
return expand(state, robot_symbol, robot_win_symbol)
def expand_human(state):
return expand(state, human_symbol, human_win_symbol)
def distance(pos1, pos2):
if pos1 and pos2:
return sqrt(pow(pos1[0] - pos2[0], 2) + pow(pos1[1] - pos2[1], 2))
else:
return 0
def estimate_state(state):
goal_position = locate(state, goal_symbol)
robot_position = locate(state, robot_symbol)
human_position = locate(state, human_symbol)
robot_distance = distance(robot_position, goal_position)
human_distance = distance(human_position, goal_position)
estimated_value = human_distance - robot_distance
return estimated_value
"""
Explanation: Basic Game Playing Code
End of explanation
"""
# https://en.wikipedia.org/wiki/Depth-first_search
# 1 procedure DFS(G,v):
# 2 label v as discovered
# 3 for all edges from v to w in G.adjacentEdges(v) do
# 4 if vertex w is not labeled as discovered then
# 5 recursively call DFS(G,w)
def depth_first_search(state, max_depth=10, debug=False, closed_list=[], depth = 0, path=[]):
if as_string(state) in closed_list or depth > max_depth:
return None
if debug:
print('depth', depth)
print('closed_list', closed_list)
print('path', path)
print('state', as_string(state))
if is_robot_win(state):
return path
closed_list = closed_list + [as_string(state)]
for move, next_state in expand_robot(state):
new_path = path + [move]
res = depth_first_search(next_state, max_depth, debug, closed_list, depth + 1, new_path)
if res:
return res
"""
Explanation: Depth first search as a recursive solution
End of explanation
"""
terrain
depth_first_search(terrain)
"""
Explanation: This quite obviously is not the shortest path, but who cares, as long as your robot wins
End of explanation
"""
# https://en.wikipedia.org/wiki/Minimax
# robot is maximizer, human is minimizer
# note: these sentinel values shadow Python's built-in min and max functions
min = float('-inf')
max = float('inf')
def mini_max(state, is_robot_move=True, max_depth=10, debug=False, verbose=False, depth = 0):
if debug:
print('-----')
print('is_robot_move', is_robot_move)
print('depth', depth)
print('inspecting state')
print(as_string(state))
if is_robot_win(state):
if verbose:
print('-----')
print('robot win detected')
print('depth', depth)
print('state', state)
print('-----')
return (max, None)
if is_human_win(state):
if verbose:
print('-----')
print('human win detected')
print('depth', depth)
print('state', state)
print('-----')
return (min, None)
if depth == max_depth:
estimated_value = estimate_state(state)
if verbose:
print('max depth reached, estimation at edge {}'.format(estimated_value))
return (estimated_value, None)
if is_robot_move:
best_value = min
best_move = None
for move, next_state in expand_robot(state):
value_for_move, _ =\
mini_max(next_state, is_robot_move=False, max_depth=max_depth, debug=debug, verbose=verbose, depth = depth + 1)
if value_for_move > best_value:
best_value = value_for_move
best_move = next_state
return (best_value, best_move)
else:
best_value = max
best_move = None
for move, next_state in expand_human(state):
value_for_move, _, =\
mini_max(next_state, is_robot_move=True, max_depth=max_depth, debug=debug, verbose=verbose, depth = depth + 1)
if value_for_move < best_value:
best_value = value_for_move
best_move = next_state
return (best_value, best_move)
terrain
"""
Explanation: Minimax
Depth-first search alone is not good enough, because now we have an adversary
End of explanation
"""
mini_max(terrain)
mini_max(terrain, is_robot_move=False)
simple_terrain = [
["R", "_" ],
["_", "G"],
["H", "_"]
]
# after 3 moves in total (2 robot, 1 human) we have a win for robot
# mini_max(simple_terrain, max_depth = 1)
# mini_max(simple_terrain, max_depth = 2)
mini_max(simple_terrain, max_depth = 3, verbose=True)
"""
Explanation: It seems like whoever starts wins
End of explanation
"""
# https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning
def alpha_beta(state, alpha = min, beta = max, is_robot_move=True, max_depth=10, depth = 0, verbose=True, debug=False):
if debug:
print('-----')
print('is_robot_move', is_robot_move)
print('depth', depth)
print('inspecting state')
print(as_string(state))
if is_robot_win(state):
if verbose:
print('-----')
print('robot win detected')
print('depth', depth)
print('state', state)
print('-----')
return (max, None)
if is_human_win(state):
if verbose:
print('-----')
print('human win detected')
print('depth', depth)
print('state', state)
print('-----')
return (min, None)
if depth == max_depth:
estimated_value = estimate_state(state)
if verbose:
print('max depth reached, estimation at edge {}'.format(estimated_value))
return (estimated_value, None)
if is_robot_move:
best_value = min
best_move = None
for move, next_state in expand_robot(state):
value_for_move, _ =\
alpha_beta(next_state, is_robot_move=False, alpha = alpha, beta = beta, max_depth=max_depth, verbose=verbose, debug=debug, depth = depth + 1)
if value_for_move > best_value:
best_value = value_for_move
best_move = next_state
if best_value > alpha:
if debug:
print('adjusting alpha from {} to {}'.format(alpha, best_value))
alpha = best_value
if beta <= alpha:
if debug:
print('breaking, beta {} <= alpha {}'.format(beta, alpha))
break
return (best_value, best_move)
else:
best_value = max
best_move = None
for move, next_state in expand_human(state):
value_for_move, _, =\
alpha_beta(next_state, is_robot_move=True, alpha = alpha, beta = beta, max_depth=max_depth, verbose=verbose, debug=debug, depth = depth + 1)
if value_for_move < best_value:
best_value = value_for_move
best_move = next_state
if best_value < beta:
if debug:
print('adjusting beta from {} to {}'.format(beta, best_value))
beta = best_value
if beta <= alpha:
if debug:
print('breaking, beta {} <= alpha {}'.format(beta, alpha))
break
return (best_value, best_move)
mini_max(simple_terrain, max_depth = 4, verbose=True)
alpha_beta(simple_terrain, max_depth = 4, verbose=True)
%time mini_max(terrain, max_depth = 15, verbose=False)
%time alpha_beta(terrain, max_depth = 15, verbose=False)
%time alpha_beta(terrain, max_depth = 20, verbose=False)
%time alpha_beta(terrain, max_depth = 25, verbose=False)
%time alpha_beta(terrain, max_depth = 30, verbose=False)
simple_terrain
# both minimax and alpha-beta expand the same left side, but alpha-beta prunes the complete right side (see mini-max-tree.jpg)
mini_max(simple_terrain, max_depth = 3, verbose=True, debug=True)
alpha_beta(simple_terrain, max_depth = 3, verbose=True, debug=True)
"""
Explanation: Alpha Beta Pruning
Plain minimax still evaluates a lot of obviously pointless moves:
* we repeatedly check for a robot win, even in branches where the outcome is already decided
* if we pruned those branches, we could spend the time on more promising moves instead
* this of course only pays off in larger mazes
End of explanation
"""
# tlhr/plumology: examples/example.ipynb (MIT license)
from plumology import vis, util, io
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: PLUMOLOGY
vis: Visualization and plotting functions
util: Various utilities and calculation functions
io: Functions to read certain output files and an HDF interface
End of explanation
"""
data = io.read_plumed('data.dat')
data.head()
"""
Explanation: Reading PLUMED output
We read a file in PLUMED output format:
End of explanation
"""
data = io.read_plumed('data.dat', columns=[r'p.i\d', 'peplen'], step=10)
data.head()
"""
Explanation: We can also specify certain columns using regular expressions, and also specify the stepping:
End of explanation
"""
hills = io.read_all_hills(['HILLS.0', 'HILLS.1'])
"""
Explanation: Let's read some MetaD HILLS files:
End of explanation
"""
hills.head()
"""
Explanation: The separate files are horizontally concatenated into one dataframe:
End of explanation
"""
dist, ranges = util.dist1D(data, nbins=50)
_ = vis.dist1D(dist, ranges)
"""
Explanation: Analysis
Let's compute 1D histograms of our collective variables:
End of explanation
"""
sm_data = io.read_plumed('colvar-red1.dat')
"""
Explanation: We also have a SketchMap representation of our trajectory and can visualize it as a free-energy surface:
End of explanation
"""
clipped_data = util.clip(sm_data, ranges={'cv1': (-50, 50), 'cv2': (-50, 50)})
edges = util.dist1D(clipped_data, ret='edges')
dist = util.dist2D(clipped_data, nbins=50, weight_name='ww')
fes = util.free_energy(dist, kbt=2.49)
_ = vis.dist2D(fes, edges)
"""
Explanation: We should probably clip it to a reasonable range first:
End of explanation
"""
# metpy/MetPy: v0.9/_downloads/d02fda82caa4290e31f980126221b2a4/Wind_SLP_Interpolation.ipynb (BSD-3-Clause license)
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from matplotlib.colors import BoundaryNorm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from metpy.calc import wind_components
from metpy.cbook import get_test_data
from metpy.interpolate import interpolate_to_grid, remove_nan_observations
from metpy.plots import add_metpy_logo
from metpy.units import units
to_proj = ccrs.AlbersEqualArea(central_longitude=-97., central_latitude=38.)
"""
Explanation: Wind and Sea Level Pressure Interpolation
Interpolate sea level pressure, as well as wind component data,
to make a consistent looking analysis, featuring contours of pressure and wind barbs.
End of explanation
"""
with get_test_data('station_data.txt') as f:
data = pd.read_csv(f, header=0, usecols=(2, 3, 4, 5, 18, 19),
names=['latitude', 'longitude', 'slp', 'temperature', 'wind_dir',
'wind_speed'],
na_values=-99999)
"""
Explanation: Read in data
End of explanation
"""
lon = data['longitude'].values
lat = data['latitude'].values
xp, yp, _ = to_proj.transform_points(ccrs.Geodetic(), lon, lat).T
"""
Explanation: Project the lon/lat locations to our final projection
End of explanation
"""
x_masked, y_masked, pres = remove_nan_observations(xp, yp, data['slp'].values)
"""
Explanation: Remove all missing data from pressure
End of explanation
"""
slpgridx, slpgridy, slp = interpolate_to_grid(x_masked, y_masked, pres, interp_type='cressman',
minimum_neighbors=1, search_radius=400000,
hres=100000)
"""
Explanation: Interpolate pressure using Cressman interpolation
End of explanation
"""
wind_speed = (data['wind_speed'].values * units('m/s')).to('knots')
wind_dir = data['wind_dir'].values * units.degree
good_indices = np.where((~np.isnan(wind_dir)) & (~np.isnan(wind_speed)))
x_masked = xp[good_indices]
y_masked = yp[good_indices]
wind_speed = wind_speed[good_indices]
wind_dir = wind_dir[good_indices]
"""
Explanation: Get wind information and mask where either speed or direction is unavailable
End of explanation
"""
u, v = wind_components(wind_speed, wind_dir)
windgridx, windgridy, uwind = interpolate_to_grid(x_masked, y_masked, np.array(u),
interp_type='cressman', search_radius=400000,
hres=100000)
_, _, vwind = interpolate_to_grid(x_masked, y_masked, np.array(v), interp_type='cressman',
search_radius=400000, hres=100000)
"""
Explanation: Calculate u and v components of wind and then interpolate both.
Both will have the same underlying grid so throw away grid returned from v interpolation.
End of explanation
"""
x_masked, y_masked, t = remove_nan_observations(xp, yp, data['temperature'].values)
tempx, tempy, temp = interpolate_to_grid(x_masked, y_masked, t, interp_type='cressman',
minimum_neighbors=3, search_radius=400000, hres=35000)
temp = np.ma.masked_where(np.isnan(temp), temp)
"""
Explanation: Get temperature information
End of explanation
"""
levels = list(range(-20, 20, 1))
cmap = plt.get_cmap('viridis')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 360, 120, size='large')
view = fig.add_subplot(1, 1, 1, projection=to_proj)
view._hold = True # Work-around for CartoPy 0.16/Matplotlib 3.0.0 incompatibility
view.set_extent([-120, -70, 20, 50])
view.add_feature(cfeature.STATES.with_scale('50m'))
view.add_feature(cfeature.OCEAN)
view.add_feature(cfeature.COASTLINE.with_scale('50m'))
view.add_feature(cfeature.BORDERS, linestyle=':')
cs = view.contour(slpgridx, slpgridy, slp, colors='k', levels=list(range(990, 1034, 4)))
view.clabel(cs, inline=1, fontsize=12, fmt='%i')
mmb = view.pcolormesh(tempx, tempy, temp, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0.02, boundaries=levels)
view.barbs(windgridx, windgridy, uwind, vwind, alpha=.4, length=5)
view.set_title('Surface Temperature (shaded), SLP, and Wind.')
plt.show()
"""
Explanation: Set up the map and plot the interpolated grids appropriately.
End of explanation
"""
# letsgoexploring/linearsolve-package: examples/cia_model.ipynb (MIT license)
# Import numpy, pandas, linearsolve, scipy.optimize, matplotlib.pyplot
import numpy as np
import pandas as pd
import linearsolve as ls
from scipy.optimize import root,fsolve,broyden1,broyden2
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
"""
Explanation: A Cash-in-Advance Model
Replicate CIA model from Chapter 4 of Monetary Theory and Policy, 2nd edition by Carl Walsh.
End of explanation
"""
alpha = 0.36
beta = 0.989
delta = 0.019
eta = 1
psi = 1.34
sigma = 2
A = 1
rhoa = 0.95
gamma = 0.8
phi=0.5
"""
Explanation: Set parameters
End of explanation
"""
r_ss = 1/beta
yk_ss= 1/alpha*(r_ss-1+delta)
ck_ss = yk_ss-delta
def func(n):
    '''Function to compute steady-state labor'''
return (1-alpha)/psi*beta*yk_ss**((sigma-alpha)/(1-alpha))*ck_ss**(-sigma) - (1-n)**-eta*n**sigma
n_ss = root(func,0.3)['x'][0]
nk_ss = (yk_ss)**(1/(1-alpha))
k_ss = n_ss/nk_ss
y_ss = yk_ss*k_ss
c_ss = ck_ss*k_ss
m_ss = c_ss
a_ss = 1
u_ss = 1
pi_ss = 1
lam_ss = beta*c_ss**-sigma
mu_ss = (1-beta)*c_ss**-sigma
# Store steady state values in a list
ss = [a_ss,u_ss,m_ss,k_ss,pi_ss,r_ss,n_ss,c_ss,lam_ss,mu_ss,y_ss]
# Load parameter values into a Pandas Series
parameters = pd.Series({
'alpha':alpha,
'beta':beta,
'delta':delta,
'eta':eta,
'psi':psi,
'sigma':sigma,
'rhoa':rhoa,
'gamma':gamma,
'phi':phi,
'n_ss':n_ss,
'yk_ss':yk_ss,
'ck_ss':ck_ss
})
"""
Explanation: Compute exact steady state
End of explanation
"""
# Define function to compute equilibrium conditions
def equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Household Euler equation
foc1 = p.alpha*cur.k+(1-p.alpha)*cur.n + fwd.a - cur.y
foc2 = p.ck_ss*fwd.m + fwd.k - (1-p.delta)*cur.k - p.yk_ss*cur.y
foc3 = p.alpha*p.yk_ss*(fwd.y - fwd.k) - cur.r
foc4 = fwd.lam + cur.r - cur.lam
foc5 = (1+p.eta*p.n_ss/(1-p.n_ss))*cur.n - cur.y - cur.lam
foc6 = cur.r + fwd.pi - cur.rn
foc7 = -p.sigma*fwd.maux-fwd.pi - cur.lam
foc8 = cur.m-cur.pi+cur.u - fwd.m
foc9 = cur.maux - fwd.m
foc10= p.gamma*cur.u+p.phi*fwd.a - fwd.u
foc11= p.rhoa*cur.a - fwd.a
# Stack equilibrium conditions into a numpy array
return np.array([
foc1,
foc2,
foc3,
foc4,
foc5,
foc6,
foc7,
foc8,
foc9,
foc10,
foc11,
])
# Initialize the model
model = ls.model(equations = equations,
n_states=4,
n_exo_states=3,
var_names=['a', 'u', 'm', 'k', 'lam', 'pi', 'rn', 'r', 'n', 'y','maux'],
parameters = parameters)
# Compute the steady state numerically
guess = 0*np.array([1,1,10,10,1,1,0.5,2,1,1,1])
model.compute_ss(guess,method='fsolve')
# Construct figure and axes
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
# Iterate over different degrees of persistence for money growth shock
for gamma in [0.5,0.8]:
model.parameters['gamma'] = gamma
# Solve the model
model.approximate_and_solve(log_linear=False)
# Compute impulse responses and plot
model.impulse(T=17,t0=1,shocks=None,percent=True)
# Plot
y = model.irs['e_u']['y']
n = model.irs['e_u']['n']
rn = model.irs['e_u']['rn']
pi = model.irs['e_u']['pi']
tme=rn.index
ax1.plot(tme,y,lw=5,alpha=0.5,label=r'y ($\gamma='+str(gamma)+'$)')
ax1.plot(tme,n,'--',lw=5,alpha=0.5,label=r'n ($\gamma='+str(gamma)+'$)')
ax1.grid(True)
ax1.legend(loc='lower right')
ax2.plot(tme,rn,lw=5,alpha=0.5,label=r'Rn ($\gamma='+str(gamma)+'$)')
ax2.plot(tme,pi,'--',lw=5,alpha=0.5,label=r'$\pi$ ($\gamma='+str(gamma)+'$)')
ax2.grid(True)
ax2.legend()
"""
Explanation: Linear model
Solve and simulate the log-linearized model.
End of explanation
"""
# Define function to compute equilibrium conditions
def equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Household Euler equation
foc_1 = cur.a**p.rhoa - fwd.a
foc_2 = cur.u**p.gamma*cur.a**p.phi - fwd.u
foc_3 = cur.lam+cur.mu - cur.c**-p.sigma
foc_4 = cur.lam*(1-p.alpha)*cur.y/cur.n - p.psi*(1-cur.n)**-p.eta
foc_5 = p.beta*(fwd.lam*cur.Rn)/fwd.pi - cur.lam
foc_6 = p.beta*(fwd.mu+fwd.lam)/fwd.pi - cur.lam
foc_7 = p.beta*(fwd.lam*(p.alpha*fwd.y/fwd.k+1-p.delta)) - cur.lam
foc_8 = cur.a*cur.k**alpha*cur.n**(1-p.alpha) - cur.y
foc_9 = cur.c+fwd.k-(1-p.delta)*cur.k - cur.y
foc_10 = cur.m/cur.pi*cur.u - fwd.m
foc_11 = fwd.m - cur.c
# Stack equilibrium conditions into a numpy array
return np.array([
foc_1,
foc_2,
foc_3,
foc_4,
foc_5,
foc_6,
foc_7,
foc_8,
foc_9,
foc_10,
foc_11
])
# Initialize the model
varNames=['a','u','m','k','pi','Rn','n','c','lam','mu','y']
parameters = parameters[['alpha','beta','delta','eta','psi','sigma','rhoa','gamma','phi']]
model = ls.model(equations = equations,
n_states=4,
n_exo_states=3,
var_names=varNames,
parameters = parameters)
# Set the steady state using exact values calculated above
model.set_ss(ss)
# Construct figure and axes
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
# Iterate over different degrees of persistence for money growth shock
for gamma in [0.5,0.8]:
model.parameters['gamma'] = gamma
# Find the log-linear approximation around the non-stochastic steady state
model.approximate_and_solve()
# Compute impulse responses and plot
model.impulse(T=17,t0=1,shocks=None,percent=True)
# Plot
y = model.irs['e_u']['y']
n = model.irs['e_u']['n']
rn = model.irs['e_u']['Rn']
pi = model.irs['e_u']['pi']
tme=rn.index
ax1.plot(tme,y,lw=5,alpha=0.5,label='y ($\gamma='+str(gamma)+'$)')
ax1.plot(tme,n,'--',lw=5,alpha=0.5,label='n ($\gamma='+str(gamma)+'$)')
ax1.grid(True)
ax1.legend(loc='lower right')
ax2.plot(tme,rn,lw=5,alpha=0.5,label='Rn ($\gamma='+str(gamma)+'$)')
ax2.plot(tme,pi,'--',lw=5,alpha=0.5,label='$\pi$ ($\gamma='+str(gamma)+'$)')
ax2.grid(True)
ax2.legend()
"""
Explanation: Nonlinear model
Approximate (log-linearize), solve, and simulate the nonlinear model.
End of explanation
"""
# mne-tools/mne-tools.github.io: 0.22/_downloads/1935e973eb220e31cb4a6a6541231eb1/plot_background_statistics.ipynb (BSD-3-Clause license)
# Authors: Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
from functools import partial
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa, analysis:ignore
import mne
from mne.stats import (ttest_1samp_no_p, bonferroni_correction, fdr_correction,
permutation_t_test, permutation_cluster_1samp_test)
print(__doc__)
"""
Explanation: Statistical inference
Here we will briefly cover multiple concepts of inferential statistics in an
introductory manner, and demonstrate how to use some MNE statistical functions.
End of explanation
"""
width = 40
n_subjects = 10
signal_mean = 100
signal_sd = 100
noise_sd = 0.01
gaussian_sd = 5
sigma = 1e-3 # sigma for the "hat" method
n_permutations = 'all' # run an exact test
n_src = width * width
# For each "subject", make a smoothed noisy signal with a centered peak
rng = np.random.RandomState(2)
X = noise_sd * rng.randn(n_subjects, width, width)
# Add a signal at the center
X[:, width // 2, width // 2] = signal_mean + rng.randn(n_subjects) * signal_sd
# Spatially smooth with a 2D Gaussian kernel
size = width // 2 - 1
gaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(gaussian_sd ** 2)))
for si in range(X.shape[0]):
for ri in range(X.shape[1]):
X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, 'same')
for ci in range(X.shape[2]):
X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, 'same')
"""
Explanation: Hypothesis testing
Null hypothesis
^^^^^^^^^^^^^^^
From Wikipedia <https://en.wikipedia.org/wiki/Null_hypothesis>__:
In inferential statistics, a general statement or default position that
there is no relationship between two measured phenomena, or no
association among groups.
We typically want to reject a null hypothesis with
some probability (e.g., p < 0.05). This probability is also called the
significance level $\alpha$.
To think about what this means, let's follow the illustrative example from
[1]_ and construct a toy dataset consisting of a 40 x 40 square with a
"signal" present in the center with white noise added and a Gaussian
smoothing kernel applied.
End of explanation
"""
fig, ax = plt.subplots()
ax.imshow(X.mean(0), cmap='inferno')
ax.set(xticks=[], yticks=[], title="Data averaged over subjects")
"""
Explanation: The data averaged over all subjects looks like this:
End of explanation
"""
titles = ['t']
out = stats.ttest_1samp(X, 0, axis=0)
ts = [out[0]]
ps = [out[1]]
mccs = [False] # these are not multiple-comparisons corrected
def plot_t_p(t, p, title, mcc, axes=None):
if axes is None:
fig = plt.figure(figsize=(6, 3))
axes = [fig.add_subplot(121, projection='3d'), fig.add_subplot(122)]
show = True
else:
show = False
p_lims = [0.1, 0.001]
t_lims = -stats.distributions.t.ppf(p_lims, n_subjects - 1)
p_lims = [-np.log10(p) for p in p_lims]
# t plot
x, y = np.mgrid[0:width, 0:width]
surf = axes[0].plot_surface(x, y, np.reshape(t, (width, width)),
rstride=1, cstride=1, linewidth=0,
vmin=t_lims[0], vmax=t_lims[1], cmap='viridis')
axes[0].set(xticks=[], yticks=[], zticks=[],
xlim=[0, width - 1], ylim=[0, width - 1])
axes[0].view_init(30, 15)
cbar = plt.colorbar(ax=axes[0], shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025, mappable=surf)
cbar.set_ticks(t_lims)
cbar.set_ticklabels(['%0.1f' % t_lim for t_lim in t_lims])
cbar.set_label('t-value')
cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
if not show:
axes[0].set(title=title)
if mcc:
axes[0].title.set_weight('bold')
# p plot
use_p = -np.log10(np.reshape(np.maximum(p, 1e-5), (width, width)))
img = axes[1].imshow(use_p, cmap='inferno', vmin=p_lims[0], vmax=p_lims[1],
interpolation='nearest')
axes[1].set(xticks=[], yticks=[])
cbar = plt.colorbar(ax=axes[1], shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025, mappable=img)
cbar.set_ticks(p_lims)
cbar.set_ticklabels(['%0.1f' % p_lim for p_lim in p_lims])
cbar.set_label(r'$-\log_{10}(p)$')
cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
if show:
text = fig.suptitle(title)
if mcc:
text.set_weight('bold')
plt.subplots_adjust(0, 0.05, 1, 0.9, wspace=0, hspace=0)
mne.viz.utils.plt_show()
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: In this case, a null hypothesis we could test for each voxel is:
There is no difference between the mean value and zero
($H_0 \colon \mu = 0$).
The alternative hypothesis, then, is that the voxel has a non-zero mean
($H_1 \colon \mu \neq 0$).
This is a two-tailed test because the mean could be less than
or greater than zero, whereas a one-tailed test would test only one of
these possibilities, i.e. $H_1 \colon \mu \geq 0$ or
$H_1 \colon \mu \leq 0$.
<div class="alert alert-info"><h4>Note</h4><p>Here we will refer to each spatial location as a "voxel".
In general, though, it could be any sort of data value,
including cortical vertex at a specific time, pixel in a
time-frequency decomposition, etc.</p></div>
Parametric tests
Let's start with a paired t-test, which is a standard test
for differences in paired samples. Mathematically, it is equivalent
to a 1-sample t-test on the difference between the samples in each condition.
The paired t-test is parametric
because it assumes that the underlying sample distribution is Gaussian, and
is only valid in this case. This happens to be satisfied by our toy dataset,
but is not always satisfied for neuroimaging data.
In the context of our toy dataset, which has many voxels
($40 \cdot 40 = 1600$), applying the paired t-test is called a
mass-univariate approach as it treats each voxel independently.
End of explanation
"""
ts.append(ttest_1samp_no_p(X, sigma=sigma))
ps.append(stats.distributions.t.sf(np.abs(ts[-1]), len(X) - 1) * 2)
titles.append(r'$\mathrm{t_{hat}}$')
mccs.append(False)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: "Hat" variance adjustment
The "hat" technique regularizes the variance values used in the t-test
calculation [1]_ to compensate for implausibly small variances.
End of explanation
"""
# Here we have to do a bit of gymnastics to get our function to do
# a permutation test without correcting for multiple comparisons:
X.shape = (n_subjects, n_src) # flatten the array for simplicity
titles.append('Permutation')
ts.append(np.zeros(width * width))
ps.append(np.zeros(width * width))
mccs.append(False)
for ii in range(n_src):
ts[-1][ii], ps[-1][ii] = permutation_t_test(X[:, [ii]], verbose=False)[:2]
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: Non-parametric tests
Instead of assuming an underlying Gaussian distribution, we could instead
use a non-parametric resampling method. In the case of a paired t-test
between two conditions A and B, which is mathematically equivalent to a
one-sample t-test between the difference in the conditions A-B, under the
null hypothesis we have the principle of exchangeability. This means
that, if the null is true, we can exchange conditions and not change
the distribution of the test statistic.
When using a paired t-test, exchangeability thus means that we can flip the
signs of the difference between A and B. Therefore, we can construct the
null distribution values for each voxel by taking random subsets of
samples (subjects), flipping the sign of their difference, and recording the
absolute value of the resulting statistic (we record the absolute value
because we conduct a two-tailed test). The absolute value of the statistic
evaluated on the veridical data can then be compared to this distribution,
and the p-value is simply the proportion of null distribution values that
are smaller.
<div class="alert alert-danger"><h4>Warning</h4><p>In the case of a true one-sample t-test, i.e. analyzing a single
condition rather than the difference between two conditions,
it is not clear where/how exchangeability applies; see
`this FieldTrip discussion <ft_exch_>`_.</p></div>
In the case where n_permutations is large enough (or "all") so
that the complete set of unique resampling exchanges can be done
(which is $2^{N_{samp}}-1$ for a one-tailed and
$2^{N_{samp}-1}-1$ for a two-tailed test, not counting the
veridical distribution), instead of randomly exchanging conditions
the null is formed from using all possible exchanges. This is known
as a permutation test (or exact test).
End of explanation
"""
N = np.arange(1, 80)
alpha = 0.05
p_type_I = 1 - (1 - alpha) ** N
fig, ax = plt.subplots(figsize=(4, 3))
ax.scatter(N, p_type_I, 3)
ax.set(xlim=N[[0, -1]], ylim=[0, 1], xlabel=r'$N_{\mathrm{test}}$',
ylabel=u'Probability of at least\none type I error')
ax.grid(True)
fig.tight_layout()
fig.show()
"""
Explanation: Multiple comparisons
So far, we have done no correction for multiple comparisons. This is
potentially problematic for these data because there are
$40 \cdot 40 = 1600$ tests being performed. If we use a threshold
p < 0.05 for each individual test, we would expect many voxels to be declared
significant even if there were no true effect. In other words, we would make
many type I errors (adapted from here <errors_>_):
.. rst-class:: skinnytable
+----------+--------+------------------+------------------+
|          |        | Null hypothesis                     |
|          |        +------------------+------------------+
|          |        | True             | False            |
+==========+========+==================+==================+
|          |        | Type I error     | Correct          |
|          | Yes    | False positive   | True positive    |
+ Reject   +--------+------------------+------------------+
|          |        | Correct          | Type II error    |
|          | No     | True Negative    | False negative   |
+----------+--------+------------------+------------------+
To see why, consider a standard $\alpha = 0.05$.
For a single test, our probability of making a type I error is 0.05.
The probability of making at least one type I error in
$N_{\mathrm{test}}$ independent tests is then given by
$1 - (1 - \alpha)^{N_{\mathrm{test}}}$:
End of explanation
"""
titles.append('Bonferroni')
ts.append(ts[-1])
ps.append(bonferroni_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: To combat this problem, several methods exist. Typically these
provide control over either one of the following two measures:
Familywise error rate (FWER) <fwer_>_
The probability of making one or more type I errors:
.. math::
\mathrm{P}(N_{\mathrm{type\ I}} \geq 1 \mid H_0)
False discovery rate (FDR) <fdr_>_
The expected proportion of rejected null hypotheses that are
actually true:
.. math::
\mathrm{E}(\frac{N_{\mathrm{type\ I}}}{N_{\mathrm{reject}}}
\mid N_{\mathrm{reject}} > 0) \cdot
\mathrm{P}(N_{\mathrm{reject}} > 0 \mid H_0)
We cover some techniques that control FWER and FDR below.
Bonferroni correction
Perhaps the simplest way to deal with multiple comparisons, Bonferroni
correction <https://en.wikipedia.org/wiki/Bonferroni_correction>__
conservatively multiplies the p-values by the number of comparisons to
control the FWER.
End of explanation
"""
titles.append('FDR')
ts.append(ts[-1])
ps.append(fdr_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: False discovery rate (FDR) correction
Typically FDR is performed with the Benjamini-Hochberg procedure, which
is less restrictive than Bonferroni correction for large numbers of
comparisons (fewer type II errors), but provides less strict control of type
I errors.
End of explanation
"""
titles.append(r'$\mathbf{Perm_{max}}$')
out = permutation_t_test(X, verbose=False)[:2]
ts.append(out[0])
ps.append(out[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: Non-parametric resampling test with a maximum statistic
Non-parametric resampling tests can also be used to correct for multiple
comparisons. In its simplest form, we again do permutations using
exchangeability under the null hypothesis, but this time we take the
maximum statistic across all voxels in each permutation to form the
null distribution. The p-value for each voxel from the veridical data
is then given by the proportion of null distribution values
that were smaller.
This method has two important features:
It controls FWER.
It is non-parametric. Even though our initial test statistic
(here a 1-sample t-test) is parametric, the null
distribution for the null hypothesis rejection (the mean value across
subjects is indistinguishable from zero) is obtained by permutations.
This means that it makes no assumptions of Gaussianity
(which do hold for this example, but do not in general for some types
of processed neuroimaging data).
End of explanation
"""
from sklearn.feature_extraction.image import grid_to_graph # noqa: E402
mini_adjacency = grid_to_graph(3, 3).toarray()
assert mini_adjacency.shape == (9, 9)
print(mini_adjacency[0])
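The clustering step itself, grouping suprathreshold voxels by exactly this kind of grid adjacency, can be sketched with `scipy.ndimage.label` (a stand-in for what happens inside MNE's cluster permutation functions):

```python
import numpy as np
from scipy import ndimage

t_map = np.array([[0.1, 2.5, 2.7, 0.2],
                  [0.0, 2.6, 0.1, 0.0],
                  [3.1, 0.0, 0.0, 3.3],
                  [3.2, 0.0, 0.0, 3.4]])
mask = t_map > 2.0                        # threshold the statistic
labels, n_clusters = ndimage.label(mask)  # 4-connectivity by default
sizes = ndimage.sum(mask, labels, index=range(1, n_clusters + 1))
print(n_clusters, sizes.max())  # the largest cluster feeds the null distribution
```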
"""
Explanation: Clustering
Each of the aforementioned multiple comparisons corrections have the
disadvantage of not fully incorporating the correlation structure of the
data, namely that points close to one another (e.g., in space or time) tend
to be correlated. However, by defining the adjacency/adjacency/neighbor
structure in our data, we can use clustering to compensate.
To use this, we need to rethink our null hypothesis. Instead
of thinking about a null hypothesis about means per voxel (with one
independent test per voxel), we consider a null hypothesis about sizes
of clusters in our data, which could be stated like:
The distribution of spatial cluster sizes observed in two experimental
conditions are drawn from the same probability distribution.
Here we only have a single condition and we contrast to zero, which can
be thought of as:
The distribution of spatial cluster sizes is independent of the sign
of the data.
In this case, we again do permutations with a maximum statistic, but, under
each permutation, we:
Compute the test statistic for each voxel individually.
Threshold the test statistic values.
Cluster voxels that exceed this threshold (with the same sign) based on
adjacency.
Retain the size of the largest cluster (measured, e.g., by a simple voxel
count, or by the sum of voxel t-values within the cluster) to build the
null distribution.
After doing these permutations, the cluster sizes in our veridical data
are compared to this null distribution. The p-value associated with each
cluster is again given by the proportion of null distribution values
that are at least as large. This can then be subjected to a standard p-value threshold
(e.g., p < 0.05) to reject the null hypothesis (i.e., find an effect of
interest).
This reframing to consider cluster sizes rather than individual means
maintains the advantages of the standard non-parametric permutation
test -- namely controlling FWER and making no assumptions of parametric
data distribution.
Critically, though, it also accounts for the correlation structure in the
data -- which in this toy case is spatial but in general can be
multidimensional (e.g., spatio-temporal) -- because the null distribution
will be derived from data in a way that preserves these correlations.
.. sidebar:: Effect size
For a nice description of how to compute the effect size obtained
in a cluster test, see this
`FieldTrip mailing list discussion <ft_cluster_effect_size_>`_.
However, there is a drawback. If a cluster significantly deviates from
the null, no further inference on the cluster (e.g., peak location) can be
made, as the entire cluster as a whole is used to reject the null.
Moreover, because the test statistic concerns the full data, the null
hypothesis (and our rejection of it) refers to the structure of the full
data. For more information, see also the comprehensive
`FieldTrip tutorial <ft_cluster_>`_.
Defining the adjacency matrix
First we need to define our adjacency (sometimes called "neighbors") matrix.
This is a square array (or sparse matrix) of shape (n_src, n_src) that
contains zeros and ones to define which spatial points are neighbors, i.e.,
which voxels are adjacent to each other. In our case this
is quite simple, as our data are aligned on a rectangular grid.
Let's pretend that our data were smaller -- a 3 x 3 grid. Thinking about
each voxel as being connected to the other voxels it touches, we would
need a 9 x 9 adjacency matrix. The first row of this matrix contains the
voxels in the flattened data that the first voxel touches. Since it touches
the second element in the first row and the first element in the second row
(and is also a neighbor to itself), this would be::
[1, 1, 0, 1, 0, 0, 0, 0, 0]
:mod:`sklearn.feature_extraction` provides a convenient function for this:
End of explanation
"""
titles.append('Clustering')
# Reshape data to what is equivalent to (n_samples, n_space, n_time)
X.shape = (n_subjects, width, width)
# Compute threshold from t distribution (this is also the default)
threshold = stats.distributions.t.ppf(1 - alpha, n_subjects - 1)
t_clust, clusters, p_values, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold, adjacency=None,
n_permutations=n_permutations, out_type='mask')
# Put the cluster data in a viewable format
p_clust = np.ones((width, width))
for cl, p in zip(clusters, p_values):
p_clust[cl] = p
ts.append(t_clust)
ps.append(p_clust)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: In general the adjacency between voxels can be more complex, such as
those between sensors in 3D space, or time-varying activation at brain
vertices on a cortical surface. MNE provides several convenience functions
for computing adjacency matrices (see the
:ref:`Statistics API <api_reference_statistics>`).
Standard clustering
Here, since our data are on a grid, we can use adjacency=None to
trigger optimized grid-based code, and run the clustering algorithm.
End of explanation
"""
titles.append(r'$\mathbf{C_{hat}}$')
stat_fun_hat = partial(ttest_1samp_no_p, sigma=sigma)
t_hat, clusters, p_values, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold, adjacency=None, out_type='mask',
n_permutations=n_permutations, stat_fun=stat_fun_hat, buffer_size=None)
p_hat = np.ones((width, width))
for cl, p in zip(clusters, p_values):
p_hat[cl] = p
ts.append(t_hat)
ps.append(p_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: "Hat" variance adjustment
This method can also be used in this context to correct for small
variances [1]_:
End of explanation
"""
titles.append(r'$\mathbf{C_{TFCE}}$')
threshold_tfce = dict(start=0, step=0.2)
t_tfce, _, p_tfce, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold_tfce, adjacency=None,
n_permutations=n_permutations, out_type='mask')
ts.append(t_tfce)
ps.append(p_tfce)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: Threshold-free cluster enhancement (TFCE)
TFCE eliminates the free parameter initial threshold value that
determines which points are included in clustering by approximating
a continuous integration across possible threshold values with a standard
`Riemann sum <https://en.wikipedia.org/wiki/Riemann_sum>`__ [2]_.
This requires giving a starting threshold ``start`` and a step
size ``step``, which in MNE are supplied as a dict.
The smaller the step and closer to 0 the start value,
the better the approximation, but the longer it takes.
A significant advantage of TFCE is that, rather than modifying the
statistical null hypothesis under test (from one about individual voxels
to one about the distribution of clusters in the data), it modifies the data
under test while still controlling for multiple comparisons.
The statistical test is then done at the level of individual voxels rather
than clusters. This allows for evaluation of each point
independently for significance rather than only as cluster groups.
End of explanation
"""
titles.append(r'$\mathbf{C_{hat,TFCE}}$')
t_tfce_hat, _, p_tfce_hat, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold_tfce, adjacency=None, out_type='mask',
n_permutations=n_permutations, stat_fun=stat_fun_hat, buffer_size=None)
ts.append(t_tfce_hat)
ps.append(p_tfce_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
"""
Explanation: We can also combine TFCE and the "hat" correction:
End of explanation
"""
fig = plt.figure(facecolor='w', figsize=(14, 3))
assert len(ts) == len(titles) == len(ps)
for ii in range(len(ts)):
ax = [fig.add_subplot(2, 10, ii + 1, projection='3d'),
fig.add_subplot(2, 10, 11 + ii)]
plot_t_p(ts[ii], ps[ii], titles[ii], mccs[ii], ax)
fig.tight_layout(pad=0, w_pad=0.05, h_pad=0.1)
plt.show()
"""
Explanation: Visualize and compare methods
Let's take a look at these statistics. The top row shows each test statistic,
and the bottom shows p-values for various statistical tests, with the ones
with proper control over FWER or FDR with bold titles.
End of explanation
"""
|
mbinkowski/DeepSpeechDistances | deep_speech_distances.ipynb | apache-2.0 | !pip install python_speech_features
!pip install resampy
!pip install scipy
!pip install gdown
!pip install tqdm -U
"""
Explanation: This notebook provides a demo for the use of DeepSpeech Distances proposed in High Fidelity Speech Synthesis with Adversarial Networks as new evaluation metrics for neural speech synthesis.
The computation involves estimating Fréchet and Kernel distances between high-level features of the reference and the examined samples extracted from NVIDIA's DeepSpeech2 model.
We propose four distances:
Fréchet DeepSpeech Distance (FDSD, based on FID, see [2])
Kernel DeepSpeech Distance (KDSD, based on KID, see [3])
conditional Fréchet DeepSpeech Distance (cFDSD),
conditional Kernel DeepSpeech Distance (cKDSD).
The conditional distances compare samples with the same conditioning (e.g. text) and assess the conditional quality of the audio. The unconditional ones compare random samples from two distributions and assess the general quality of the audio. For more details, see [1].
References
[1] Mikołaj Bińkowski, Jeff Donahue, Sander Dieleman, Aidan Clark, Erich Elsen, Norman Casagrande, Luis C. Cobo, Karen Simonyan, High Fidelity Speech Synthesis with Adversarial Networks, ICLR 2020.
[2] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Sepp Hochreiter, GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, NeurIPS 2017.
[3] Mikołaj Bińkowski, Dougal J. Sutherland, Michael Arbel, Arthur Gretton, Demystifying MMD GANs, ICLR 2018.
Demo
Firsty, install dependencies and download the checkpoint of the pretrained DeepSpeech2 model.
End of explanation
"""
PATH = '/content/drive/My Drive/DeepSpeechDistances'
SAMPLE_PATH = '/content/drive/My Drive/DeepSpeechDistances/abstract_samples'
NUM_SPLITS = 3 # number of data splits to compute std of DSD
SAMPLES_PER_SPLIT = 500 # number of samples in a single DSD run.
# We recommend at least 10k samples for evaluation to get reasonable estimates.
AUDIO_LENGTH = 2 # length of individual sample, in seconds
NUM_NOISE_LEVELS = 3 # number of different noise levels for samples to evaluate
from google.colab import drive
drive.mount('/content/drive/', force_remount=True)
import sys, os
sh = lambda path: "'" + path + "'"
if not os.path.exists(PATH):
!git clone https://github.com/mbinkowski/DeepSpeechDistances.git {sh(PATH)}
else:
print('Found DeepSpeechDistances directory, skipping git clone.')
sys.path.append(PATH)
if not os.path.exists(os.path.join(PATH, 'checkpoint', 'ds2_large')):
CKPT = sh(PATH + '/ds2.tar.gz')
!gdown https://drive.google.com/uc?id=1EDvL9wMCO2vVE-ynBvpwkFTultbzLNQX -O {CKPT}
!tar -C {sh(PATH +'/checkpoint/')} -xvf {CKPT}
!rm {CKPT}
else:
print('Found checkpoint directory, skipping download.')
"""
Explanation: Set up evaluation parameters and paths within the mounted Google Drive. Clone the repository and download the checkpoint.
End of explanation
"""
%tensorflow_version 1.x
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
import audio_distance
from sample_utils import subsample_audio
"""
Explanation: Do other necessary imports.
End of explanation
"""
subsample_audio(os.path.join(PATH, 'abstract.wav'),
SAMPLE_PATH,
num_samples=NUM_SPLITS * SAMPLES_PER_SPLIT,
length=AUDIO_LENGTH,
num_noise_levels=NUM_NOISE_LEVELS)
"""
Explanation: Create random 2-second samples from source audio file.
By default, we use our generated abstract ([1]) as a source of both clean reference samples and noise-corrupted samples for examination.
Users can specify other audio files; in principle, DeepSpeech Distances require reference samples (e.g. real speech) and samples to be examined, all of the same length. Although this demo uses a smaller number for the sake of fast computation, we suggest using at least 10,000 samples per set to obtain good estimates of the distances.
End of explanation
"""
reference_path = os.path.join(SAMPLE_PATH, 'ref', '*.wav')
eval_paths = [os.path.join(SAMPLE_PATH, f'noisy_{i+1}', '*.wav') for i
in range(NUM_NOISE_LEVELS)]
evaluator = audio_distance.AudioDistance(
load_path=os.path.join(PATH, 'checkpoint', 'ds2_large', 'model.ckpt-54800'),
meta_path=os.path.join(PATH, 'checkpoint', 'collection-stripped-meta.meta'),
required_sample_size=NUM_SPLITS * SAMPLES_PER_SPLIT,
num_splits=NUM_SPLITS)
evaluator.load_real_data(reference_path)
"""
Explanation: Create evaluator object and load reference samples.
End of explanation
"""
dist_names = ['FDSD', 'KDSD', 'cFDSD', 'cKDSD']
def print_results(values):
print('\n' + ', '.join(['%s = %.5f (%.5f)' % (n, v[0], v[1]) for n, v
in zip(dist_names, values)]))
with tf.Session(config=evaluator.sess_config) as sess:
print('Computing reference DeepSpeech distances.')
values = evaluator.get_distance(sess=sess)
print_results(values)
distances = [values]
for eval_path in eval_paths:
print('\nComputing DeepSpeech distances for files in the directory:\n'
+ os.path.dirname(eval_path))
values = evaluator.get_distance(sess=sess, files=eval_path)
print_results(values)
distances.append(values)
"""
Explanation: Carry out distance estimation.
End of explanation
"""
all_paths = [reference_path] + eval_paths
prefix_len = len(os.path.commonpath(all_paths))
sample_names = [path[prefix_len + 1:] for path in all_paths]
if all([os.path.basename(p) == '*.wav' for p in all_paths]):
sample_names = [name[:-6] for name in sample_names]
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
x = range(NUM_NOISE_LEVELS + 1)
for i, kind in enumerate(['Frechet', 'kernel']):
ax[i].set_title(kind + ' distances')
ax[i].set_xticks(x)
ax[i].set_xticklabels(sample_names)
for j in [0, 2]:
k = i + j
ax[i].plot(x, [d[k][0] for d in distances], color='cmyk'[k],
label=dist_names[k])
ax[i].fill_between(x, [d[k][0] - d[k][1] for d in distances],
[d[k][0] + d[k][1] for d in distances], color='cmyk'[k],
alpha=.3)
ax[i].legend()
ax[i].grid()
drive.flush_and_unmount()
"""
Explanation: Plot results.
Explanation: Plot results.
Kernel distances usually give similar numbers (which, in the case of small samples, might be negative due to the unbiased estimator of MMD). The unconditional Fréchet distance differs significantly from the conditional one for reference samples and always gives positive values due to its biased estimator.
End of explanation
"""
|
CivicTechTO/ttc_subway_times | doc/Single_station_all_day_analysis.ipynb | gpl-3.0 | import datetime
from psycopg2 import connect
import configparser
import pandas as pd
import pandas.io.sql as pandasql
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
%matplotlib qt
try:
con.close()
except:
print("No existing connection... moving on")
CONFIG = configparser.ConfigParser(interpolation=None)
CONFIG.read('../db.cfg')
dbset = CONFIG['DBSETTINGS']
con = connect(**dbset)
"""
Explanation: Single Station Analysis - Historic
Building on the API Exploration Notebook and the Filtering Observed Arrivals notebook. Let's explore a different approach for analyzing the data. Note that, I modified the api scraper to only retrieve the soonest time from the next subway API. This should (hopefully) help with some of the issues we were previously having. I made a new database and ran the API for a few hours on Sunday polling only St. George station (station_id == 10) at a poll frequency of once every 10 seconds. I will post the data online so that others can try it out.
Created by Rami on May 6/2018
End of explanation
"""
sql = '''SELECT requestid, stationid, lineid, create_date, request_date, station_char, subwayline, system_message_type,
timint, traindirection, trainid, train_message
FROM requests
INNER JOIN ntas_data USING (requestid)
WHERE request_date >= '2017-06-14'::DATE + interval '6 hours 5 minutes'
AND request_date < '2017-06-14'::DATE + interval '29 hours'
AND stationid = 10
AND traindirection = 'South'
ORDER BY request_date
'''
stg_south = pandasql.read_sql(sql, con)
stg_south
stg_south_resamp = stg_south[stg_south.index % 3 == 0]
stg_south_resamp
"""
Explanation: Retrieving data from the database
Let's start by getting data from our database by joining along requestid. To simplify things, we're only going to look at southbound trains for now.
End of explanation
"""
arrival_times = []
departure_times = []
all_wait_times = []
all_time_stamps = []
expected_wait_times = []
prev_arrival_train_id = -1
for index, row in stg_south_resamp.iterrows():
if index == 0:
prev_departure_train_id = row['trainid']
all_wait_times.append(row['timint'])
all_time_stamps.append(row['create_date'])
if (row['trainid'] != prev_arrival_train_id):
arrival_times.append(row['create_date'])
prev_arrival_train_id = row['trainid']
#elif (row['trainid'] != prev_departure_train_id):
departure_times.append(row['create_date'])
expected_wait_times.append(row['timint'])
#prev_departure_train_id = row['trainid']
"""
Explanation: Extracting some useful information
Now we need to process the data to extract some useful information from the raw ntas_data. To do this we're going to go row by row through the table shown above to get arrival times, departure times and wait times.
arrival_times are the times at which a train arrives at St. George station
departure_times are the times at which a train leaves St. George station
all_wait_times are all the reported wait times from every API call (which in this case is every 10 seconds)
expected_wait_times are the expected wait times immediately after a train has departed the station. They represent the worst case wait times.
End of explanation
"""
plt.plot(all_time_stamps,all_wait_times)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('All reported wait times at St. George')
plt.savefig('all_wait_times.png', dpi=500)
plt.show()
def timeToArrival(all_time_stamps,all_wait_times,arrival_times):
actual_wait_times = []
i = 0
k = 0
arrival_time = arrival_times[i]
for time in all_time_stamps:
if (all_wait_times[k] == 0):
actual_wait_times.append(arrival_times[0]-arrival_times[0])
k+=1
continue
while ((arrival_time - time).total_seconds() < 0):
i+=1
if (i > (len(arrival_times) -1)):
break
arrival_time = arrival_times[i]
actual_wait_times.append(arrival_time - time)
k+=1
return actual_wait_times
print(len(all_time_stamps[0:-1]))
actual_wait_times_all = timeToArrival(all_time_stamps,all_wait_times,arrival_times)
def sliding_window_filter(input_mat, window_size, overlap):
    # Smooth a series by averaging over overlapping windows of fixed size.
    average_time = []
    for i in range(0, len(input_mat) - window_size, overlap):
        window = input_mat[i:(i + window_size)]
        average_time.append(np.mean(window))
    return average_time
window_size = 30
overlap = 25
#average_time, max_time = sliding_window_filter(all_wait_times,window_size, overlap)
#times = all_time_stamps[0:len(all_time_stamps)-window_size:overlap]
#times = all_time_stamps[0:len(actual_wait_times_all)]
times = all_time_stamps[0:len(all_time_stamps)-window_size:overlap]
plt.plot(times,np.floor(sliding_window_filter(convert_timedelta_to_mins(actual_wait_times_all),window_size,overlap)))
#average_time, max_time = sliding_window_filter(convert_timedelta_to_mins(actual_wait_times_all),window_size, overlap)
plt.plot(times,np.ceil(sliding_window_filter(all_wait_times,window_size,overlap)))
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('All reported wait times at St. George')
plt.show()
class sliding_figure:
    """Wait-time plot with a slider for panning along the time axis."""

    def __init__(self, all_time_stamps, all_wait_times):
        self.fig, self.ax = plt.subplots()
        plt.subplots_adjust(bottom=0.25)
        self.t = all_time_stamps
        self.s = all_wait_times
        self.l, = self.ax.plot(self.t, self.s)
        self.y_min = 0.0
        self.y_max = max(self.s)
        self.ax.axis([self.t[0], self.t[100], self.y_min, self.y_max])
        # Width of the visible window, kept fixed while panning
        self.x_dt = self.t[100] - self.t[0]
        self.axcolor = 'lightgoldenrodyellow'
        self.axpos = plt.axes([0.2, 0.1, 0.65, 0.03], facecolor=self.axcolor)
        self.spos = Slider(self.axpos, 'Pos',
                           matplotlib.dates.date2num(self.t[0]),
                           matplotlib.dates.date2num(self.t[-1]))
        # pretty date names
        self.fig.autofmt_xdate()
        self.plt = plt

    def update(self, val):
        pos = self.spos.val
        xmin_time = matplotlib.dates.num2date(pos)
        xmax_time = xmin_time + self.x_dt
        self.ax.axis([xmin_time, xmax_time, self.y_min, self.y_max])
        self.fig.canvas.draw_idle()

    def showPlot(self):
        self.spos.on_changed(self.update)
        self.plt.show()
wait_times_figure = sliding_figure(all_time_stamps, all_wait_times)
wait_times_figure.showPlot()
"""
Explanation: We can look at all the reported wait times. While this is somewhat interesting, it doesn't tell us very much on its own.
End of explanation
"""
def time_delta(times):
delta_times = []
for n in range(0,len(times)-1):
time_diff = times[n+1] - times[n]
delta_times.append(time_diff/np.timedelta64(1, 's'))
return delta_times
delta_times = time_delta(arrival_times)
#delta_times
plt.plot(arrival_times[:-1],np.multiply(delta_times,1/60.0))
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Headway (mins)')
plt.title('Headway between trains as they approach St. George')
plt.savefig('headway.png', dpi=500)
"""
Explanation: Headway analysis
By looking at the difference in arrival times at St. George we can determine the headway (i.e., the time between trains) as they approach St. George station.
End of explanation
"""
time_at_station = np.subtract(departure_times[:],arrival_times[:])
#time_at_station
def convert_timedelta_to_mins(mat):
result = []
for element in mat:
result.append((element/np.timedelta64(1, 'm')))
return result
time_at_station_mins = convert_timedelta_to_mins(time_at_station)
plt.plot(departure_times,time_at_station_mins)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Duration of time at station (mins)')
plt.title('Duration of time that trains spend at St. George Station')
"""
Explanation: Analyzing time spent at the station
We can also look at how long trains spend at the station by looking at the difference between the departure and arrival times. St. George station is an interchange station; as such, trains tend to spend longer here than at intermediate stations.
End of explanation
"""
#expected_wait_times
plt.plot(arrival_times,expected_wait_times)
plt.ylabel('Expected Wait Time (mins)')
plt.xticks(fontsize=10, rotation=45)
plt.xlabel('Time')
plt.title('Worst-case expected wait times for next train at St. George')
plt.savefig('expected_wait_times.png', dpi=500)
"""
Explanation: Expected wait times
The expected wait times represent the worst-case reported wait time immediately after the previous train has left the station
End of explanation
"""
actual_wait_times = np.subtract(arrival_times[1:],arrival_times[:-1])
actual_wait_times_mins = convert_timedelta_to_mins(actual_wait_times)
plt.plot(arrival_times[1:],actual_wait_times_mins,color = 'C1')
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Actual wait time (mins)')
plt.title('Worst-case actual wait times for next train at St. George')
plt.savefig('actual_wait_times.png', dpi=500)
plt.show()
print(len(actual_wait_times_mins))
window_size = 15
overlap = 14
average_time = sliding_window_filter(actual_wait_times_mins, window_size, overlap)
print(len(average_time))
times = arrival_times[0:len(all_time_stamps)-window_size:overlap]
print(len(times))
plt.plot(times[1:], average_time)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('All reported wait times at St. George')
plt.show()
"""
Explanation: Actual wait time
It's instructive to look at the actual worst-case wait time and compare it to the expected worst-case wait time. Here we define the actual worst-case wait time as the time between when a train departs and the next train arrives (i.e., the difference between an arrival time and the previous departure time).
End of explanation
"""
print(len(expected_wait_times))
print(len(arrival_times))
print(len(arrival_times))
print(len(actual_wait_times_mins))
type(arrival_times[1])
arrival_times_pdt = []
for item in arrival_times:
arrival_times_pdt.append(datetime.time(item.to_pydatetime().hour,item.to_pydatetime().minute))
arrival_times_pdt[2]
plt.plot(arrival_times,expected_wait_times)
plt.plot(arrival_times[1:],np.floor(actual_wait_times_mins[:]))
#plt.legend(['Expected Wait for Next Train','Actual Wait Time for Next Train'],
# bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('Comparing actual and expected wait times at St. George')
lgd = plt.legend(['Expected Wait for Next Train','Actual Wait Time for Next Train'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig('actual_and_expected_wait_times.png', bbox_extra_artists=(lgd,), bbox_inches='tight', dpi=700)
window_size = 15
overlap = 12
average_time = sliding_window_filter(actual_wait_times_mins,window_size, overlap)
print(len(average_time))
times = arrival_times[0:len(all_time_stamps)-window_size:overlap]
print(len(times))
plt.plot(times[1:],np.floor(average_time))
average_time = sliding_window_filter(np.ceil(expected_wait_times),window_size, overlap)
plt.plot(times[1:],np.floor(average_time))
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('All reported wait times at St. George')
plt.show()
lgd = plt.legend(['Actual Wait for Next Train','Expected Wait Time for Next Train'])
"""
Explanation: Comparing actual and expected wait times
Now let's put everything together and compare the actual and expected wait times.
End of explanation
"""
plt.plot(departure_times,expected_wait_times)
plt.plot(departure_times[1:],actual_wait_times_mins)
plt.plot(all_time_stamps,all_wait_times)
plt.legend(['Expected Wait for Next Train','Actual Wait Time for Next Train','All Reported Wait Times'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('Comparing actual and expected wait times at St. George')
"""
Explanation: We can also plot all the reported wait times too!
End of explanation
"""
plt.plot(all_time_stamps,all_wait_times)
plt.plot(arrival_times[:],time_at_station_mins)
plt.title('Duration of time trains spend at St. George')
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=90)
plt.ylabel('Time (mins)')
plt.legend(['All Reported Wait Times','Time train spends at station (mins)'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
"""
Explanation: We can also look at how long trains spend at St. George
End of explanation
"""
|
vvishwa/deep-learning | gan_mnist/Intro_to_GANs_Exercises.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts whether the data it receives is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(shape=(None, real_dim), dtype=tf.float32, name='input_real')
inputs_z = tf.placeholder(shape=(None, z_dim), dtype=tf.float32, name='input_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
    with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(h1, alpha*h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
"""
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
"""
# Calculate losses
labels = tf.ones_like(d_logits_real) * (1 - smooth)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=labels))
labels = tf.zeros_like(d_logits_fake)
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=labels))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
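What tf.nn.sigmoid_cross_entropy_with_logits computes can be reproduced in numpy with its numerically stable form, $max(x, 0) - xz + \log(1 + e^{-|x|})$; this sketch uses made-up logits and shows the smoothed labels:

```python
import numpy as np

def sigmoid_xent(logits, labels):
    # stable form of -labels*log(sigmoid(x)) - (1 - labels)*log(1 - sigmoid(x))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

smooth = 0.1
logits = np.array([2.0, -1.0, 0.5])           # hypothetical discriminator logits
labels = np.ones_like(logits) * (1 - smooth)  # smoothed "real" labels: 0.9 instead of 1.0
loss = sigmoid_xent(logits, labels).mean()
print(loss)
```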
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
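The prefix split itself is plain Python string filtering; with mock variable names (no TensorFlow needed) it looks like:

```python
# hypothetical variable names, in the format TensorFlow produces under variable scopes
names = ['generator/dense/kernel:0', 'generator/dense/bias:0',
         'discriminator/dense/kernel:0', 'discriminator/dense/bias:0']
g_names = [n for n in names if n.startswith('generator')]
d_names = [n for n in names if n.startswith('discriminator')]
print(g_names)
print(d_names)
```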
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
ogoann/StatisticalMethods | examples/SDSScatalog/GalaxySizes.ipynb | gpl-2.0 | %load_ext autoreload
%autoreload 2
import numpy as np
import SDSS
import pandas as pd
import matplotlib
%matplotlib inline
galaxies = "SELECT top 1000 \
petroR50_i AS size, \
petroR50Err_i AS err \
FROM PhotoObjAll \
WHERE \
(type = '3' AND petroR50Err_i > 0)"
print galaxies
# Download data. This can take a few moments...
data = SDSS.select(galaxies)
data.head()
!mkdir -p downloads
data.to_csv("downloads/SDSSgalaxysizes.csv")
"""
Explanation: Illustrating Observed and Intrinsic Object Properties:
SDSS "Galaxy" Sizes
In a catalog, each galaxy's measurements come with "error bars" providing information about how uncertain we should be about each property of each galaxy.
This means that the distribution of "observed" galaxy properties (as reported in the catalog) is not the same as the underlying or "intrinsic" distribution.
Let's look at the distribution of observed sizes in the SDSS photometric object catalog.
End of explanation
"""
data = pd.read_csv("downloads/SDSSgalaxysizes.csv",usecols=["size","err"])
data['size'].hist(bins=np.linspace(0.0,5.0,100),figsize=(12,7))
"""
Explanation: The Distribution of Observed SDSS "Galaxy" Sizes
Let's look at a histogram of galaxy sizes, for 1000 objects classified as "galaxies".
End of explanation
"""
data.plot(kind='scatter', x='size', y='err',s=100,figsize=(12,7));
"""
Explanation: Things to notice:
No small objects (why not?)
A "tail" to large size
Some very large sizes that look a little odd
Are these large galaxies actually large, or have they just been measured that way?
Let's look at the reported uncertainties on these sizes:
End of explanation
"""
def generate_galaxies(mu=np.log10(1.5),S=0.3,N=1000):
return pd.DataFrame({'size' : 10.0**(mu + S*np.random.randn(N))})
mu = np.log10(1.5)
S = 0.05
intrinsic = generate_galaxies(mu=mu,S=S,N=1000)
intrinsic.hist(bins=np.linspace(0.0,5.0,100),figsize=(12,7),color='green')
"""
Explanation: Generating Mock Data
Let's look at how distributions like this one can come about, by making a generative model for this dataset.
First, let's imagine a set of perfectly measured galaxies. They won't all have the same size, because the Universe isn't like that. Let's suppose the logarithm of their intrinsic sizes are drawn from a Gaussian distribution of width $S$ and mean $\mu$.
To model one mock galaxy, we draw a sample from this distribution. To model the whole dataset, we draw 1000 samples.
End of explanation
"""
def make_noise(sigma=0.3,N=1000):
return pd.DataFrame({'size' : sigma*np.random.randn(N)})
sigma = 0.3
errors = make_noise(sigma=sigma,N=1000)
observed = intrinsic + errors
observed.hist(bins=np.linspace(0.0,5.0,100),figsize=(12,7),color='red')
both = pd.DataFrame({'SDSS': data['size'], 'Model': observed['size']}, columns=['SDSS', 'Model'])
both.hist(alpha=0.5,bins=np.linspace(0.0,5.0,100),figsize=(12,7))
# data['size'].hist(bins=np.linspace(0.0,5.0,100),figsize=(12,7))
# observed.hist(bins=np.linspace(0.0,5.0,100),figsize=(12,7),color='red')
"""
Explanation: Now let's add some observational uncertainty. We can model this by drawing random Gaussian offsets $\epsilon$ and adding one to each intrinsic size.
End of explanation
"""
V_data = np.var(data['size'])
print "Variance of the SDSS distribution = ",V_data
V_int = np.var(intrinsic['size'])
V_noise = np.var(errors['size'])
V_obs = np.var(observed['size'])
print "Variance of the intrinsic distribution = ", V_int
print "Variance of the noise = ", V_noise
print "Variance of the observed distribution = ", V_int + V_noise, \
"cf", V_obs
"""
Explanation: Q: How did we do? Is this a good model for our data?
Play around with the parameters $\mu$, $S$ and $\sigma$ and see if you can get a better match to the observed distribution of sizes.
<br>
One last thing: let's look at the variances of these distributions.
Recall:
$V(x) = \frac{1}{N} \sum_{i=1}^N (x_i - \nu)^2$
If $\nu$, the population mean of $x$, is not known, an estimator for $V$ is
$\hat{V}(x) = \frac{1}{N} \sum_{i=1}^N (x_i - \bar{x})^2$
where $\bar{x} = \frac{1}{N} \sum_{i=1}^N x_i$, the sample mean.
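The additivity of variances for independent quantities is what links the intrinsic, noise, and observed distributions; a quick numerical check with made-up parameters:

```python
import numpy as np

np.random.seed(0)
x = np.random.lognormal(mean=np.log(1.5), sigma=0.05, size=100000)  # "intrinsic" sizes
eps = 0.3 * np.random.randn(100000)                                  # "noise"
observed_var = np.var(x + eps)
print(observed_var, "cf", np.var(x) + np.var(eps))  # nearly equal for independent draws
```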
End of explanation
"""
# !pip install --upgrade triangle_plot
import triangle
to_plot = np.array([observed['size'],intrinsic['size']]).transpose()
fig = triangle.corner(to_plot,labels=['Observed size','Intrinsic size'],range=[(0.0,3.0),(0.0,3.0)],color='Blue',plot_datapoints=False, fill_contours=True,
levels=[0.68, 0.95], bins=50, smooth=1.)
"""
Explanation: You may recall this last result from previous statistics courses.
Why is the variance of our mock dataset's galaxy sizes so much smaller than that of the SDSS sample?
Sampling Distributions
In the above example we drew samples from two probability distributions:
The intrinsic size distribution, ${\rm Pr}(R_{\rm true}|\mu,S)$
The "error" distribution, ${\rm Pr}(R_{\rm obs}|R_{\rm true},\sigma)$
The procedure of drawing numbers from the first, and then adding numbers from the second, produced mock data - which then appeared to have been drawn from:
${\rm Pr}(R_{\rm obs}|\mu,S)$
Q: What would we do differently if we wanted to simulate 1 Galaxy?
The three distributions are related by an integral:
${\rm Pr}(R_{\rm obs}|\mu,S) = \int {\rm Pr}(R_{\rm obs}|R_{\rm true},\sigma) \; {\rm Pr}(R_{\rm true}|\mu,S) \; dR_{\rm true}$
When we only plot the 1D histogram of observed sizes, we are summing over or "marginalizing out" the intrinsic ones.
Often it is useful to visualize all 1 and 2D projections of a multivariate probability distribution - like this:
End of explanation
"""
from IPython.display import Image
Image(filename="samplingdistributions.png")
"""
Explanation: Probabilistic Graphical Models
We can draw a diagram representing the above combination of probability distributions, that:
Shows the dependencies between variables
Gives you a recipe for generating mock data
We can also do this in python, using the daft package:
End of explanation
"""
|
thewtex/SimpleITK-Notebooks | 02_Pythonic_Image.ipynb | apache-2.0 | import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rc('image', aspect='equal')
%matplotlib inline
import SimpleITK as sitk
# Download data to work on
from downloaddata import fetch_data as fdata
"""
Explanation: Pythonic Syntactic Sugar
The Image Basics Notebook was straight forward and closely follows ITK's C++ interface.
Sugar is great: it gives you energy to get things done faster! SimpleITK has applied a generous amount of syntactic sugar to help get things done faster too.
End of explanation
"""
img = sitk.GaussianSource(size=[64]*2)
plt.imshow(sitk.GetArrayFromImage(img))
img = sitk.GaborSource(size=[64]*2, frequency=.03)
plt.imshow(sitk.GetArrayFromImage(img))
def myshow(img):
nda = sitk.GetArrayFromImage(img)
plt.imshow(nda)
myshow(img)
"""
Explanation: Let us begin by developing a convenient method for displaying images in our notebooks.
End of explanation
"""
img[24,24]
"""
Explanation: Multi-dimension slice indexing
If you are familiar with numpy's sliced indexing, then this should be a piece of cake for the SimpleITK image. The Python standard slice interface for a 1-D object:
<table>
<tr><td>Operation</td> <td>Result</td></tr>
<tr><td>d[i]</td> <td>ith item of d, starting index 0</td></tr>
<tr><td>d[i:j]</td> <td>slice of d from i to j</td></tr>
<tr><td>d[i:j:k]</td> <td>slice of d from i to j with step k</td></tr>
</table>
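The same 1-D slice grammar works on any Python sequence, which makes it easy to predict what an image slice will return; for example:

```python
d = list(range(10))  # [0, 1, ..., 9]
print(d[2])          # 2            : the item at index 2
print(d[2:5])        # [2, 3, 4]    : slice from 2 up to (not including) 5
print(d[1:8:2])      # [1, 3, 5, 7] : slice with step 2
print(d[::-1])       # reversed     : a negative step walks backwards
```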
With this convenient syntax many basic tasks can be easily done.
End of explanation
"""
myshow(img[16:48,:])
myshow(img[:,16:-16])
myshow(img[:32,:32])
"""
Explanation: Cropping
End of explanation
"""
img_corner = img[:32,:32]
myshow(img_corner)
myshow(img_corner[::-1,:])
myshow(sitk.Tile(img_corner, img_corner[::-1,::],img_corner[::,::-1],img_corner[::-1,::-1], [2,2]))
"""
Explanation: Flipping
End of explanation
"""
img = sitk.GaborSource(size=[64]*3, frequency=0.05)
# Why does this produce an error?
myshow(img)
myshow(img[:,:,32])
myshow(img[16,:,:])
"""
Explanation: Slice Extraction
A 2D image can be extracted from a 3D one.
End of explanation
"""
myshow(img[:,::3,32])
"""
Explanation: Subsampling
End of explanation
"""
img = sitk.ReadImage(fdata("cthead1.png"))
img = sitk.Cast(img,sitk.sitkFloat32)
myshow(img)
img[150,150]
timg = img**2
myshow(timg)
timg[150,150]
"""
Explanation: Mathematical Operators
Most python mathematical operators are overloaded to call the SimpleITK filter which does the same operation on a per-pixel basis. They can operate on two images or an image and a scalar.
If two images are used then both must have the same pixel type. The output image type is usually the same.
As these operators basically call an ITK filter, which just uses raw C++ operators, care must be taken to prevent overflow, divide by zero, etc.
<table>
<tr><td>Operators</td></tr>
<tr><td>+</td></tr>
<tr><td>-</td></tr>
<tr><td>\*</td></tr>
<tr><td>/</td></tr>
<tr><td>//</td></tr>
<tr><td>**</td></tr>
</table>
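The overflow caveat is easy to demonstrate with numpy's fixed-width integers, which wrap just like the raw C++ arithmetic underneath ITK (this is an illustration in numpy, not SimpleITK itself):

```python
import numpy as np

a = np.array([200], dtype=np.uint8)
b = np.array([100], dtype=np.uint8)
wrapped = a + b                                 # uint8 arithmetic wraps: (200 + 100) % 256 == 44
safe = a.astype(np.int32) + b.astype(np.int32)  # cast to a wider type first to avoid overflow
print(wrapped[0], safe[0])
```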
End of explanation
"""
img = sitk.ReadImage(fdata("cthead1.png"))
myshow(img)
"""
Explanation: Division Operators
All three Python division operators are implemented __floordiv__, __truediv__, and __div__.
The true division's output is a double pixel type.
See PEP 238 to see why Python changed the division operator in Python 3.
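The difference between the division operators is quick to check in plain Python:

```python
print(7 / 2)    # 3.5 : true division always returns a float
print(7 // 2)   # 3   : floor division returns an integer result
print(-7 // 2)  # -4  : floor division rounds toward negative infinity, not toward zero
```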
Bitwise Logic Operators
<table>
<tr><td>Operators</td></tr>
<tr><td>&</td></tr>
<tr><td>|</td></tr>
<tr><td>^</td></tr>
<tr><td>~</td></tr>
</table>
End of explanation
"""
img = sitk.ReadImage(fdata("cthead1.png"))
myshow(img)
"""
Explanation: Comparative Operators
<table>
<tr><td>Operators</td></tr>
<tr><td>\></td></tr>
<tr><td>\>=</td></tr>
<tr><td><</td></tr>
<tr><td><=</td></tr>
<tr><td>==</td></tr>
</table>
These comparative operators follow the same convention as the rest of SimpleITK for binary images. They have the pixel type of ``sitkUInt8`` with values of 0 and 1.
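The 0/1 binary convention mirrors what a numpy comparison gives after a cast; a sketch of thresholding an array the same way img > 90 thresholds an image:

```python
import numpy as np

arr = np.array([10, 95, 200], dtype=np.float32)
mask = (arr > 90).astype(np.uint8)  # binary "image": 1 where the condition holds, 0 elsewhere
print(mask)  # [0 1 1]
```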
End of explanation
"""
myshow(img>90)
myshow(img>150)
myshow((img>90)+(img>150))
"""
Explanation: Amazingly, these operators make common thresholding tasks really trivial
End of explanation
"""
|
TomTranter/OpenPNM | examples/percolation/Part A - Ordinary Percolation.ipynb | mit | import openpnm as op
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(10)
from ipywidgets import interact, IntSlider
%matplotlib inline
ws = op.Workspace()
ws.settings["loglevel"] = 40
"""
Explanation: Part A: Ordinary Percolation
OpenPNM contains several percolation algorithms which are central to the multiphase models employed by pore networks. The essential idea is to identify pathways for fluid flow through the network using the entry capillary pressure as a threshold for passage between connected pores. The capillary pressure can either be associated to the pores themselves known as site percolation or the connecting throats known as bond percolation or a mixture of both. OpenPNM provides several models for calculating the entry pressure for a given pore or throat and it generally depends on the size of the pore or throat and the wettability to a particular phase characterised by the contact angle. If a pathway through the network connects pores into clusters that contain both an inlet and an outlet then it is deemed to be percolating.
In this example we will demonstrate Ordinary Percolation which is the fastest and simplest algorithm to run. The number of steps involved in the algorithm is equal to the number of points that are specified in the run method. This can either be an integer, in which case the minimum and maximum capillary entry pressures in the network are used as limits and the integer value is used to create that number of intervals between the limits, or an array of specified pressured can be supplied.
The algorithm progresses incrementally from low pressure to high. At each step, clusters of connected pores are found with entry pressures below the current threshold and those that are not already invaded and connected to an inlet are set to be invaded at this pressure. Therefore the process is quasistatic and represents the steady state saturation that would be achieved if the inlet pressure were to be held at that threshold.
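The thresholding loop at the heart of the algorithm can be sketched without OpenPNM. The toy below (pure Python, with made-up entry pressures on a 1-D chain of bonds whose inlet is bond 0) only illustrates the quasistatic idea: at each pressure step, every bond connected back to the inlet with entry pressure below the threshold is invaded:

```python
import random

random.seed(1)
n_bonds = 20
entry_pc = [random.random() for _ in range(n_bonds)]  # hypothetical entry pressures

def invaded_fraction(threshold):
    # flood along the chain from the inlet while entry pressures stay below the threshold
    count = 0
    for pc in entry_pc:
        if pc <= threshold:
            count += 1
        else:
            break  # this bond blocks the chain; nothing beyond it can invade yet
    return count / n_bonds

pressures = [i / 10 for i in range(11)]
curve = [invaded_fraction(p) for p in pressures]
print(curve)  # a non-decreasing saturation-vs-pressure curve
```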
First do our imports
End of explanation
"""
N = 100
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
phys = op.physics.Standard(network=net, phase=water, geometry=geom)
"""
Explanation: Create a 2D Cubic network with standard PSD and define the phase as Water and use Standard physics which implements the washburn capillary pressure relation for throat entry pressure.
End of explanation
"""
phys.models['throat.entry_pressure']
"""
Explanation: We can check the model by looking at the model dict on the phys object
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
alg = op.algorithms.OrdinaryPercolation(network=net)
alg.setup(phase=water, pore_volume='pore.volume', throat_volume='throat.volume')
alg.set_inlets(pores=net.pores('left'))
alg.set_outlets(pores=net.pores('right'))
alg.run(points=1000)
alg.plot_intrusion_curve()
plt.show()
"""
Explanation: Now set up and run the algorithm, choosing the left and right sides of the network for inlets and outlets respectively. Because we did not set up the network with zero-volume boundary pores, a little warning is given that the starting saturation for the algorithm is not zero. However, this is fine: because the network is quite large, the starting saturation is actually quite close to zero.
End of explanation
"""
data = alg.get_intrusion_data()
mask = np.logical_and(np.asarray(data.Snwp) > 0.0 , np.asarray(data.Snwp) < 1.0)
mask = np.argwhere(mask).flatten()
pressures = np.asarray(data.Pcap)[mask]
"""
Explanation: The algorithm completes very quickly and the invading phase saturation can be plotted versus the applied boundary pressure.
End of explanation
"""
def plot_saturation(step):
arg = mask[step]
Pc = np.ceil(data.Pcap[arg])
sat = np.around(data.Snwp[arg], 3)
is_perc = alg.is_percolating(Pc)
pmask = alg['pore.invasion_pressure'] <= Pc
im = pmask.reshape([N, N])
fig, ax = plt.subplots(figsize=[5, 5])
ax.imshow(im, cmap='Blues');
title = ('Capillary Pressure: '+str(Pc)+' Saturation: '+str(sat)+
' Percolating : '+str(is_perc))
plt.title(title)
plt.show()
#NBVAL_IGNORE_OUTPUT
perc_thresh = alg.get_percolation_threshold()
thresh_step = np.argwhere(np.asarray(pressures) == perc_thresh)
interact(plot_saturation, step=IntSlider(min=0, max=len(mask)-1, step=1, value=thresh_step));
"""
Explanation: As the network is 2D and cubic we can easily plot the invading phase configuration at the different invasion steps
End of explanation
"""
|
mathLab/RBniCS | tutorials/08_nonlinear_parabolic/tutorial_nonlinear_parabolic_exact.ipynb | lgpl-3.0 | from dolfin import *
from rbnics import *
from utils import *
"""
Explanation: Tutorial 08 - Non linear Parabolic problem
Keywords: exact parametrized functions, POD-Galerkin
1. Introduction
In this tutorial, we consider the FitzHugh-Nagumo (F-N) system. The F-N system is used to describe neuron excitable systems. The nonlinear parabolic problem for the F-N system is defined on the interval $I=[0,L]$. Let $x\in I$, $t\geq0$
$$\begin{cases}
\varepsilon u_t(x,t) =\varepsilon^2u_{xx}(x,t)+g(u(x,t))-\omega(x,t)+c, & x\in I,\quad t\geq 0, \
\omega_t(x,t) =bu(x,t)-\gamma\omega(x,t)+c, & x\in I,\quad t\geq 0, \
u(x,0) = 0,\quad\omega(x,0)=0, & x\in I, \
u_x(0,t)=-i_0(t),\quad u_x(L,t)=0, & t\geq 0,
\end{cases}$$
where the nonlinear function is defined by
$$g(u) = u(u-0.1)(1-u)$$
and the parameters are given by $L = 1$, $\varepsilon = 0.015$, $b = 0.5$, $\gamma = 2$, and $c = 0.05$. The stimulus is $i_0(t)=50000t^3\exp(-15t)$. The variables $u$ and $\omega$ represent the $\textit{voltage}$ and the $\textit{recovery of voltage}$, respectively.
In order to obtain an exact solution of the problem we pursue a model reduction by means of a POD-Galerkin reduced order method.
2. Formulation for the F-N system
Let $u,\omega$ the solutions in the domain $I$.
For this problem we want to find $\boldsymbol{u}=(u,\omega)$ such that
$$
m\left(\partial_t\boldsymbol{u}(t),\boldsymbol{v}\right)+a\left(\boldsymbol{u}(t),\boldsymbol{v}\right)+c\left(u(t),v\right)=f(\boldsymbol{v})\quad \forall \boldsymbol{v}=(v,\tilde{v}), \text{ with }v,\tilde{v} \in\mathbb{V},\quad\forall t\geq0
$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = {v\in L^2(I) : v|_{{0}}=0}
$$
the bilinear form $m(\cdot, \cdot): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$m(\partial\boldsymbol{u}(t), \boldsymbol{v})=\varepsilon\int_{I}\frac{\partial u}{\partial t}v \ d\boldsymbol{x} \ + \ \int_{I}\frac{\partial\omega}{\partial t}\tilde{v} \ d\boldsymbol{x},$$
the bilinear form $a(\cdot, \cdot): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(\boldsymbol{u}(t), \boldsymbol{v})=\varepsilon^2\int_{I} \nabla u\cdot \nabla v \ d\boldsymbol{x}+\int_{I}\omega v \ d\boldsymbol{x} \ - \ b\int_{I} u\tilde{v} \ d\boldsymbol{x}+\gamma\int_{I}\omega\tilde{v} \ d\boldsymbol{x},$$
the bilinear form $c(\cdot, \cdot): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$c(u, v)=-\int_{I} g(u)v \ d\boldsymbol{x},$$
the linear form $f(\cdot): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(\boldsymbol{v})= c\int_{I}\left(v+\tilde{v}\right) \ d\boldsymbol{x} \ + \ \varepsilon^2i_0(t)\int_{{0}}v \ d\boldsymbol{s}.$$
The output of interest $s(t)$ is given by
$$s(t) = c\int_{I}\left[u(t)+\omega(t)\right] \ d\boldsymbol{x} \ + \ \varepsilon^2i_0(t)\int_{{0}}u(t) \ d\boldsymbol{s} $$.
End of explanation
"""
@ExactParametrizedFunctions()
class FitzHughNagumo(NonlinearParabolicProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
NonlinearParabolicProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.du = TrialFunction(V)
(self.du1, self.du2) = split(self.du)
self.u = self._solution
(self.u1, self.u2) = split(self.u)
self.v = TestFunction(V)
(self.v1, self.v2) = split(self.v)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
# Problem coefficients
self.epsilon = 0.015
self.b = 0.5
self.gamma = 2
self.c = 0.05
self.i0 = lambda t: 50000 * t**3 * exp(-15 * t)
self.g = lambda v: v * (v - 0.1) * (1 - v)
# Customize time stepping parameters
self._time_stepping_parameters.update({
"report": True,
"snes_solver": {
"linear_solver": "mumps",
"maximum_iterations": 20,
"report": True
}
})
# Return custom problem name
def name(self):
return "FitzHughNagumoExact"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_derivatives
def compute_theta(self, term):
if term == "m":
theta_m0 = self.epsilon
theta_m1 = 1.
return (theta_m0, theta_m1)
elif term == "a":
theta_a0 = self.epsilon**2
theta_a1 = 1.
theta_a2 = - self.b
theta_a3 = self.gamma
return (theta_a0, theta_a1, theta_a2, theta_a3)
elif term == "c":
theta_c0 = - 1.
return (theta_c0,)
elif term == "f":
t = self.t
theta_f0 = self.c
theta_f1 = self.epsilon**2 * self.i0(t)
return (theta_f0, theta_f1)
elif term == "s":
t = self.t
theta_s0 = self.c
theta_s1 = self.epsilon**2 * self.i0(t)
return (theta_s0, theta_s1)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
@assemble_operator_for_derivatives
def assemble_operator(self, term):
(v1, v2) = (self.v1, self.v2)
dx = self.dx
if term == "m":
(u1, u2) = (self.du1, self.du2)
m0 = u1 * v1 * dx
m1 = u2 * v2 * dx
return (m0, m1)
elif term == "a":
(u1, u2) = (self.du1, self.du2)
a0 = inner(grad(u1), grad(v1)) * dx
a1 = u2 * v1 * dx
a2 = u1 * v2 * dx
a3 = u2 * v2 * dx
return (a0, a1, a2, a3)
elif term == "c":
u1 = self.u1
c0 = self.g(u1) * v1 * dx
return (c0,)
elif term == "f":
ds = self.ds
f0 = v1 * dx + v2 * dx
f1 = v1 * ds(1)
return (f0, f1)
elif term == "s":
(v1, v2) = (self.v1, self.v2)
ds = self.ds
s0 = v1 * dx + v2 * dx
s1 = v1 * ds(1)
return (s0, s1)
elif term == "inner_product":
(u1, u2) = (self.du1, self.du2)
x0 = inner(grad(u1), grad(v1)) * dx + u2 * v2 * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
# Customize the resulting reduced problem
@CustomizeReducedProblemFor(NonlinearParabolicProblem)
def CustomizeReducedNonlinearParabolic(ReducedNonlinearParabolic_Base):
class ReducedNonlinearParabolic(ReducedNonlinearParabolic_Base):
def __init__(self, truth_problem, **kwargs):
ReducedNonlinearParabolic_Base.__init__(self, truth_problem, **kwargs)
self._time_stepping_parameters.update({
"report": True,
"nonlinear_solver": {
"report": True,
"line_search": "wolfe"
}
})
return ReducedNonlinearParabolic
"""
Explanation: 3. Affine Decomposition
We set the variables $u:=u_1$, $\omega:=u_2$ and the test functions $v:=v_1$, $\tilde{v}:=v_2$.
For this problem the affine decomposition is straightforward:
$$m(\boldsymbol{u},\boldsymbol{v})=\underbrace{\varepsilon}_{\Theta^{m}_0}\underbrace{\int_{I}u_1v_1 \ d\boldsymbol{x}}_{m_0(u_1,v_1)} \ + \ \underbrace{1}_{\Theta^{m}_1}\underbrace{\int_{I}u_2v_2 \ d\boldsymbol{x}}_{m_1(u_2,v_2)},$$
$$a(\boldsymbol{u},\boldsymbol{v})=\underbrace{\varepsilon^2}_{\Theta^{a}_0}\underbrace{\int_{I}\nabla u_1 \cdot \nabla v_1 \ d\boldsymbol{x}}_{a_0(u_1,v_1)} \ + \ \underbrace{1}_{\Theta^{a}_1}\underbrace{\int_{I}u_2v_1 \ d\boldsymbol{x}}_{a_1(u_2,v_1)} \ + \ \underbrace{-b}_{\Theta^{a}_2}\underbrace{\int_{I}u_1v_2 \ d\boldsymbol{x}}_{a_2(u_1,v_2)} \ + \ \underbrace{\gamma}_{\Theta^{a}_3}\underbrace{\int_{I}u_2v_2 \ d\boldsymbol{x}}_{a_3(u_2,v_2)},$$
$$c(u,v)=\underbrace{-1}_{\Theta^{c}_0}\underbrace{\int_{I}g(u_1)v_1 \ d\boldsymbol{x}}_{c_0(u_1,v_1)},$$
$$f(\boldsymbol{v}) = \underbrace{c}_{\Theta^{f}_0} \underbrace{\int_{I}(v_1 + v_2) \ d\boldsymbol{x}}_{f_0(v_1,v_2)} \ + \ \underbrace{\varepsilon^2 i_0(t)}_{\Theta^{f}_1} \underbrace{\int_{\{0\}} v_1 \ d\boldsymbol{s}}_{f_1(v_1)}.$$
We will implement the numerical discretization of the problem in the class
class FitzHughNagumo(NonlinearParabolicProblem):
by specifying the coefficients $\Theta^{m}_*$, $\Theta^{a}_*$, $\Theta^{c}_*$ and $\Theta^{f}_*$ in the method
def compute_theta(self, term):
and the bilinear forms $m_*(\boldsymbol{u}, \boldsymbol{v})$, $a_*(\boldsymbol{u}, \boldsymbol{v})$, $c_*(u, v)$ and linear forms $f_*(\boldsymbol{v})$ in
def assemble_operator(self, term):
End of explanation
"""
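As a side note (not from the original notebook), the $\Theta$ coefficients in the decomposition above are plain scalars built from the problem data, so they are cheap to tabulate. A toy evaluation with made-up values for $\varepsilon$, $b$, $\gamma$ and $c$ (not the notebook's actual settings):

```python
# Hypothetical parameter values, for illustration only (not the notebook's settings).
epsilon, b, gamma, c = 0.1, 0.5, 2.0, 0.05

theta_m = (epsilon, 1.0)                # Theta^m_0, Theta^m_1
theta_a = (epsilon**2, 1.0, -b, gamma)  # Theta^a_0 .. Theta^a_3
theta_c = (-1.0,)                       # Theta^c_0
theta_f0 = c                            # Theta^f_0; Theta^f_1 = eps^2 * i0(t) is time-dependent
print(theta_a[2])
```

The only time-dependent coefficient is $\Theta^{f}_1 = \varepsilon^2 i_0(t)$, which is why compute_theta evaluates i0 at the current time in the class above.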
mesh = Mesh("data/interval.xml")
subdomains = MeshFunction("size_t", mesh, "data/interval_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/interval_facet_region.xml")
"""
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
"""
V = VectorFunctionSpace(mesh, "Lagrange", 1, dim=2)
"""
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
"""
problem = FitzHughNagumo(V, subdomains=subdomains, boundaries=boundaries)
mu_range = []
problem.set_mu_range(mu_range)
problem.set_time_step_size(0.02)
problem.set_final_time(8)
"""
Explanation: 4.3. Allocate an object of the FitzHughNagumo class
End of explanation
"""
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(20)
"""
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
"""
reduction_method.initialize_training_set(1)
reduced_problem = reduction_method.offline()
"""
Explanation: 4.5. Perform the offline phase
End of explanation
"""
solution_over_time = problem.solve()
reduced_solution_over_time = reduced_problem.solve()
print(reduced_problem.compute_output())
basis_functions = reduced_problem.basis_functions
plot_phase_space(solution_over_time, reduced_solution_over_time, basis_functions, 0.0)
plot_phase_space(solution_over_time, reduced_solution_over_time, basis_functions, 0.1)
plot_phase_space(solution_over_time, reduced_solution_over_time, basis_functions, 0.5)
"""
Explanation: 4.6. Perform an online solve
End of explanation
"""
reduction_method.initialize_testing_set(1)
reduction_method.error_analysis()
"""
Explanation: 4.7. Perform an error analysis
End of explanation
"""
reduction_method.speedup_analysis()
"""
Explanation: 4.8. Perform a speedup analysis
End of explanation
"""
|
sameersingh/ml-discussions | week1/using_mltools_package.ipynb | apache-2.0 | from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(0)
"""
Explanation: I combined all the code lines I said should be at the beginning of your code.
End of explanation
"""
!ls
"""
Explanation: Importing mltools
First you want to make sure it sits in the same folder you run from, or somewhere your PYTHONPATH points to. By default that will be the same folder you're running things from (which is the case here).
You can see that I have that folder in my directory.
End of explanation
"""
import mltools as ml
# If this prints an error you either defined PYTHONPATH to point somewhere else or entered a different directory.
"""
Explanation: With it there, I can just import it and use it as described in the HW assignment.
End of explanation
"""
path_to_file = 'HW1-code/data/iris.txt'
iris = np.genfromtxt(path_to_file, delimiter=None) # Loading the txt file
X = iris[:, :-1] # Features are the first 4 columns
Y = iris[:, -1] # Classes are the last column
"""
Explanation: Using mltools
End of explanation
"""
X, Y = ml.shuffleData(X, Y) ## MAKE SURE YOU HAVE BOTH X AND Y!!! (Why?)
# It's still the same size, just different order
Xtr, Xva, Ytr, Yva = ml.splitData(X, Y, 0.75) # Splitting keeping 75% as training and the rest as validation
"""
Explanation: One important pair of tools that you will use ALL the time is the shuffle and split data methods. The shuffle is used to add randomness to the order of points, in case their order was an indication of something. The split allows you to create train and test data easily.
End of explanation
"""
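To see why you must pass both X and Y (the "(Why?)" above), note that one shared permutation has to be applied to features and labels together, or rows and labels fall out of alignment. A minimal NumPy sketch of such a joint shuffle (illustrative only, not mltools' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(10).reshape(5, 2)   # 5 points, 2 features; row i is [2i, 2i+1]
Y = np.array([0, 1, 2, 3, 4])     # label i belongs to row i

perm = rng.permutation(len(Y))    # ONE permutation, applied to both arrays
X_shuf, Y_shuf = X[perm], Y[perm]

# Rows and labels still line up after the shuffle.
print(all(X_shuf[i, 0] // 2 == Y_shuf[i] for i in range(5)))
```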
# Creating a classifier.
knn = ml.knn.knnClassify()
# Training the classifier.
knn.train(Xtr, Ytr, K=5) # What is this thing doing? (Look at the code)
# Making predictions
YvaHat = knn.predict(Xva)
"""
Explanation: A common mistake here is to split and then forget to use the new split data and use X, Y instead.
KNN Classifier
You can read about it on the wiki page or in your notes.
End of explanation
"""
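For intuition, the prediction rule behind a k-NN classifier like the one used here is just a majority vote among the K nearest training points. A toy NumPy sketch of that rule (illustrative only, not mltools' implementation):

```python
import numpy as np

def knn_predict(Xtr, Ytr, Xquery, K=5):
    """Toy majority-vote k-NN for illustration (not mltools' implementation)."""
    preds = []
    for x in Xquery:
        dists = np.sqrt(((Xtr - x) ** 2).sum(axis=1))  # Euclidean distance to each training point
        nearest = Ytr[np.argsort(dists)[:K]]           # labels of the K closest points
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])          # majority vote
    return np.array(preds)

Xtr_demo = np.array([[0.0], [0.1], [0.9], [1.0], [1.1]])
Ytr_demo = np.array([0, 0, 1, 1, 1])
print(knn_predict(Xtr_demo, Ytr_demo, np.array([[0.05], [0.95]]), K=3))
```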
knn = ml.knn.knnClassify()
knn.train(Xtr[:, :2], Ytr, K=5)
ml.plotClassify2D(knn, Xtr[:, :2], Ytr)
plt.show()
"""
Explanation: A VERY good practice after you make predictions is to make sure all the dimensions match. That way you at least know that you probably ran it on the right data.
Plotting the classifier and predictions
This is useful if you have 2D data (or 1D for that matter). To show how it works we'll repeat the process using only the first two columns of X.
We plot the areas of classification and the training data.
End of explanation
"""
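To make the dimension check mentioned above concrete, here is a small sketch with stand-in arrays (the shapes here are invented; with the real data you would check YvaHat against Xva):

```python
import numpy as np

# Stand-ins shaped like a validation split (the sizes here are invented).
Xva_demo = np.random.rand(37, 4)
YvaHat_demo = np.random.rand(37)

# One prediction per validation row, and the predictions should be 1-D.
assert YvaHat_demo.shape == (Xva_demo.shape[0],)
print(YvaHat_demo.shape)
```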
YvaHat = knn.predict(Xva[:, :2])
ml.plotClassify2D(knn, Xva[:, :2], YvaHat)
plt.show()
"""
Explanation: Now let's plot the test data with the predicted class. Notice that to do so I just had to change the set of points and classes that I give the plotClassify2D method.
End of explanation
"""
ml.plotClassify2D(knn, Xva[:, :2], Yva)
plt.show()
"""
Explanation: In the plot above we plotted the test data with the predicted class. That's why it looks perfectly correct. Next we'll plot the test data with the true class.
Now we can see some mistakes.
End of explanation
"""
K = [1, 2, 5, 10, 50, 100, 200]
train_err = np.ones(7) * np.random.rand(7)
val_err = np.ones(7) * np.random.rand(7)
# Creating subplots with just one subplot so basically a single figure.
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
# I added lw (line width) and the label.
ax.semilogx(K, train_err, 'r-', lw=3, label='Training')
ax.semilogx(K, val_err, 'g-', lw=3, label='Validation')
# Adding a legend to the plot that will use the labels from the 'label'.
ax.legend()
# Controlling the axis.
ax.set_xlim(0, 200)
ax.set_ylim(0, 1)
# And still doing this to clean the canvas.
plt.show()
"""
Explanation: Plotting Error
In the HW assignment you are required to plot the error for the training and validation using the semilogx method. To show you how to do that, I'll use random errors.
In my plotting I will use a more commonly used way of plotting with the axis handler. This way gives a lot more control, though I will not demonstrate that too much here. I will try to add new plotting stuff every new discussion, as producing nice plots is 80% of the job for a data scientist :)
End of explanation
"""
|
gee-community/gee_tools | notebooks/image/addConstantBand.ipynb | mit | import ee
ee.Initialize()
from geetools import tools
col = ee.ImageCollection('COPERNICUS/S2').select(['B1', 'B2', 'B3']).limit(10)
"""
Explanation: addConstantBands(value, names, *pairs)
Adds bands with a constant value
names: final names for the additional bands
value: constant value
pairs: keywords for the bands (see example)
return the function for ee.ImageCollection.map()
End of explanation
"""
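To build intuition for the two calling styles demonstrated below, here is a rough pure-Python analogy of what addConstantBands does, treating an image's bands as a plain dict (this is only an analogy, not the Earth Engine API):

```python
def add_constant_bands(bands, value=None, *names, **pairs):
    """Toy analogy: positional names share one constant; keyword pairs map name -> value."""
    out = dict(bands)
    for name in names:   # every positional name gets the same constant value
        out[name] = value
    out.update(pairs)    # keyword arguments each carry their own constant
    return out

img = {"B1": 0.12, "B2": 0.10, "B3": 0.09}
print(add_constant_bands(img, 0, "a", "b", "c", d=1, e=2))
```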
def print_center(collection):
first = ee.Image(collection.first())
p = first.geometry().centroid()
return tools.image.getValue(first, p, scale=10, side='client')
"""
Explanation: helper function to print values for the centroid of the first image of a collection
End of explanation
"""
newcol = col.map(lambda i: tools.image.addConstantBands(i, 0, "a", "b", "c"))
print_center(newcol)
"""
Explanation: Option 1 - arguments
End of explanation
"""
newcolCK = col.map(lambda i: tools.image.addConstantBands(i, a=0, b=1, c=2))
print_center(newcolCK)
"""
Explanation: Option 2 - keyword arguments
End of explanation
"""
newcolCC = col.map(lambda i: tools.image.addConstantBands(i, 0, "a", "b", "c", d=1, e=2))
print_center(newcolCC)
"""
Explanation: Option 3 - combined
End of explanation
"""
|
JonasHarnau/apc | apc/vignettes/vignette_over_dispersed_apc.ipynb | gpl-3.0 | import apc
# Turn off a FutureWarnings
import warnings
warnings.simplefilter('ignore', FutureWarning)
"""
Explanation: Over-dispersed Age-Period-Cohort Models
We replicate the data example in Harnau and Nielsen (2017) in Section 6.
The work on this vignette was supported by the European Research Council, grant AdG 694262.
First, we import the package
End of explanation
"""
model = apc.Model()
model.data_from_df(apc.loss_TA(), data_format='CL')
"""
Explanation: Next, we create a model and attach the Taylor and Ashe (1983) data to it.
End of explanation
"""
model.fit_table('od_poisson_response')
model.deviance_table
"""
Explanation: Deviance Analysis (Table 2)
We first consider a deviance analysis. We start with an over-dispersed Poisson model with an age-period-cohort predictor and look for reductions.
End of explanation
"""
model.deviance_table[model.deviance_table['P>F'] > 0.05]
"""
Explanation: First, we see that the age-period-cohort deviance is an extremely unlikely draw from a $\chi^2_{28}$ so a Poisson model is clearly rejected. Thus, we look at the $F$-tests in the column F_vs_APC and the corresponding p-values. We limit ourselves to nested models that cannot be rejected at the 5% level.
Remark:
The nesting is nicely illustrated in the following figure, taken from Nielsen (2014, Figure 5):
<img src="https://user-images.githubusercontent.com/25103918/42902938-3fc5c6bc-8a9e-11e8-94b6-7406f9a42c29.png" alt="Nested Sub-Models" width="400"/>
Nielsen (2014) also discusses the individual sub-models and provides their specific parameterizations.
End of explanation
"""
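The F statistics in the F_vs_APC column follow the usual nested-model form for over-dispersed models: the extra deviance per restricted degree of freedom, scaled by the unrestricted mean deviance. A sketch with made-up numbers (illustrative only; fit_table performs the actual computation):

```python
# Made-up deviances and degrees of freedom, for illustration only.
D_apc, df_apc = 1000.0, 28   # unrestricted age-period-cohort model
D_ac, df_ac = 1100.0, 36     # restricted age-cohort model

# Compared against an F(df_ac - df_apc, df_apc) distribution for the p-value.
F = ((D_ac - D_apc) / (df_ac - df_apc)) / (D_apc / df_apc)
print(round(F, 2))
```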
model.fit_table('od_poisson_response', reference_predictor='AC')
model.deviance_table
model.deviance_table[model.deviance_table['P>F'] > 0.05]
"""
Explanation: The models not rejected at the 5% level include the age-period (AP), age-cohort (AC), age-drift (Ad) and age (A) model. Only the AP and AC model are immediately nested in the APC model with the Ad and A model nested in both of them.
When it comes to forecasting, the age-cohort model has several advantages over the age-period model. Since it does not include a period effect, it does not require parameter extrapolation. Further, in a run-off triangle (the situation we have here), the age-cohort model replicates the chain-ladder point forecasts. Thus, we now take the age-cohort model as the primary model. We can then see what models we can reduce the age-cohort model to.
End of explanation
"""
model.fit_table('od_poisson_response', reference_predictor='AP')
model.deviance_table
model.deviance_table[model.deviance_table['P>F'] > 0.05]
"""
Explanation: Age-drift and age model are (still) the only feasible reductions.
Remark (not in paper): we can also consider the age-period model as the new primary model and see what reductions are feasible. This yields the same reductions:
End of explanation
"""
model.fit_table('od_poisson_response', reference_predictor='Ad')
model.deviance_table
model.deviance_table[model.deviance_table['P>F'] > 0.05]
"""
Explanation: Next, we take the age-drift model as the primary model.
End of explanation
"""
model.fit('od_poisson_response', 'APC')
"""
Explanation: We can still just about not reject the age model.
Taken together, these results replicate Table 2 in the paper.
Parameter Estimation and Uncertainty (Table 3, Figure 1)
We move on to look at the parameter uncertainty of both Poisson and over-dispersed Poisson models.
First, we fit an over-dispersed Poisson age-period-cohort model
End of explanation
"""
model.parameters.head()
"""
Explanation: As part of the estimation, the package attaches a parameter table to the model. This includes parameter estimates, standard errors, $t$ statistics and p-values compared to a $t$ distribution. We take a look at the first couple rows before recreating Table 3 from the paper.
End of explanation
"""
model_ac = model.clone() # this creates a model object with the data already attached
model_ac.fit('od_poisson_response', 'AC')
model_apc_pois = model.clone()
model_apc_pois.fit('poisson_response', 'APC')
model_ac_pois = model.clone()
model_ac_pois.fit('poisson_response', 'AC')
"""
Explanation: To recreate Table 3, we further need to estimate an over-dispersed Poisson age-cohort model, and a Poisson age-period-cohort and age-cohort model.
End of explanation
"""
model_apc_pois.parameters.head()
"""
Explanation: For a Poisson model, the parameter table includes $z$ scores and p-values compared to a normal rather than a $t$ distribution. We look at the first couple rows of the Poisson age-period-cohort model.
End of explanation
"""
import pandas as pd
pd.concat([
pd.concat([
model.parameters['coef'],
model_apc_pois.parameters['std_err'].rename('se N'),
model.parameters['std_err'].rename('se t')
], axis=1),
pd.concat([
model_ac.parameters['coef'],
model_ac_pois.parameters['std_err'].rename('se N'),
model_ac.parameters['std_err'].rename('se t')
], axis=1)
], axis=1, keys=['apc model', 'ac model'], sort=False)
"""
Explanation: Then we can combine the resulting parameter tables. We recall that the parameter estimates are identical for over-dispersed Poisson and Poisson model.
Remark: The standard errors do not exactly match those in the paper but give the same impression. This is due to a former bug in the software.
End of explanation
"""
model.plot_parameters(around_coef=False)
"""
Explanation: We can also plot the parameter estimates, replicating Figure 1.
End of explanation
"""
model_ac.forecast()
"""
Explanation: Besides plots for the double differences and the detrended version, the plots also include the level, for which there is no confidence band given the sampling scheme, and the trends. We point out that these trends relate to the detrended parameterization. Thus, they cannot be interpreted separately, in contrast to the detrended parameters.
Remark (not in paper): instead, we can also plot the double sums of double differences as shown in equation (3) in the paper. To do this, we merely need to add the argument plot_style='sum_sum' to plot_parameters. In this case, the trends are de-coupled and can be interpreted separately. However, the interpretation of the double sums is difficult.
Forecasting (Table 4)
Finally, we replicate the forecasting results. The package has both the $t$ and the bootstrap forecasts included.
Remark: The quantiles of the $t$ forecast do not exactly match those in the paper but give the same impression. This is due to a former bug in the software.
First, we look at the $t$ forecast. If we do not supply the argument method to get_distribution_fc, $t$ forecasts will be generated.
End of explanation
"""
model_ac.forecasts['Period'].round()
"""
Explanation: Forecasts by cell, cohort, age, period, and total are automatically generated. First, we look at the forecasts by period (calendar year).
End of explanation
"""
model_ac.forecasts['Cohort'].round()
"""
Explanation: The point-forecast corresponds to the cash-flow by calendar year. Besides, the output includes quantile forecasts, and the standard error and its components:
* se_total: $[\hat{\tau} \{D_1/(n-q)\}\{\hat{\pi}_{\mathcal{A}} + \hat{s}^2_{\mathcal{A}} + (\hat{\pi}_{\mathcal{A}})^2\}]^{1/2}$
* se_process: $[\hat{\tau} \{D_1/(n-q)\}\hat{\pi}_{\mathcal{A}}]^{1/2}$
* se_estimation_xi: $[\hat{\tau} \{D_1/(n-q)\} \hat{s}^2_{\mathcal{A}}]^{1/2}$
* se_estimation_tau: $[\hat{\tau} \{D_1/(n-q)\} (\hat{\pi}_{\mathcal{A}})^2]^{1/2}$
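Since the three components share the common factor $\hat{\tau}\{D_1/(n-q)\}$, their variances add, so se_total combines the components in quadrature. A quick numeric sketch with made-up component values:

```python
import math

# Made-up standard-error components for a single forecast cell.
se_process, se_estimation_xi, se_estimation_tau = 300.0, 200.0, 100.0

# Variances add, so the total standard error combines in quadrature.
se_total = math.sqrt(se_process**2 + se_estimation_xi**2 + se_estimation_tau**2)
print(round(se_total, 1))
```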
Similarly, we can look at forecasts by cohort
End of explanation
"""
model_ac.forecasts['Total'].round()
"""
Explanation: and for the total
End of explanation
"""
fc_bootstrap = apc.bootstrap_forecast(apc.loss_TA(), seed=1)
"""
Explanation: Next, we compute distribution forecasts based on the bootstrap by England and Verrall (1999) and England (2002). Since bootstrapping requires random sampling, the results differ somewhat from those in the paper. We note that the bootstrap does not have a solid theoretical foundation.
End of explanation
"""
fc_bootstrap['Period'].round()
"""
Explanation: Just as for the $t$ forecast, this automatically computes forecasts by cell, age, period, cohort and for the total. The output for the bootstrap forecasts contains descriptive statistics over bootstrap draws:
End of explanation
"""
fc_bootstrap['Cohort'].round()
fc_bootstrap['Total'].round()
"""
Explanation: In contrast to the $t$ forecast, the bootstrap comes with a mean forecast that differs from the chain-ladder point forecast. Also, the reported bootstrap standard deviation differs from the bootstrapped chain-ladder standard error since it is computed around the bootstrap mean, not the chain-ladder point forecast.
Just as before, we can look at forecasts aggregated by cohort and for the total.
End of explanation
"""
model_ad = model.clone()
model_ad.fit('od_poisson_response', 'Ad')
model_ad.forecast()
model_a = model.clone()
model_a.fit('od_poisson_response', 'A')
model_a.forecast()
"""
Explanation: Taken together, this replicates Table 4.
Forecasting with smaller models (not in paper)
In the deviance analysis we found that we cannot reject a reduction to an age-drift or even an age model. Since the age-cohort model replicates the chain-ladder point forecasts we have so far not considered forecasts resulting from the smaller models. However, this is easily done.
End of explanation
"""
print('Age-Cohort Model')
model_ac.forecasts['Total'].round()
print('Age-Drift Model')
model_ad.forecasts['Total'].round()
print('Age Model')
model_a.forecasts['Total'].round()
"""
Explanation: We can now compare the forecasts of the three models. We look at the forecasts for the total, but we could just as easily look at other aggregates or forecasts by cells.
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/tutorials/generative/dcgan.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
tf.__version__
# To generate GIFs
!pip install imageio
!pip install git+https://github.com/tensorflow/docs
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
from IPython import display
"""
Explanation: Deep Convolutional Generative Adversarial Network (DCGAN)
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/generative/dcgan"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/dcgan.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/dcgan.ipynb"> <img src="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/generative/dcgan.ipynb"> View source on GitHub</a></td>
<td> <img src="https://www.tensorflow.org/images/download_logo_32px.png"><a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/dcgan.ipynb">Download notebook</a> </td>
</table>
This tutorial demonstrates how to generate images of handwritten digits using a Deep Convolutional Generative Adversarial Network (DCGAN). The code is written using the Keras Sequential API with a tf.GradientTape training loop.
What are GANs?
Generative Adversarial Networks (GANs) are among the most interesting ideas in computer science today. Two models are trained simultaneously by an adversarial process. A generator ("the artist") learns to create images that look real, while a discriminator ("the art critic") learns to tell real images apart from fakes.
During training, the generator progressively becomes better at creating images that look real, while the discriminator becomes better at telling real and fake images apart. The process reaches equilibrium when the discriminator can no longer distinguish real images from fakes.
This notebook demonstrates this process on the MNIST dataset. The animation below shows a series of images produced by the generator as it was trained for 50 epochs: the images begin as random noise and increasingly resemble handwritten digits over time.
To learn more about GANs, see MIT's Intro to Deep Learning course.
Build the MNIST model
End of explanation
"""
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
"""
Explanation: Load and prepare the dataset
You will use the MNIST dataset to train the generator and the discriminator. The generator will learn to generate handwritten digits resembling the MNIST data.
End of explanation
"""
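The (x - 127.5) / 127.5 transform in the cell above maps pixel values from [0, 255] into [-1, 1], matching the tanh output range of the generator defined next. A quick check of the endpoints:

```python
def normalize(pixel):
    # Same affine transform as in the cell above.
    return (pixel - 127.5) / 127.5

print(normalize(0), normalize(255))
```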
def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 7, 7, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 14, 14, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, 28, 28, 1)
return model
"""
Explanation: Create the models
Both the generator and discriminator are defined using the Keras Sequential API.
The Generator
The generator uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from a seed (random noise). Start with a Dense layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1. Each layer uses a tf.keras.layers.LeakyReLU activation, except the output layer, which uses tanh.
End of explanation
"""
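The spatial sizes asserted in make_generator_model follow from the fact that, with 'same' padding, Conv2DTranspose multiplies the spatial size by the stride. Plain arithmetic (no TensorFlow) reproduces the 7 → 7 → 14 → 28 progression:

```python
def conv_transpose_same(size, stride):
    # With 'same' padding, Conv2DTranspose scales the spatial size by the stride.
    return size * stride

size = 7                     # after the Dense + Reshape to (7, 7, 256)
for stride in (1, 2, 2):     # strides of the three Conv2DTranspose layers above
    size = conv_transpose_same(size, stride)
print(size)
```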
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
"""
Explanation: Use the (as yet untrained) generator to create an image.
End of explanation
"""
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(1))
return model
"""
Explanation: The Discriminator
The discriminator is a CNN-based image classifier.
End of explanation
"""
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print (decision)
"""
Explanation: Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images and negative values for fake images.
End of explanation
"""
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
"""
Explanation: Define the loss and optimizers
Define loss functions and optimizers for both models.
End of explanation
"""
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
"""
Explanation: Discriminator loss
This method quantifies how well the discriminator is able to distinguish real images from fakes. It compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
End of explanation
"""
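The intuition behind discriminator_loss can be checked numerically without TensorFlow: binary cross-entropy from logits is small when real images get large positive logits and fakes get large negative ones, and large when the signs flip. A minimal sketch with hypothetical logit values:

```python
import math

def bce_from_logits(label, logit):
    """Binary cross-entropy for one example, computed from a raw logit."""
    p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# Confident discriminator: positive logit on real, negative on fake.
good = bce_from_logits(1, 5.0) + bce_from_logits(0, -5.0)
# Fooled discriminator: signs flipped.
bad = bce_from_logits(1, -5.0) + bce_from_logits(0, 5.0)
print(good < bad)
```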
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
"""
Explanation: Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, compare the discriminator's decisions on the generated images to an array of 1s.
End of explanation
"""
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
"""
Explanation: The discriminator and generator optimizers are different since you will train the two networks separately.
End of explanation
"""
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
"""
Explanation: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long-running training task is interrupted.
End of explanation
"""
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# You will reuse this seed over time (so it's easier
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
"""
Explanation: Define the training loop
End of explanation
"""
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as you go
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
seed)
# Save the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
seed)
"""
Explanation: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and the discriminator.
End of explanation
"""
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
"""
Explanation: Generate and save images
End of explanation
"""
train(train_dataset, EPOCHS)
"""
Explanation: Train the model
Call the train() method defined above to train the generator and discriminator simultaneously. Note that training GANs can be tricky: it is important that the generator and discriminator do not overpower each other (for example, that they train at a similar rate).
At the beginning of training, the generated images look like random noise. As training progresses, the generated digits will look increasingly real. After about 50 epochs, they resemble MNIST digits. With the default settings on Colab, this takes about one minute per epoch.
End of explanation
"""
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
"""
Explanation: Restore the latest checkpoint.
End of explanation
"""
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
"""
Explanation: Create a GIF
End of explanation
"""
anim_file = 'dcgan.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import tensorflow_docs.vis.embed as embed
embed.embed_file(anim_file)
"""
Explanation: Use imageio to create an animated GIF using the images saved during training.
End of explanation
"""
|