G. Write a function, not_in_between, which takes three parameters:

- a NumPy array
- a lower threshold, a floating point value
- an upper threshold, a floating point value

You should use a boolean mask to return only the values in the NumPy array that are NOT in between the two specified threshold values, lower and upper. No loops are allowed. For example, not_in_between([1, 2, 3, 4], 1, 3) should return a NumPy array of [4]. Hint: you can use your functions from Parts D and E to help!
import numpy as np

np.random.seed(475185)
x = np.random.random((10, 20, 30))
lo = 0.001
hi = 0.999
y = np.array([9.52511605e-04, 8.62993716e-04, 3.70243252e-04, 9.99945849e-01,
              7.21751759e-04, 9.36931041e-04, 5.10792605e-04, 6.44911672e-04])
np.testing.assert_allclose(y, not_in_between(x, lo, hi))

np.random.seed(51954)
x = np.random.random((30, 40, 50))
lo = 0.00001
hi = 0.99999
y = np.array([8.46159001e-06, 9.99998669e-01, 9.99993873e-01, 5.58488698e-06,
              9.99993348e-01])
np.testing.assert_allclose(y, not_in_between(x, lo, hi))
assignments/A6/A6_Q2.ipynb
eds-uga/csci1360-fa16
mit
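One possible solution, sketched here without relying on the (not shown) Parts D and E helpers. It assumes strict inequalities, so that values equal to a threshold count as "in between", which is consistent with the [1, 2, 3, 4] example:

```python
import numpy as np

def not_in_between(a, lower, upper):
    """Return the values of `a` strictly outside [lower, upper].

    Uses a boolean mask (no loops); masking an n-dimensional array
    returns the selected elements as a flat 1D array.
    """
    a = np.asarray(a)
    mask = (a < lower) | (a > upper)  # True where NOT in between
    return a[mask]

print(not_in_between(np.array([1, 2, 3, 4]), 1, 3))  # → [4]
```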
H. Write a function, reverse_array, which takes one parameter:

- a 1D NumPy array of data

This function uses fancy indexing to reverse the ordering of the elements in the input array, and returns the reversed array. You cannot use the [::-1] notation, the built-in reversed function, any other Python function, or loops. You may, however, use some or all of list(), range(), and np.arange() (but again, no loops!). You must construct a list of indices and use NumPy fancy indexing to reverse the ordering of the elements in the input array, then return the reversed array.
import numpy as np

np.random.seed(5748)
x1 = np.random.random(75)
y1 = x1[::-1]  # Sorry, you're not allowed to do this!
np.testing.assert_allclose(y1, reverse_array(x1))

x2 = np.random.random(581)
y2 = x2[::-1]  # Sorry, you're not allowed to do this!
np.testing.assert_allclose(y2, reverse_array(x2))
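A minimal sketch of a compliant solution, built only from np.arange and fancy indexing:

```python
import numpy as np

def reverse_array(a):
    """Reverse a 1D array using fancy indexing only (no [::-1], no reversed())."""
    idx = np.arange(a.shape[0] - 1, -1, -1)  # indices [n-1, n-2, ..., 0]
    return a[idx]

print(reverse_array(np.array([1.0, 2.0, 3.0, 4.0])))  # → [4. 3. 2. 1.]
```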
US income mobility example

Similar to the Markov Based Methods notebook, we will demonstrate the usage of the mobility methods with an application to per capita incomes observed annually from 1929 to 2009 for the lower 48 US states.
import libpysal
import numpy as np
import mapclassify as mc
from giddy import markov  # the Markov classes live in the giddy package

income_path = libpysal.examples.get_path("usjoin.csv")
f = libpysal.io.open(income_path)
pci = np.array([f.by_col[str(y)] for y in range(1929, 2010)])
# each column is a state's income time series, 1929-2009
q5 = np.array([mc.Quantiles(y).yb for y in pci]).transpose()
# each row is a state's income quintile time series, 1929-2009
m = markov.Markov(q5)
m.p
notebooks/MobilityMeasures.ipynb
sjsrey/giddy
bsd-3-clause
After acquiring the estimate of the transition probability matrix, we can call the method markov_mobility to estimate any of the five Markov-based summary mobility indices.

1. Shorrocks' mobility measure
\begin{equation} M_{P} = \frac{m-\sum_{i=1}^m P_{ii}}{m-1} \end{equation}
mobility.markov_mobility(m.p, measure="P")
2. Shorrocks' second mobility measure
\begin{equation} M_{D} = 1 - |\det(P)| \end{equation}
mobility.markov_mobility(m.p, measure="D")
3. Sommers and Conlisk's mobility measure
\begin{equation} M_{L2} = 1 - |\lambda_2| \end{equation}
mobility.markov_mobility(m.p, measure = "L2")
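To see what $M_{L2}$ captures, here is a small self-contained sketch (illustrative, not the giddy implementation) computing $1-|\lambda_2|$ directly for a toy 2-state transition matrix whose eigenvalues are 1 and 0.8:

```python
import numpy as np

# Toy transition matrix: slow mixing between the two states.
# Eigenvalues are 1 and 0.8, so M_L2 = 1 - 0.8 = 0.2 (low mobility).
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])

lam = np.linalg.eigvals(P)
lam_sorted = np.sort(np.abs(lam))[::-1]  # eigenvalue moduli, descending
m_l2 = 1 - lam_sorted[1]                 # 1 - |second-largest eigenvalue|
print(m_l2)  # → 0.2
```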
4. Bartholomew's first mobility measure
\begin{equation} M_{B1} = \frac{m-m \sum_{i=1}^m \pi_i P_{ii}}{m-1} \end{equation}
$\pi$: the initial income distribution
pi = np.array([0.1,0.2,0.2,0.4,0.1]) mobility.markov_mobility(m.p, measure = "B1", ini=pi)
5. Bartholomew's second mobility measure
\begin{equation} M_{B2} = \frac{1}{m-1} \sum_{i=1}^m \sum_{j=1}^m \pi_i P_{ij} |i-j| \end{equation}
$\pi$: the initial income distribution
pi = np.array([0.1,0.2,0.2,0.4,0.1]) mobility.markov_mobility(m.p, measure = "B2", ini=pi)
We can query results from the database by a variety of values, but for this example we will query a known result from the database.
record = client.query_procedures(id=1683293)[0] record
docs/qcportal/source/record-optimization-example.ipynb
psi4/DatenQM
bsd-3-clause
There are a variety of helper functions on this object to find quantities related to the computation.
record.get_final_molecule()
record.show_history()
We can also observe the program, method, and basis under which the optimization was executed.
record.qc_spec.dict()
We can also find all keywords passed into the geometry optimization. Here we see that this geometry optimization was evaluated under a dihedral constraint.
record.keywords
Finally, every Result generated in the computational trajectory can be queried and observed. Here we will obtain the very last computed Result.
record.get_trajectory()[-1]
To run an optimization on this methane molecule we need to specify the full input as shown below. It should be noted that this function is organized so that optimizing many molecules at the same level of theory is most efficient.
options = {
    "keywords": {"coordsys": "tric"},  # Geometry optimization program options
    "qc_spec": {                       # Quantum chemistry specifications
        "driver": "gradient",
        "method": "HF",
        "basis": "sto-3g",
        "keywords": None,
        "program": "psi4",
    },
}
compute = client.add_procedure("optimization", "geometric", options, [methane])
compute
The ids of the submitted optimization can then be queried and examined. Note that the computation is not instantaneous; you may need to wait a moment and re-query even for this small molecule.
result = client.query_procedures(id=compute.ids)[0]
result

ch_bond_original = result.get_initial_molecule().measure([0, 1])
ch_bond_optimized = result.get_final_molecule().measure([0, 1])
print(f"Original/Optimized C-H bond {ch_bond_original}/{ch_bond_optimized} (bohr)")
result.show_history()
1) Open your dataset up using pandas in a Jupyter notebook
import pandas as pd

df = pd.read_csv('Mother Jones US Mass Shootings 1982-2016 - US mass shootings.csv')
08/Homework_8_Emelike.ipynb
mercye/foundations-homework
mit
2) Do a .head() to get a feel for your data
df.head()
df.columns
3) Write down 12 questions to ask your data, or 12 things to hunt for in the data

- Are more weapons obtained legally or illegally?
- Where are most weapons obtained?
- Which state has the most mass shootings?
- Which mass shooting had the most wounded? / How many were wounded?
- Which mass shootings had the most victims?
- What mass shooting was most fatal?
- What kind of weapons were used in the most fatal shootings? Are Glocks used more often than not?
- What kind of weapons were used in the shootings that had the most victims (wounded and fatal)?
- How many shooters showed prior signs of mental illness?
- In which kind of venue do most mass shootings occur?
- What kind of weapons are most common?
- What is the gender of most people who carry out mass shootings?

4) Attempt to answer those twelve questions using the magic of pandas:

1) Are more weapons obtained legally or illegally?
df['Weapons obtained legally'] = df['Weapons obtained legally'].str.replace('\nYes', 'Yes')
df['Weapons obtained legally'] = df['Weapons obtained legally'].str.replace(r'Yes\s.+', 'Yes')
df['Weapons obtained legally'] = df['Weapons obtained legally'].str.replace('Yes ', 'Yes')
df['Weapons obtained legally'] = df['Weapons obtained legally'].str.strip()
ax = df['Weapons obtained legally'].value_counts().plot(kind='bar', title='Was the weapon obtained legally?')
2) Where are most weapons obtained?
col = df['Where obtained']
df['Gun Show'] = col.str.contains('[Ss]how', na=False)
df['Online'] = col.str.contains('[Oo]nline|[Ii]nternet', na=False)
df['Family/Friends'] = col.str.contains('[Gg]randfather|[Mm]other|[Ff]ather|[Ff]riend|[Ii]ndividual', na=False)
df['Store/Retailer'] = col.str.contains(
    "Trading|[Ss]ports|Big|[Ss]portsman|[Ff]irearms|Gander|Galore|[Dd]ealer|[Ss]upply|Fin|"
    "[Ss]tore|[Cc]enter|[Pp]awn|[Rr]etailer|[Ff]lea|[Ss]uppliers|[Rr]ange|Frank's|[Ss]upplies|"
    "[Ss]ales|[Ww]arehouse|Bullseye|Outdoorsman", na=False)
df['Stolen'] = col.str.contains('[Ss]tolen|[Bb]urglary', na=False)
df['Unknown'] = col.str.contains('Unknown|Unclear', na=False) | col.isnull()
df['Issued'] = col.str.contains('[Ii]ssued', na=False)
df['Other'] = col.str.contains('[Tt]hird party|[Aa]ssembled', na=False)

df
df['Gun Show'].value_counts()
df['Online'].value_counts()
df['Family/Friends'].value_counts()
df['Store/Retailer'].value_counts()
df['Stolen'].value_counts()
df['Issued'].value_counts()
df['Unknown'].value_counts()
3) Which state has the most mass shootings?
# create empty list
states = []
# split the string values in Location so that "city, state" becomes a list;
# iterate over the lists, appending only the state
for item in df['Location'].str.split(','):
    states.append(item[1])
# create a series from the list
df['State'] = pd.Series(states)
# value counts on the series
df['State'].value_counts().head(5)
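The same extraction can also be done without an explicit loop, using pandas' vectorized string accessor. A sketch on toy data (the .str.strip() also drops the leading space left after the comma):

```python
import pandas as pd

# Toy frame standing in for the real dataset
df = pd.DataFrame({'Location': ['Orlando, Florida', 'Newtown, Connecticut']})

# Split "city, state", keep element 1 (the state), strip whitespace
df['State'] = df['Location'].str.split(',').str[1].str.strip()
print(df['State'].tolist())  # → ['Florida', 'Connecticut']
```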
4) Which mass shooting had the most wounded? / How many were wounded?
df['Wounded'].idxmax()  # returns the index of the maximum value in the series
df['Case'].iloc[23]
df['Wounded'].iloc[23]
5) Which mass shootings had the most victims?
df['Total victims'].idxmax()
df['Case'].iloc[23]
6) What mass shooting was most fatal?
df['Fatalities'].idxmax()  # returns the index of the maximum value in the series
df['Case'].iloc[1]
7) What kind of weapons were used in the most fatal shootings? / Do shooters use a glock more often than not?
df['Weapon details'].iloc[1]  # case = Orlando nightclub

df['Glock'] = df['Weapon details'].str.contains('[Gg]lock')  # matches "Glock" and "Glocks"
df['Glock'].value_counts().plot(kind='bar', title='Shooter used a Glock?')
8) What kind of weapons were used in the shootings that had the most victims (wounded and fatal)?
df['Weapon details'].iloc[23]
9) How many shooters showed prior signs of mental illness?
df['Prior signs of possible mental illness'].value_counts()

col = 'Prior signs of possible mental illness'
df[col] = df[col].str.replace('Unclear', 'Unknown')
df[col] = df[col].str.replace('unknown', 'Unknown')
df[col] = df[col].str.replace('(pending)', 'Unknown')
df[col] = df[col].str.replace('(Unknown) ', 'Unknown')
df[col] = df[col].str.replace('(Unknown)', 'Unknown')
df[col] = df[col].str.strip()
df[col].value_counts().plot(kind='bar', title='Prior Signs of Mental Illness?')
10) In which kind of venue do most mass shootings occur?
df['Venue'] = df['Venue'].str.replace('Other\n', 'Other')
df['Venue'] = df['Venue'].str.replace('\nWorkplace', 'Workplace')
df['Venue'].value_counts()
Example: 2HDM w/ U(2) symmetry LO partial wave matrices
import numpy as np

def My1s1(l1, l3):
    return -l1*np.identity(3)/(16*np.pi)

def My1s0(l1, l3):
    return -(l1 + 2*l3)*np.identity(1)/(16*np.pi)

def My0s1(l1, l3):
    return -np.array([[l1, l1-l3, 0, 0],
                      [l1-l3, l1, 0, 0],
                      [0, 0, l3, 0],
                      [0, 0, 0, l3]])/(16*np.pi)

def My0s0(l1, l3):
    return -np.array([[3*l1, l1+l3, 0, 0],
                      [l1+l3, 3*l1, 0, 0],
                      [0, 0, 2*l1-l3, 0],
                      [0, 0, 0, 2*l1-l3]])/(16*np.pi)
NLOUnitarityBounds.ipynb
christopher-w-murphy/NLOUnitarityBounds
mit
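The LO unitarity bound requires every partial wave eigenvalue to satisfy $|a_0^{(0)}| \le 1/2$. A small numerical sketch (re-defining My0s0 locally so the snippet is self-contained; the coupling values are arbitrary illustrations):

```python
import numpy as np

def My0s0(l1, l3):
    # y=0, singlet partial wave matrix at LO, as defined in the cell above
    return -np.array([[3*l1, l1+l3, 0, 0],
                      [l1+l3, 3*l1, 0, 0],
                      [0, 0, 2*l1-l3, 0],
                      [0, 0, 0, 2*l1-l3]])/(16*np.pi)

l1, l3 = 1.0, 1.0
eigs = np.linalg.eigvalsh(My0s0(l1, l3))  # the matrix is real symmetric
a0_max = np.max(np.abs(eigs))             # largest |a0| over the eigenvalues
# The 2x2 block has eigenvalues -(4*l1+l3)/(16*pi) and -(2*l1-l3)/(16*pi),
# so here the largest modulus is 5/(16*pi) ≈ 0.0995 — well inside the bound.
print(a0_max, a0_max <= 0.5)
```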
Beta Functions
def betafunctions(l1, l3):
    # returns the beta functions for the quartic couplings l1 and l3
    return np.array([14*l1**2 + 2*l3**2,
                     6*l1**2 + 4*l1*l3 + 6*l3**2])/(16*np.pi**2)
Beta function contributions to the partial wave matrices (w/ factor of -3/2 included)
def bMy1s1(l1, l3):
    return -(3/2)*My1s1(*betafunctions(l1, l3))

def bMy1s0(l1, l3):
    return -(3/2)*My1s0(*betafunctions(l1, l3))

def bMy0s1(l1, l3):
    return -(3/2)*My0s1(*betafunctions(l1, l3))

def bMy0s0(l1, l3):
    return -(3/2)*My0s0(*betafunctions(l1, l3))
Plot comparing against FIG. 1 (a) of arXiv:1502.08511, which was made entirely in Mathematica.
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

matplotlib.rc('text', usetex=True)
matplotlib.rc('font', family='serif')
matplotlib.rcParams['xtick.direction'] = 'in'
matplotlib.rcParams['ytick.direction'] = 'in'

delta = 0.3
x = np.arange(-15.0, 15.3, delta)
y = np.arange(-15.0, 15.3, delta)
X, Y = np.meshgrid(x, y)
Z1 = (-(4*X + Y)/(16*np.pi))**2
Z2 = ((X - 2*Y)/(16*np.pi))**2
Z3 = (-(2*X - Y)/(16*np.pi))**2

plt.figure(figsize=(6, 6))
plt.contourf(X, Y, Z3, [0.25, 100], colors='green', alpha=0.3)
plt.contourf(X, Y, Z2, [0.25, 100], colors='orange', alpha=0.3)
plt.contourf(X, Y, Z1, [0.25, 100], colors='blue', alpha=0.3)
plt.contour(X, Y, Z3, [0.25], colors='green')
plt.contour(X, Y, Z2, [0.25], colors='orange')
plt.contour(X, Y, Z1, [0.25], colors='blue')
plt.title(r"FIG. 1(a): $\displaystyle\frac{1}{4} \geq \left(a_0^{(0)}\right)^2$", fontsize=14)
plt.xlabel(r"$\displaystyle\lambda_1(s)$", fontsize=16)
plt.ylabel(r"$\displaystyle\lambda_3(s)$", fontsize=16)
Preamble
import os, time as tm, warnings
warnings.filterwarnings("ignore")

# from IPython.core.display import HTML
from IPython.display import display, HTML
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

np.random.seed(569034853)
## This is the correct way to use the random number generator,
##  since it allows finer control.
rand = np.random.RandomState(np.random.randint(0x7FFFFFFF))
year_14_15/spring_2015/machine_learning/MNIST-assignment.ipynb
ivannz/study_notes
mit
EM and MNIST

The $\TeX$ markup used here relies on the "align*" environment and thus may not render through nbviewer. Before proceeding, it seems pedagogically necessary (at least for myself) to revise the EM algorithm and show its "correctness", so to say.

A brief description of the EM algorithm

The EM algorithm seeks to maximize the likelihood by means of successive application of two steps: the E-step and the M-step. For any probability measure $Q$ on the space of latent variables $Z$ with density $q$ the following holds:
\begin{align}
\log p(X|\Theta) &= \int q(Z) \log p(X|\Theta)\, dZ = \mathbb{E}_{Z\sim q} \log p(X|\Theta) \\
&= \mathbb{E}_{Z\sim q} \log \frac{p(X,Z|\Theta)}{p(Z|X,\Theta)} = \mathbb{E}_{Z\sim q} \log \frac{q(Z)}{p(Z|X,\Theta)} + \mathbb{E}_{Z\sim q} \log \frac{p(X,Z|\Theta)}{q(Z)} \\
&= KL\bigl(q\|p(\cdot|X,\Theta)\bigr) + \mathcal{L}\bigl(q, \Theta\bigr)\,,
\end{align}
since the Bayes theorem posits that $p(X,Z|\Theta) = p(Z|X,\Theta)\, p(X|\Theta)$. Call this the "master equation". Now note that since the Kullback-Leibler divergence is always non-negative, one has the following inequality: $$\log p(X|\Theta) \geq \mathcal{L}\bigl(q, \Theta\bigr)\,.$$ Let's try to make the lower bound as large as possible by changing $\Theta$ and varying $q$. But first note that the left-hand side of the master equation is independent of $q$, whence maximization of $\mathcal{L}$ with respect to $q$ (with $\Theta$ fixed) is equivalent to minimization of $KL\bigl(q\|p(\cdot|X,\Theta)\bigr)$ with respect to $q$ with $\Theta$ fixed. Since $q$ is arbitrary, the optimal minimizer $q^*_\Theta$ is $q^*(Z|\Theta) = p(Z|X,\Theta)$ for all $Z$.
Now at the optimal distribution $q^*_\Theta$ the master equation becomes $$ \log p(X|\Theta) = \mathcal{L}\bigl(q^*_\Theta, \Theta\bigr) = \mathbb{E}_{Z\sim q^*_\Theta} \log \frac{p(X,Z|\Theta)}{q^*(Z|\Theta)} = \mathbb{E}_{Z\sim q^*_\Theta} \log p(X,Z|\Theta) - \mathbb{E}_{Z\sim q^*_\Theta} \log q^*(Z|\Theta)\,, $$ for any $\Theta$. Thus the problem of log-likelihood maximization reduces to that of maximizing the sum of expectations on the right-hand side. This new problem does not seem tractable in general, since the optimization parameters $\Theta$ affect both the expected log-likelihood $\log p(X,Z|\Theta)$ under $Z\sim q^*_\Theta$ and the entropy of the optimal distribution of the latent variables $Z$. Hopefully an iterative procedure which switches between the computation of $q^*_\Theta$ and the maximization over $\Theta$ might be effective. Consider the following:
* E-step: considering $\Theta_i$ as given and fixed, find $q^*_{\Theta_i} = \mathop{\text{argmin}}_q\, KL\bigl(q\|p(\cdot|X,\Theta_i)\bigr)$ and set $q_{i+1} = q^*_{\Theta_i}$;
* M-step: considering $q_{i+1}$ as given, solve $\mathcal{L}(q_{i+1},\Theta) \to \mathop{\text{max}}_\Theta$, where $$ \mathcal{L}(q,\Theta) = \mathbb{E}_{Z\sim q} \log p(X,Z|\Theta) - \mathbb{E}_{Z\sim q} \log q(Z)\,.$$

The fact that $q_{i+1}$ is considered fixed makes the optimization of $\mathcal{L}(q_{i+1},\Theta)$ equivalent to maximization of the expected log-likelihood, since the entropy term is fixed. Therefore the M-step becomes:
* given $q_{i+1}$, find $\Theta^*_{i+1} = \mathop{\text{argmax}}_\Theta\, \mathbb{E}_{Z\sim q_{i+1}} \log p(X,Z|\Theta)$ and put $\Theta_{i+1} = \Theta^*_{i+1}$.
Now, if the latent variables are mutually independent, then the optimal $q$ must factorize into marginal densities and:
\begin{align}
KL\bigl(q\|p(\cdot|X,\Theta)\bigr) &= \mathbb{E}_{Z\sim q} \log q(Z) - \sum_j \mathbb{E}_{z_j\sim q_j} \log p(z_j|X,\Theta)\\
&= \sum_j \mathbb{E}_{z_j\sim q_j} \log q_j(z_j) - \sum_j \mathbb{E}_{z_j\sim q_j} \log p(z_j|X,\Theta) = \sum_j KL\bigl(q_j\|p_j(\cdot|X,\Theta)\bigr)\,,
\end{align}
where $q_j$ is the marginal density of $z_j$ in $q(Z)$ (the last term in the first line comes from the Fubini theorem). Therefore the E-step can be reduced to a set of minimization problems with respect to one-dimensional density functions: $$ q_j^* = \mathop{\text{argmin}}_{q_j}\, KL\bigl(q_j\|p_j(\cdot|X,\Theta)\bigr)\,, $$ since the Kullback-Leibler divergence in this case is additively separable.

Correctness

Recall that the master equation is an identity: for all densities $q$ on $Z$ and for all admissible parameters $\Theta$ $$ \log p(X|\Theta) = KL\bigl(q\|p(\cdot|X,\Theta)\bigr) + \mathcal{L}\bigl(q, \Theta\bigr)\,.$$ Hence if after the E-step the Kullback-Leibler divergence is reduced: $$ KL\bigl(q'\|p(\cdot|X,\Theta)\bigr) \leq KL\bigl(q\|p(\cdot|X,\Theta)\bigr)\,,$$ then for the same set of parameters $\Theta$ one has $$ \mathcal{L}(q,\Theta) \leq \mathcal{L}(q',\Theta)\,.$$ Just after the E-step one has $q_{i+1} = p(Z|X,\Theta_i)$, whence $KL\bigl(q_{i+1}\|p(\cdot|X,\Theta_i)\bigr) = 0$.
In turn, this implies via the master equation that the following equality holds: $$ \log p(X|\Theta_i) = \mathcal{L}(q_{i+1},\Theta_i)\,.$$ After the M-step, since $\Theta_{i+1}$ is a maximizer, or at least an "improver", of $\mathcal{L}(q_{i+1},\Theta)$ compared to its value at $(q_{i+1},\Theta_i)$, one has $$ \mathcal{L}(q_{i+1},\Theta_i) \leq \mathcal{L}(q_{i+1},\Theta_{i+1})\,.$$ Therefore the effect of a single complete round of EM on the log-likelihood itself is: $$ \log p(X|\Theta_i) = \mathcal{L}(q_{i+1},\Theta_i) \leq \mathcal{L}(q_{i+1},\Theta_{i+1}) \leq \mathcal{L}(q_{i+2},\Theta_{i+1}) = \log p(X|\Theta_{i+1})\,,$$ where the equality is achieved between the E and the M step within one round. This implies that EM indeed iteratively improves the log-likelihood. Note that in the general case, without attaining zero Kullback-Leibler divergence at the E-step, one cannot be sure that the true log-likelihood is improved by each iteration; one can only say that $$ \mathcal{L}(q_{i+1},\Theta_i) \leq \log p(X|\Theta_i)\,,$$ which does not uncover a relationship with $\log p(X|\Theta_{i+1})$. And without the guarantee that EM improves the log-likelihood to the maximum, one cannot be sure about the consistency of the estimators. The key question is whether the lower bound $\mathcal{L}(q,\Theta)$ is any good.

Application of EM to MNIST data

Each image is a random element in a discrete probability space $\Omega = \{0,1\}^{N\times M}$ with product measure $$ \mathbb{P}(\omega) = \prod_{i=1}^N\prod_{j=1}^M \theta_{ij}^{\omega_{ij}} (1-\theta_{ij})^{1-\omega_{ij}}\,,$$ for any $\omega\in \Omega$; in particular $M=N=28$. Basically each bit of the image is independent of any other bit, and each one is a Bernoulli random variable with parameter $\theta_{ij}$: $\omega_{ij}\sim \text{Bern}(\theta_{ij})$. Let's apply the EM algorithm to this dataset. The proposed model is the following: consider a mixture of such discrete probability spaces.
Suppose there are $K$ components in the mixture. Then each image is distributed according to the following law: $$p(\omega|\Theta) = \sum_{k=1}^K \pi_k p_k(\omega|\theta_k) = \sum_{k=1}^K \pi_k \prod_{i=1}^N \prod_{j=1}^M \theta_{kij}^{\omega_{ij}} (1-\theta_{kij})^{1-\omega_{ij}}\,,$$ where $\theta_{kij}$ is the parameter of the probability distribution of the $(i,j)$-th random variable (pixel) in the $k$-th class, and $\pi_k$ is the (prior) probability of the $k$-th mixture component to generate a random element, $\sum_{k=1}^K \pi_k = 1$. Suppose $X=(x_s)_{s=1}^n \in \Omega^n$ is the dataset. The log-likelihood is given by $$ \log p(X|\Theta) = \sum_{s=1}^n \log \sum_{k=1}^K \pi_k \prod_{i=1}^N \prod_{j=1}^M \theta_{kij}^{x_{sij}} (1-\theta_{kij})^{1-x_{sij}}\,,$$ where $x_{sij}\in\{0,1\}$ is the value of the $(i,j)$-th pixel in the $s$-th observation. If the source components $Z=(z_s)_{s=1}^n$ of the mixture at each datapoint were known, then the log-likelihood would have been $$ \log p(X,Z|\Theta) = \sum_{s=1}^n \log \prod_{k=1}^K \Bigl[ \pi_k \prod_{i=1}^N \prod_{j=1}^M \theta_{kij}^{x_{sij}} (1-\theta_{kij})^{1-x_{sij}} \Bigr]^{1_{z_s = k}}\,,$$ where $1_{z_s = k}$ is the indicator, taking the value $1$ if $z_s = k$ and $0$ otherwise ($1_{\{k\}}(z_s)$ is another notation).
The log-likelihood simplifies to $$ \log p(X,Z|\Theta) = \sum_{s=1}^n \sum_{k=1}^K 1_{z_s = k} \Bigl( \log \pi_k + \sum_{i=1}^N \sum_{j=1}^M \bigl( x_{sij} \log \theta_{kij} + (1-x_{sij}) \log (1-\theta_{kij}) \bigr) \Bigr)\,,$$ and further into a more separable form $$ \log p(X,Z|\Theta) = \sum_{s=1}^n \sum_{k=1}^K 1_{z_s = k} \log \pi_k + \sum_{s=1}^n \sum_{k=1}^K 1_{z_s = k} \Bigl( \sum_{i=1}^N \sum_{j=1}^M x_{sij} \log \theta_{kij} + \sum_{i=1}^N \sum_{j=1}^M (1-x_{sij}) \log (1-\theta_{kij}) \Bigr)\,.$$ The expected log-likelihood under $z_s\sim q_s$ with $\mathbb{P}(z_s=k|X) = q_{sk}$ is given by $$ \mathbb{E}\log p(X,Z|\Theta) = \sum_{s=1}^n \sum_{k=1}^K q_{sk} \log \pi_k + \sum_{s=1}^n \sum_{k=1}^K q_{sk} \sum_{i=1}^N \sum_{j=1}^M \bigl( x_{sij} \log \theta_{kij} + (1-x_{sij}) \log (1-\theta_{kij}) \bigr)\,.$$

Analytic solution: E-step

At the E-step one must compute $q^*(Z)$ with $\mathbb{P}(z_s=k|X) = \hat{q}_{sk}$ based on the value of $\Theta = \bigl((\pi_k), (\theta_{kij})\bigr)$: $$\hat{q}_{sk} = \frac{p(x_s|z_s=k,\Theta)\, p(z_s=k)}{\sum_{l=1}^K p(x_s|z_s=l,\Theta)\, p(z_s=l)} \propto \pi_k \prod_{i=1}^N \prod_{j=1}^M \theta_{kij}^{x_{sij}} (1-\theta_{kij})^{1-x_{sij}}\,,$$ and $$ q^*(Z) = \prod_{s=1}^n q_{s z_s}\,.$$ Note that the denominator is actually the likelihood of the data. In order to improve numerical stability and avoid underflow it is better to use the following procedure for the computation of the conditional probability: $$ l_{sk} = \sum_{i=1}^N \sum_{j=1}^M \log \bigl( \theta_{kij}^{x_{sij}} (1-\theta_{kij})^{1-x_{sij}} \bigr)\,,$$ set $l^*_s = \max_k l_{sk}$ and compute the log-sum $$ \hat{l}_s = \log \sum_{k=1}^K \exp\bigl\{ ( l_{sk} - l^*_s ) + \log \pi_k \bigr\}\,,$$ and then compute the conditional distribution: $$ \hat{q}_{sk} = \exp\bigl\{ l_{sk} + \log \pi_k - ( \hat{l}_s + l^*_s ) \bigr\}\,.$$ This seemingly redundant subtraction and addition of $l^*_s$ helps avoid underflow during the numerical exponentiation.
After this sanitization the E-step's optimal distribution is numerically accurate. If $l^*_s \gg l_{sk}$ for all $k$ except the one with $l_{sk} = l^*_s$ (let it be $k^*$), then an underflow occurs at the sum-exp step, whence for some very small $\epsilon > 0$ one has $$ \hat{l}_s = l^*_s + \log (1+\epsilon)\,,$$ whence $$ \hat{q}_{sk} = \exp\bigl\{l_{sk} - \hat{l}_s\bigr\} = (1+\epsilon)^{-1} \cdot \exp\bigl\{l_{sk} - l^*_s \bigr\}\,.$$ For $k=k^*$ one has $\hat{q}_{sk} = \frac{1}{1+\epsilon}\approx 1$, and for $k\neq k^*$ one has $\hat{q}_{sk} = \frac{\eta}{1+\epsilon} \approx 0$ for some extremely small $\eta>0$. The variables in the code have the following dimensions:
* $\theta \in [0,1]^{K\times (N\times M)}$;
* $\pi \in [0,1]^{1\times K}$;
* $x \in \{0,1\}^{n\times (N\times M)}$;
* $z \in [0,1]^{n\times K}$.

Wrappers required for the assignment.
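The effect of the $l^*_s$ shift can be seen on a toy example (a sketch, separate from the assignment code): the naive log-sum-exp underflows to $-\infty$, while the shifted version is exact.

```python
import numpy as np

# Unnormalized per-component log-likelihoods for one observation:
# all extremely negative, so exp() underflows to 0 without the shift.
l = np.array([-1000.0, -1001.0])

naive = np.log(np.sum(np.exp(l)))  # exp underflows -> log(0) = -inf

l_star = np.max(l)  # the l*_s shift
stable = l_star + np.log(np.sum(np.exp(l - l_star)))

print(naive, stable)  # -inf  vs.  ≈ -999.6867
```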
## A bunch of wrappers to match the task specifications
def posterior( x, clusters ) :
    pi = np.ones( clusters.shape[ 0 ], dtype = np.float ) / clusters.shape[ 0 ]
    q, ll = __posterior( x, theta = clusters, pi = pi )
    return q

## The likelihood is a byproduct of the E-step's minimization of the Kullback-Leibler divergence
def likelihood( x, clusters ) :
    pi = np.ones( clusters.shape[ 0 ], dtype = np.float ) / clusters.shape[ 0 ]
    q, ll = __posterior( x, theta = clusters, pi = pi )
    return np.sum( ll )
Classify using the maximum a posteriori rule.
## Classifier
def classify( x, theta, pi = None ) :
    pi = pi if pi is not None else np.ones( theta.shape[ 0 ], dtype = np.float ) / theta.shape[ 0 ]
    ## Compute the posterior probabilities of the data
    q_sk, ll_s = __posterior( x, theta = theta, pi = pi )
    ## Classify according to the maximum a posteriori rule
    c_s = np.argmax( q_sk, axis = 1 )
    return c_s, q_sk, ll_s
A procedure to compute the log-likelihood of each observation with respect to each mixture component. Used in the posterior computation.
def __component_likelihood( x, theta ) :
    ## Unfortunately sometimes there can be negative machine zeros, which
    ##  spoil the log-likelihood computation by poisoning it with NaNs.
    ##  That is why the theta array is clipped to [0,1].
    theta_clipped = np.clip( theta, 0.0, 1.0 )
    ## Iterate over classes
    ll_sk = np.zeros( ( x.shape[ 0 ], theta.shape[ 0 ] ), dtype = np.float )
    ## Make a binary mask of the data
    mask = x > 0
    for k in xrange( theta.shape[ 0 ] ) :
        ## Note that the power formulation is just a mathematically convenient way
        ##  of writing \theta if x=1 and (1-\theta) otherwise.
        ll_sk[ :, k ] = np.sum( np.where( mask,
            np.log( theta_clipped[ k ] ), np.log( 1 - theta_clipped[ k ] ) ), axis = ( 1, ) )
    return ll_sk
The actual procedure for computing the E-step: the conditional distribution and the log-likelihood scores.
## The core procedure for computing the conditional density of the classes
def __posterior( x, theta, pi ) :
    ## Get the log-likelihood of each observation in each mixture component.
    ll_sk = __component_likelihood( x, theta )
    ## Find the largest unnormalized log-probability.
    llstar_s = np.reshape( np.max( ll_sk, axis = ( 1, ) ), ( ll_sk.shape[ 0 ], 1 ) )
    ## Subtract the largest exponent
    ll_sk -= llstar_s
    ## In the rare case when the largest exponent is -Inf, force the differences
    ##  to zero. This effectively treats such observations as having uniform
    ##  likelihood across classes. This way the priors don't get masked by really
    ##  small numbers. I could've used ``np.nan_to_num( ll_sk - llstar_s )'' but
    ##  it actually copies the ll_sk array.
    ll_sk[ np.isnan( ll_sk ) ] = 0.0
    ## Don't forget to add the log-prior probability (NumPy broadcasting applies!).
    ##  Adding the priors before dealing with infinities would mask them and yield
    ##  incorrect estimates of the log-likelihoods!
    ll_sk += np.log( np.reshape( pi, ( 1, ll_sk.shape[ 1 ] ) ) )
    ## Compute the log-sum-exp of the individual log-likelihoods. Negative infinities
    ##  resolve to 0.0 while the largest exponent resolves to one. This step cannot
    ##  produce NaNs.
    ll_s = np.reshape( np.log( np.sum( np.exp( ll_sk ), axis = ( 1, ) ) ), ( ll_sk.shape[ 0 ], 1 ) )
    ## At least one element of each row of ll_sk equals llstar_s, whence the
    ##  respective difference is zero and its exponent is 1. Thus even if the
    ##  rest of the sum is close to machine zero, the logarithm is well behaved.
    ## Normalise the likelihoods to get the conditional probability, and return
    ##  the log-denominator, which is the log-likelihood.
    return np.exp( ll_sk - ll_s ), ll_s + llstar_s
Analytic solution: M-step

At the M-step, for fixed $q(Z)$ one solves $\mathbb{E}\log p(X,Z|\Theta)\to \max_\Theta$ subject to $\sum_{k=1}^K \pi_k = 1$, which is a concave maximization problem in $\Theta$, since the expected log-likelihood is a linear combination of concave functions with non-negative weights. The first order condition is $\sum_{s=1}^n \frac{q_{sk}}{\pi_k} - \lambda = 0$ for all $k=1,\ldots,K$, whence $\lambda = \sum_{s=1}^n \sum_{l=1}^K q_{sl} = n$ and finally $$ \hat{\pi}_k = \frac{\sum_{s=1}^n q_{sk}}{n}\,.$$ For $\theta_{kij}$, $i=1,\ldots,N$, $j=1,\ldots,M$ and $k=1,\ldots,K$ the FOC is $$ \sum_{s=1}^n q_{sk} \frac{x_{sij}}{\theta_{kij}} - \sum_{s=1}^n q_{sk} \frac{1-x_{sij}}{1-\theta_{kij}} = 0\,,$$ whence $$ \hat{\theta}_{kij} = \frac{\sum_{s=1}^n q_{sk} x_{sij}}{ \sum_{s=1}^n q_{sk} } = \frac{\sum_{s=1}^n q_{sk} x_{sij}}{ n \hat{\pi}_k }\,.$$ This M-step procedure is implemented below.
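These closed-form updates can be sanity-checked on random data. A self-contained sketch, independent of the assignment's helper functions: the estimated priors must sum to one and the pixel probabilities must lie in $[0,1]$, being weighted averages of 0/1 pixels.

```python
import numpy as np

rng = np.random.RandomState(0)
n, K, D = 100, 3, 784                        # observations, components, pixels

x = (rng.rand(n, D) > 0.5).astype(float)     # binary "images"
q = rng.rand(n, K)
q /= q.sum(axis=1, keepdims=True)            # rows are valid posteriors q_sk

pi_hat = q.sum(axis=0) / n                           # \hat\pi_k = (1/n) sum_s q_sk
theta_hat = q.T.dot(x) / (n * pi_hat[:, np.newaxis]) # \hat\theta_kij

print(np.isclose(pi_hat.sum(), 1.0),
      theta_hat.min() >= 0.0, theta_hat.max() <= 1.0)  # → True True True
```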
## The M-step is simple: just compute the optimal parameters under
##  the current conditional distribution of the latent variables.
def __learn_clusters( x, z ) :
    ## The prior class probabilities
    pi = z.sum( axis = ( 0, ) )
    ## Pixel probabilities conditional on the class
    theta = np.tensordot( z, x, ( 0, 0 ) ) / pi.reshape( ( pi.shape[ 0 ], 1 ) )
    ## Return: regularization should be done at **E**-step!
    # return np.clip( theta, 1.0/784, 1.0 - 1.0/784 ), pi / x.shape[ 0 ]
    return theta, pi / x.shape[ 0 ]
A wrapper to match the assignment specifications.
## A wrapper for the above function
def learn_clusters( x, z ) :
    theta, pi = __learn_clusters( x, z )
    ## Just return theta: in the conditional model the pi are fixed.
    return theta
As has been mentioned earlier, the EM algorithm alternates between the E and M steps until convergence.
## A wrapper for the core EM algorithm below
def em_algorithm( x, K, maxiter, verbose = True, rel_eps = 1e-4, full = False ) :
    ## Initialize the model parameters with uniform [0.25, 0.75] random numbers
    theta_1 = rand.uniform( size = ( K, x.shape[ 1 ] ) ) * 0.5 + 0.25
    pi_1 = None if not full else np.ones( K, dtype = np.float ) / K
    ## Run the EM algorithm
    tick = tm.time( )
    ll, theta, pi, status = __em_algorithm( x, theta_1 = theta_1, pi_1 = pi_1,
        niter = maxiter, rel_eps = rel_eps, verbose = verbose )
    tock = tm.time( )
    print( "total %.3f, %.3f/iter" % ( ( tock - tick ), ( tock - tick ) / len( ll ), ) )
    ## Report the convergence status
    if verbose :
        if status[ 'status' ] != 0 :
            print "Convergence not achieved. %d" % ( status[ 'status' ], )
    ## Return the history of theta and the final log-likelihood
    if full :
        return ( theta, pi ), ll
    return theta, ll
The procedure above actually invokes the true EM core, defined below.
## The core of the EM algorithm
def __em_algorithm( x, theta_1, pi_1 = None, niter = 1000, rel_eps = 1e-4, verbose = True ) :
    ## If we were supplied with an initial estimate of the prior distribution,
    ##  then assume the full model is needed.
    full_model = pi_1 is not None
    ## If the prior cluster probabilities are not supplied, assume a uniform distribution.
    pi_1 = pi_1 if full_model else np.ones( theta_1.shape[ 0 ], dtype = np.float ) / theta_1.shape[ 0 ]
    ## Allocate the necessary space for the history of model estimates
    theta_hist, pi_hist = theta_1[ np.newaxis ].copy( ), pi_1[ np.newaxis ].copy( )
    ll_hist = np.asarray( [ -np.inf ], dtype = np.float )
    ## Set the "old" estimates to zero. At this line the current estimates are in
    ##  fact the initially provided ones.
    theta_0, pi_0 = np.zeros_like( theta_1 ), np.zeros_like( pi_1 )
    ## Initialize the loop
    status, kiter, rel_theta, rel_pi, ll = -1, 0, np.nan, np.nan, -np.inf
    while kiter < niter :
        ## Dump the current estimates and other information.
        if verbose :
            print( "Iteration %d: avg. log-lik: %.3f, $\\Theta$ div. %.3f, $\\Pi$ div. %.3f" % (
                kiter, ll / x.shape[ 0 ], rel_theta, rel_pi ) )
            show_data( theta_1 - theta_0 if True else theta_1, n = theta_0.shape[ 0 ],
                n_col = min( 10, theta_0.shape[ 0 ] ), cmap = plt.cm.hot, interpolation = 'nearest' )
        ## The convergence criterion is the L^∞ norm of relative L^1 errors
        if max( rel_pi, rel_theta ) < rel_eps :
            status = 0
            break
        ## Overwrite the previous estimates
        theta_0, pi_0 = theta_1, pi_1
        ## E-step: call the core posterior function to get both the log-likelihood
        ##  and the estimate of the conditional distribution.
        z_1, ll_s = __posterior( x, theta_0, pi_0 )
        ## Sum the individual log-likelihoods of the observations
        ll = ll_s.sum( )
        ## M-step: compute the optimal parameters under the current estimate of the posterior
        theta_1, pi_1 = __learn_clusters( x, z_1 )
        ## Discard the computed estimate of pi if the model is discriminative (conditional likelihood).
        if not full_model :
            pi_1 = pi_0
        ## Record the current estimates in the history
        theta_hist = np.vstack( ( theta_hist, theta_1[ np.newaxis ] ) )
        pi_hist = np.vstack( ( pi_hist, pi_1[ np.newaxis ] ) )
        ll_hist = np.append( ll_hist, ll )
        ## Check for bad float numbers
        if not ( np.all( np.isfinite( theta_1 ) ) and np.all( np.isfinite( pi_1 ) ) ) :
            status = -2
            break
        ## Check convergence: relative L^1 error. If the relative margin is exactly
        ##  zero, then return NaNs. This makes the loop exhaust all iterations, since
        ##  any comparison against a NaN returns False.
        rel_theta = np.sum( np.abs( theta_1 - theta_0 ) / ( np.abs( theta_0 ) + rel_eps ) ) if rel_eps > 0 else np.nan
        rel_pi = np.sum( np.abs( pi_1 - pi_0 ) / ( np.abs( pi_0 ) + rel_eps ) ) if rel_eps > 0 else np.nan
        ## Next iteration
        kiter += 1
    return ll_hist, theta_hist, pi_hist, { 'status': status, 'iter': kiter }
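The `if rel_eps > 0 else np.nan` branch works because any comparison against NaN evaluates to `False`, so with a zero tolerance the `max( rel_pi, rel_theta ) < rel_eps` test never fires and the loop exhausts all iterations. A minimal demonstration:

```python
import numpy as np

# what the convergence criterion returns when rel_eps == 0
rel_theta, rel_pi = np.nan, np.nan
print(max(rel_pi, rel_theta) < 1e-4)  # False: the early-exit branch is never taken
```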
Define a convenient procedure for running experiments. Setting the relative error tolerance to zero forces the algorithm to exhaust all the allocated iterations.
def experiment( data, K, maxiter, verbose = True, until_convergence = False, full = False ) : ## Run the EM return em_algorithm( data, K, maxiter, rel_eps = 1.0e-4 if until_convergence else 0.0, verbose = verbose, full = full )
Miscellanea: visualization In order to be able to plot more flexibly, define another arranger.
## A more flexible image arrangement def arrange_flex( images, n_row = 10, n_col = 10, N = 28, M = 28, fill_value = 0 ) : ## Create the final grid of images row-by-row im_grid = np.full( ( n_row * N, n_col * M ), fill_value, dtype = images.dtype ) for k in range( min( images.shape[ 0 ], n_col * n_row ) ) : ## Get the grid cell at which to place the image i, j = ( k // n_col ) * N, ( k % n_col ) * M ## Just put the image in the cell im_grid[ i:i+N, j:j+M ] = np.reshape( images[ k ], ( N, M, ) ) return im_grid
The following pair of procedures is used to plot the digits clearly. The first one just creates a canvas for the image: it sets up both axes properly and adds labels to them.
def setup_canvas( axis, n_row, n_col, N = 28, M = 28 ) : ## Setup major tick marks to the seam between images and disable their labels axis.set_yticks( np.arange( 1, n_row + 1 ) * N, minor = False ) axis.set_xticks( np.arange( 1, n_col + 1 ) * M, minor = False ) axis.set_yticklabels( [ ], minor = False ) ; axis.set_xticklabels( [ ], minor = False ) ## Set minor ticks so that they are exactly between the major ones axis.set_yticks( ( np.arange( n_row + 1 ) + 0.5 ) * N, minor = True ) axis.set_xticks( ( np.arange( n_col + 1 ) + 0.5 ) * M, minor = True ) ## Make their labels into cell x-y coordinates axis.set_yticklabels( [ "%d" % (i,) for i in 1+np.arange( n_row + 1 ) ], minor = True ) axis.set_xticklabels( [ "%d" % (i,) for i in 1+np.arange( n_col + 1 ) ], minor = True ) ## Tick marks should be oriented outward axis.tick_params( axis = 'both', which = 'both', direction = 'out' ) ## Return nothing! axis.grid( color = 'white', linestyle = '--' )
This procedure displays the images on a nice plot. Used for one-line visualization.
def show_data( data, n, n_col = 10, transpose = False, **kwargs ) : ## Get the number of rows necessary to plot the needed number of images n_row = ( n + n_col - 1 ) // n_col ## Transpose if necessary if transpose : n_col, n_row = n_row, n_col ## Set the dimensions of the figure fig = plt.figure( figsize = ( n_col, n_row ) ) axis = fig.add_subplot( 111 ) ## Plot! setup_canvas( axis, n_row, n_col ) axis.imshow( arrange_flex( data[:n], n_col = n_col, n_row = n_row ), **kwargs ) ## Plot plt.show( ) def visualize( data, clusters, ll, n_col = 2, plot_ll = True ) : ## Display the result print "Final conditional log-likelihood value per observation achieved %f in %d iteration(s)" % ( ll[-1] / data.shape[ 0 ], len( ll ) ) ## Plot the first difference of average log-likelihood if plot_ll : plt.figure( figsize = ( 12, 7 ) ) ax = plt.subplot(111) ax.set_title( r"avg. log-likelihood change between successive iterations (log scale)" ) ax.plot( np.diff( ll / data.shape[ 0 ] ) ) # ax.set_ylabel( r"$\Delta_i \frac{1}{n} \sum_{s=1}^n \mathbb{E}_{z_s\sim q_i} \log p(x_s,z_s|\Theta_i)$" ) ax.set_ylabel( r"$\Delta_i \frac{1}{n} \sum_{s=1}^n \log p(x_s|\Theta_i)$" ) ax.set_yscale( 'log' ) ## Plot the final estimates if n_col > 0 : show_data( clusters[-1], n = clusters.shape[1], n_col = n_col, cmap = plt.cm.spectral, interpolation = 'nearest' )
Miscellanea: animating the EM This function creates an animation of the successive iterations of a run of the EM.
def animate( theta, ll, pi = None, n_col = 10, n_row = 10, interval = 1, **kwargs ) :
    ## Create a background
    bg = arrange_flex( np.zeros_like( theta[ 0 ] ), n_col = n_col, n_row = n_row )
    ## Compute the log-likelihood differences and sanitize them.
    ll_diff = np.maximum( np.diff( ll ), np.finfo( np.float ).eps )
    ll_diff[ ~np.isfinite( ll_diff ) ] = np.nan
    ## Set up the figure, the axes, and the plot elements we want to animate
    fig = plt.figure( figsize = ( 12, 12 ) )
    ## Create the subplots and position them specifically
    if pi is None :
        ax1, ax3, ax2 = fig.add_subplot( 311 ), fig.add_subplot( 312 ), fig.add_subplot( 313 )
    else :
        ax1, ax4 = fig.add_subplot( 411 ), fig.add_subplot( 412 )
        ax3, ax2 = fig.add_subplot( 413 ), fig.add_subplot( 414 )
    ## Initialize different ranges for the image artists
    setup_canvas( ax1, n_row = n_row, n_col = n_col )
    ax1.set_title( r"Current estimate of the mixture components" )
    setup_canvas( ax2, n_row = n_row, n_col = n_col )
    ax2.set_title( r"Change between successive iterations" )
    ## Initialize the geometry of the delta log-likelihood plot.
    ax3.set_xlim( -0.1, ll.shape[ 0 ] + 0.1 )
    ax3.set_yscale( 'log' ) #; ax3.set_yticklabels( [ ] )
    ax3.set_title( r"Change between successive iterations of EM (log scale)" )
    ax3.set_ylabel( r"$\Delta_i \sum_{s=1}^n \log p(x_s|\Theta_i)$" )
    ax3.set_ylim( np.nanmin( ll_diff ) * 0.9, np.nanmax( ll_diff ) * 1.1 )
    ax3.grid( )
    ## Set up a plot for the prior probabilities
    if pi is not None :
        classes = 1 + np.arange( len( pi[ 0 ] ) )
        ax4.set_xticks( classes )
        ax4.set_ylim( 0.0, 1.0 )
        ax4.set_title( r"Current estimate of the mixture weights" )
        ba1 = ax4.bar( classes, pi[ 0 ], align = "center" )
    ## Set up the artists
    im1 = ax1.imshow( bg, vmin = +0.0, vmax = +1.0, **kwargs )
    im2 = ax2.imshow( bg, vmin = -1.0, vmax = +1.0, **kwargs )
    line1, = ax3.plot( [ ], linestyle = "-", color = 'blue' )
    ## Animation function. This is called sequentially.
    def update( i ) :
        ## Compute the frame
        frame = theta[ i ] - theta[ i - 1 ] if i > 0 else theta[ 0 ]
        frame /= np.max( np.abs( frame ) )
        ## Draw the frames on the image artists
        im1.set_data( arrange_flex( theta[ i ], n_col = n_col, n_row = n_row ) )
        im2.set_data( arrange_flex( frame, n_col = n_col, n_row = n_row ) )
        if i > 0 :
            ## Show the history on the line artist
            line1.set_data( np.arange( i ), ll_diff[ :i ] )
        if pi is not None :
            [ b.set_height( h ) for b, h in zip( ba1, pi[ i ] ) ]
            if i > 0 :
                [ b.set_color( 'green' if h > p else 'red' )
                    for b, h, p in zip( ba1, pi[ i ], pi[ i - 1 ] ) ]
            ## Return an iterator over the artists in this frame
            return ( im1, im2, line1, ) + ba1
        return im1, im2, line1,
    ## Call the animator.
    return animation.FuncAnimation( fig, update, frames = theta.shape[ 0 ],
        interval = interval, blit = True )
Define a function that renders an animation to video (using ffmpeg) and embeds it as HTML in IPython.
## Make simple animations of the EM estimator
## http://jakevdp.github.io/blog/2013/05/12/embedding-matplotlib-animations/
## http://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/
from matplotlib import animation
from IPython.display import HTML

def embed_video( anim ) :
    VIDEO_TAG = """<video controls autoplay muted loop><source src="data:video/x-m4v;base64,{0}" type="video/mp4">Your browser does not support the video tag.</video>"""
    plt.close( anim._fig )
    if not hasattr( anim, '_encoded_video' ) :
        ## Render the animation to an mp4 file and read it back
        anim.save( 'myanim.mp4', fps = 12, extra_args = [ '-vcodec', 'libx264' ] )
        video = open( 'myanim.mp4', "rb" ).read( )
        anim._encoded_video = video.encode( "base64" )
    return HTML( VIDEO_TAG.format( anim._encoded_video ) )
Miscellanea: obtaining the data Try to download the MNIST data from the SciKit repository.
if False : ## Fetch MNIST dataset from SciKit and create a local copy. from sklearn.datasets import fetch_mldata mnist = fetch_mldata( "MNIST original", data_home = './data/' ) np.savez_compressed('./data/mnist/mnist_scikit.npz', data = mnist.data, labels = mnist.target )
Or obtain the data from the provided CSV files.
if False :
    ## The procedure below loads the MNIST data from a comma-separated text file.
    def load_mnist_from_csv( filename ) :
        ## Read the CSV file
        data = np.loadtxt( open( filename, "rb" ), dtype = np.short, delimiter = ",", skiprows = 0 )
        ## Peel off the labels
        return data[ :, 1: ], data[ :, 0 ]
    ## Fetch the data from the provided CSV (!) files and save as compressed data blobs
    data, labels = load_mnist_from_csv( "./data/mnist/mnist_train.csv" )
    np.savez_compressed( './data/mnist/mnist_train.npz', labels = labels, data = data )
    data, labels = load_mnist_from_csv( "./data/mnist/mnist_test.csv" )
    np.savez_compressed( './data/mnist/mnist_test.npz', labels = labels, data = data )
Study First of all load and binarize the training data using the value 127 as the threshold.
assert( os.path.exists( './data/mnist/mnist_train.npz' ) ) with np.load( './data/mnist/mnist_train.npz', 'r' ) as npz : mnist_labels, mnist_data = npz[ 'labels' ], np.array( npz[ 'data' ] > 127, np.int ) assert( os.path.exists( './data/mnist/mnist_test.npz' ) ) with np.load( './data/mnist/mnist_test.npz', 'r' ) as npz : test_labels, test_data = npz[ 'labels' ], np.array( npz[ 'data' ] > 127, np.int )
Case : $K=2$ Let's have a look at some 6s and 9s.
## Mask inx_sixes, inx_nines = np.where( mnist_labels == 6 )[ 0 ], np.where( mnist_labels == 9 )[ 0 ] ## Extract sixes = mnist_data[ rand.choice( inx_sixes, 90, replace = False ) ] nines = mnist_data[ rand.choice( inx_nines, 90, replace = False ) ] ## Show show_data( sixes, n = 45, n_col = 15, cmap = plt.cm.gray, interpolation = 'nearest' ) show_data( nines, n = 45, n_col = 15, cmap = plt.cm.gray, interpolation = 'nearest' )
They do indeed look quite distinct. Now collect them into a single dataset and estimate the model.
data = mnist_data[ np.append( inx_sixes, inx_nines ) ] clusters, ll = experiment( data, 2, 30 )
The estimate deltas show that at the E-step the EM algorithm actually transfers the unlikely observations between classes, as is expected by construction of the algorithm. Judging by the plot below, it turns out that 30 iterations are more than enough for the EM to get meaningful estimates of the class "ideals", represented by a probability product measure on $\Omega^{28\times 28}$.
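This reassignment can be seen on a toy posterior computation, using the same subtract-the-largest-exponent scheme as `__posterior` with made-up numbers:

```python
import numpy as np

ll = np.array([[-5.0, -1.0],     # per-class log-likelihoods of two observations
               [-1.0, -5.0]])
pi = np.array([0.5, 0.5])
log_post = ll + np.log(pi)
log_post -= log_post.max(axis=1, keepdims=True)   # subtract the largest exponent
post = np.exp(log_post)
post /= post.sum(axis=1, keepdims=True)
print(post.round(3))   # each observation goes almost entirely to its likelier class
```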
visualize( data, clusters, ll )
Now let's see how well the EM algorithm performs on a model with more classes. But before that let's have a look at a random sample of the handwritten digits.
indices = np.arange( mnist_data.shape[ 0 ] ) rand.shuffle( indices ) show_data( mnist_data[ indices[:100] ] , n = 100, n_col = 10, cmap = plt.cm.gray, interpolation = 'nearest' )
Case : $K=10, 15, 20, 30, 60$ and $90$ The original training sample is too large to fit in these RAM banks :) That is why I had to limit the sample to a random subset of 2000 observations.
sub_sample = np.concatenate( tuple( [ rand.choice( np.where( mnist_labels == i )[ 0 ], size = 200 ) for i in range( 10 ) ] ) ) train_data, train_labels = mnist_data[ sub_sample ], mnist_labels[ sub_sample ] # train_data, train_labels = mnist_data, mnist_labels
Run the procedure that performs the EM algorithm and returns the history of the parameter estimates as well as the dynamics of the log-likelihood lower bound.
clusters_10, ll_10 = experiment( train_data, 10, 50 )
One can clearly see that $50$ iterations were not enough for the algorithm to converge: though the changes are tiny, even on the log scale, they are still unstable.
visualize( train_data, clusters_10, ll_10, n_col = 10, plot_ll = True )
Let's see if changing $K$ does the trick.
clusters_15, ll_15 = experiment( train_data, 15, 50, verbose = False, until_convergence = False ) clusters_20, ll_20 = experiment( train_data, 20, 50, verbose = False, until_convergence = False ) clusters_30, ll_30 = experiment( train_data, 30, 50, verbose = False, until_convergence = False )
For what values of $K$ was it possible to infer the templates of all digits?
visualize( train_data, clusters_15, ll_15, n_col = 10, plot_ll = False ) visualize( train_data, clusters_20, ll_20, n_col = 10, plot_ll = False ) visualize( train_data, clusters_30, ll_30, n_col = 10, plot_ll = False )
Obviously, a model with more mixture components is more likely to produce "templates" for all digits, and for larger $K$ this is indeed the case. Having run this algorithm many times, we can say that the digits $3$ and $8$, $4$ and $9$, and sometimes $5$ tend to be poorly separated. Furthermore, since there are many different handwritten variations of the same digit, one should estimate a model with more classes. The returned templates of the mixture components are clearly suboptimal: the procedure seems to get stuck at individual examples. This may happen for any $K$, and allowing for more iterations does not remedy it. Some possibilities do exist: add regularizers to the E-step that tie neighbouring pixel distributions together.
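One cheap regularizer, already hinted at by the commented-out line in `__learn_clusters`, is to clip the Bernoulli parameters away from $\{0, 1\}$ so that no single pixel can drive a log-likelihood to $-\infty$. A sketch (the pixel-tying regularizer mentioned above would be a separate, more involved change):

```python
import numpy as np

def clip_theta(theta, n_pixels=784):
    # keep every Bernoulli parameter inside [1/n_pixels, 1 - 1/n_pixels]
    return np.clip(theta, 1.0 / n_pixels, 1.0 - 1.0 / n_pixels)

theta = np.array([0.0, 0.5, 1.0])
print(clip_theta(theta))   # the degenerate 0.0 and 1.0 are pulled off the boundary
```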
clusters_60, ll_60 = experiment( train_data, 60, 500, verbose = False, until_convergence = True )
As one can see, increasing the number of iterations does not necessarily improve the results.
visualize( train_data, clusters_60, ll_60, n_col = 15 )
Judging by the plot of the log-likelihood, and the fact that the EM is guaranteed to converge to a local maximum and does so extremely fast, there was no need for more than 120-130 iterations. The changes in the log-likelihood around that number of iterations are of the order $10^{-4}$, while we are working in finite-precision (double) arithmetic, whose machine epsilon is $\approx 2.2\cdot 10^{-16}$. Let's see the dynamics of the estimates over the EM iterations. You will have to ensure that FFmpeg is installed (Windows: and is on the PATH environment variable).
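For reference, the machine precision of IEEE-754 double floats can be queried directly:

```python
import numpy as np
print(np.finfo(np.float64).eps)   # machine epsilon of a double, about 2.22e-16
```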
## To render this animation, ensure that FFmpeg is installed.
anim_60 = animate( clusters_60, ll_60, n_col = 15, n_row = 4, interval = 1,
    cmap = plt.cm.hot, interpolation = 'nearest' )
embed_video( anim_60 )
The parameter estimates of the EM stabilize pretty quickly: in fact, most templates stabilize by iterations 100-120. Choosing $K$ Among the many methods of model selection, let's use the simple training-sample fitness score given by the value of the log-likelihood. Because the models are nested with respect to the number of mixture components, one should expect the likelihood to be a non-decreasing function of $K$ (on average, due to the randomization of the initial parameter values). For large enough $K$ this method may lead to overfitting.
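A standard way to penalize this kind of overfitting (not used in this notebook) is an information criterion such as BIC. For the Bernoulli mixture, the free parameters are the $KD$ pixel probabilities plus the $K-1$ mixture weights; a minimal sketch:

```python
import numpy as np

def bic(loglik, n_obs, K, D):
    # BIC = k * ln(n) - 2 * ln(L); lower is better
    n_params = K * D + (K - 1)
    return n_params * np.log(n_obs) - 2.0 * loglik

# with equal log-likelihoods, the model with more components is penalized harder
print(bic(-1000.0, 2000, 10, 784) < bic(-1000.0, 2000, 20, 784))  # True
```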
## Test models with K from 12 up to 42 with a step of 3
classes = 12 + np.arange( 11, dtype = np.int ) * 3
ll_hist = np.full( len( classes ), -np.inf, dtype = np.float )
## Store parameters
parameter_hist = list( )
for i, K in enumerate( classes ) :
    ## Run the experiment
    c, l = experiment( train_data, K, 50, verbose = False, until_convergence = False )
    ll_hist[ i ] = l[ -1 ]
    parameter_hist.append( c[ -1 ] )
    ## Visualize the final parameters
    show_data( c[ -1 ], n = K, n_col = 13, cmap = plt.cm.hot, interpolation = 'nearest' )
Indeed, the log-likelihood does not decrease with $K$ on average. Nevertheless, the model with the highest likelihood turns out to have this many mixture components:
print classes[ np.argmax( ll_hist ) ]
A nice, yet expected coincidence :) Classification: label assignment Select a model ...
# clusters = parameter_hist[ np.argmax( ll_hist ) ] * 0.999 + 0.0005 clusters = clusters_60[-1]
... and get the posterior mixture component probabilities.
## Compute the posterior component probabilities, and use max-aposteriori ## for the best class selection. c_s, q_sk, ll_s = classify( train_data, clusters )
Use a simple majority rule to automatically assign labels to templates.
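The majority rule used below boils down to picking the most frequent training label among the observations mapped to a template. A minimal standalone sketch:

```python
import numpy as np

def majority_label(labels):
    # most frequent value among the labels assigned to one template
    vals, counts = np.unique(labels, return_counts=True)
    return vals[np.argmax(counts)]

print(majority_label(np.array([3, 3, 8, 3, 8])))  # 3
```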
template_x_label_maj_60 = np.full( clusters.shape[ 0 ], -1, np.int ) for t in range( clusters.shape[ 0 ] ) : l, f = np.unique( train_labels[ c_s == t ], return_counts = True) if len( l ) > 0 : ## This is too blunt an approach: it does not guarantee surjectivity of the mapping. template_x_label_maj_60[ t ] = l[ np.argmax( f ) ]
Assign a label $l$ to each template $t$ according to its score, based on the average of the top-$5$ log-likelihoods of the observations with label $l$ that were classified with template $t$.
## There are 10 labels and K templates
label_cluster_score = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float )
## Loop over each template
for t in range( clusters.shape[ 0 ] ) :
    ## The templates are chosen according to the maximum a posteriori rule.
    inx = np.where( c_s == t )[ 0 ]
    ## Get the assigned labels and their frequencies
    actual_labels = train_labels[ inx ]
    l, f = np.unique( actual_labels, return_counts = True )
    ## For each template and each associated label in the training set the
    ##  score is the average of the top-5 highest log-likelihoods.
    label_cluster_score[ t, l ] = [ np.average( sorted(
        ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ]
## For each template choose the label with the highest likelihood.
template_x_label_lik_60 = np.argmax( label_cluster_score, axis = 1 )
Compare the label assignments. Here are the templates.
show_data( clusters, clusters.shape[ 0 ], 10, cmap = plt.cm.spectral, interpolation = 'nearest' )
These are the templates which were assigned different labels by the majority- and likelihood-based methods.
mask = np.asarray( template_x_label_maj_60 != template_x_label_lik_60, dtype = np.float ).reshape( ( -1, 1 ) )
show_data( clusters * mask, clusters.shape[ 0 ], 10, cmap = plt.cm.spectral, interpolation = 'nearest' )
print "\nLikelihood based: ", template_x_label_lik_60[ mask[ :, 0 ] > 0 ]
print "Majority based: ", template_x_label_maj_60[ mask[ :, 0 ] > 0 ]
Here are the pictures of templates ordered according to their label.
show_data( clusters[ np.argsort( template_x_label_lik_60 ) ], clusters.shape[ 0 ], 10, cmap = plt.cm.spectral, interpolation = 'nearest' ) show_data( clusters[ np.argsort( template_x_label_maj_60 ) ], clusters.shape[ 0 ], 10, cmap = plt.cm.spectral, interpolation = 'nearest' )
Classification: test sample Shall we try running the classifier on the test data?
## Run the classifier on the test data c_s_60, q_sk, ll_s = classify( test_data, clusters )
Let's see the best template for each test observation in some sub-sample.
## Show a sample of images and their templates
sample = np.random.permutation( test_data.shape[ 0 ] )[ :64 ]
## Stack each image and its best template atop one another
display_stack = np.empty( ( 2 * len( sample ), test_data.shape[ 1 ] ), dtype = np.float )
display_stack[ 0::2 ] = test_data[ sample ] * q_sk[ sample, c_s_60[ sample ], np.newaxis ]
display_stack[ 1::2 ] = clusters[ c_s_60[ sample ] ]
## Display
show_data( display_stack, n = display_stack.shape[ 0 ], n_col = 16, transpose = False,
    cmap = plt.cm.spectral, interpolation = 'nearest' )
The digits are shown in pairs: each first digit is the test observation (the colour is determined by the confidence of the classifier -- the whiter, the higher), and each second is the best template. Let's see how accurate the classification was. Recall that the component assignment was done using $$ \hat{t}_s = \mathop{\text{argmax}}_k p\bigl( C_s = k \, \big\vert\, X = x_s\bigr) \,, $$ i.e. the maximum a posteriori rule. Then, with the classes assigned, labels were deduced based on either: * a simple majority; * the observations with the largest likelihoods in each class. By accuracy I understand the following score: $$ \alpha = 1 - \frac{1}{|\text{TEST}|} \sum_{s\,\in\,\text{TEST}} \mathbf{1}\bigl\{ l_s \neq L(\hat{t}_s)\bigr\}\,, $$ where $l_s$ is the actual label of an observation $s$, $\hat{t}_s$ is the inferred mixture component (class) of that observation, and $k\mapsto L(k)$ is the component-to-label mapping.
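The score $\alpha$ is just one minus the misclassification rate, which in NumPy is a one-liner:

```python
import numpy as np

def accuracy(pred_labels, true_labels):
    # 1 minus the average of the 0/1 mismatch indicator
    return 1.0 - np.mean(pred_labels != true_labels)

print(accuracy(np.array([1, 2, 3, 4]), np.array([1, 2, 3, 9])))  # 0.75
```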
print "Accuracy of likelihood based labelling: %.2f" % ( 100 * np.average( template_x_label_lik_60[ c_s_60 ] == test_labels ), ) print "Accuracy of simple majority labelling: %.2f" % ( 100 * np.average( template_x_label_maj_60[ c_s_60 ] == test_labels ), )
Not surprisingly, the majority- and likelihood-based classification accuracies are close. Let's see which test observations the model considers artefacts, i.e. for which it cannot reliably assign a template: the posterior class probability for these cases coincides with the prior. This happens when the likelihood of an observation is identical within each class.
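This fallback behaviour comes from the NaN handling in `__posterior`; a toy reproduction with an observation that is impossible under every class:

```python
import numpy as np

ll = np.full((1, 3), -np.inf)            # impossible under every class
ll -= np.max(ll, axis=1, keepdims=True)  # -inf - (-inf) gives NaN
ll[np.isnan(ll)] = 0.0                   # force a uniform likelihood, as in __posterior
pi = np.array([0.2, 0.3, 0.5])
post = np.exp(ll + np.log(pi))
post /= post.sum(axis=1, keepdims=True)
print(post)   # the posterior coincides with the prior
```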
## Now display the test observations which the model could not classify at all.
bad_tests = np.where( np.isinf( ll_s ) )[ 0 ]
show_data( test_data[ bad_tests ], n = max( len( bad_tests ), 10 ), n_col = 15,
    cmap = plt.cm.gray, interpolation = 'nearest' )
# print q_sk[ bad_tests ]
<hr/> Let's see how more parsimonious models fare with respect to accuracy on the test sample. Accuracy of $K=30$
clusters = clusters_30[-1] c_s, q_sk, ll_s = classify( train_data, clusters ) template_x_label_maj_30 = np.full( clusters.shape[ 0 ], -1, np.int ) for t in range( clusters.shape[ 0 ] ) : l, f = np.unique( train_labels[ c_s == t ], return_counts = True) if len( l ) > 0 : template_x_label_maj_30[ t ] = l[ np.argmax( f ) ] label_cluster_score_30 = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float ) for t in range( clusters.shape[ 0 ] ) : inx = np.where( c_s == t )[ 0 ] actual_labels = train_labels[ inx ] l, f = np.unique( actual_labels, return_counts = True ) label_cluster_score_30[ t, l ] = [ np.average( sorted( ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ] template_x_label_lik_30 = np.argmax( label_cluster_score_30, axis = 1 ) show_data( clusters[ np.argsort( template_x_label_lik_30 ) ], clusters.shape[ 0 ], 10, cmap = plt.cm.spectral, interpolation = 'nearest' ) print template_x_label_lik_30[ np.argsort( template_x_label_lik_30 ) ].reshape((3,-1)) c_s_30, q_sk, ll_s = classify( test_data, clusters ) print "Accuracy of likelihood based labelling: %.2f" % ( 100 * np.average( template_x_label_lik_30[ c_s_30 ] == test_labels ), ) print "Accuracy of simple majority labelling: %.2f" % ( 100 * np.average( template_x_label_maj_30[ c_s_30 ] == test_labels ), )
Accuracy of $K=20$
clusters = clusters_20[-1] c_s, q_sk, ll_s = classify( train_data, clusters ) template_x_label_maj_20 = np.full( clusters.shape[ 0 ], -1, np.int ) for t in range( clusters.shape[ 0 ] ) : l, f = np.unique( train_labels[ c_s == t ], return_counts = True) if len( l ) > 0 : template_x_label_maj_20[ t ] = l[ np.argmax( f ) ] label_cluster_score_20 = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float ) for t in range( clusters.shape[ 0 ] ) : inx = np.where( c_s == t )[ 0 ] actual_labels = train_labels[ inx ] l, f = np.unique( actual_labels, return_counts = True ) label_cluster_score_20[ t, l ] = [ np.average( sorted( ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ] template_x_label_lik_20 = np.argmax( label_cluster_score_20, axis = 1 ) show_data( clusters[ np.argsort( template_x_label_lik_20 ) ], clusters.shape[ 0 ], 10, cmap = plt.cm.spectral, interpolation = 'nearest' ) print template_x_label_lik_20[ np.argsort( template_x_label_lik_20 ) ].reshape((2,-1)) c_s_20, q_sk, ll_s = classify( test_data, clusters ) print "Accuracy of likelihood based labelling: %.2f" % ( 100 * np.average( template_x_label_lik_20[ c_s_20 ] == test_labels ), ) print "Accuracy of simple majority labelling: %.2f" % ( 100 * np.average( template_x_label_maj_20[ c_s_20 ] == test_labels ), )
Accuracy of $K=15$
clusters = clusters_15[-1] c_s, q_sk, ll_s = classify( train_data, clusters ) template_x_label_maj_15 = np.full( clusters.shape[ 0 ], -1, np.int ) for t in range( clusters.shape[ 0 ] ) : l, f = np.unique( train_labels[ c_s == t ], return_counts = True) if len( l ) > 0 : template_x_label_maj_15[ t ] = l[ np.argmax( f ) ] label_cluster_score_15 = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float ) for t in range( clusters.shape[ 0 ] ) : inx = np.where( c_s == t )[ 0 ] actual_labels = train_labels[ inx ] l, f = np.unique( actual_labels, return_counts = True ) label_cluster_score_15[ t, l ] = [ np.average( sorted( ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ] template_x_label_lik_15 = np.argmax( label_cluster_score_15, axis = 1 ) show_data( clusters[ np.argsort( template_x_label_lik_15 ) ], clusters.shape[ 0 ], 15, cmap = plt.cm.spectral, interpolation = 'nearest' ) print template_x_label_lik_15[ np.argsort( template_x_label_lik_15 ) ].reshape((1,-1)) c_s_15, q_sk, ll_s = classify( test_data, clusters ) print "Accuracy of likelihood based labelling: %.2f" % ( 100 * np.average( template_x_label_lik_15[ c_s_15 ] == test_labels ), ) print "Accuracy of simple majority labelling: %.2f" % ( 100 * np.average( template_x_label_maj_15[ c_s_15 ] == test_labels ), )
year_14_15/spring_2015/machine_learning/MNIST-assignment.ipynb
ivannz/study_notes
mit
Accuracy of $K=10$
clusters = clusters_10[-1] c_s, q_sk, ll_s = classify( train_data, clusters ) template_x_label_maj_10 = np.full( clusters.shape[ 0 ], -1, np.int ) for t in range( clusters.shape[ 0 ] ) : l, f = np.unique( train_labels[ c_s == t ], return_counts = True) if len( l ) > 0 : template_x_label_maj_10[ t ] = l[ np.argmax( f ) ] label_cluster_score_10 = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float ) for t in range( clusters.shape[ 0 ] ) : inx = np.where( c_s == t )[ 0 ] actual_labels = train_labels[ inx ] l, f = np.unique( actual_labels, return_counts = True ) label_cluster_score_10[ t, l ] = [ np.average( sorted( ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ] template_x_label_lik_10 = np.argmax( label_cluster_score_10, axis = 1 ) show_data( clusters[ np.argsort( template_x_label_lik_10 ) ], clusters.shape[ 0 ], 10, cmap = plt.cm.spectral, interpolation = 'nearest' ) print template_x_label_lik_10[ np.argsort( template_x_label_lik_10 ) ].reshape((1,-1)) c_s_10, q_sk, ll_s = classify( test_data, clusters ) print "Accuracy of likelihood based labelling: %.2f" % ( 100 * np.average( template_x_label_lik_10[ c_s_10 ] == test_labels ), ) print "Accuracy of simple majority labelling: %.2f" % ( 100 * np.average( template_x_label_maj_10[ c_s_10 ] == test_labels ), )
year_14_15/spring_2015/machine_learning/MNIST-assignment.ipynb
ivannz/study_notes
mit
As one can see, the test-sample accuracy of the model falls dramatically as the number of mixture components decreases. This was expected: for various reasons, one being that the data is handwritten, it is highly unlikely that a single digit would be captured by only one template.
print "Model with K = 10: %.2f" % ( 100 * np.average( template_x_label_lik_10[ c_s_10 ] == test_labels ), ) print "Model with K = 15: %.2f" % ( 100 * np.average( template_x_label_lik_15[ c_s_15 ] == test_labels ), ) print "Model with K = 20: %.2f" % ( 100 * np.average( template_x_label_lik_20[ c_s_20 ] == test_labels ), ) print "Model with K = 30: %.2f" % ( 100 * np.average( template_x_label_lik_30[ c_s_30 ] == test_labels ), ) print "Model with K = 60: %.2f" % ( 100 * np.average( template_x_label_lik_60[ c_s_60 ] == test_labels ), )
year_14_15/spring_2015/machine_learning/MNIST-assignment.ipynb
ivannz/study_notes
mit
<br/><p style="font-size: 20pt;font-weight: bold; text-align: center;font-family: Courier New"> Ignore everything below </p><br/> Let us digress for a moment, consider the full model, and create a video to see how the estimates are refined.
( clusters_full, pi_full ), ll_full = experiment( data, 30, 1000, False, True, True ) anim_full = animate( clusters_full, ll_full, pi = pi_full, n_col = 15, n_row = 2, interval = 1, cmap = plt.cm.hot, interpolation = 'nearest' ) embed_video( anim_full )
year_14_15/spring_2015/machine_learning/MNIST-assignment.ipynb
ivannz/study_notes
mit
<hr/> A random variable $X\sim \text{Beta}(\alpha,\beta)$ if the law of $X$ has density $$p(u) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} u^{\alpha-1}(1-u)^{\beta-1} $$ $$ \log p(X,Z|\Theta) = \sum_{s=1}^n \log \prod_{k=1}^K \Bigl[ \pi_k \prod_{i=1}^N \prod_{j=1}^M \frac{\Gamma(\alpha_{kij}+\beta_{kij})}{\Gamma(\alpha_{kij})\Gamma(\beta_{kij})} x_{sij}^{\alpha_{kij}-1}(1-x_{sij})^{\beta_{kij}-1} \Bigr]^{1_{z_s = k}}$$ \begin{align} \mathbb{E}_q \log p(X,Z|\Theta) &= \sum_{k=1}^K \sum_{s=1}^n q_{sk} \log \pi_k \\ &+ \sum_{k=1}^K \sum_{i=1}^N \sum_{j=1}^M \sum_{s=1}^n q_{sk} \bigl( \log \Gamma(\alpha_{kij}+\beta_{kij}) - \log \Gamma(\alpha_{kij}) - \log \Gamma(\beta_{kij}) \bigr) \\ &+ \sum_{k=1}^K \sum_{i=1}^N \sum_{j=1}^M \sum_{s=1}^n q_{sk} \bigl( (\alpha_{kij}-1) \log x_{sij} + (\beta_{kij}-1) \log(1-x_{sij}) \bigr) \end{align} The derivative of the Gamma function does not seem to yield analytically tractable solutions. <hr/> <p style="font-size: 20pt; text-align: center;font-family: Courier New">Non-parametric approach</p>
from sklearn.neighbors import KernelDensity from sklearn.decomposition import PCA from sklearn.grid_search import GridSearchCV pca = PCA( n_components = 50 ) X_train_pca = pca.fit_transform( X_train ) params = { 'bandwidth' : np.logspace( -1, 1, 20 ) } grid = GridSearchCV( KernelDensity( ), params ) grid.fit( X_train_pca ) print("best bandwidth: {0}".format( grid.best_estimator_.bandwidth ) ) params kde = grid.best_estimator_ new_data = kde.sample( 100 ) new_data = pca.inverse_transform( new_data ) print new_data.shape plt.figure( figsize = ( 9, 9 ) ) plt.imshow( arrange_flex( new_data ), cmap = plt.cm.gray, interpolation = 'nearest' ) plt.show( )
year_14_15/spring_2015/machine_learning/MNIST-assignment.ipynb
ivannz/study_notes
mit
Linear Discriminant Analysis Predict groupings in continuous data.
X = np.linspace(0, 20, 100) def f(x): if x < 7: return 'a', 2. + np.random.random() elif x < 14: return 'b', 4 + np.random.random() else: return 'c', 6 + np.random.random() K, Y = zip(*[f(x) for x in X]) colors = plt.get_cmap('Set1') categories = ['a', 'b', 'c'] plt.scatter(X, Y, c=[colors(categories.index(k)*20) for k in K]) plt.show()
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
LDA is like inverted ANOVA: ANOVA looks for differences in a continuous response among categories, whereas LDA infers categories using a continuous predictor.
bycategory = [ [Y[i] for i in xrange(len(Y)) if K[i] == k] for k in categories ] plt.figure(figsize=(10, 5)) plt.subplot(121) plt.boxplot(bycategory) plt.ylim(0, 8) plt.title('ANOVA') plt.subplot(122) plt.boxplot(bycategory, 0, 'rs', 0) plt.title('LDA') plt.xlim(0, 8) plt.show()
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
LDA assumes that the variance in each group is the same, and that the predictor(s) are normally distributed for each group. In other words, different $\mu_k$, one shared $\sigma$.
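As an illustration of that assumption (a minimal sketch with made-up groups, not the notebook's data), LDA keeps a separate mean per group but pools the group variances into one shared estimate:

```python
from statistics import mean, pvariance

# Hypothetical per-group observations of a single continuous predictor.
groups = {
    'a': [2.1, 2.5, 2.3, 2.7],
    'b': [4.2, 4.6, 4.4, 4.8],
    'c': [6.0, 6.4, 6.2, 6.6],
}

# LDA: one mean per group, one variance shared by all groups.
mu = {k: mean(v) for k, v in groups.items()}
sigma2_shared = mean(pvariance(v) for v in groups.values())

print(mu)
print(sigma2_shared)
```

QDA relaxes exactly this step: it would keep `pvariance(v)` separately for each group instead of averaging them.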
X = [np.linspace(0, 7, 50), np.linspace(2, 10, 50), np.linspace(7, 16, 50)] plt.figure(figsize=(10, 4)) k = 1 for x in X: mu_k = x.mean() plt.plot(x, stats.norm.pdf(x, loc=mu_k)) plt.plot([mu_k, mu_k], [0, 0.5], c='k') plt.text(mu_k + 0.2, 0.5, "$\mu_%i$" % k, size=18) k += 1 plt.ylim(0, 0.75) plt.xlabel('Predictor', size=18) plt.ylabel('Probability', size=18) plt.show()
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
Recall Bayes Theorem: this allows us to "flip" the predictor and the response. $P(A|B) = \frac{P(B|A)P(A)}{P(B|A) P(A) + P(B|\bar{A}) P(\bar{A})}$ Therefore the probability of group $k$ given the continuous predictor B is: $P(A_k|B) = \frac{P(B|A_k) P(A_k)}{\sum_{l=1}^K P(B|A_l) P(A_l)}$ The probability that a value $X=x$ came from group $Y=k$ is: $P(Y=k|X=x) = \frac{f(x|Y=k)\pi(Y=k)}{\sum_{l=1}^K f(x|Y=l)\pi(Y=l)}$ Where $\pi(Y=k)$ is the probability of $Y=k$ regardless of $x$. This is just the relative representation of each group: $\pi(Y=k) = \frac{n_k}{\sum_{l=1}^K n_l}$ And $f(x|Y=k) = f_k(x)$ is the PDF for group $k$: $f_k(x) = \frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu_k)^2}{2\sigma^2}}$ Therefore: $P(Y=k|X=x) = \frac{\frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu_k)^2}{2\sigma^2}}\pi(Y=k)}{\sum_{l=1}^K[ \frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu_l)^2}{2\sigma^2}} \pi(Y=l)]}$ Assume for the sake of algebra that each $k \in K$ is represented equally: $\pi(Y=k) = \frac{1}{K}$. So: $P(Y=k|X=x) = \frac{e^{\frac{-(x-\mu_k)^2}{2\sigma^2}}}{\sum_{l=1}^K e^{\frac{-(x-\mu_l)^2}{2\sigma^2}}}$ Discriminant function Our prediction should be the category with the largest probability at $x$. In other words we want to choose the category $k$ that maximizes $P(Y=k|X=x)$. We can therefore ignore the denominator in the equation above. This is the same as maximizing: $e^{\frac{-(x-\mu_k)^2}{2\sigma^2}}$ Since $\log$ is monotonic, this is the same as maximizing: $\frac{-(x-\mu_k)^2}{2\sigma^2} = \frac{-(x^2 - 2x\mu_k + \mu_k^2)}{2\sigma^2}$ ...which is the same as maximizing: $\delta(x) = \frac{2x\mu_k}{2\sigma^2} - \frac{\mu_k^2}{2\sigma^2} $ $\delta(x)$ is the discriminant function. In Quadratic Discriminant Analysis, the $x$ term becomes $x^2$. In the $K=2$ case, the boundary point $x^*$ (where our predictions flip) is found by setting: $\delta_1(x^*) = \delta_2(x^*)$ $\frac{2x\mu_1}{2\sigma^2} - \frac{\mu_1^2}{2\sigma^2} = \frac{2x\mu_2}{2\sigma^2} - \frac{\mu_2^2}{2\sigma^2}$ ... 
$x^* = \frac{\mu_1 + \mu_2}{2}$ ...which is precisely halfway between the two means. This makes sense, since the variance is the same in both groups. In practice, we estimate $\mu_k$ with the group sample mean, $\hat{\mu}_k = \bar{x}_k$.
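A quick numerical check of this result, using hypothetical values for $\mu_1$, $\mu_2$ and $\sigma$ (not fitted to any data above): with $\delta_k(x) = \frac{2x\mu_k - \mu_k^2}{2\sigma^2}$, the two discriminants agree exactly at the midpoint of the means, and each dominates on its own side.

```python
mu_1, mu_2, sigma = 2.0, 6.0, 1.3   # hypothetical group means and shared standard deviation

def delta(x, mu_k):
    # Linear discriminant for group k (shared variance, equal priors).
    return (2 * x * mu_k - mu_k ** 2) / (2 * sigma ** 2)

x_star = (mu_1 + mu_2) / 2  # claimed boundary point: midpoint of the means

# At x_star the two discriminants agree; on either side one of them dominates.
print(delta(x_star, mu_1), delta(x_star, mu_2))
print(delta(3.0, mu_1) > delta(3.0, mu_2))   # left of the boundary: predict group 1
print(delta(5.0, mu_2) > delta(5.0, mu_1))   # right of the boundary: predict group 2
```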
class LDAModel_1D(object): """ Linear Discriminant Analysis with one predictor. Parameters ---------- X_bound : list Boundary points between categories in ``K_ordered``. K_ordered : list Categories, ordered by mean. """ def __init__(self, mu, sigma, K_labels): assert len(mu) == len(sigma) assert len(K_labels) == len(mu) self.K = len(K_labels) self.K_labels = K_labels self.mu = mu self.sigma = sigma def find_bounds(self): K_ordered = np.array(self.K_)[np.argsort(np.array(X_means.values()))] self.X_bound = [] for i in xrange(1, len(K_ordered)): k_0, k_1 = K_ordered[i-1], K_ordered[i] mu_0, mu_1 = X_means[k_0], X_means[k_1] self.X_bound.append(mu_0 + ((mu_1 - mu_0)/2.)) def _predict(self, x): for i in xrange(self.K): if i == 0: comp = lambda x: x <= self.X_bound[0] elif i == self.K - 1: comp = lambda x: x >= self.X_bound[-1] else: comp = lambda x: self.X_bound[i-1] < x < self.X_bound[i] if comp(x): return self.K_ordered[i] def predict(self, x, criterion=None): if criterion: return self.K_labels[criterion(self.posterior(x))] return self.K_labels[np.argmax(self.posterior(x))] def posterior(self, x): post_values = [stats.norm.pdf(x, loc=self.mu[i], scale=self.sigma[i]) for i in xrange(self.K)] return [pv/sum(post_values) for pv in post_values] def lda(K_x, X): """ Calculate the boundary points between categories. Parameters ---------- K_x : list Known category for each observation. X : list Observations of a continuous variable. Returns ------- model : :class:`.LDAModel_1D` """ K = set(K_x) X_grouped = {k:[] for k in list(K)} for k, x in zip(K_x, X): X_grouped[k].append(x) K_labels, mu = zip(*[(k, mean(v)) for k,v in X_grouped.iteritems()]) sigma = [mean([np.var(v) for v in X_grouped.values()]) for i in xrange(len(K_labels))] return LDAModel_1D(mu, sigma, K_labels) X = np.linspace(0, 20, 100) def f(x): if x < 7: return 'a', 2. 
+ np.random.random() elif x < 14: return 'b', 4 + np.random.random() else: return 'c', 6 + np.random.random() K, Y = zip(*[f(x) for x in X]) model = lda(K, X)
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
Iris Example
iris = pd.read_csv('data/iris.csv') iris_training = pd.concat([iris[iris.Species == 'setosa'].sample(25, random_state=8675309), iris[iris.Species == 'versicolor'].sample(25, random_state=8675309), iris[iris.Species == 'virginica'].sample(25, random_state=8675309)]) iris_test = iris.loc[iris.index.difference(iris_training.index)] iris_training.groupby('Species')['Sepal.Length'].hist() plt.show() model = lda(iris_training.Species, iris_training['Sepal.Length']) predictions = np.array([model.predict(x) for x in iris_test['Sepal.Length']]) truth = iris_test['Species'].values results = pd.DataFrame(np.array([predictions, truth]).T, columns=['Prediction', 'Truth']) vcounts = results.groupby('Prediction').Truth.value_counts() vcounts_dense = np.zeros((3,3)) for i in xrange(model.K): k_i = model.K_labels[i] for j in xrange(model.K): k_j = model.K_labels[j] try: vcounts_dense[i,j] = vcounts[k_i][k_j] except KeyError: pass comparison = pd.DataFrame(vcounts_dense, columns=model.K_labels) comparison['Truth'] = model.K_labels comparison x = stats.norm.rvs(loc=4, scale=1.3, size=200) def qda(K_x, X): K = set(K_x) X_grouped = {k:[] for k in list(K)} for k, x in zip(K_x, X): X_grouped[k].append(x) # Maximize f to find mu and sigma params_k = {} for k, x in X_grouped.iteritems(): guess = (np.mean(x), np.std(x)) # Variance must be greater than 0. 
constraints = {'type': 'eq', 'fun': lambda params: params[1] > 0} f = lambda params: np.sum(((-1.*(x - params[0])**2)/(2.*params[1]**2)) - np.log(params[1]*np.sqrt(2.*np.pi))) params_k[k] = optimize.minimize(lambda params: -1.*f(params), guess, constraints=constraints).x K_ordered = np.array(params_k.keys())[np.argsort(np.array(zip(*params_k.values())[0]))] X_bound = [] for i in xrange(1, len(K_ordered)): k_0, k_1 = K_ordered[i-1], K_ordered[i] mu_0, sigma2_0 = params_k[k_0] mu_1, sigma2_1 = params_k[k_1] delta_0 = lambda x: ((-1.*(x - mu_0)**2)/(2.*sigma2_0**2)) - np.log(sigma2_0*np.sqrt(2.*np.pi)) delta_1 = lambda x: ((-1.*(x - mu_1)**2)/(2.*sigma2_1**2)) - np.log(sigma2_1*np.sqrt(2.*np.pi)) bound = lambda x: np.abs(delta_0(x) - delta_1(x)) o = optimize.minimize(bound, mu_0 + (mu_1-mu_0)) X_bound.append(o.x) mu, sigma = zip(*params_k.values()) return LDAModel_1D(mu, sigma, params_k.keys()) qmodel = qda(iris_training.Species, iris_training['Sepal.Length'])
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
$P(Y=k|X=x) \propto \frac{-(x-\mu_k)^2}{2\sigma_k^2} - \log(\sigma_k\sqrt{2\pi})$ $-\sum_{x \in X} \bigg( \frac{-(x-\mu_k)^2}{2\sigma_k^2} - \log(\sigma_k\sqrt{2\pi}) \bigg)$ $\bigg|\bigg(\frac{-(x-\mu_k)^2}{2\sigma_k^2} - \log(\sigma_k\sqrt{2\pi})\bigg) - \bigg(\frac{-(x-\mu_{k'})^2}{2\sigma_{k'}^2} - \log(\sigma_{k'}\sqrt{2\pi})\bigg)\bigg|$
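To see what per-group variances do to the boundary (a sketch with hypothetical parameters, not the notebook's fitted model): the crossing point of the two quadratic discriminants is no longer the midpoint of the means; it shifts toward the group with the tighter variance.

```python
import math

def qda_delta(x, mu_k, sigma_k):
    # Quadratic discriminant: log of the (equal-prior) normal density for group k.
    return -((x - mu_k) ** 2) / (2 * sigma_k ** 2) - math.log(sigma_k * math.sqrt(2 * math.pi))

mu_1, sigma_1 = 2.0, 0.5    # tight group
mu_2, sigma_2 = 6.0, 2.0    # spread-out group

# Grid search for the crossing point between the two means.
xs = [2 + i * 0.001 for i in range(4001)]
x_star = min(xs, key=lambda x: abs(qda_delta(x, mu_1, sigma_1) - qda_delta(x, mu_2, sigma_2)))

print(x_star)   # lies nearer the tighter group's mean than the midpoint (4.0)
```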
qpredictions = np.array([qmodel.predict(x) for x in iris_test['Sepal.Length']]) plt.figure(figsize=(15, 5)) X_ = np.linspace(0, 20, 200) iris_training.groupby('Species')['Sepal.Length'].hist() # iris_test.groupby('Species')['Sepal.Length'].hist() ax = plt.gca() ax2 = ax.twinx() for k in qmodel.K_labels: i = qmodel.K_labels.index(k) ax2.plot(X_, stats.norm.pdf(X_, loc=qmodel.mu[i], scale=qmodel.sigma[i]), label='{0}, $\mu={1}$, $\sigma={2}$'.format(k, qmodel.mu[i], qmodel.sigma[i]), lw=4) plt.legend(loc=2) plt.xlim(2, 9) plt.show() results = pd.DataFrame(np.array([qpredictions, truth]).T, columns=['Prediction', 'Truth']) vcounts = results.groupby('Prediction').Truth.value_counts() vcounts_dense = np.zeros((3,3)) for i in xrange(qmodel.K): k_i = qmodel.K_labels[i] for j in xrange(qmodel.K): k_j = qmodel.K_labels[j] try: vcounts_dense[i,j] = vcounts[k_i][k_j] except KeyError: pass comparison = pd.DataFrame(vcounts_dense, columns=qmodel.K_labels) comparison['Truth'] = qmodel.K_labels comparison c = np.array(zip(qpredictions, truth)).T float((c[0] == c[1]).sum())/c.shape[1] Hemocrit = pd.read_csv('data/Hemocrit.csv') model = lda(Hemocrit.status, Hemocrit.hemocrit)
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
The default approach was to predict 'Cheat' when $P(Cheater\big|X) > 0.5$.
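That criterion comes straight from the posterior; here is a sketch of the two-class Bayes computation with illustrative parameters (not the fitted hemocrit model):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical class-conditional densities and priors.
mu_cheat, mu_clean, sigma = 50.0, 44.0, 2.5
pi_cheat, pi_clean = 0.5, 0.5

def p_cheat_given_x(x):
    # Bayes: f_cheat(x) * pi_cheat, normalized over both classes.
    num = normal_pdf(x, mu_cheat, sigma) * pi_cheat
    den = num + normal_pdf(x, mu_clean, sigma) * pi_clean
    return num / den

print(p_cheat_given_x(47.0))        # exactly between the means: posterior is 0.5
print(p_cheat_given_x(52.0) > 0.5)  # criterion says predict 'Cheat'
```

Raising the 0.5 cutoff trades false accusations (false positives) for missed cheaters (false negatives), which is exactly what the ROC analysis below the confusion matrices explores.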
# Histogram of hemocrit values for cheaters and non-cheaters. Hemocrit[Hemocrit.status == 'Cheat'].hemocrit.hist(histtype='step') Hemocrit[Hemocrit.status == 'Clean'].hemocrit.hist(histtype='step') plt.ylim(0, 40) plt.ylabel('N') # Probability of being a cheater (or not) as a function of hemocrit. ax = plt.gca() ax2 = ax.twinx() R = np.linspace(0, 100, 500) post = np.array([model.posterior(r) for r in R]) ax2.plot(R, post[:, 0], label=model.K_labels[0]) ax2.plot(R, post[:, 1], label=model.K_labels[1]) plt.ylabel('P(Y=k)') plt.xlabel('Hemocrit') plt.legend() plt.xlim(40, 60) plt.title('Criterion: P > 0.5') plt.show()
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
Confusion matrix:
predictions = [model.predict(h) for h in Hemocrit.hemocrit] truth = Hemocrit.status.values confusion = pd.DataFrame(np.array([predictions, truth]).T, columns=('Prediction', 'Truth')) confusion.groupby('Prediction').Truth.value_counts()
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
Trying the same thing, but with QDA:
qmodel = qda(Hemocrit.status, Hemocrit.hemocrit) qpredictions = np.array([qmodel.predict(h) for h in Hemocrit.hemocrit]) truth = Hemocrit.status.values qconfusion = pd.DataFrame(np.array([qpredictions, truth]).T, columns=('Prediction', 'Truth'))
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
Receiver Operator Characteristic (ROC) curve Provides a visual summary of the confusion matrix over a range of criteria. Given a confusion matrix, $N=TN+FP$, $P=TP+FN$.
plt.figure(figsize=(5, 5))
plt.text(0.25, 0.75, 'TN', size=18)
plt.text(0.75, 0.75, 'FP', size=18)
plt.text(0.25, 0.25, 'FN', size=18)
plt.text(0.75, 0.25, 'TP', size=18)   # true positives (pred Pos, truth Pos)
plt.xticks([0.25, 0.75], ['Neg', 'Pos'], size=20)
plt.yticks([0.25, 0.75], ['Pos', 'Neg'], size=20)
plt.ylabel('Truth', size=24)
plt.xlabel('Prediction', size=24)
plt.title('Confusion Matrix', size=26)
plt.show()
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
The true positive rate, or Power (or Sensitivity) is $\frac{TP}{P}$ and the Type 1 error is $\frac{FP}{N}$. The ROC curve shows Power vs. Type 1 error. Ideally, we can achieve a high true positive rate at a very low false positive rate:
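These rates follow directly from the confusion-matrix counts; a minimal sketch with made-up counts:

```python
# Hypothetical confusion-matrix counts.
TP, FN = 30, 10   # actual positives:  P = TP + FN
TN, FP = 55, 5    # actual negatives:  N = TN + FP

P = TP + FN
N = TN + FP

tpr = TP / P   # power / sensitivity: true positive rate
fpr = FP / N   # type 1 error: false positive rate

print(tpr, fpr)
```

Sweeping the decision criterion (as the loops over `p` do below) recomputes this pair at each cutoff, and the resulting (fpr, tpr) points trace out the ROC curve.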
plt.figure() X = np.linspace(0., 0.5, 200) f = lambda x: 0.001 if x < 0.01 else 0.8 plt.plot(X, map(f, X)) plt.ylabel('True positive rate (power)') plt.xlabel('False positive rate (type 1 error)') plt.show()
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
With the hemocrit example:
ROC = [] C = [] for p in np.arange(0.5, 1.0, 0.005): criterion = lambda posterior: 0 if posterior[0] > p else 1 predictions = [model.predict(h, criterion) for h in Hemocrit.hemocrit] truth = Hemocrit.status.values confusion = pd.DataFrame(np.array([predictions, truth]).T, columns=('Prediction', 'Truth')) FP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Clean'].shape[0] N = confusion[confusion['Truth'] == 'Clean'].shape[0] FP_rate = float(FP)/N TP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Cheat'].shape[0] P = confusion[confusion['Truth'] == 'Cheat'].shape[0] TP_rate = float(TP)/P ROC.append((FP_rate, TP_rate)) C.append(p) plt.title('ROC curve for LDA') FP_rate, TP_rate = zip(*ROC) plt.plot(FP_rate, TP_rate) for i in xrange(0, len(FP_rate), 10): plt.plot(FP_rate[i], TP_rate[i], 'ro') plt.text(FP_rate[i]+0.001, TP_rate[i]+0.01, C[i]) plt.xlim(-0.01, 0.14) plt.ylim(0, .7) plt.ylabel('True positive rate (power)') plt.xlabel('False positive rate (type 1 error)') plt.show() QROC = [] C = [] for p in np.arange(0.5, 1.0, 0.005): criterion = lambda posterior: 0 if posterior[0] > p else 1 predictions = [qmodel.predict(h, criterion) for h in Hemocrit.hemocrit] truth = Hemocrit.status.values confusion = pd.DataFrame(np.array([predictions, truth]).T, columns=('Prediction', 'Truth')) FP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Clean'].shape[0] N = confusion[confusion['Truth'] == 'Clean'].shape[0] FP_rate = float(FP)/N TP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Cheat'].shape[0] P = confusion[confusion['Truth'] == 'Cheat'].shape[0] TP_rate = float(TP)/P QROC.append((FP_rate, TP_rate)) C.append(p) plt.title('ROC curve for QDA') FP_rate, TP_rate = zip(*QROC) plt.plot(FP_rate, TP_rate) for i in xrange(0, len(FP_rate), 10): plt.plot(FP_rate[i], TP_rate[i], 'ro') plt.text(FP_rate[i]+0.001, TP_rate[i]+0.01, C[i]) plt.xlim(-0.01, 0.14) plt.ylim(0, .7) plt.ylabel('True 
positive rate (power)') plt.xlabel('False positive rate (type 1 error)') plt.show()
.ipynb_checkpoints/Linear and Quadratic Discriminant Analysis-checkpoint.ipynb
erickpeirson/statistical-computing
cc0-1.0
Important Note: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: K.function(...). 1 - Problem Statement You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around. <center> <video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls> </video> </center> <caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We would like to especially thank drive.ai for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles. </center></caption> <img src="nb_images/driveai.png" style="width:100px;height:100;"> You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like. <img src="nb_images/box_label.png" style="width:500px;height:250;"> <caption><center> <u> Figure 1 </u>: Definition of a box<br> </center></caption> If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. 
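The two class representations mentioned above are easy to convert between; a small stdlib sketch (indices here run from 0 to 79, matching array indexing, rather than the 1-to-80 labelling in the text):

```python
NUM_CLASSES = 80

def to_one_hot(c):
    # Integer class index -> 80-dimensional one-hot vector.
    v = [0] * NUM_CLASSES
    v[c] = 1
    return v

def from_one_hot(v):
    # One-hot vector -> integer class index (argmax).
    return max(range(len(v)), key=v.__getitem__)

v = to_one_hot(7)
print(sum(v), from_one_hot(v))   # -> 1 7
```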
2 - YOLO YOLO ("you only look once") is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes. 2.1 - Model details First things to know: - The input is a batch of images of shape (m, 608, 608, 3) - The output is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85). Let's look in greater detail at what this encoding represents. <img src="nb_images/architecture.png" style="width:700px;height:400;"> <caption><center> <u> Figure 2 </u>: Encoding architecture for YOLO<br> </center></caption> If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object. Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height. For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425). <img src="nb_images/flatten.png" style="width:700px;height:400;"> <caption><center> <u> Figure 3 </u>: Flattening the last two dimensions<br> </center></caption> Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class. 
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;"> <caption><center> <u> Figure 4 </u>: Find the class detected by each box<br> </center></caption> Here's one way to visualize what YOLO is predicting on an image: - For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes). - Color that grid cell according to what object that grid cell considers the most likely. Doing this results in this picture: <img src="nb_images/proba_map.png" style="width:300px;height:300;"> <caption><center> <u> Figure 5 </u>: Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.<br> </center></caption> Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: <img src="nb_images/anchor_map.png" style="width:200px;height:200;"> <caption><center> <u> Figure 6 </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption> In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps: - Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class) - Select only one box when several boxes overlap with each other and detect the same object. 
2.2 - Filtering with a threshold on class scores You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold. The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables: - box_confidence: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells. - boxes: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell. - box_class_probs: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell. Exercise: Implement yolo_filter_boxes(). 1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator: python a = np.random.randn(19*19, 5, 1) b = np.random.randn(19*19, 5, 80) c = a * b # shape of c will be (19*19, 5, 80) 2. For each box, find: - the index of the class with the maximum box score (Hint) (Be careful with what axis you choose; consider using axis=-1) - the corresponding box score (Hint) (Be careful with what axis you choose; consider using axis=-1) 3. Create a mask by using a threshold. As a reminder: ([0.9, 0.3, 0.4, 0.5, 0.1] &lt; 0.4) returns: [False, True, False, False, True]. The mask should be True for the boxes you want to keep. 4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. (Hint) Reminder: to call a Keras function, you should use K.function(...).
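Before the TensorFlow implementation, the two steps above (elementwise score product, then a boolean mask) can be miniaturized in plain Python with made-up numbers: one grid cell, two anchor boxes, three classes.

```python
# Tiny sketch of the filtering steps on a single cell with 2 anchor boxes and 3 classes.
box_confidence = [[0.9], [0.2]]                       # p_c per box
box_class_probs = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]  # c_1..c_3 per box

threshold = 0.4
kept = []
for conf, probs in zip(box_confidence, box_class_probs):
    scores = [conf[0] * p for p in probs]             # elementwise product (Figure 4)
    best_class = max(range(len(scores)), key=scores.__getitem__)  # argmax over classes
    best_score = scores[best_class]
    if best_score >= threshold:                       # the boolean-mask step
        kept.append((best_score, best_class))

print(kept)   # only the first box survives, with class index 1
```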
# GRADED FUNCTION: yolo_filter_boxes

def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
    """Filters YOLO boxes by thresholding on object and class confidence.

    Arguments:
    box_confidence -- tensor of shape (19, 19, 5, 1)
    boxes -- tensor of shape (19, 19, 5, 4)
    box_class_probs -- tensor of shape (19, 19, 5, 80)
    threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box

    Returns:
    scores -- tensor of shape (None,), containing the class probability score for selected boxes
    boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
    classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes

    Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
    For example, the actual output size of scores would be (10,) if there are 10 boxes.
    """

    # Step 1: Compute box scores
    ### START CODE HERE ### (≈ 1 line)
    box_scores = box_confidence * box_class_probs     # (19, 19, 5, 80)
    ### END CODE HERE ###

    # Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
    ### START CODE HERE ### (≈ 2 lines)
    box_classes = K.argmax(box_scores, axis=-1)       # (19, 19, 5)
    box_class_scores = K.max(box_scores, axis=-1)     # (19, 19, 5)
    ### END CODE HERE ###

    # Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
    # same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
    ### START CODE HERE ### (≈ 1 line)
    filtering_mask = box_class_scores >= threshold    # (19, 19, 5)
    ### END CODE HERE ###

    # Step 4: Apply the mask to scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = tf.boolean_mask(box_class_scores, filtering_mask)
    boxes = tf.boolean_mask(boxes, filtering_mask)
    classes = tf.boolean_mask(box_classes, filtering_mask)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_a:
    box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
    box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.shape))
    print("boxes.shape = " + str(boxes.shape))
    print("classes.shape = " + str(classes.shape))
course-deeplearning.ai/course4-cnn/week3-car-detection/Autonomous+driving+application+-+Car+detection+-+v1.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table> <tr> <td> **scores[2]** </td> <td> 10.7506 </td> </tr> <tr> <td> **boxes[2]** </td> <td> [ 8.42653275 3.27136683 -0.5313437 -4.94137383] </td> </tr> <tr> <td> **classes[2]** </td> <td> 7 </td> </tr> <tr> <td> **scores.shape** </td> <td> (?,) </td> </tr> <tr> <td> **boxes.shape** </td> <td> (?, 4) </td> </tr> <tr> <td> **classes.shape** </td> <td> (?,) </td> </tr> </table> 2.3 - Non-max suppression Even after filtering by thresholding on the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). <img src="nb_images/non-max-suppression.png" style="width:500px;height:400;"> <caption><center> <u> Figure 7 </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. <br> </center></caption> Non-max suppression uses the very important function called "Intersection over Union", or IoU. <img src="nb_images/iou.png" style="width:500px;height:400;"> <caption><center> <u> Figure 8 </u>: Definition of "Intersection over Union". <br> </center></caption> Exercise: Implement iou(). Some hints: - In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width. - To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1) - You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. 
Remember that: - xi1 = maximum of the x1 coordinates of the two boxes - yi1 = maximum of the y1 coordinates of the two boxes - xi2 = minimum of the x2 coordinates of the two boxes - yi2 = minimum of the y2 coordinates of the two boxes In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
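Before implementing iou(), it helps to trace the arithmetic once by hand. The quick sketch below (plain Python, separate from the graded function) works through the two test boxes used further down:

```python
# Two overlapping boxes given as (x1, y1, x2, y2) corners
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)

# Intersection corners: max of the upper-left coords, min of the lower-right coords
xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])   # (2, 2)
xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])   # (3, 3)
# Clamping at zero handles boxes that do not overlap at all
inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)        # 1 * 1 = 1

# Each box is 2x2 = 4, so Union(A,B) = A + B - Inter(A,B) = 4 + 4 - 1 = 7
box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
union_area = box1_area + box2_area - inter_area

print(inter_area / union_area)  # 1/7 = 0.14285714285714285
```

This matches the expected output of the graded iou() test case below.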
# GRADED FUNCTION: iou

def iou(box1, box2):
    """Implement the intersection over union (IoU) between box1 and box2
    
    Arguments:
    box1 -- first box, list object with coordinates (x1, y1, x2, y2)
    box2 -- second box, list object with coordinates (x1, y1, x2, y2)
    """

    # Calculate the (xi1, yi1, xi2, yi2) coordinates of the intersection of box1 and box2. Calculate its Area.
    ### START CODE HERE ### (≈ 5 lines)
    xi1 = max(box1[0], box2[0])
    yi1 = max(box1[1], box2[1])
    xi2 = min(box1[2], box2[2])
    yi2 = min(box1[3], box2[3])
    # Clamp at zero so that non-overlapping boxes give an intersection area of 0
    inter_area = max(yi2 - yi1, 0) * max(xi2 - xi1, 0)
    ### END CODE HERE ###
    
    # Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
    ### START CODE HERE ### (≈ 3 lines)
    box1_area = (box1[3] - box1[1]) * (box1[2] - box1[0])
    box2_area = (box2[3] - box2[1]) * (box2[2] - box2[0])
    union_area = box1_area + box2_area - inter_area
    ### END CODE HERE ###
    
    # compute the IoU
    ### START CODE HERE ### (≈ 1 line)
    iou = inter_area / union_area
    ### END CODE HERE ###

    return iou

box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
course-deeplearning.ai/course4-cnn/week3-car-detection/Autonomous+driving+application+-+Car+detection+-+v1.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table> <tr> <td> **iou = ** </td> <td> 0.14285714285714285 </td> </tr> </table> You are now ready to implement non-max suppression. The key steps are: 1. Select the box that has the highest score. 2. Compute its overlap with all other boxes, and remove boxes that overlap it more than iou_threshold. 3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box. This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain. Exercise: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your iou() implementation): - tf.image.non_max_suppression() - K.gather()
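tf.image.non_max_suppression() implements essentially the greedy loop described in the three steps above. As a rough illustration only — a pure-Python sketch with made-up helper names, not the TensorFlow implementation:

```python
def iou_simple(b1, b2):
    # Boxes given as (x1, y1, x2, y2) corners
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def nms_sketch(scores, boxes, max_boxes=10, iou_threshold=0.5):
    # Step 1: visit boxes in order of descending score
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        # Step 2: drop this box if it overlaps an already-kept box too much
        if all(iou_simple(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
        if len(keep) == max_boxes:
            break
    return keep

# Three near-duplicate detections of one object, plus one distinct box
boxes = [(0, 0, 2, 2), (0.1, 0, 2.1, 2), (0, 0.1, 2, 2.1), (5, 5, 7, 7)]
scores = [0.9, 0.75, 0.8, 0.6]
print(nms_sketch(scores, boxes))  # [0, 3] -- one box per object survives
```

The graded function below delegates all of this to TensorFlow and only gathers the surviving indices.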
# GRADED FUNCTION: yolo_non_max_suppression

def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
    """
    Applies Non-max suppression (NMS) to set of boxes
    
    Arguments:
    scores -- tensor of shape (None,), output of yolo_filter_boxes()
    boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
    classes -- tensor of shape (None,), output of yolo_filter_boxes()
    max_boxes -- integer, maximum number of predicted boxes you'd like
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
    
    Returns:
    scores -- tensor of shape (, None), predicted score for each box
    boxes -- tensor of shape (4, None), predicted box coordinates
    classes -- tensor of shape (, None), predicted class for each box
    
    Note: The "None" dimension of the output tensors obviously has to be less than max_boxes. Note also that this
    function will transpose the shapes of scores, boxes, classes. This is done for convenience.
    """
    
    max_boxes_tensor = K.variable(max_boxes, dtype='int32')     # tensor to be used in tf.image.non_max_suppression()
    K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
    
    # Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
    ### START CODE HERE ### (≈ 1 line)
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes_tensor, iou_threshold=iou_threshold)
    ### END CODE HERE ###
    
    # Use K.gather() to select only nms_indices from scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = K.gather(scores, nms_indices)
    boxes = K.gather(boxes, nms_indices)
    classes = K.gather(classes, nms_indices)
    ### END CODE HERE ###
    
    return scores, boxes, classes

with tf.Session() as test_b:
    scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
    classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))
course-deeplearning.ai/course4-cnn/week3-car-detection/Autonomous+driving+application+-+Car+detection+-+v1.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table> <tr> <td> **scores[2]** </td> <td> 6.9384 </td> </tr> <tr> <td> **boxes[2]** </td> <td> [-5.299932 3.13798141 4.45036697 0.95942086] </td> </tr> <tr> <td> **classes[2]** </td> <td> -2.24527 </td> </tr> <tr> <td> **scores.shape** </td> <td> (10,) </td> </tr> <tr> <td> **boxes.shape** </td> <td> (10, 4) </td> </tr> <tr> <td> **classes.shape** </td> <td> (10,) </td> </tr> </table> 2.4 Wrapping up the filtering It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented. Exercise: Implement yolo_eval() which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided): python boxes = yolo_boxes_to_corners(box_xy, box_wh) which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of yolo_filter_boxes python boxes = scale_boxes(boxes, image_shape) YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image. Don't worry about these two functions; we'll show you where they need to be called.
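As a rough idea of what those two provided helpers do — this is a hedged NumPy sketch assuming boxes are stored as fractions of the input image in (x1, y1, x2, y2) order, not the actual implementations shipped with the assignment:

```python
import numpy as np

def boxes_to_corners_sketch(box_xy, box_wh):
    # (x, y) midpoint and (w, h) extent -> (x1, y1, x2, y2) corners
    mins = box_xy - box_wh / 2.0
    maxes = box_xy + box_wh / 2.0
    return np.concatenate([mins, maxes], axis=-1)

def scale_boxes_sketch(boxes, image_shape):
    # Stretch fractional corner coordinates up to the real image size,
    # e.g. (720, 1280) for the car detection dataset
    height, width = image_shape
    return boxes * np.array([width, height, width, height])

# A box centered at (0.5, 0.5) with width 0.2 and height 0.4
corners = boxes_to_corners_sketch(np.array([0.5, 0.5]), np.array([0.2, 0.4]))
print(corners)  # [0.4 0.3 0.6 0.7]
print(scale_boxes_sketch(corners, (720., 1280.)))  # [512. 216. 768. 504.]
```

The real helpers may use a different coordinate order internally, so treat this only as the general shape of the conversion.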
# GRADED FUNCTION: yolo_eval

def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
    """
    Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
    
    Arguments:
    yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
                    box_confidence: tensor of shape (None, 19, 19, 5, 1)
                    box_xy: tensor of shape (None, 19, 19, 5, 2)
                    box_wh: tensor of shape (None, 19, 19, 5, 2)
                    box_class_probs: tensor of shape (None, 19, 19, 5, 80)
    image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
    max_boxes -- integer, maximum number of predicted boxes you'd like
    score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
    
    Returns:
    scores -- tensor of shape (None, ), predicted score for each box
    boxes -- tensor of shape (None, 4), predicted box coordinates
    classes -- tensor of shape (None,), predicted class for each box
    """
    
    ### START CODE HERE ###
    
    # Retrieve outputs of the YOLO model (≈1 line)
    box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
    
    # Convert boxes to be ready for filtering functions
    boxes = yolo_boxes_to_corners(box_xy, box_wh)
    
    # Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = score_threshold)
    
    # Scale boxes back to original image shape.
    boxes = scale_boxes(boxes, image_shape)
    
    # Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes = max_boxes, iou_threshold = iou_threshold)
    
    ### END CODE HERE ###
    
    return scores, boxes, classes

with tf.Session() as test_b:
    yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
    scores, boxes, classes = yolo_eval(yolo_outputs)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))
course-deeplearning.ai/course4-cnn/week3-car-detection/Autonomous+driving+application+-+Car+detection+-+v1.ipynb
liufuyang/deep_learning_tutorial
mit
Import Data
continuous_distributions = "C:\\Users\\jdorvinen\\Documents\\ipynbs\\Hoboken\\continuous_distributions_.csv"
dists_unindexed = pd.read_csv(continuous_distributions)
dist_list = dists_unindexed.distribution.tolist()
dists = dists_unindexed.set_index(dists_unindexed.distribution)

file_name = 'montauk_combined_data.csv'
file_path = 'C:/Users/jdorvinen/Documents/Jared/Projects/East Hampton/met_data'  # Path to your data here
# If your data is in an excel spreadsheet save it as a delimited text file (.csv formatted)
filename = os.path.join(file_path, file_name)

df = pd.read_fwf(filename, usecols=['length','inter','swel','hsig','tps','a_hsig','a_tps']).dropna()
df = df[df.hsig > 3.1]
title = str(file_name) + "\n"

pd.tools.plotting.scatter_matrix(df.drop(labels=['a_hsig','a_tps'], axis=1))
plt.scatter(df.length, df.tps)

import sklearn as sk

def save_figure(path, name):
    plt.savefig(path + name + ".png",
                dpi=200,
                facecolor='none',
                edgecolor='none')

def find_nearest(array, value):
    idx = (np.abs(array - value)).argmin()
    return idx

def dist_fit(name, dist_name, bins, parameter):
    global df
    # Initialize figure and set dimensions
    fig = plt.figure(figsize=(18, 6))
    gs = gridspec.GridSpec(2, 2)
    ax1 = fig.add_subplot(gs[:, 0])
    ax3 = fig.add_subplot(gs[:, 1])
    ax1.set_title(title, fontsize=20)
    # Remove the plot frame lines. They are unnecessary chartjunk.
    ax1.spines["top"].set_visible(False)
    ax1.spines["right"].set_visible(False)
    ax3.spines["top"].set_visible(False)
    ax3.spines["right"].set_visible(False)
    # Ensure that the axis ticks only show up on the bottom and left of the plot.
    # Ticks on the right and top of the plot are generally unnecessary chartjunk.
    ax1.get_xaxis().tick_bottom()
    ax1.get_yaxis().tick_left()
    ax3.get_xaxis().tick_bottom()
    ax3.get_yaxis().tick_left()
    # Make sure your axis ticks are large enough to be easily read.
    # You don't want your viewers squinting to read your plot.
    ax1.tick_params(axis="both", which="both", bottom="off", top="off",
                    labelbottom="on", left="on", right="off", labelleft="on", labelsize=14)
    ax3.tick_params(axis="both", which="both", bottom="off", top="off",
                    labelbottom="on", left="on", right="off", labelleft="on", labelsize=14)
    # Along the same vein, make sure your axis labels are large
    # enough to be easily read as well. Make them slightly larger
    # than your axis tick labels so they stand out.
    ax1.set_xlabel(parameter, fontsize=16)
    ax1.set_ylabel("Frequency of occurrence", fontsize=16)
    ax3.set_xlabel(parameter, fontsize=16)
    ax3.set_ylabel("Exceedance Probability", fontsize=16)
    # Setting variables
    size = len(df[parameter])
    max_val = 1.1 * max(df[parameter])
    min_val = min(df[parameter])
    range_val = max_val - min_val
    binsize = range_val / bins
    x0 = np.arange(min_val, max_val, range_val * 0.0001)
    x1 = np.arange(min_val, max_val, binsize)
    y1 = df[parameter]
    # Set x-axis limits
    ax1.set_xlim(min_val, max_val)
    ax3.set_xlim(min_val, max_val)
    ax3.set_ylim(0, 1.1)
    # Plot histograms
    EPDF = ax1.hist(y1, bins=x1, color='w')
    ECDF = ax3.hist(y1, bins=x1, color='w', normed=1, cumulative=True)
    # Fitting distribution
    dist = getattr(scipy.stats, dist_name)
    param = dist.fit(y1)
    pdf_fitted = dist.pdf(x0, *param[:-2], loc=param[-2], scale=param[-1]) * size * binsize
    cdf_fitted = dist.cdf(x0, *param[:-2], loc=param[-2], scale=param[-1])
    # Checking goodness of fit
    # ks_fit = stats.kstest(pdf_fitted, dist_name)  # Kolmogorov-Smirnov test: returns [KS stat (D, D+, or D-), pvalue]
    # print(ks_fit)
    # Finding location of 0.002 and 0.01 exceedance probability events
    FiveHundInd = find_nearest(cdf_fitted, 0.998)
    OneHundInd = find_nearest(cdf_fitted, 0.990)
    # Plotting pdf and cdf
    ax1.plot(x0, pdf_fitted, linewidth=2, label=dist_name)
    ax3.plot(x0, cdf_fitted, linewidth=2, label=dist_name)
    # Update figure spacing
    gs.update(wspace=0.1, hspace=0.2)
    # Adding a text box
    ax3.text(min_val + 0.1 * range_val, 1.1,
             dist_name.upper() + " distribution\n"
             + "\n"
             + "0.2% - value: " + str("%.2f" % x0[FiveHundInd]) + " meters\n"
             + "1.0% - value: " + str("%.2f" % x0[OneHundInd]) + " meters",
             fontsize=14)
    print(dists.loc[dist_name, 'description'] + "\n")
    param_names = (dist.shapes + ', loc, scale').split(', ') if dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, param)])
    dist_str = '{}({})'.format(dist_name, param_str)
    print(dist_str)
    plt.show()
    print(stats.kstest(y1, dist_name, param, alternative='less'))
    print(stats.kstest(y1, dist_name, param, alternative='greater'))
    print(stats.kstest(y1, dist_name, param, alternative='two-sided'))
    return name

interact_manual(dist_fit, name=filename, dist_name=dist_list, bins=[25, 100, 5],
                parameter=['length','inter','swel','hsig','tps','a_hsig','a_tps'])
.ipynb_checkpoints/Distribution_Fit_MC-checkpoint.ipynb
jdorvi/MonteCarlos_SLC
mit
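The 0.2% and 1.0% exceedance values printed by dist_fit above come from inverting the fitted CDF on a grid via find_nearest. A minimal NumPy sketch of that idea, using a hand-made exponential CDF instead of a scipy fit:

```python
import numpy as np

def find_nearest(array, value):
    # Index of the grid point whose value is closest to `value`
    return (np.abs(array - value)).argmin()

# Grid of candidate values and a toy (monotone) CDF over that grid
x0 = np.linspace(0.0, 10.0, 1001)
cdf = 1.0 - np.exp(-x0)  # exponential CDF, just for illustration

# The 1% exceedance event is where the CDF reaches 0.99
idx = find_nearest(cdf, 0.99)
print(x0[idx])  # close to -ln(0.01), i.e. about 4.605
```

With a real fit, x0 spans the observed data range and cdf comes from dist.cdf, but the inversion step is the same.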
### References: Python <br> http://stackoverflow.com/questions/6615489/fitting-distributions-goodness-of-fit-p-value-is-it-possible-to-do-this-with/16651524#16651524 <br> http://stackoverflow.com/questions/6620471/fitting-empirical-distribution-to-theoretical-ones-with-scipy-python <br><br> Extreme wave statistics <br> http://drs.nio.org/drs/bitstream/handle/2264/4165/Nat_Hazards_64_223a.pdf;jsessionid=55AAEDE5A2BF3AA06C6CCB5CE3CBEBAD?sequence=1 <br><br> List of available distributions can be found here <br> http://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions<br><br> Goodness of fit tests <br> http://statsmodels.sourceforge.net/stable/stats.html#goodness-of-fit-tests-and-measures <br> http://docs.scipy.org/doc/scipy/reference/stats.html#statistical-functions <br>
dist = getattr(scipy.stats, 'genextreme')
#param = dist.fit(y1)
.ipynb_checkpoints/Distribution_Fit_MC-checkpoint.ipynb
jdorvi/MonteCarlos_SLC
mit
The resulting model looks like half a parabola. Try on your own to see what the cubic looks like:
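polynomial_sframe is defined earlier in this assignment; as a rough NumPy equivalent (a sketch, not the graded helper), the feature expansion simply raises the input column to successive powers:

```python
import numpy as np

def polynomial_features_sketch(feature, degree):
    # Column i holds feature ** (i + 1), mirroring power_1 ... power_degree
    feature = np.asarray(feature, dtype=float)
    return np.column_stack([feature ** p for p in range(1, degree + 1)])

X = polynomial_features_sketch([1.0, 2.0, 3.0], 3)
print(X)
# [[ 1.  1.  1.]
#  [ 2.  4.  8.]
#  [ 3.  9. 27.]]
```

Fitting a linear model on these columns is what makes the degree-3 fit below a cubic in sqft_living.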
poly3_data = polynomial_sframe(sales['sqft_living'], 3)
my_features3 = poly3_data.column_names()
poly3_data['price'] = sales['price']
model3 = graphlab.linear_regression.create(poly3_data, target = 'price', features = my_features3, validation_set = None)
plt.plot(poly3_data['power_1'], poly3_data['price'], '.',
         poly3_data['power_1'], model3.predict(poly3_data), '-')
Course 2 - ML, Regression/week-3-polynomial-regression-assignment-blank.ipynb
dennys-bd/Coursera-Machine-Learning-Specialization
mit