However, addition increases the rank:
op2 = op + op + op
op2.ranks
examples/mpnum_intro.ipynb
dseuss/mpnum
bsd-3-clause
Matrix multiplication multiplies the individual ranks:
op3 = mp.dot(op2, op2)
op3.ranks
(NB: compress and compression below may call canonicalize on the MPA, which can already reduce the rank to 1 if the rank is compressible without error.) The operator represented by op3 is still the identity operator, i.e. a tensor product operator, so we expect to find a good low-rank approximation easily. Finding such an approximation is called compression and is achieved as follows:
op3 /= mp.norm(op3.copy())  # normalize to make overlap meaningful
copy = op3.copy()
overlap = copy.compress(method='svd', rank=1)
copy.ranks
Calling compress on an MPA replaces the MPA in place with a version of smaller rank. The returned overlap gives the absolute value of the (Hilbert-Schmidt) inner product between the original state and the output:
overlap
Instead of in-place compression, we can also obtain a compressed copy:
compr, overlap = op3.compression(method='svd', rank=2)
overlap, compr.ranks, op3.ranks
SVD compression can also be told to meet a certain truncation error (see the documentation of mp.MPArray.compress for details).
compr, overlap = op3.compression(method='svd', relerr=1e-6)
overlap, compr.ranks, op3.ranks
We can also use variational compression instead of SVD compression:
compr, overlap = op3.compression(method='var', rank=2, num_sweeps=10, var_sites=2)
# Convert overlap from numpy array with shape () to float for nicer display:
overlap = overlap.flat[0]
complex(overlap), compr.ranks, op3.ranks
As a reminder, it is always advisable to check whether the overlap between the input state and the compression is large enough. In an involved algorithm, it can be useful to store the compression error at each invocation of compression.

MPO sum of local terms

A frequent task is to compute the MPO representation of a local Hamiltonian, i.e. of an operator of the form $$ H = \sum_{i=1}^{n-1} h_{i, i+1} $$ where $h_{i, i+1}$ acts only on sites $i$ and $i + 1$. This means that $h_{i, i+1} = \mathbb 1_{i - 1} \otimes h'_{i, i+1} \otimes \mathbb 1_{n - i - w + 1}$, where $\mathbb 1_k$ is the identity matrix on $k$ sites and $w = 2$ is the width of $h'_{i, i+1}$. We show how to obtain an MPO representation of such a Hamiltonian. First of all, we need to define the local terms. For simplicity, we choose $h'_{i, i+1} = \sigma_Z \otimes \sigma_Z$ independently of $i$.
zeros = np.zeros((2, 2))
zeros
idm = np.eye(2)
idm
# Create a float array instead of an int array to avoid problems later
Z = np.diag([1., -1])
Z
h = np.kron(Z, Z)
h
First, we have to convert the local term h to an MPO:
h_arr = h.reshape((2, 2, 2, 2))
h_mpo = mp.MPArray.from_array_global(h_arr, ndims=2)
h_mpo.ranks
h_mpo has rank 4 even though h is a tensor product, which is far from optimal. We can improve things as follows (we could also compress h_mpo):
h_mpo = mp.MPArray.from_kron([Z, Z])
h_mpo.ranks
The simplest way is to implement the formula above directly with MPOs. First we compute the $h_{i, i+1}$ from the $h'_{i, i+1}$:
width = 2
sites = 6
local_terms = []
for startpos in range(sites - width + 1):
    left = [mp.MPArray.from_kron([idm] * startpos)] if startpos > 0 else []
    right = [mp.MPArray.from_kron([idm] * (sites - width - startpos))] \
        if sites - width - startpos > 0 else []
    h_at_startpos = mp.chain(left + [h_mpo] + right)
    local_terms.append(h_at_startpos)
local_terms
Next, we compute the sum of all the local terms and check the bond dimension of the result:
H = local_terms[0]
for local_term in local_terms[1:]:
    H += local_term
H.ranks
The ranks are explained by the ranks of the local terms:
[local_term.ranks for local_term in local_terms]
We just have to add the ranks at each position. mpnum provides a function which constructs H from h_mpo and produces an output MPO with smaller rank by taking the trivial action on most sites into account:
H2 = mp.local_sum([h_mpo] * (sites - width + 1))
H2.ranks
Without additional arguments, mp.local_sum() just adds the local terms, with the first term starting on site 0, the second on site 1, and so on. In addition, the length of the chain is chosen such that the last site of the chain coincides with the last site of the last local term. Other constructions can be obtained by providing additional arguments. We can check that the two Hamiltonians are equal:
mp.normdist(H, H2)
Of course, this means that we could just compress H:
H_comp, overlap = H.compression(method='svd', rank=3)
overlap / mp.norm(H)**2
H_comp.ranks
We can also check the minimal bond dimension which can be achieved with SVD compression with small error:
H_comp, overlap = H.compression(method='svd', relerr=1e-6)
overlap / mp.norm(H)**2
H_comp.ranks
MPS, MPOs and PMPS

We can represent vectors (e.g. pure quantum states) as MPS, arbitrary matrices as MPOs, and positive semidefinite matrices as purifying matrix product states (PMPS). For mixed quantum states, we can thus choose between the MPO and PMPS representations. As mentioned in the introduction, MPS and MPOs are handled as MPAs with one and two physical legs per site, respectively. PMPS are also handled as MPAs with two physical legs per site, where the first leg is the "system" site and the second leg is the corresponding "ancilla" site. From MPS and PMPS representations, we can easily obtain MPO representations; mpnum provides routines for this:
mps = mp.random_mpa(sites=5, ldim=2, rank=3, normalized=True)
mps_mpo = mp.mps_to_mpo(mps)
mps_mpo.ranks
As expected, the rank of mps_mpo is the square of the rank of mps. Now we create a PMPS with system site dimension 2 and ancilla site dimension 3:
pmps = mp.random_mpa(sites=5, ldim=(2, 3), rank=3, normalized=True)
pmps.shape
pmps_mpo = mp.pmps_to_mpo(pmps)
pmps_mpo.ranks
Again, the rank is squared, as expected. We can verify that the first physical leg of each site of pmps is indeed the system site by checking the shape of pmps_mpo:
pmps_mpo.shape
Local reduced states

For state tomography applications, we frequently need the local reduced states of an MPS, MPO or PMPS. We provide the following functions for this task:

mp.reductions_mps_as_pmps(): input MPS, output: local reductions as PMPS
mp.reductions_mps_as_mpo(): input MPS, output: local reductions as MPO
mp.reductions_pmps(): input PMPS, output: local reductions as PMPS
mp.reductions_mpo(): input MPO, output: local reductions as MPO

The arguments of all functions are similar, e.g.:
width = 3
startsites = range(len(pmps) - width + 1)
for startsite, red in zip(startsites, mp.reductions_pmps(pmps, width, startsites)):
    print('Reduction starting on site', startsite)
    print('bdims:', red.ranks)
    red_mpo = mp.pmps_to_mpo(red)
    print('trace:', mp.trace(red_mpo))
    print()
Gaussian process regression, Lecture 1

Suzanne Aigrain, University of Oxford
LSST DSFP Session 4, Seattle, Sept 2017

Lecture 1: Introduction and basics
Tutorial 1: Write your own GP code
Lecture 2: Examples and practical considerations
Tutorial 3: Useful GP modules
Lecture 3: Advanced applications

Why GPs?
- flexible, robust probabilistic regression and classification tools
- applied across a wide range of fields, from finance to zoology
- useful for data containing non-trivial stochastic signals or noise
- time-series data: causation implies correlation, so noise is always correlated
- increasingly popular in astronomy (mainly time-domain, but not just)

Spitzer exoplanet transits and eclipses (Evans et al. 2015)
<img src="images/Evans_Spitzer.png" width="800">

GPz photometric redshifts (Almosallam, Jarvis & Roberts 2016)
<img src="images/Almosallam_GPz.png" width="600">

What is a GP?

A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution.

Consider a scalar variable $y$, drawn from a Gaussian distribution with mean $\mu$ and variance $\sigma^2$:
$$ p(y) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left[ - \frac{(y-\mu)^2}{2 \sigma^2} \right]. $$
As a shorthand, we write $y \sim \mathcal{N}(\mu,\sigma^2)$.
def gauss1d(x, mu, sig):
    # NB: the original had (x-mu)**2/sig*2, which divides by sig and
    # multiplies by 2; the Gaussian density requires sig**2.
    return np.exp(-(x - mu)**2 / sig**2 / 2.) / np.sqrt(2 * np.pi) / sig

def pltgauss1d(sig=1):
    mu = 0
    x = np.r_[-4:4:101j]
    pl.figure(figsize=(10, 7))
    pl.plot(x, gauss1d(x, mu, sig), 'k-')
    pl.axvline(mu, c='k', ls='-')
    pl.axvline(mu + sig, c='k', ls='--')
    pl.axvline(mu - sig, c='k', ls='--')
    pl.axvline(mu + 2 * sig, c='k', ls=':')
    pl.axvline(mu - 2 * sig, c='k', ls=':')
    pl.xlim(x.min(), x.max())
    pl.ylim(0, 1)
    pl.xlabel(r'$y$')
    pl.ylabel(r'$p(y)$')
    return

interact(pltgauss1d,
         sig=widgets.FloatSlider(value=1.0, min=0.5, max=2.0, step=0.25,
                                 description=r'$\sigma$', readout_format='.2f'));
Sessions/Session04/Day2/GPLecture1.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Now let us consider a pair of variables $y_1$ and $y_2$, drawn from a bivariate Gaussian distribution. The joint probability density for $y_1$ and $y_2$ is:
$$ \left[ \begin{array}{l} y_1 \\ y_2 \end{array} \right] \sim \mathcal{N} \left( \left[ \begin{array}{l} \mu_1 \\ \mu_2 \end{array} \right] , \left[ \begin{array}{ll} \sigma_1^2 & C \\ C & \sigma_2^2 \end{array} \right] \right), $$
where $C = {\rm cov}(y_1,y_2)$ is the covariance between $y_1$ and $y_2$. The second term on the right hand side is the covariance matrix, $K$.

We now use two powerful identities of Gaussian distributions to elucidate the relationship between $y_1$ and $y_2$. The marginal distribution of $y_1$ describes what we know about $y_1$ in the absence of any information about $y_2$, and is simply:
$$ p(y_1)= \mathcal{N} (\mu_1,\sigma_1^2). $$
If we know the value of $y_2$, the probability density for $y_1$ collapses to the conditional distribution of $y_1$ given $y_2$:
$$ p(y_1 \mid y_2) = \mathcal{N} \left( \mu_1 + C (y_2-\mu_2)/\sigma_2^2,\; \sigma_1^2-C^2/\sigma_2^2 \right). $$
If $K$ is diagonal, i.e. if $C=0$, then $p(y_1 \mid y_2) = p(y_1)$: measuring $y_2$ doesn't teach us anything about $y_1$, and the two variables are uncorrelated. If the variables are correlated ($C \neq 0$), measuring $y_2$ does alter our knowledge of $y_1$: it modifies the mean and reduces the variance.
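To make this concrete, here is a small numerical sanity check (all numbers arbitrary) that the conditional mean and variance quoted above agree with the general Gaussian conditioning (Schur complement) formula:

```python
import numpy as np

# Verify the bivariate conditioning identity numerically.
mu = np.array([0.5, -0.2])
sig1, sig2, C = 1.0, 2.0, 0.8
K = np.array([[sig1**2, C], [C, sig2**2]])

y2 = 1.3  # an observed value of y_2
# Closed-form conditional moments from the text:
cond_mean = mu[0] + C * (y2 - mu[1]) / sig2**2
cond_var = sig1**2 - C**2 / sig2**2

# Same result from the Schur complement of the covariance matrix:
mean_check = mu[0] + K[0, 1] / K[1, 1] * (y2 - mu[1])
var_check = K[0, 0] - K[0, 1]**2 / K[1, 1]
assert np.isclose(cond_mean, mean_check) and np.isclose(cond_var, var_check)
```

Note that the conditional variance never exceeds the marginal variance: conditioning on $y_2$ can only reduce our uncertainty about $y_1$.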
def gauss2d(x1, x2, mu1, mu2, sig1, sig2, rho):
    z = (x1 - mu1)**2 / sig1**2 + (x2 - mu2)**2 / sig2**2 \
        - 2 * rho * (x1 - mu1) * (x2 - mu2) / sig1 / sig2
    e = np.exp(-z / 2 / (1 - rho**2))
    return e / (2 * np.pi * sig1 * sig2 * np.sqrt(1 - rho**2))

def pltgauss2d(rho=0, show_cond=0):
    mu1, sig1 = 0, 1
    mu2, sig2 = 0, 1
    y2o = -1
    x1 = np.r_[-4:4:101j]
    x2 = np.r_[-4:4:101j]
    x22d, x12d = np.mgrid[-4:4:101j, -4:4:101j]
    y = gauss2d(x12d, x22d, mu1, mu2, sig1, sig2, rho)
    y1 = gauss1d(x1, mu1, sig1)
    y2 = gauss1d(x2, mu2, sig2)
    mu12 = mu1 + rho * (y2o - mu2) / sig2**2
    sig12 = np.sqrt(sig1**2 - rho**2 * sig2**2)
    y12 = gauss1d(x1, mu12, sig12)
    pl.figure(figsize=(10, 10))
    ax1 = pl.subplot2grid((3, 3), (1, 0), colspan=2, rowspan=2, aspect='equal')
    v = np.array([0.02, 0.1, 0.3, 0.6]) * y.max()
    CS = pl.contour(x1, x2, y, v, colors='k')
    if show_cond:
        pl.axhline(y2o, c='r')
    pl.xlabel(r'$y_1$')
    pl.ylabel(r'$y_2$')
    pl.xlim(x1.min(), x1.max())
    ax1.xaxis.set_major_locator(pl.MaxNLocator(5, prune='both'))
    ax1.yaxis.set_major_locator(pl.MaxNLocator(5, prune='both'))
    ax2 = pl.subplot2grid((3, 3), (0, 0), colspan=2, sharex=ax1)
    pl.plot(x1, y1, 'k-')
    if show_cond:
        pl.plot(x1, y12, 'r-')
    pl.ylim(0, 0.8)
    pl.ylabel(r'$p(y_1)$')
    pl.setp(ax2.get_xticklabels(), visible=False)
    ax2.xaxis.set_major_locator(pl.MaxNLocator(5, prune='both'))
    ax2.yaxis.set_major_locator(pl.MaxNLocator(4, prune='upper'))
    pl.xlim(x1.min(), x1.max())
    ax3 = pl.subplot2grid((3, 3), (1, 2), rowspan=2, sharey=ax1)
    pl.plot(y2, x2, 'k-')
    if show_cond:
        pl.axhline(y2o, c='r')
    pl.ylim(x2.min(), x2.max())
    pl.xlim(0, 0.8)
    pl.xlabel(r'$p(y_2)$')
    pl.setp(ax3.get_yticklabels(), visible=False)
    ax3.xaxis.set_major_locator(pl.MaxNLocator(4, prune='upper'))
    ax3.yaxis.set_major_locator(pl.MaxNLocator(5, prune='both'))
    pl.subplots_adjust(hspace=0, wspace=0)
    return

interact(pltgauss2d,
         rho=widgets.FloatSlider(min=-0.8, max=0.8, step=0.4,
                                 description=r'$\rho$', value=0),
         show_cond=widgets.Checkbox(value=True,
                                    description='show conditional distribution'));
To make the relation to time-series data a bit more obvious, let's plot the two variables side by side, then see what happens to one variable when we observe (fix) the other.
def SEKernel(par, x1, x2):
    A, Gamma = par
    D2 = cdist(x1.reshape(len(x1), 1), x2.reshape(len(x2), 1),
               metric='sqeuclidean')
    return A * np.exp(-Gamma * D2)

A = 1.0
Gamma = 0.01
x = np.array([-1, 1])
K = SEKernel([A, Gamma], x, x)
m = np.zeros(len(x))
sig = np.sqrt(np.diag(K))
pl.figure(figsize=(15, 7))
pl.subplot(121)
for i in range(len(x)):
    pl.plot([x[i] - 0.1, x[i] + 0.1], [m[i], m[i]], 'k-')
    pl.fill_between([x[i] - 0.1, x[i] + 0.1],
                    [m[i] + sig[i], m[i] + sig[i]],
                    [m[i] - sig[i], m[i] - sig[i]], color='k', alpha=0.2)
pl.xlim(-2, 2)
pl.ylim(-2, 2)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');

def Pred_GP(CovFunc, CovPar, xobs, yobs, eobs, xtest):
    # evaluate the covariance matrix for pairs of observed inputs
    K = CovFunc(CovPar, xobs, xobs)
    # add white noise
    K += np.identity(xobs.shape[0]) * eobs**2
    # evaluate the covariance matrix for pairs of test inputs
    Kss = CovFunc(CovPar, xtest, xtest)
    # evaluate the cross-term
    Ks = CovFunc(CovPar, xtest, xobs)
    # invert K
    Ki = inv(K)
    # evaluate the predictive mean
    m = np.dot(Ks, np.dot(Ki, yobs))
    # evaluate the covariance
    cov = Kss - np.dot(Ks, np.dot(Ki, Ks.T))
    return m, cov

xobs = np.array([-1])
yobs = np.array([1.0])
eobs = 0.0001
pl.subplot(122)
pl.errorbar(xobs, yobs, yerr=eobs, capsize=0, fmt='k.')
x = np.array([1])
m, C = Pred_GP(SEKernel, [A, Gamma], xobs, yobs, eobs, x)
sig = np.sqrt(np.diag(C))
for i in range(len(x)):
    pl.plot([x[i] - 0.1, x[i] + 0.1], [m[i], m[i]], 'k-')
    pl.fill_between([x[i] - 0.1, x[i] + 0.1],
                    [m[i] + sig[i], m[i] + sig[i]],
                    [m[i] - sig[i], m[i] - sig[i]], color='k', alpha=0.2)
pl.xlim(-2, 2)
pl.ylim(-2, 2)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
Now consider $N$ variables drawn from a multivariate Gaussian distribution: $$ \boldsymbol{y} \sim \mathcal{N} (\boldsymbol{m},K) $$ where $\boldsymbol{y} = (y_1,y_2,\ldots,y_N)^T$, $\boldsymbol{m} = (m_1,m_2,\ldots,m_N)^T$ is the mean vector, and $K$ is an $N \times N$ positive semi-definite covariance matrix, with elements $K_{ij}={\rm cov}(y_i,y_j)$. A Gaussian process is an extension of this concept to infinite $N$, giving rise to a probability distribution over functions. This last generalisation may not be obvious conceptually, but in practice we only ever deal with finite samples.
xobs = np.array([-1, 1, 2])
yobs = np.array([1, -1, 0])
eobs = np.array([0.0001, 0.1, 0.1])
pl.figure(figsize=(15, 7))
pl.subplot(121)
pl.errorbar(xobs, yobs, yerr=eobs, capsize=0, fmt='k.')
Gamma = 0.5
x = np.array([-2.5, -2, -1.5, -0.5, 0.0, 0.5, 1.5, 2.5])
m, C = Pred_GP(SEKernel, [A, Gamma], xobs, yobs, eobs, x)
sig = np.sqrt(np.diag(C))
for i in range(len(x)):
    pl.plot([x[i] - 0.1, x[i] + 0.1], [m[i], m[i]], 'k-')
    pl.fill_between([x[i] - 0.1, x[i] + 0.1],
                    [m[i] + sig[i], m[i] + sig[i]],
                    [m[i] - sig[i], m[i] - sig[i]], color='k', alpha=0.2)
pl.xlim(-3, 3)
pl.ylim(-3, 3)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$')
pl.subplot(122)
pl.errorbar(xobs, yobs, yerr=eobs, capsize=0, fmt='k.')
x = np.linspace(-3, 3, 100)
m, C = Pred_GP(SEKernel, [A, Gamma], xobs, yobs, eobs, x)
sig = np.sqrt(np.diag(C))
pl.plot(x, m, 'k-')
pl.fill_between(x, m + sig, m - sig, color='k', alpha=0.2)
pl.xlim(-3, 3)
pl.ylim(-3, 3)
pl.xlabel(r'$x$')
pl.ylabel(r'$y$');
Textbooks

A good, detailed reference is Gaussian Processes for Machine Learning by C. E. Rasmussen & C. Williams, MIT Press, 2006. The examples in the book are generated using the Matlab package GPML.

A more formal definition

A Gaussian process is completely specified by its mean function and covariance function. We define the mean function $m(x)$ and the covariance function $k(x,x')$ of a real process $y(x)$ as
$$ \begin{array}{rcl} m(x) & = & \mathbb{E}[y(x)], \\ k(x,x') & = & \mathrm{cov}(y(x),y(x')) = \mathbb{E}[(y(x) - m(x))(y(x') - m(x'))]. \end{array} $$
A very common covariance function is the squared exponential, or radial basis function (RBF) kernel
$$ K_{ij}=k(x_i,x_j)=A \exp\left[ - \Gamma (x_i-x_j)^2 \right], $$
which has 2 parameters: $A$ and $\Gamma$. We then write the Gaussian process as
$$ y(x) \sim \mathcal{GP}(m(x), k(x,x')). $$
Here we are implicitly assuming the inputs $x$ are one-dimensional, e.g. $x$ might represent time. However, the input space can have more than one dimension; we will see an example of a GP with multi-dimensional inputs later.

The prior

Now consider a finite set of inputs $\boldsymbol{x}$, with corresponding outputs $\boldsymbol{y}$. The joint distribution of $\boldsymbol{y}$ given $\boldsymbol{x}$, $m$ and $k$ is
$$ \mathrm{p}(\boldsymbol{y} \mid \boldsymbol{x},m,k) = \mathcal{N}( \boldsymbol{m},K), $$
where $\boldsymbol{m}=m(\boldsymbol{x})$ is the mean vector and $K$ is the covariance matrix, with elements $K_{ij} = k(x_i,x_j)$.

Test and training sets

Suppose we have an (observed) training set $(\boldsymbol{x},\boldsymbol{y})$. We are interested in some other test set of inputs $\boldsymbol{x}_*$.
The joint distribution over the training and test sets is
$$ \mathrm{p} \left( \left[ \begin{array}{l} \boldsymbol{y} \\ \boldsymbol{y}_* \end{array} \right] \right) = \mathcal{N} \left( \left[ \begin{array}{l} \boldsymbol{m} \\ \boldsymbol{m}_* \end{array} \right], \left[ \begin{array}{ll} K & K_* \\ K_*^T & K_{**} \end{array} \right] \right), $$
where $\boldsymbol{m}_* = m(\boldsymbol{x}_*)$, $K_{**,ij} = k(x_{*,i},x_{*,j})$ and $K_{*,ij} = k(x_i,x_{*,j})$. For simplicity, assume the mean function is zero everywhere: $\boldsymbol{m}=\boldsymbol{0}$. We will consider non-trivial mean functions later.

The conditional distribution

The conditional distribution for the test set given the training set is:
$$ \mathrm{p} ( \boldsymbol{y}_* \mid \boldsymbol{y},k) = \mathcal{N} \left( K_*^T K^{-1} \boldsymbol{y},\; K_{**} - K_*^T K^{-1} K_* \right). $$
This is also known as the predictive distribution, because it can be used to predict future (or past) observations. More generally, it can be used to interpolate the observations to any desired set of inputs. This is one of the most widespread applications of GPs in some fields (e.g. kriging in geology, economic forecasting, ...).

Adding white noise

Real observations always contain a component of white noise, which we need to account for, but don't necessarily want to include in the predictions. If the white noise variance $\sigma^2$ is constant, we can write
$$ \mathrm{cov}(y_i,y_j)=k(x_i,x_j)+\delta_{ij} \sigma^2, $$
and the conditional distribution becomes
$$ \mathrm{p} ( \boldsymbol{y}_* \mid \boldsymbol{y},k) = \mathcal{N} \left( K_*^T (K + \sigma^2 \mathbb{I})^{-1} \boldsymbol{y},\; K_{**} - K_*^T (K + \sigma^2 \mathbb{I})^{-1} K_* \right). $$
In real life, we may need to learn $\sigma$ from the data, alongside the other contributions to the covariance matrix. We assumed constant white noise, but it is trivial to allow for a different $\sigma$ for each data point.

Single-point prediction

Let us look more closely at the predictive distribution for a single test point $x_*$.
It is a Gaussian with mean
$$ \overline{y}_* = \boldsymbol{k}_*^T (K + \sigma^2 \mathbb{I})^{-1} \boldsymbol{y} $$
and variance
$$ \mathbb{V}[y_*] = k(x_*,x_*) - \boldsymbol{k}_*^T (K + \sigma^2 \mathbb{I})^{-1} \boldsymbol{k}_*, $$
where $\boldsymbol{k}_*$ is the vector of covariances between the test point and the training points. Notice the mean is a linear combination of the observations: the GP is a linear predictor. It is also a linear combination of covariance functions, each centred on a training point:
$$ \overline{y}_* = \sum_{i=1}^N \alpha_i k(x_i,x_*), $$
where $\alpha_i = \left[ (K + \sigma^2 \mathbb{I})^{-1} \boldsymbol{y} \right]_i$.

The likelihood

The likelihood of the data under the GP model is simply:
$$ \mathrm{p}(\boldsymbol{y} \,|\, \boldsymbol{x}) = \mathcal{N}(\boldsymbol{y} \, | \, \boldsymbol{0},\, K + \sigma^2 \mathbb{I}). $$
This is a measure of how well the model explains, or predicts, the training set. In some textbooks this is referred to as the marginal likelihood. This arises if one considers the observed $\boldsymbol{y}$ as noisy realisations of a latent (unobserved) Gaussian process $\boldsymbol{f}$. The term marginal refers to marginalisation over the function values $\boldsymbol{f}$:
$$ \mathrm{p}(\boldsymbol{y} \,|\, \boldsymbol{x}) = \int \mathrm{p}(\boldsymbol{y} \,|\, \boldsymbol{f},\boldsymbol{x}) \, \mathrm{p}(\boldsymbol{f} \,|\, \boldsymbol{x}) \, \mathrm{d}\boldsymbol{f}, $$
where
$$ \mathrm{p}(\boldsymbol{f} \,|\, \boldsymbol{x}) = \mathcal{N}(\boldsymbol{f} \, | \, \boldsymbol{0},K) $$
is the prior, and
$$ \mathrm{p}(\boldsymbol{y} \,|\, \boldsymbol{f},\boldsymbol{x}) = \mathcal{N}(\boldsymbol{y} \, | \, \boldsymbol{f},\sigma^2 \mathbb{I}) $$
is the likelihood.

Parameters and hyper-parameters

The parameters of the covariance and mean functions are known as the hyper-parameters of the GP. This is because the actual parameters of the model are the function values, $\boldsymbol{f}$, but we never explicitly deal with them: they are always marginalised over.
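The linear-predictor property of the single-point mean is easy to verify numerically. The helper kernel and data below are illustrative stand-ins, not part of the lecture code:

```python
import numpy as np

# Hypothetical SE kernel helper for the check (A, Gamma arbitrary).
def k_se(a, b, A=1.0, Gamma=0.5):
    return A * np.exp(-Gamma * (a[:, None] - b[None, :])**2)

xobs = np.array([-1.0, 0.5, 2.0])   # toy training inputs
yobs = np.array([1.0, -0.3, 0.4])   # toy training outputs
sigma = 0.1                         # white noise level

K = k_se(xobs, xobs) + sigma**2 * np.eye(3)
alpha = np.linalg.solve(K, yobs)    # alpha = (K + sigma^2 I)^{-1} y

xstar = np.array([0.0])             # single test point
kstar = k_se(xobs, xstar)[:, 0]     # vector of covariances k_*

mean_direct = kstar @ np.linalg.solve(K, yobs)   # k_*^T (K+s^2 I)^{-1} y
mean_linear = np.sum(alpha * kstar)              # sum_i alpha_i k(x_i, x_*)
assert np.isclose(mean_direct, mean_linear)
```

Once the training set is fixed, the weights $\alpha_i$ are fixed too, so prediction at many test points costs only one kernel evaluation per training point.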
Conditioning the GP...

...means evaluating the conditional (or predictive) distribution for a given covariance matrix (i.e. covariance function and hyper-parameters) and training set.

Training the GP...

...means maximising the likelihood of the model with respect to the hyper-parameters.

The kernel trick

Consider a linear basis model with arbitrarily many basis functions, or features, $\Phi(x)$, and a (Gaussian) prior $\Sigma_{\mathrm{p}}$ over the basis function weights. One ends up with exactly the same expressions for the predictive distribution and the likelihood so long as:
$$ k(\boldsymbol{x},\boldsymbol{x'}) = \Phi(\boldsymbol{x})^{\mathrm{T}} \Sigma_{\mathrm{p}} \Phi(\boldsymbol{x'}), $$
or, writing $\Psi(\boldsymbol{x}) = \Sigma_{\mathrm{p}}^{1/2} \Phi(\boldsymbol{x})$,
$$ k(\boldsymbol{x},\boldsymbol{x'}) = \Psi(\boldsymbol{x}) \cdot \Psi(\boldsymbol{x'}). $$
Thus the covariance function $k$ enables us to go from a (finite) input space to a (potentially infinite) feature space. This is known as the kernel trick, and the covariance function is often referred to as the kernel.

Non-zero mean functions

In general (and in astronomy applications in particular) we often want to use non-trivial mean functions. To do this, simply replace $\boldsymbol{y}$ by $\boldsymbol{r}=\boldsymbol{y}-\boldsymbol{m}$ in the expressions for the predictive distribution and likelihood. The mean function represents the deterministic component of the model, e.g. a linear trend, a Keplerian orbit, a planetary transit, ... The covariance function encodes the stochastic component, e.g. instrumental noise, stellar variability.

Covariance functions

The only requirement for the covariance function is that it should return a positive semi-definite covariance matrix. The simplest covariance functions have two parameters: an input scale and an output scale (variance). The form of the covariance function controls the degree of smoothness.
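A minimal sketch of the kernel trick for a finite feature space (the polynomial feature map and diagonal weight prior below are arbitrary illustrative choices): the two ways of writing the kernel agree.

```python
import numpy as np

# Hypothetical finite feature map: three polynomial basis functions.
def Phi(x):
    return np.array([1.0, x, x**2])

Sigma_p = np.diag([1.0, 0.5, 0.25])   # prior covariance of the weights
Sigma_half = np.sqrt(Sigma_p)         # Sigma_p^{1/2} (valid for the diagonal case)

x, xp = 0.7, -1.2
k1 = Phi(x) @ Sigma_p @ Phi(xp)                            # Phi^T Sigma_p Phi
k2 = (Sigma_half @ Phi(x)) @ (Sigma_half @ Phi(xp))        # Psi . Psi
assert np.isclose(k1, k2)
```

For kernels like the SE, the implicit feature space is infinite-dimensional, which is exactly why working with $k$ directly is so convenient.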
The squared exponential

The simplest, most widely used kernel is the squared exponential:
$$ k_{\rm SE}(x,x') = A \exp \left[ - \Gamma (x-x')^2 \right]. $$
This gives rise to smooth functions with output scale (amplitude) $A$ and inverse squared length scale $\Gamma$.
def kernel_SE(X1, X2, par):
    p0 = 10.0**par[0]
    p1 = 10.0**par[1]
    D2 = cdist(X1, X2, 'sqeuclidean')
    K = p0 * np.exp(-p1 * D2)
    return np.matrix(K)

def kernel_Mat32(X1, X2, par):
    p0 = 10.0**par[0]
    p1 = 10.0**par[1]
    DD = cdist(X1, X2, 'euclidean')
    arg = np.sqrt(3) * abs(DD) / p1
    K = p0 * (1 + arg) * np.exp(-arg)
    return np.matrix(K)

def kernel_RQ(X1, X2, par):
    p0 = 10.0**par[0]
    p1 = 10.0**par[1]
    alpha = par[2]
    D2 = cdist(X1, X2, 'sqeuclidean')
    K = p0 * (1 + D2 / (2 * alpha * p1))**(-alpha)
    return np.matrix(K)

def kernel_Per(X1, X2, par):
    p0 = 10.0**par[0]
    p1 = 10.0**par[1]
    period = par[2]
    DD = cdist(X1, X2, 'euclidean')
    K = p0 * np.exp(-p1 * (np.sin(np.pi * DD / period))**2)
    return np.matrix(K)

def kernel_QP(X1, X2, par):
    p0 = 10.0**par[0]
    p1 = 10.0**par[1]
    period = par[2]
    p3 = 10.0**par[3]
    DD = cdist(X1, X2, 'euclidean')
    D2 = cdist(X1, X2, 'sqeuclidean')
    K = p0 * np.exp(-p1 * (np.sin(np.pi * DD / period))**2 - p3 * D2)
    return np.matrix(K)

def add_wn(K, lsig):
    sigma = 10.0**lsig
    N = K.shape[0]
    return K + sigma**2 * np.identity(N)

def get_kernel(name):
    if name == 'SE':
        return kernel_SE
    elif name == 'RQ':
        return kernel_RQ
    elif name == 'M32':
        return kernel_Mat32
    elif name == 'Per':
        return kernel_Per
    elif name == 'QP':
        return kernel_QP
    else:
        print('No kernel called %s - using SE' % name)
        return kernel_SE

def pltsamples1(par0=0.0, par1=0.0, wn=0.0):
    x = np.r_[-5:5:201j]
    X = np.matrix([x]).T  # scipy.spatial.distance expects matrices
    kernel = get_kernel('SE')
    K = kernel(X, X, [par0, par1])
    K = add_wn(K, wn)
    fig = pl.figure(figsize=(10, 4))
    ax1 = pl.subplot2grid((1, 3), (0, 0), aspect='equal')
    pl.imshow(np.sqrt(K), interpolation='nearest', vmin=0, vmax=10)
    pl.title('Covariance matrix')
    ax2 = pl.subplot2grid((1, 3), (0, 1), colspan=2)
    np.random.seed(0)
    for i in range(3):
        y = np.random.multivariate_normal(np.zeros(len(x)), K)
        pl.plot(x, y - i * 2)
    pl.xlim(-5, 5)
    pl.ylim(-8, 5)
    pl.xlabel('x')
    pl.ylabel('y')
    pl.title('Samples from %s prior' % 'SE')
    pl.tight_layout()

interact(pltsamples1,
         par0=widgets.FloatSlider(min=-1, max=1, step=0.5,
                                  description=r'$\log_{10} A$', value=0),
         par1=widgets.FloatSlider(min=-1, max=1, step=0.5,
                                  description=r'$\log_{10} \Gamma$', value=0),
         wn=widgets.FloatSlider(min=-2, max=0, step=1,
                                description=r'$\log_{10} \sigma$', value=-2));
The Matern family

The Matern 3/2 kernel
$$ k_{3/2}(x,x')= A \left( 1 + \frac{\sqrt{3}r}{l} \right) \exp \left( - \frac{\sqrt{3}r}{l} \right), $$
where $r =|x-x'|$, produces somewhat rougher behaviour, because it is only differentiable once w.r.t. $r$ (whereas the SE kernel is infinitely differentiable). There is a whole family of Matern kernels with varying degrees of roughness.

The rational quadratic kernel is equivalent to a squared exponential with a power-law distribution of input scales:
$$ k_{\rm RQ}(x,x') = A \left(1 + \frac{r^2}{2 \alpha l} \right)^{-\alpha}, $$
where $\alpha$ is the index of the power law. This is useful for modelling data containing variations on a range of timescales with just one extra parameter.
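One way to see the connection between the RQ and SE kernels: in the limit $\alpha \to \infty$ the rational quadratic reduces to a squared exponential. A quick numerical check (parametrisation as in the formula above, with arbitrary values):

```python
import numpy as np

A, l, r = 1.0, 1.5, 0.8   # arbitrary amplitude, scale, separation

# RQ kernel as a function of alpha, and its SE limit:
rq = lambda alpha: A * (1 + r**2 / (2 * alpha * l))**(-alpha)
se = A * np.exp(-r**2 / (2 * l))

# (1 + z/alpha)^(-alpha) -> exp(-z) as alpha -> infinity
assert abs(rq(1e6) - se) < 1e-6
```

For finite $\alpha$ the RQ kernel has heavier tails than the SE, which is what lets it capture variations on a range of timescales.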
# Function to plot samples from kernel
def pltsamples2(par2=0.5, kernel_shortname='SE'):
    x = np.r_[-5:5:201j]
    X = np.matrix([x]).T  # scipy.spatial.distance expects matrices
    kernel = get_kernel(kernel_shortname)
    K = kernel(X, X, [0.0, 0.0, par2])
    fig = pl.figure(figsize=(10, 4))
    ax1 = pl.subplot2grid((1, 3), (0, 0), aspect='equal')
    pl.imshow(np.sqrt(K), interpolation='nearest', vmin=0, vmax=10)
    pl.title('Covariance matrix')
    ax2 = pl.subplot2grid((1, 3), (0, 1), colspan=2)
    np.random.seed(0)
    for i in range(3):
        y = np.random.multivariate_normal(np.zeros(len(x)), K)
        pl.plot(x, y - i * 2)
    pl.xlim(-5, 5)
    pl.ylim(-8, 5)
    pl.xlabel('x')
    pl.ylabel('y')
    pl.title('Samples from %s prior' % kernel_shortname)
    pl.tight_layout()

interact(pltsamples2,
         par2=widgets.FloatSlider(min=0.25, max=1, step=0.25,
                                  description=r'$\alpha$', value=0.5),
         kernel_shortname=widgets.RadioButtons(options=['SE', 'M32', 'RQ'],
                                               value='SE', description='kernel'));
Periodic kernels... ...can be constructed by replacing $r$ in any of the above by a periodic function of $r$. For example, the cosine kernel: $$ k_{\cos}(x,x') = A \cos\left(\frac{2\pi r}{P}\right), $$ [which follows the dynamics of a simple harmonic oscillator], or... ...the "exponential sine squared" kernel, obtained by mapping the 1-D variable $x$ to the 2-D variable $\mathbf{u}(x)=(\cos(x),\sin(x))$, and then applying a squared exponential in $\boldsymbol{u}$-space: $$ k_{\sin^2 {\rm SE}}(x,x') = A \exp \left[ -\Gamma \sin^2\left(\frac{\pi r}{P}\right) \right], $$ which allows for non-harmonic functions.
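The construction above can be checked directly: the squared distance between $\mathbf{u}(x)$ and $\mathbf{u}(x')$ is $4\sin^2(r/2)$, so an SE kernel in $\mathbf{u}$-space is exactly an exponential sine squared kernel (the rescaling of $x$ by $2\pi/P$ is omitted here for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
x, xp = rng.uniform(-5, 5, 2)          # two arbitrary inputs
u = lambda t: np.array([np.cos(t), np.sin(t)])

# |u(x) - u(x')|^2 = 2 - 2 cos(x - x') = 4 sin^2((x - x')/2)
d2 = np.sum((u(x) - u(xp))**2)
assert np.isclose(d2, 4 * np.sin((x - xp) / 2)**2)

# Hence an SE kernel in u-space is an exponential sine squared kernel:
g = 0.3   # arbitrary inverse scale in u-space
assert np.isclose(np.exp(-g * d2), np.exp(-4 * g * np.sin((x - xp) / 2)**2))
```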
# Function to plot samples from kernel
def pltsamples3(par2=2.0, par3=2.0, kernel_shortname='Per'):
    x = np.r_[-5:5:201j]
    X = np.matrix([x]).T  # scipy.spatial.distance expects matrices
    kernel = get_kernel(kernel_shortname)
    K = kernel(X, X, [0.0, 0.0, par2, par3])
    fig = pl.figure(figsize=(10, 4))
    ax1 = pl.subplot2grid((1, 3), (0, 0), aspect='equal')
    pl.imshow(np.sqrt(K), interpolation='nearest', vmin=0, vmax=10)
    pl.title('Covariance matrix')
    ax2 = pl.subplot2grid((1, 3), (0, 1), colspan=2)
    np.random.seed(0)
    for i in range(3):
        y = np.random.multivariate_normal(np.zeros(len(x)), K)
        pl.plot(x, y - i * 2)
    pl.xlim(-5, 5)
    pl.ylim(-8, 5)
    pl.xlabel('x')
    pl.ylabel('y')
    pl.title('Samples from %s prior' % kernel_shortname)
    pl.tight_layout()

interact(pltsamples3,
         par2=widgets.FloatSlider(min=1, max=3, step=1,
                                  description=r'$P$', value=2),
         par3=widgets.FloatSlider(min=-2, max=0, step=1,
                                  description=r'$\log\Gamma_2$', value=-1),
         kernel_shortname=widgets.RadioButtons(options=['Per', 'QP'],
                                               value='QP', description='kernel'));
Combining kernels

Any affine transform, sum or product of valid kernels is a valid kernel. For example, a quasi-periodic kernel can be constructed by multiplying a periodic kernel with a non-periodic one. The following is frequently used to model stellar light curves:
$$ k_{\mathrm{QP}}(x,x') = A \exp \left[ -\Gamma_1 \sin^2\left(\frac{\pi r}{P}\right) -\Gamma_2 r^2 \right]. $$

Example: Mauna Loa CO$_2$ dataset (from the Rasmussen & Williams textbook)
<img height="700" src="images/RW_mauna_kea.png">

2 or more dimensions

So far we assumed the inputs were 1-D, but that doesn't have to be the case. For example, the SE kernel can be extended to $D$ dimensions...

...using a single length scale, giving the Radial Basis Function (RBF) kernel:
$$ k_{\rm RBF}(\mathbf{x},\mathbf{x'}) = A \exp \left[ - \Gamma \sum_{j=1}^{D}(x_j-x'_j)^2 \right], $$
where $\mathbf{x}=(x_1,x_2,\ldots, x_j,\ldots,x_D)^{\mathrm{T}}$ represents a single, multi-dimensional input,

...or using separate length scales for each dimension, giving the Automatic Relevance Determination (ARD) kernel:
$$ k_{\rm ARD}(\mathbf{x},\mathbf{x'}) = A \exp \left[ - \sum_{j=1}^{D} \Gamma_j (x_j-x'_j)^2 \right]. $$
import george

x2d, y2d = np.mgrid[-3:3:0.1, -3:3:0.1]
x = x2d.ravel()
y = y2d.ravel()
N = len(x)
X = np.zeros((N, 2))
X[:, 0] = x
X[:, 1] = y
k1 = george.kernels.ExpSquaredKernel(1.0, ndim=2)
s1 = george.GP(k1).sample(X).reshape(x2d.shape)
k2 = george.kernels.ExpSquaredKernel(1.0, ndim=2, axes=1) + \
    george.kernels.ExpSquaredKernel(0.2, ndim=2, axes=0)
s2 = george.GP(k2).sample(X).reshape(x2d.shape)
pl.figure(figsize=(10, 5))
pl.subplot(121)
pl.contourf(x2d, y2d, s1)
pl.xlim(x.min(), x.max())
pl.ylim(y.min(), y.max())
pl.xlabel(r'$x$')
pl.ylabel(r'$y$')
pl.title('RBF')
pl.subplot(122)
pl.contourf(x2d, y2d, s2)
pl.xlim(x.min(), x.max())
pl.ylim(y.min(), y.max())
pl.xlabel(r'$x$')
pl.title('ARD');

# Function to plot samples from kernel
def pltsamples3(par2=0.5, par3=0.5, kernel_shortname='SE'):
    x = np.r_[-5:5:201j]
    X = np.matrix([x]).T  # scipy.spatial.distance expects matrices
    kernel = get_kernel(kernel_shortname)
    K = kernel(X, X, [0.0, 0.0, par2, par3])
    fig = pl.figure(figsize=(10, 4))
    ax1 = pl.subplot2grid((1, 3), (0, 0), aspect='equal')
    pl.imshow(np.sqrt(K), interpolation='nearest', vmin=0, vmax=10)
    pl.title('Covariance matrix')
    ax2 = pl.subplot2grid((1, 3), (0, 1), colspan=2)
    np.random.seed(0)
    for i in range(5):
        y = np.random.multivariate_normal(np.zeros(len(x)), K)
        pl.plot(x, y)
    pl.xlim(-5, 5)
    pl.ylim(-5, 5)
    pl.xlabel('x')
    pl.ylabel('y')
    pl.title('Samples from %s prior' % kernel_shortname)
    pl.tight_layout()

interact(pltsamples3,
         par2=widgets.FloatSlider(min=1, max=3, step=1,
                                  description=r'$P$', value=2),
         par3=widgets.FloatSlider(min=-2, max=0, step=1,
                                  description=r'$\log_{10}\Gamma_2$', value=-1.),
         kernel_shortname=widgets.RadioButtons(options=['Per', 'QP'],
                                               value='Per', description='kernel'));
Sessions/Session04/Day2/GPLecture1.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
You can list what is contained in the request:
karthaus
docs/notebooks/Data_Catalog.ipynb
mroberge/hydrofunctions
mit
The basic NWIS object will provide a list of every parameter collected at the site, the frequency of observations for that parameter, the name of the parameter, and the units of the observations. It also tells you the date and time of the first and last observation in the request.

This is great, but it doesn't tell you when a parameter was first collected, or if a parameter was discontinued. If you leave out the 'period' part of the request, the USGS will give you the most recent value for every parameter, no matter how old, but this still doesn't tell you when observations were first collected.

For more detailed information about the parameters collected at a site, request a 'data catalog' using the data_catalog() function. This will return a hydroRDB object containing a table (dataframe) with a row for every parameter that you request, and a header that describes every column in the dataset.

Some of the most useful information in the data catalog are the:

- data type code: describes the frequency of observations
  - dv: daily values
  - uv, rt, or iv: 'real time' data collected more frequently than daily
  - sv: site visits conducted irregularly
  - ad: values listed in the USGS Annual Water Reports
  - more information: https://waterservices.usgs.gov/rest/Site-Service.html#outputDataTypeCd
- parameter code: describes the type of data collected
- statistic code: describes the statistic used to report the parameter
- begin date, end date: the first and last observation made for this parameter
- count_nu: the number of observations made between the start and end dates.

More information about the values in the Data Catalog is located in the header, and also at https://waterservices.usgs.gov/rest/Site-Service.html

For more information about a site and the data collected at the site, try these sources:

- To access information about the site itself, use the site_file() function.
- To access the rating curve at a site (for translating water stage into discharge), use the rating_curve() function.
- To access field data collected by USGS personnel during site visits, use the field_meas() function.
- To access the annual peak discharges at a site, use the peaks() function.
- To access daily, monthly, or annual statistics for data at a site, use the stats() function.

Example Usage
output = hf.data_catalog('01585200')
docs/notebooks/Data_Catalog.ipynb
mroberge/hydrofunctions
mit
Our new 'output' is a hydroRDB object. It has several useful properties, including:

- .table, which returns a dataframe of the data. Each row corresponds to a different parameter.
- .header, which is the original descriptive header provided by the USGS. It lists and describes the variables in the dataset.
- .columns, which is a list of the column names.
- .dtypes, which is a list of the data types and column widths for each variable in the USGS RDB format.

If you print or evaluate the hydroRDB object, it will return a tuple of the header and dataframe table.
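Once you have the table, standard pandas filtering applies. The sketch below builds a toy stand-in for output.table (the column names follow the USGS RDB conventions described above, but the rows are made up for illustration) and keeps only the daily-value records:

```python
import pandas as pd

# toy stand-in for output.table; a real catalog has many more columns and rows
catalog = pd.DataFrame({
    "data_type_cd": ["dv", "uv", "sv"],           # daily, real-time, site-visit
    "parm_cd":      ["00060", "00065", "00010"],  # discharge, gage height, temperature
    "begin_date":   ["2000-01-01", "2010-05-01", "1995-03-15"],
    "count_nu":     [8000, 500000, 40],
})
daily = catalog[catalog["data_type_cd"] == "dv"]  # keep only daily values
```

The same expression applied to output.table would show which parameters have daily-value records at the site.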
print(output.header) # Transposing the table to show all of the columns as rows: output.table.T
docs/notebooks/Data_Catalog.ipynb
mroberge/hydrofunctions
mit
From 1D to 2D acoustic finite difference modelling The 1D acoustic wave equation is very useful to introduce the general concept and problems related to FD modelling. However, for realistic modelling and seismic imaging/inversion applications we have to solve at least the 2D acoustic wave equation. In the class we will develop a 2D acoustic FD modelling code based on the 1D code. I strongly recommend that you do this step by yourself, starting from this notebook containing only the 1D code. Finite difference solution of 2D acoustic wave equation As derived in this and this lecture, the acoustic wave equation in 2D with constant density is \begin{equation} \frac{\partial^2 p(x,z,t)}{\partial t^2} \ = \ vp(x,z)^2 \biggl(\frac{\partial^2 p(x,z,t)}{\partial x^2}+\frac{\partial^2 p(x,z,t)}{\partial z^2}\biggr) + f(x,z,t) \nonumber \end{equation} with pressure $p$, acoustic velocity $vp$ and source term $f$. We can split the source term into a spatial and temporal part. Spatially, we assume that the source is localized at one point ($x_s, z_s$). Therefore, the spatial source contribution consists of two Dirac $\delta$-functions $\delta(x-x_s)$ and $\delta(z-z_s)$. The temporal source part is an arbitrary source wavelet $s(t)$: \begin{equation} \frac{\partial^2 p(x,z,t)}{\partial t^2} \ = \ vp(x,z)^2 \biggl(\frac{\partial^2 p(x,z,t)}{\partial x^2}+\frac{\partial^2 p(x,z,t)}{\partial z^2}\biggr) + \delta(x-x_s)\delta(z-z_s)s(t) \nonumber \end{equation} Both second derivatives can be approximated by a 3-point difference formula. 
For example, for the time derivative we get:
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial t^2} \ \approx \ \frac{p(x,z,t+dt) - 2 p(x,z,t) + p(x,z,t-dt)}{dt^2},
\nonumber
\end{equation}
and similarly for the spatial derivatives:
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial x^2} \ \approx \ \frac{p(x+dx,z,t) - 2 p(x,z,t) + p(x-dx,z,t)}{dx^2},
\nonumber
\end{equation}
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial z^2} \ \approx \ \frac{p(x,z+dz,t) - 2 p(x,z,t) + p(x,z-dz,t)}{dz^2}.
\nonumber
\end{equation}
Injecting these approximations into the wave equation allows us to formulate the pressure $p$ at the time step $t+dt$ (the future) as a function of the pressure at time $t$ (now) and $t-dt$ (the past). This is called an explicit time integration scheme, allowing the $extrapolation$ of the space-dependent field into the future by looking only at the nearest neighbourhood.

In the next step, we discretize the P-wave velocity and pressure wavefield at the discrete spatial grid points
\begin{align}
x &= i\,dx\nonumber\\
z &= j\,dz\nonumber
\end{align}
with $i = 0, 1, 2, ..., nx$, $j = 0, 1, 2, ..., nz$ on a 2D Cartesian grid.

<img src="images/2D-grid_cart_ac.png" width="75%">

Using the discrete time steps
\begin{align}
t &= n\,dt\nonumber
\end{align}
with $n = 0, 1, 2, ..., nt$ and time step $dt$, we can replace the time-dependent part (upper index time, lower indices space) by
\begin{equation}
\frac{p_{i,j}^{n+1} - 2 p_{i,j}^n + p_{i,j}^{n-1}}{\mathrm{d}t^2} \ = \ vp_{i,j}^2 \biggl( \frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial z^2}\biggr) \ + \frac{s_{i,j}^n}{dx\;dz}.
\nonumber
\end{equation}
The spatial $\delta$-functions $\delta(x-x_s)$ and $\delta(z-z_s)$ in the source term are approximated by the boxcar function:
$$
\delta_{bc}(x) = \left\{
\begin{array}{ll}
1/dx & |x|\leq dx/2 \\
0 & \text{elsewhere}
\end{array}
\right.
$$
Solving for $p_{i,j}^{n+1}$ leads to the extrapolation scheme:
\begin{equation}
p_{i,j}^{n+1} \ = \ vp_{i,j}^2 \mathrm{d}t^2 \left( \frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial z^2} \right) + 2p_{i,j}^n - p_{i,j}^{n-1} + \frac{\mathrm{d}t^2}{dx\; dz} s_{i,j}^n.
\end{equation}
The spatial derivatives are determined by
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial x^2} \ \approx \ \frac{p_{i+1,j}^{n} - 2 p_{i,j}^n + p_{i-1,j}^{n}}{\mathrm{d}x^2}
\nonumber
\end{equation}
and
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial z^2} \ \approx \ \frac{p_{i,j+1}^{n} - 2 p_{i,j}^n + p_{i,j-1}^{n}}{\mathrm{d}z^2}.
\nonumber
\end{equation}
Eq. (1) is the essential core of the 2D FD modelling code. Because we derived analytical solutions for wave propagation in a homogeneous medium, we should test our first code implementation on a similar medium, by setting
\begin{equation}
vp_{i,j} = vp0\notag
\end{equation}
at each spatial grid point $i = 0, 1, 2, ..., nx$; $j = 0, 1, 2, ..., nz$, in order to compare the numerical with the analytical solution.

For a complete description of the problem, we also have to define initial and boundary conditions. The initial condition is
\begin{equation}
p_{i,j}^0 = 0, \nonumber
\end{equation}
so the modelling starts with zero pressure amplitude at each spatial grid point $i, j$. As boundary conditions, we assume
\begin{align}
p_{0,j}^n &= 0, \nonumber\\
p_{nx,j}^n &= 0, \nonumber\\
p_{i,0}^n &= 0, \nonumber\\
p_{i,nz}^n &= 0, \nonumber
\end{align}
for all time steps $n$. This Dirichlet boundary condition leads to artificial boundary reflections, which would obviously not describe a homogeneous medium. For now, we simply extend the model so that boundary reflections are not recorded at the receiver positions.

Let's implement it ...
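Before building the full solver, we can sanity-check the 3-point operator on a function with a known second derivative. This is just a sketch; the test point and step size are arbitrary:

```python
import numpy as np

dx = 1e-3
x = 0.7
# 3-point approximation of d^2/dx^2 sin(x), which should be close to -sin(x)
d2 = (np.sin(x + dx) - 2.0 * np.sin(x) + np.sin(x - dx)) / dx**2
err = abs(d2 + np.sin(x))  # truncation error of the operator is O(dx^2)
```

Halving dx should reduce err by roughly a factor of four, confirming the second-order accuracy of the stencil.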
# Import Libraries
# ----------------
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams

# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")

# Definition of modelling parameters
# ----------------------------------
xmax = 500.0  # maximum spatial extension of the 2D model in x-direction (m)
zmax = xmax   # maximum spatial extension of the 2D model in z-direction (m)
dx = 1.0      # grid point distance in x-direction (m)
dz = dx       # grid point distance in z-direction (m)
tmax = 0.502  # maximum recording time of the seismogram (s)
dt = 0.0010   # time step (s)
vp0 = 580.    # P-wave speed in medium (m/s)

# acquisition geometry
xr = 330.0    # x-receiver position (m)
zr = xr       # z-receiver position (m)
xsrc = 250.0  # x-source position (m)
zsrc = 250.0  # z-source position (m)
f0 = 40.      # dominant frequency of the source (Hz)
t0 = 4. / f0  # source time shift (s)
05_2D_acoustic_FD_modelling/1_From_1D_to_2D_acoustic_FD_modelling_final.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
Comparison of 2D finite difference with analytical solution

In the function below we solve the homogeneous 2D acoustic wave equation with the 3-point spatial/temporal difference operators and compare the numerical results with the analytical solution:
\begin{equation}
G_{analy}(x,z,t) = G_{2D} * S
\nonumber
\end{equation}
with the 2D Green's function:
\begin{equation}
G_{2D}(x,z,t) = \dfrac{1}{2\pi V_{p0}^2}\dfrac{H\biggl((t-t_s)-\dfrac{|r|}{V_{p0}}\biggr)}{\sqrt{(t-t_s)^2-\dfrac{r^2}{V_{p0}^2}}},
\nonumber
\end{equation}
where $H$ denotes the Heaviside function, $r = \sqrt{(x-x_s)^2+(z-z_s)^2}$ the source-receiver distance (offset) and $S$ the source wavelet.

To play a little bit more with the modelling parameters, I restricted the input parameters to dt, dx and dz. The number of spatial grid points and time steps, as well as the discrete source and receiver positions, are estimated within this function.
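The 2D Green's function itself is easy to evaluate on its own. A minimal, vectorized sketch (the offset and velocity values below are arbitrary illustrations):

```python
import numpy as np

def G2D(t, r, vp0, ts=0.0):
    # 2D Green's function of the homogeneous acoustic wave equation (causal part only)
    tau = np.asarray(t, dtype=float) - ts
    out = np.zeros_like(tau)
    causal = tau > r / vp0  # Heaviside factor: zero before the arrival time r/vp0
    out[causal] = 1.0 / (2.0 * np.pi * vp0**2) / np.sqrt(tau[causal]**2 - (r / vp0)**2)
    return out

t = np.linspace(0.0, 0.5, 501)
g = G2D(t, r=100.0, vp0=580.0)  # arrival at r/vp0 ~ 0.172 s
```

Convolving g with the source wavelet, as done inside the function below, yields the analytical seismogram.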
# 2D Wave Propagation (Finite Difference Solution) 
# ------------------------------------------------
def FD_2D_acoustic(dt,dx,dz):
    nx = (int)(xmax/dx)    # number of grid points in x-direction
    print('nx = ',nx)
    nz = (int)(zmax/dz)    # number of grid points in z-direction
    print('nz = ',nz)
    nt = (int)(tmax/dt)    # maximum number of time steps
    print('nt = ',nt)
    ir = (int)(xr/dx)      # receiver location in grid in x-direction
    jr = (int)(zr/dz)      # receiver location in grid in z-direction
    isrc = (int)(xsrc/dx)  # source location in grid in x-direction
    jsrc = (int)(zsrc/dz)  # source location in grid in z-direction

    # Source time function (Gaussian)
    # -------------------------------
    time = np.linspace(0 * dt, nt * dt, nt)
    # 1st derivative of a Gaussian
    src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))

    # Analytical solution
    # -------------------
    G = time * 0.

    # Initialize coordinates
    # ----------------------
    x = np.arange(nx)
    x = x * dx  # coordinates in x-direction (m)
    z = np.arange(nz)
    z = z * dz  # coordinates in z-direction (m)

    # calculate source-receiver distance
    r = np.sqrt((x[ir] - x[isrc])**2 + (z[jr] - z[jsrc])**2)

    for it in range(nt):  # Calculate Green's function (Heaviside function)
        if (time[it] - r / vp0) >= 0:
            G[it] = 1. / (2 * np.pi * vp0**2) * (1. / np.sqrt(time[it]**2 - (r/vp0)**2))
    Gc = np.convolve(G, src * dt)
    Gc = Gc[0:nt]
    lim = Gc.max()  # get limit value from the maximum amplitude

    # Initialize empty pressure arrays
    # --------------------------------
    p = np.zeros((nx,nz))     # p at time n (now)
    pold = np.zeros((nx,nz))  # p at time n-1 (past)
    pnew = np.zeros((nx,nz))  # p at time n+1 (future)
    d2px = np.zeros((nx,nz))  # 2nd spatial x-derivative of p
    d2pz = np.zeros((nx,nz))  # 2nd spatial z-derivative of p

    # Initialize model (assume homogeneous model)
    # -------------------------------------------
    vp = np.zeros((nx,nz))
    vp = vp + vp0  # initialize wave velocity in model

    # Initialize empty seismogram
    # ---------------------------
    seis = np.zeros(nt)

    # Calculate Partial Derivatives
    # -----------------------------
    for it in range(nt):
        # FD approximation of spatial derivative by 3 point operator
        for i in range(1, nx - 1):
            for j in range(1, nz - 1):
                d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx ** 2
                d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz ** 2

        # Time Extrapolation
        # ------------------
        pnew = 2 * p - pold + vp ** 2 * dt ** 2 * (d2px + d2pz)

        # Add Source Term at (isrc, jsrc)
        # -------------------------------
        # Absolute pressure w.r.t analytical solution
        pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2

        # Remap Time Levels
        # -----------------
        pold, p = p, pnew

        # Output of Seismogram
        # --------------------
        seis[it] = p[ir,jr]

    # Compare FD Seismogram with analytical solution
    # ----------------------------------------------
    # Define figure size
    rcParams['figure.figsize'] = 12, 5
    plt.plot(time, seis, 'b-',lw=3,label="FD solution")  # plot FD seismogram
    Analy_seis = plt.plot(time,Gc,'r--',lw=3,label="Analytical solution")  # plot analytical solution
    plt.xlim(time[0], time[-1])
    plt.ylim(-lim, lim)
    plt.title('Seismogram')
    plt.xlabel('Time (s)')
    plt.ylabel('Amplitude')
    plt.legend()
    plt.grid()
    plt.show()
%%time
dx = 1.0    # grid point distance in x-direction (m)
dz = dx     # grid point distance in z-direction (m)
dt = 0.0010 # time step (s)
FD_2D_acoustic(dt,dx,dz)
05_2D_acoustic_FD_modelling/1_From_1D_to_2D_acoustic_FD_modelling_final.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
Let's consider precipitation/dissolution of NaCl: $$ \rm NaCl(s) \rightleftharpoons Na^+(aq) + Cl^-(aq) $$
import sympy as sp
from functools import reduce
from operator import mul

init_concs = iNa_p, iCl_m, iNaCl = [sp.Symbol('i_'+str(i), real=True, negative=False) for i in range(3)]
c = Na_p, Cl_m, NaCl = [sp.Symbol('c_'+str(i), real=True, negative=False) for i in range(3)]
prod = lambda x: reduce(mul, x)
texnames = [r'\mathrm{%s}' % k for k in 'Na^+ Cl^- NaCl'.split()]
examples/conditional.ipynb
bjodah/pyneqsys
bsd-2-clause
If the solution is saturated, then the solubility product will be constant:
$$ K_{\rm sp} = \mathrm{[Na^+][Cl^-]} $$
In addition to this (conditional relation) we can write equations for the preservation of atoms and charge:
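These preservation relations are just linear constraints on the concentration vector. As a numeric aside (the concentration values below are made up), the totals are unchanged when some NaCl precipitates:

```python
import numpy as np

# rows: Na atoms, Cl atoms, charge; columns: [Na+, Cl-, NaCl(s)]
P = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, -1, 0]])
c0 = np.array([0.5, 0.5, 0.0])  # before precipitation
c1 = np.array([0.4, 0.4, 0.1])  # after 0.1 M has precipitated
conserved = np.allclose(P @ c1, P @ c0)  # atom and charge totals preserved
```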
from pyneqsys.symbolic import SymbolicSys, linear_exprs

stoichs = [[1, 1, -1]]
Na = [1, 0, 1]
Cl = [0, 1, 1]
charge = [1, -1, 0]
preserv = [Na, Cl, charge]
eq_constants = [Ksp] = [sp.Symbol('K_{sp}', real=True, positive=True)]
def get_f(x, params, saturated):
    init_concs = params[:3]
    eq_constants = params[3:]
    le = linear_exprs(preserv, x, linear_exprs(preserv, init_concs), rref=True)
    return le + ([Na_p*Cl_m - Ksp] if saturated else [NaCl])
examples/conditional.ipynb
bjodah/pyneqsys
bsd-2-clause
Our two sets of equations (saturated and non-saturated) are then:
get_f(c, init_concs + eq_constants, False) f_true = get_f(c, init_concs + eq_constants, True) f_false = get_f(c, init_concs + eq_constants, False) f_true, f_false
examples/conditional.ipynb
bjodah/pyneqsys
bsd-2-clause
We have one condition (a boolean describing whether the solution is saturated or not). We provide two conditionals, one for going from non-saturated to saturated (forward) and one going from saturated to non-saturated (backward):
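The forward condition asks: if all solid were dissolved, would the ion product exceed $K_{sp}$? That check can be sketched standalone (the helper name is mine, not pyneqsys API):

```python
def would_be_saturated(c, Ksp):
    Na_p, Cl_m, NaCl = c
    # ion product if the solid phase were fully dissolved
    return (Na_p + NaCl) * (Cl_m + NaCl) > Ksp

low = would_be_saturated([0.5, 0.5, 0.0], 1.0)   # 0.25 <= 1: stays dissolved
high = would_be_saturated([2.0, 2.0, 0.0], 1.0)  # 4 > 1: solid must form
```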
from pyneqsys.core import ConditionalNeqSys cneqsys = ConditionalNeqSys( [ (lambda x, p: (x[0] + x[2]) * (x[1] + x[2]) > p[3], # forward condition lambda x, p: x[2] >= 0) # backward condition ], lambda conds: SymbolicSys( c, f_true if conds[0] else f_false, init_concs+eq_constants ), latex_names=['[%s]' % n for n in texnames], latex_param_names=['[%s]_0' % n for n in texnames] ) c0, K = [0.5, 0.5, 0], [1] # Ksp for NaCl(aq) isn't 1 in reality, but used here for illustration params = c0 + K
examples/conditional.ipynb
bjodah/pyneqsys
bsd-2-clause
Solving for initial concentrations below the solubility product:
cneqsys.solve([0.5, 0.5, 0], params)
examples/conditional.ipynb
bjodah/pyneqsys
bsd-2-clause
no surprises there (it is of course trivial). In order to illustrate its usefulness, let us consider addition of a more soluble sodium salt (e.g. NaOH) to a chloride-rich solution (e.g. HCl):
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

ax_out = plt.subplot(1, 2, 1)
ax_err = plt.subplot(1, 2, 2)
xres, sols = cneqsys.solve_and_plot_series(
    c0, params, np.linspace(0, 3), 0, 'kinsol',
    {'ax': ax_out}, {'ax': ax_err},
    fnormtol=1e-14)
_ = ax_out.legend()
examples/conditional.ipynb
bjodah/pyneqsys
bsd-2-clause
We will be using the water level data from a fixed station in Kotzebue, AK. Below we create a simple Quality Assurance/Quality Control (QA/QC) configuration that will be used as input for ioos_qc. All the interval values are in the same units as the data. For more information on the tests and recommended values for QA/QC check the documentation of each test and its inputs: https://ioos.github.io/ioos_qc/api/ioos_qc.html#module-ioos_qc.qartod
variable_name = "sea_surface_height_above_sea_level_geoid_mhhw" qc_config = { "qartod": { "gross_range_test": { "fail_span": [-10, 10], "suspect_span": [-2, 3] }, "flat_line_test": { "tolerance": 0.001, "suspect_threshold": 10800, "fail_threshold": 21600 }, "spike_test": { "suspect_threshold": 0.8, "fail_threshold": 3, } } }
notebooks/2020-02-14-QARTOD_ioos_qc_Water-Level-Example.ipynb
ioos/notebooks_demos
mit
Now we are ready to load the data, run tests and plot results! We will get the data from the AOOS ERDDAP server. Note that the data may change in the future. For reproducibility's sake we will save the data downloaded into a CSV file.
from pathlib import Path import pandas as pd from erddapy import ERDDAP path = Path().absolute() fname = path.joinpath("data", "water_level_example.csv") if fname.is_file(): data = pd.read_csv(fname, parse_dates=["time (UTC)"]) else: e = ERDDAP( server="http://erddap.aoos.org/erddap/", protocol="tabledap" ) e.dataset_id = "kotzebue-alaska-water-level" e.constraints = { "time>=": "2018-09-05T21:00:00Z", "time<=": "2019-07-10T19:00:00Z", } e.variables = [ variable_name, "time", "z", ] data = e.to_pandas( index_col="time (UTC)", parse_dates=True, ) data["timestamp"] = data.index.astype("int64") // 1e9 data.to_csv(fname) data.head() from ioos_qc.config import QcConfig qc = QcConfig(qc_config) qc_results = qc.run( inp=data["sea_surface_height_above_sea_level_geoid_mhhw (m)"], tinp=data["timestamp"], zinp=data["z (m)"], ) qc_results
notebooks/2020-02-14-QARTOD_ioos_qc_Water-Level-Example.ipynb
ioos/notebooks_demos
mit
The results are returned in a dictionary format, similar to the input configuration, with a mask for each test. While the mask is a masked array it should not be applied as such. The results range from 1 to 4, meaning:

1. data passed the QA/QC
2. QA/QC did not run on this data point
3. flag as suspect
4. flag as failed

Now we can write a plotting function that will read these results and flag the data.
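The flag-to-mask logic used in the plotting function below can be sketched in isolation with toy data: masking out everything except one flag value leaves only the points in that category.

```python
import numpy as np

flags = np.array([1, 2, 3, 4, 1])           # pass, not run, suspect, fail, pass
obs = np.array([0.1, 0.2, 5.0, 9.9, 0.3])
# keep only observations carrying a given flag; everything else is masked
suspect = np.ma.masked_where(flags != 3, obs)
failed = np.ma.masked_where(flags != 4, obs)
```

Plotting each masked series with its own color then paints every observation according to its QARTOD flag.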
%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt

def plot_results(data, var_name, results, title, test_name):
    time = data["time (UTC)"]
    obs = data[var_name]
    qc_test = results["qartod"][test_name]

    qc_pass = np.ma.masked_where(qc_test != 1, obs)
    qc_suspect = np.ma.masked_where(qc_test != 3, obs)
    qc_fail = np.ma.masked_where(qc_test != 4, obs)
    qc_notrun = np.ma.masked_where(qc_test != 2, obs)

    fig, ax = plt.subplots(figsize=(15, 3.75))
    ax.set_title(f"{test_name}: {title}")
    ax.set_xlabel("Time")
    ax.set_ylabel("Observation Value")

    kw = {"marker": "o", "linestyle": "none"}
    ax.plot(time, obs, label="obs", color="#A6CEE3")
    ax.plot(time, qc_notrun, markersize=2, label="qc not run", color="gray", alpha=0.2, **kw)
    ax.plot(time, qc_pass, markersize=4, label="qc pass", color="green", alpha=0.5, **kw)
    ax.plot(time, qc_suspect, markersize=4, label="qc suspect", color="orange", alpha=0.7, **kw)
    ax.plot(time, qc_fail, markersize=6, label="qc fail", color="red", alpha=1.0, **kw)
    ax.grid(True)

title = "Water Level [MHHW] [m] : Kotzebue, AK"
notebooks/2020-02-14-QARTOD_ioos_qc_Water-Level-Example.ipynb
ioos/notebooks_demos
mit
The gross range test should fail data outside the $\pm$ 10 range and flag as suspect data below -2 and above 3. As one can easily see, all the major spikes are flagged as expected.
plot_results( data, "sea_surface_height_above_sea_level_geoid_mhhw (m)", qc_results, title, "gross_range_test" )
notebooks/2020-02-14-QARTOD_ioos_qc_Water-Level-Example.ipynb
ioos/notebooks_demos
mit
An actual spike test, based on a data increase threshold, flags spikes similar to those caught by the gross range test, but also identifies other suspect, unusual increases in the series.
plot_results( data, "sea_surface_height_above_sea_level_geoid_mhhw (m)", qc_results, title, "spike_test" )
notebooks/2020-02-14-QARTOD_ioos_qc_Water-Level-Example.ipynb
ioos/notebooks_demos
mit
The flat line test identifies issues with the data where values are "stuck." ioos_qc successfully identified a huge portion of the data where that happens and flagged a smaller one as suspect. (Zoom in on the red point to the left to see this one.)
plot_results( data, "sea_surface_height_above_sea_level_geoid_mhhw (m)", qc_results, title, "flat_line_test" )
notebooks/2020-02-14-QARTOD_ioos_qc_Water-Level-Example.ipynb
ioos/notebooks_demos
mit
The figure is part of a well pattern that extends to infinity in all directions. The thick lines are the boundaries of the river bed. The installed wells are plotted in blue. The focus well is red. All other wells are virtual or mirror wells that together guarantee that the flow across the river-bed boundaries is always zero. All blue dashed lines are water divides.

If we let $x$ run from 0 to 100 m and $y$ from 0 to 100 m, then our axes span only one of the quadrants between water divides. The flow is 0.5 L/s for 8 h/d, or Q = 14.4 m3/d. However, we have to fill in 2Q to take into account that the well extracts half its flow outside the river bed, as can be seen in the figure above.

Using the Theis well function, we can compute the head when we extract a constant rate from the wells from $t=0$. The solution will be transient and will cause the riverbed to permanently lose water. Such a computation gives the result for the case that there is no supply of water at all.

Using the Theis well function is a compromise, because it assumes a constant transmissivity, which we don't have when the aquifer has a free, declining water table. Also, the well will have a seepage face, which causes the interior of the well to become dry while the water table outside the well may still be substantial; the water drips down along the interior of the well. But we don't have to consider partial penetration. The head computed is valid for the head along the bottom of the aquifer.

We can set as a boundary condition that the head equals the bottom of the aquifer. This can be achieved when the well is actually dug deeper than the river bed, into the underlying bedrock, such that the pump can be installed while the head is at the bottom of the aquifer. In this case all the water drips into the well, while inside the well there is no water level (the head equals the bottom elevation of the well, with the pump in a blind section below the aquifer bottom), even though the groundwater stands above the aquifer bottom outside the well.
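For a single well, the Theis drawdown uses the exponential integral $W(u) = E_1(u)$; the pattern computation below simply superimposes this for all mirror wells. A one-well sketch with illustrative numbers taken from the text:

```python
import numpy as np
from scipy.special import exp1

kD, S = 30.0, 0.2   # transmissivity (m2/d) and storage coefficient
Q = 2 * 14.4        # m3/d, doubled because half the flow comes from outside the river bed
r, t = 50.0, 10.0   # distance from the well (m) and time (d)

u = r**2 * S / (4 * kD * t)
s = Q / (4 * np.pi * kD) * exp1(u)  # Theis drawdown (m)
```

Summing this expression over the grid of mirror-well distances, as done next, gives the drawdown for the full pattern.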
# well pattern
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import exp1

kD = 1 * 30
S = 0.2
Q = 2 * 0.5 * 3.6 * 8  # m3/d (0.5 L/s for 8 h/d, doubled for the mirror half)
x0 = 0.
y0 = 0.
w = 100.  # quadrant width (m); wells spaced 2w = 200 m apart (assumed from the text)
N = 10
x = np.linspace(-2 * N * w, 2 * N * w, 2 * N + 1)
y = np.linspace(-2 * N * w, 2 * N * w, 2 * N + 1)
X, Y = np.meshgrid(x, y)
rw = 0.1  # well radius
R = np.fmax(np.sqrt((X - x0)**2 + (Y - y0)**2), rw)
times = np.logspace(-2, 2, 51)
s = np.zeros_like(times)
for i, t in enumerate(times):
    u = R**2 * S / (4 * kD * t)
    s[i] += np.sum(Q / (4 * np.pi * kD) * exp1(u))

fig, ax = plt.subplots()
ax.set_xlabel('time [d]')
ax.set_ylabel('dd [m]')
ax.set_title('Well in river bed, $r_w$ = {} m'.format(rw))
ax.plot(times, s)
ax.invert_yaxis()
plt.show()
exercises_notebooks/a_well_in_a_river_bed.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Balance The decline of the water level is $$ \frac {\partial h} {\partial t} = \frac {2 Q} { A S} = \frac {2 \times 14.4} {200\times200\times0.2} = 0.0036 \,m/d= 0.36 m/(100d)$$ Let's compare this with the decline in the graph above:
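The hand calculation itself, as a sketch:

```python
Q = 14.4           # m3/d extracted per well
A = 200.0 * 200.0  # m2, area served by one well
S = 0.2            # storage coefficient
rate = 2 * Q / (A * S)  # m/d; 28.8 / 8000 = 0.0036 m/d, i.e. 0.36 m per 100 days
```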
decline_rate = np.diff(np.interp([50, 100], times, s))[0]/50 # decline rate over last 50 days print('Decline rate over the last 50 days is = {:.3g} m/(100d)'.format(100 * decline_rate))
exercises_notebooks/a_well_in_a_river_bed.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
So we see that this matches the hand calculation.

The approach shows what happens if you install a well at one side of the 100 m wide river bed, do that every 200 m, and extract 0.5 L/s for 8 h/d from each of them. There is an initial rapid decline due to the fact that we have a well, which causes the streamlines to converge. After that the water level declines like it would in a bathtub, i.e. uniformly.

Clearly, if you have only one m of water in the riverbed, the water level cannot be pulled down by more than 1 m, as the river bed would then be empty as far as the well is concerned. We see that with the small well installed, we have 1 m of decline already within 10 days. This makes nobody happy.

An alternative would be to install a wider well, or a drain of a given length at the bottom. This could be simulated by the same method, replacing the drain of length $b$ by a well with a representative radius, say of the same circumference, or with a circumference that equals twice the length of the drain. If our drain is 3 m long this yields

$$ R = \frac {2 \times 3 \, m} { 2 \pi} = 0.95 \, m$$
# situation with drain (circumference = 6 m) N = 10 x = np.linspace(-2* N * w, 2* N * w, 2 * N + 1) y = np.linspace(-2* N * w, 2* N * w, 2 * N + 1) X, Y = np.meshgrid(x, y) rw = 6 / (2 * np.pi) # well radius R = np.fmax(np.sqrt((X - x0)**2 + (Y - y0)**2), rw) times = np.logspace(-2, 2, 51) s2 = np.zeros_like(times) for i, t in enumerate(times): u = R**2 * S / (4 * kD * t) s2[i] += np.sum(Q / (4 * np.pi * kD) * exp1(u)) fig, ax = plt.subplots() ax.set_xlabel('time [d]') ax.set_ylabel('dd [m]') ax.set_title('Well in river bed, $r_w$ = {} m'.format(rw)) ax.plot(times, s, label='rw = 0.1 m') ax.plot(times, s2, label='rw = 1.0 m') ax.invert_yaxis() ax.legend(loc='best') plt.show()
exercises_notebooks/a_well_in_a_river_bed.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
whfast refers to the 2nd order symplectic integrator WHFast described by Rein & Tamayo (2015). By default, no symplectic correctors are used, but they can be easily turned on (see Advanced Settings for WHFast). We are now ready to start the integration. Let's integrate the simulation for one orbit, i.e. until $t=2\pi$. Because we use a fixed timestep, rebound would have to change it to integrate exactly up to $2\pi$. Changing the timestep in a symplectic integrator is a bad idea, so we'll tell rebound to not worry about the exact_finish_time.
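With exact_finish_time=0 the integrator keeps its fixed step and simply stops at the first whole step at or past the target time. A sketch of the resulting end time (the dt value here is an arbitrary illustration, not rebound's actual timestep):

```python
import math

dt = 0.01
t_target = 2 * math.pi
n = math.ceil(t_target / dt)  # number of whole steps needed to reach t_target
t_end = n * dt                # where the integration actually stops
overshoot = t_end - t_target  # always less than one timestep
```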
sim.integrate(6.28318530717959, exact_finish_time=0) # 6.28318530717959 is 2*pi
ipython_examples/WHFast.ipynb
cshankm/rebound
gpl-3.0
That looks much more like it. Let us finally plot the orbital elements as a function of time.
times = np.linspace(1000.*torb, 9000.*torb, Noutputs) a = np.zeros(Noutputs) e = np.zeros(Noutputs) for i,time in enumerate(times): sim.integrate(time, exact_finish_time=0) orbits = sim.calculate_orbits() a[i] = orbits[1].a e[i] = orbits[1].e fig = plt.figure(figsize=(15,5)) ax = plt.subplot(121) ax.set_xlabel("time") ax.set_ylabel("semi-major axis") plt.plot(times, a); ax = plt.subplot(122) ax.set_xlabel("time") ax.set_ylabel("eccentricity") plt.plot(times, e);
ipython_examples/WHFast.ipynb
cshankm/rebound
gpl-3.0
First, let's consider: <dl compact> <dt>``f(x,y)``</dt><dd>a simple function that accepts a location in a 2D plane specified in millimeters (mm)</dd> <dt>``region``</dt><dd>a 1mm&times;1mm square region of this 2D plane, centered at the origin, and</dd> <dt>``coords``</dt><dd>a function returning a square (s&times;s) grid of (x,y) coordinates regularly sampling the region in the given bounds, at the centers of each grid cell:</dd> </dl>
def f(x,y): return x+y/3.1 region=(-0.5,-0.5,0.5,0.5) def coords(bounds,samples): l,b,r,t=bounds hc=0.5/samples return np.meshgrid(np.linspace(l+hc,r-hc,samples), np.linspace(b+hc,t-hc,samples))
doc/Tutorials/Continuous_Coordinates.ipynb
mjabri/holoviews
bsd-3-clause
We can visualize this array (and thus the function f) either using a Raster, which uses the array's own integer-based coordinate system (which we will call "array" coordinates), or an Image, which uses a continuous coordinate system, or as a HeatMap labelling each value explicitly:
import holoviews as hv

f5 = f(*coords(region, 5))  # sample f on a 5x5 grid
r5 = hv.Raster(f5, label="R5")
i5 = hv.Image( f5, label="I5", bounds=region)
h5 = hv.HeatMap({(x, y): f5[4-y,x] for x in range(0,5) for y in range(0,5)}, label="H5")
r5+i5+h5
doc/Tutorials/Continuous_Coordinates.ipynb
mjabri/holoviews
bsd-3-clause
Both the Raster and Image Element types accept the same input data, but a visualization of the Raster type reveals the underlying raw array indexing, while the Image type has been labelled with the coordinate system from which we know the data has been sampled. All Image operations work with this continuous coordinate system instead, while the corresponding operations on a Raster use raw array indexing. For instance, all five of these indexing operations refer to the same element of the underlying Numpy array, i.e. the second item in the first row:
"r5[1,0]=%0.2f r5.data[0,1]=%0.2f i5[-0.2,0.4]=%0.2f i5[-0.24,0.37]=%0.2f i5.data[0,1]=%0.2f" % \
  (r5[1,0], r5.data[0,1], i5[-0.2,0.4], i5[-0.24,0.37], i5.data[0,1])
doc/Tutorials/Continuous_Coordinates.ipynb
mjabri/holoviews
bsd-3-clause
You can see that the Raster and the underlying .data elements use Numpy's integer indexing, while the Image uses floating-point values that are then mapped onto the appropriate array element. This diagram should help show the relationships between the Raster coordinate system in the plot (which ranges from 0 at the top edge to 5 at the bottom), the underlying raw Numpy integer array indexes (labelling each dot in the Array coordinates figure), and the underlying Continuous coordinates: <TABLE style='border:5'> <TR> <TH><CENTER>Array coordinates</CENTER></TH> <TH><CENTER>Continuous coordinates</CENTER></TH> </TR> <TR> <TD><IMG src="http://ioam.github.io/topographica/_images/matrix_coords.png"></TD> <TD><IMG src="http://ioam.github.io/topographica/_images/sheet_coords_-0.2_0.4.png"></TD> </TR> </TABLE> Importantly, although we used a 5&times;5 array in this example, we could substitute a much larger array with the same continuous coordinate system if we wished, without having to change any of our continuous indexes -- they will still point to the correct location in the continuous space:
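(Aside: the mapping from a continuous (x,y) location to the closest array cell can be sketched as a small helper of my own, not HoloViews API; it assumes cell centers as produced by coords above, with row 0 at the top edge.)

```python
def cont2idx(x, y, bounds=(-0.5, -0.5, 0.5, 0.5), samples=5):
    # map a continuous coordinate to the (row, col) of the nearest grid cell
    l, b, r, t = bounds
    col = min(int((x - l) / (r - l) * samples), samples - 1)
    row = min(int((t - y) / (t - b) * samples), samples - 1)  # row 0 at the top
    return row, col

# (-0.24, 0.37) falls in the second cell of the first row of a 5x5 grid
idx = cont2idx(-0.24, 0.37)
```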
f10 = f(*coords(region,10))
r10 = hv.Raster(f10, label="R10")
i10 = hv.Image(f10, label="I10", bounds=region)
r10+i10
doc/Tutorials/Continuous_Coordinates.ipynb
mjabri/holoviews
bsd-3-clause
The array-based indexes used by Raster and the Numpy array in .data still return the second item in the first row of the array, but this array element now corresponds to location (-0.35,0.4) in the continuous function, and so the value is different. These indexes thus do not refer to the same location in continuous space as they did for the other array density, so this type of indexing is not independent of density or resolution.

Luckily, the two continuous coordinates still return very similar values to what they did before, since they always return the value of the array element corresponding to the closest location in continuous space. They now return elements just above and to the right, or just below and to the left, of the earlier location, because the array now has a higher resolution with elements centered at different locations.

Indexing in continuous coordinates always returns the value closest to the requested value, given the available resolution. Note that in the case of coordinates truly on the boundary between array elements (as for -0.2,0.4), the bounds of each array cell are taken as right exclusive and upper exclusive, and so (-0.2,0.4) returns array index (3,0).

Slicing in 2D

In addition to indexing (looking up a value), slicing (selecting a region) works as expected in continuous space. For instance, we can ask for a slice from (-0.275,-0.0125) to (0.025,0.2885) in continuous coordinates:
sl10=i10[-0.275:0.025,-0.0125:0.2885] sl10.data sl10
doc/Tutorials/Continuous_Coordinates.ipynb
mjabri/holoviews
bsd-3-clause
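The lookup rule just described — a continuous coordinate mapped onto an integer array index with right-exclusive cell bounds — can be sketched in plain NumPy. This is an illustrative approximation of the idea, not HoloViews' actual implementation; `continuous_to_index` and the 5-cell axis are made up for the example:

```python
import numpy as np

def continuous_to_index(value, lo, hi, n):
    """Map a continuous coordinate in [lo, hi) onto one of n equally
    sized cells, treating each cell's bounds as right-exclusive."""
    # Fraction of the way across the continuous range
    frac = (value - lo) / (hi - lo)
    idx = int(np.floor(frac * n))
    # Clamp so values at the very edge still land in a valid cell
    return min(max(idx, 0), n - 1)

# A 5-cell axis spanning the continuous range [-0.5, 0.5):
# cell centers sit at -0.4, -0.2, 0.0, 0.2 and 0.4
print(continuous_to_index(-0.25, -0.5, 0.5, 5))  # cell 1
print(continuous_to_index(0.0, -0.5, 0.5, 5))    # cell 2
```

Because the same fraction-of-range computation works for any `n`, the index stays correct when the array resolution changes, which is exactly the density independence discussed above.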
Hopefully these examples make it clear that if you are using data that is sampled from some underlying continuous system, you should use the continuous coordinates offered by HoloViews objects like Image so that your programs can be independent of the resolution or sampling density of that data, and so that your axes and indexes can be expressed in the underlying continuous space. The data will still be stored in the same Numpy array, but now you can treat it consistently like the approximation to continuous values that it is. 1D and nD Continuous coordinates All of the above examples use the common case for visualizations of a two-dimensional regularly gridded continuous space, which is implemented in holoviews.core.sheetcoords.SheetCoordinateSystem. Similar continuous coordinates and slicing are also supported for Chart elements, such as Curves, but using a single index and allowing arbitrary irregular spacing, implemented in holoviews.elements.chart.Chart. They also work the same for the n-dimensional coordinates and slicing supported by the container types HoloMap, NdLayout, and NdOverlay, implemented in holoviews.core.dimension.Dimensioned and again allowing arbitrary irregular spacing. Together, these powerful continuous-coordinate indexing and slicing operations allow you to work naturally and simply in the full n-dimensional space that characterizes your data and parameter values. Sampling The above examples focus on indexing and slicing, but there is another related operation supported for continuous spaces, called sampling. Sampling is similar to indexing and slicing, in that all of them can reduce the dimensionality of your data, but sampling is implemented in a general way that applies for any of the 1D, 2D, or nD datatypes. For instance, if we take our 10&times;10 array from above, we can ask for the value at a given location, which will come back as a Table, i.e. a dictionary with one (key,value) pair:
e10=i10.sample(x=-0.275, y=0.2885) e10
doc/Tutorials/Continuous_Coordinates.ipynb
mjabri/holoviews
bsd-3-clause
We will look at an arbitrary expression $f(x, y)$: $$ f(x, y) = 3 x^{2} + \log{\left (x^{2} + y^{2} + 1 \right )} $$
x, y = sym.symbols('x y') expr = 3*x**2 + sym.log(x**2 + y**2 + 1) expr
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
One way to evaluate the above expression numerically is to invoke the subs method followed by the evalf method:
expr.subs({x: 17, y: 42}).evalf()
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
However, if we need to do this repeatedly it can be quite slow:
%timeit expr.subs({x: 17, y: 42}).evalf()
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
even compared to a simple lambda function:
import math f = lambda x, y: 3*x**2 + math.log(x**2 + y**2 + 1) f(17, 42) %timeit f(17, 42)
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
SymPy can also create a function analogous to f above. The function for doing so is called lambdify:
g = sym.lambdify([x, y], expr, modules=['math']) g(17, 42) %timeit g(17, 42)
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Note how we specified modules above: it tells lambdify to use math.log. If we don't specify modules, SymPy will (since v1.1) default to numpy, which is useful when dealing with arrays in the input:
import numpy as np xarr = np.linspace(17, 18, 5) h = sym.lambdify([x, y], expr) out = h(xarr, 42) out.shape
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
NumPy's broadcasting (handling of different shapes) then works as expected:
yarr = np.linspace(42, 43, 7).reshape((1, 7)) out2 = h(xarr.reshape((5, 1)), yarr) # if we would try to use g() here, it would fail out2.shape
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
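The shape handling here is ordinary NumPy broadcasting: any axis of size 1 is stretched to match the other operand. The same computation can be checked without SymPy by writing expr out by hand, mirroring the (5, 1) and (1, 7) shapes used above:

```python
import numpy as np

col = np.linspace(17, 18, 5).reshape((5, 1))   # shape (5, 1)
row = np.linspace(42, 43, 7).reshape((1, 7))   # shape (1, 7)

# Broadcasting stretches both size-1 axes, giving a (5, 7) result
out = 3*col**2 + np.log(col**2 + row**2 + 1)
print(out.shape)  # (5, 7)
```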
Behind the scenes lambdify constructs a string representation of the Python code and uses Python's eval function to compile the function. Let's now look at how we can get a specific function signature from lambdify:
z = z1, z2, z3 = sym.symbols('z:3') expr2 = x*y*(z1 + z2 + z3) func2 = sym.lambdify([x, y, z], expr2) func2(1, 2, (3, 4, 5))
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
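The string-building mechanism can be illustrated with a toy stand-in. `mini_lambdify` below is a hypothetical helper invented for this sketch; the real lambdify uses code printers and handles far more cases:

```python
import math

def mini_lambdify(argnames, expr_src, namespace=None):
    """Toy version of the lambdify mechanism: build the source of a
    lambda as a string and compile it with eval()."""
    src = "lambda %s: %s" % (", ".join(argnames), expr_src)
    # The namespace supplies the functions the expression refers to
    return eval(src, namespace or {"log": math.log})

g = mini_lambdify(["x", "y"], "3*x**2 + log(x**2 + y**2 + 1)")
print(g(17, 42))
```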
Exercise: Create a function from a symbolic expression Plot $z(x, y) = \frac{\partial^2 f(x,y)}{\partial x \partial y}$ from above ($f(x, y)$ is available as expr) as a surface plot for $-5 < x < 5, -5 < y < 5$.
xplot = np.outer(np.linspace(-5, 5, 100), np.ones(100)) yplot = xplot.T %load_ext scipy2017codegen.exercise
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Use either the %exercise or %load magic to get the exercise / solution respectively
%exercise exercise_lambdify_expr.py
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Replace ??? with the correct expression above.
from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt %matplotlib inline fig = plt.figure(figsize=(15, 13)) ax = plt.axes(projection='3d') ax.plot_surface(xplot, yplot, zplot, cmap=plt.cm.coolwarm) ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('$%s$' % sym.latex(d2fdxdy))
notebooks/22-lambdify.ipynb
sympy/scipy-2017-codegen-tutorial
bsd-3-clause
Input
import pandas as pd

file_path = "../data/df_model01.pkl"
df = pd.read_pickle(file_path)
print(df.shape)
df.head()
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Binning
bins = [10, 40, 120, 180] # SR, LNZ, SZG, VIE, >VIE df["binned_distance"] = np.digitize(df.distance.values, bins=bins)
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
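np.digitize assigns each value the index of the half-open bin it falls into: index 0 for values below the first edge, up to len(bins) for values at or above the last edge. A quick check of the semantics with the same edges as above:

```python
import numpy as np

bins = [10, 40, 120, 180]
distances = np.array([5, 10, 39, 40, 150, 200])

# With the default right=False, bin i covers bins[i-1] <= x < bins[i]
idx = np.digitize(distances, bins=bins)
print(idx)  # [0 1 1 2 3 4]
```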
Conversion for Scikit-learn Feature selection based on expert knowledge. Model-based feature selection was hardly interpretable, but it at least confirmed "binned_distance" as a relevant feature.
feature_names = ["buzzwordy_title", "main_topic_Daten", "binned_distance"]
X = df[feature_names].values
y = df.rating.map(lambda x: 1 if x > 5 else 0).values  # binary target: rating > 5 counts as "worth attending"
print("X:", X.shape, "y:", y.shape)
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Train-Test Split
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=23, test_size=0.5) # 50% split, small dataset size
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Modeling Model
from sklearn.linear_model import LinearRegression from sklearn.tree import DecisionTreeClassifier linreg = LinearRegression() # Benchmark model dec_tree = DecisionTreeClassifier() # Actual model
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Benchmark Linear Regression
# Benchmark linreg.fit(X_train, y_train) print("Score (r^2): {:.3f}".format(linreg.score(X_test, y_test))) print("Coef: {}".format(linreg.coef_))
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
=> Really bad performance Model Decision Tree with CV <= chosen for its ability to be visualized; it did not perform much worse than the other models tried Parameter Tuning
from sklearn.model_selection import GridSearchCV parameter_grid = {"criterion": ["gini", "entropy"], "max_depth": [None, 1, 2, 3, 4, 5, 6], "min_samples_leaf": list(range(1, 14)), "max_leaf_nodes": list(range(3, 25))} grid_search = GridSearchCV(DecisionTreeClassifier(presort=True), parameter_grid, cv=5) # 5 fold cross-val grid_search.fit(X_train, y_train) print("Score (Accuracy): {:.3f}".format(grid_search.score(X_test, y_test))) print("Best Estimator: {}".format(grid_search.best_estimator_)) print("Best Parameters: {}".format(grid_search.best_params_))
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
=> Accuracy is not great, but at least better than a random draw Build Model
model = DecisionTreeClassifier(presort=True, criterion="gini", max_depth=None, min_samples_leaf=2, max_leaf_nodes=5) model.fit(X_train, y_train) print("Score (Accuracy): {:.3f}".format(model.score(X_test, y_test)))
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Print Decision Tree
from sklearn.tree import export_graphviz export_graphviz(model, class_names=True, feature_names=feature_names, rounded=True, filled=True, label="root", impurity=False, proportion=True, out_file="plots/dectree_Model_best.dot")
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Model Evaluation Evaluation Scores
from sklearn.metrics import classification_report y_pred = model.predict(X_test) print(classification_report(y_test, y_pred))
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
=> Not so good at predicting class 1 ("worth attending"), better at predicting class 0. The weighted average of precision and recall (F1 score) is 0.62 ROC Curve
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

fpr, tpr, thresholds = roc_curve(y_test, y_pred)
plt.plot(fpr, tpr, label="ROC")
plt.plot([0, 1], [0, 1], c="r", label="Chance level")
plt.title("ROC Curve")
plt.xlabel("False Positive Rate")
plt.ylabel("Recall")
plt.legend(loc=4)
plt.savefig("plots/ROC_Curve_Model.png", dpi=180)
plt.show()
print("AUC: {:.3f}".format(roc_auc_score(y_test, y_pred)))
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
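For hard 0/1 predictions such as y_pred, the ROC curve has a single interior point and the AUC can be reproduced by hand. A minimal sketch with made-up labels (`tpr_fpr` is an illustrative helper, not part of scikit-learn):

```python
def tpr_fpr(y_true, y_pred):
    """True and false positive rates for hard 0/1 predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

tpr, fpr = tpr_fpr([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
# With a single ROC point, the area under the two line segments is:
auc = (tpr - fpr + 1) / 2
```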
=> Indeed only slightly better than a random guess, only about 0.05 above chance level (AUC)
from sklearn.model_selection import cross_val_score scores_benchmark = cross_val_score(model, X, y, cv=5) # 5 folds print("Cross-Val Scores (Accuracy): {}".format(scores_benchmark)) print("Cross-Val Mean (Accuracy): {}".format(scores_benchmark.mean()))
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
=> Generalization acceptable, but not good (as expected from a Decision Tree) Persist Model
import joblib  # sklearn.externals.joblib is deprecated; use joblib directly

file_path = "../data/model_trained.pkl"
joblib.dump(model, file_path)
EventDec/event_dec/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
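The standard library's pickle can persist the fitted model as well; joblib is simply more efficient for estimators carrying large NumPy arrays. A sketch with a stand-in dictionary in place of the fitted estimator:

```python
import os
import pickle
import tempfile

model_stub = {"params": {"max_depth": 5}, "classes": [0, 1]}  # stand-in object

path = os.path.join(tempfile.mkdtemp(), "model_trained.pkl")
with open(path, "wb") as fh:
    pickle.dump(model_stub, fh)   # persist to disk
with open(path, "rb") as fh:
    restored = pickle.load(fh)    # load it back
```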
We use the head() function to display the first rows of the dataframe.
data.head()
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
If you want to view the whole dataset, run the following line of code.
data data.info()
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
2. Split the dataset We will now use pandas' iloc function to split our dataset. In X we store the values of columns 0 through 3, and in Y the values of the last column, which are the output values.
X = data.iloc[:,[0,1,2,3]] Y = data.iloc[:,4]
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
The next step is to split the dataset into training and test data. For this we use the train_test_split function from the sklearn library.
from sklearn.model_selection import train_test_split
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
We split the data as follows: 80% for training and 20% for testing. The training data are stored in two variables: X_train holds the values of columns 0 through 3, and y_train holds the output value corresponding to each row of X_train. The split works the same way for X_test and y_test with the test data.
X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size=0.8, test_size= 0.2, random_state=1)
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
3. Train the algorithm
from sklearn.tree import DecisionTreeClassifier
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
We use the DecisionTreeClassifier class to build the model. The criterion parameter defines the function used to measure the quality of a split; in our case we set it to 'gini'. The Gini coefficient is a cost function used to evaluate splits of the dataset; a split involves one input attribute and a value for that attribute. A perfect split yields a Gini value of 0, while the worst case, a 50/50 split, yields a Gini value of 0.5. The max_depth parameter defines the maximum depth of the tree, i.e., the maximum number of levels below the root; once the maximum depth is reached, we stop adding nodes. Very deep trees are more complex and more likely to overfit the training data. The min_samples_split parameter defines the minimum number of records in a node, i.e., the minimum number of training patterns a node is responsible for; once a node holds no more than this minimum, we stop splitting it. Nodes responsible for very few training patterns are too specific. The class accepts other parameters as well, but for our case these are sufficient.
model = DecisionTreeClassifier(criterion='gini', max_depth= 5, min_samples_split= 10)
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
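The Gini impurity described above can be computed by hand. `gini` below is an illustrative helper operating on lists of class labels, one list per side of the split:

```python
def gini(groups):
    """Weighted Gini impurity of a split: for each group,
    1 - sum(p_k^2) over the class shares p_k, weighted by group size."""
    total = sum(len(g) for g in groups)
    score = 0.0
    for g in groups:
        if not g:
            continue
        impurity = 1.0 - sum((g.count(c) / len(g)) ** 2 for c in set(g))
        score += impurity * len(g) / total
    return score

print(gini([[0, 0], [1, 1]]))  # perfect split -> 0.0
print(gini([[0, 1], [1, 0]]))  # 50/50 in each group -> 0.5
```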
Now we train our model with sklearn's fit function, using the training data prepared earlier.
model.fit(X_train, y_train)
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
4. Make a prediction and evaluate the algorithm We will now use our model to make predictions, using sklearn's predict function and our X_test data.
predicao = model.predict(X_test) predicao
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
Naturally we want to know how well the model performs on the dataset. To compute the accuracy we can use the classifier's score method.
accuracy = model.score(X_test, y_test)*100 print('Accuracy: %s%%' % accuracy)
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
5. Evaluate the algorithm using K-fold cross-validation We can also use k-fold cross-validation to evaluate our algorithm. We will use the KFold and cross_val_score functions, both imported from sklearn.model_selection.
from sklearn.model_selection import KFold, cross_val_score import numpy as np
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
We will use 5 folds.
k_fold = KFold(n_splits=5) scores = cross_val_score(model, X, Y, cv=k_fold, n_jobs=-1)
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
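What KFold(n_splits=5) does can be sketched by hand: partition the row indices into 5 consecutive folds and use each fold once as the test set. An illustrative re-implementation, assuming shuffle=False (the KFold default):

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists splitting range(n) into k
    consecutive folds, each fold serving once as the test set."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

folds = list(kfold_indices(10, 5))
print(folds[0])  # ([2, 3, 4, 5, 6, 7, 8, 9], [0, 1])
```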
We can then inspect the algorithm's performance on each of the folds.
scores
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
Finally, we can compute the mean of the scores with numpy's mean function, which was already imported above.
media = np.mean(scores) * 100 print('Media: %s%%' % media)
2019/06-decision-tree/Tutorial 11 (scikitlearn) - Árvores de Classificação e Regressão.ipynb
InsightLab/data-science-cookbook
mit
Range
# Print the even numbers from 50 to 100
for i in range(50, 101, 2):
    print(i)

for i in range(3, 6):
    print(i)

for i in range(0, -20, -2):
    print(i)

lista = ['Morango', 'Banana', 'Abacaxi', 'Uva']
lista_tamanho = len(lista)
for i in range(0, lista_tamanho):
    print(lista[i])

# Everything in Python is an object
type(range(0, 3))
Cap03/Notebooks/DSA-Python-Cap03-04-Range.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
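One point worth noting: range is a lazy sequence. It stores only start, stop and step, so even a huge range is cheap to create and still supports len(), indexing and membership tests:

```python
big = range(0, 10**6, 2)   # never materializes a million numbers

print(len(big))        # 500000
print(big[10])         # 20
print(500_000 in big)  # True: even and below the stop value
```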
Using sklearn for logistic regression 王成军 (Wang Chengjun) wangchengjun@nju.edu.cn Computational Communication http://computational-communication.com Logistic regression is a classification algorithm, not a regression algorithm. Given a set of known independent variables, it estimates a discrete value (for example, a binary value such as 0 or 1, yes or no, true or false). Simply put, it estimates the probability that an event occurs by fitting the data to a logistic function, which is why it is called logistic regression. Because it estimates a probability, its output lies between 0 and 1 (as one would expect). $$odds= \frac{p}{1-p} = \frac{probability\: of\: event\: occurrence} {probability \:of \:not\: event\: occurrence}$$ $$ln(odds)= ln(\frac{p}{1-p})$$ $$logit(x) = ln(\frac{p}{1-p}) = b_0+b_1X_1+b_2X_2+b_3X_3....+b_kX_k$$
repost = []
for title in df.title:
    # flag titles containing the word "转载" (repost)
    repost.append(1 if u'转载' in title else 0)

data_X = [[df.click[i], df.reply[i]] for i in range(len(df))]
data_X[:3]

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df['repost'] = repost
model = LogisticRegression()
model.fit(data_X, df.repost)
model.score(data_X, df.repost)

def randomSplitLogistic(dataX, dataY, num):
    dataX_train = []
    dataX_test = []
    dataY_train = []
    dataY_test = []
    import random
    test_index = random.sample(range(len(df)), num)
    for k in range(len(dataX)):
        if k in test_index:
            dataX_test.append(dataX[k])
            dataY_test.append(dataY[k])
        else:
            dataX_train.append(dataX[k])
            dataY_train.append(dataY[k])
    return dataX_train, dataX_test, dataY_train, dataY_test

# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)

# Create logistic regression object
log_regr = LogisticRegression()

# Train the model using the training sets
log_regr.fit(data_X_train, data_y_train)

# Mean accuracy on the held-out test set
print('Mean accuracy: %.2f' % log_regr.score(data_X_test, data_y_test))

logre = LogisticRegression()
scores = cross_val_score(logre, data_X, df.repost, cv=3)
scores.mean()
code/.ipynb_checkpoints/9.machine_learning_with_sklearn-checkpoint.ipynb
computational-class/computational-communication-2016
mit
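The odds/logit relationship above can be checked numerically: logit maps a probability to its log-odds, and the logistic (sigmoid) function inverts it. A minimal sketch using only the standard library:

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1 - p))

def sigmoid(z):
    """Inverse of logit: maps any real z back into (0, 1)."""
    return 1 / (1 + math.exp(-z))

p = 0.8
z = logit(p)        # log(0.8 / 0.2) = log(4)
print(sigmoid(z))   # recovers 0.8
```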
The csv (comma separated values) data will be read into a Pandas dataframe (nyr) and the first 5 records are displayed using the 'head()' method.<br>
import pandas as pd nyr = pd.read_csv(nyrdata) nyr.head()
New York Restaurants Demo.ipynb
sharynr/notebooks
apache-2.0
Ingesting data can be as simple as using one line of code. Similarly, data can be ingested from Cloudant, DashDB, Object Storage, relational databases, and many others. Another dataframe will be created that will only contain the columns that are pertinent. The 'head()' method will display the first 5 records of this dataframe.
nyrcols = nyr[['FACILITY','TOTAL # CRITICAL VIOLATIONS','Location1']] nyrcols.head()
New York Restaurants Demo.ipynb
sharynr/notebooks
apache-2.0
The data will be transformed into a Spark dataframe 'nyrDF' and a table will be registered. Spark dataframes are conceptually equivalent to a table in a relational database or a dataframe in R/Python, but with richer optimizations under the hood. A table that is registered can be used in subsequent SQL statements.
nyrDF = spark.createDataFrame(nyrcols) nyrDF.registerTempTable("nyrDF")
New York Restaurants Demo.ipynb
sharynr/notebooks
apache-2.0