repo_name
stringlengths 6
77
| path
stringlengths 8
215
| license
stringclasses 15
values | cells
list | types
list |
|---|---|---|---|---|
qutip/qutip-notebooks
|
docs/guide/CorrelationFunctions.ipynb
|
lgpl-3.0
|
[
"Correlation Functions\nContents\n\nTwo-Time Correlation Functions\nSteady State Correlation Functions\nEmission Spectrum\nNon-Steady State Correlation Function",
"%matplotlib inline\nimport numpy as np\nfrom pylab import *\nfrom qutip import *",
"<a id='twotime'></a>\nTwo-Time Correlation Functions\nWith the QuTiP time-evolution functions (for example mesolve and mcsolve), a state vector or density matrix can be evolved from an initial state at $t_0$ to an arbitrary time $t$, $\\rho(t)=V(t, t_0)\\left\\{\\rho(t_0)\\right\\}$, where $V(t, t_0)$ is the propagator defined by the equation of motion. The resulting density matrix can then be used to evaluate the expectation values of arbitrary combinations of same-time operators.\nTo calculate two-time correlation functions of the form $\\left<A(t+\\tau)B(t)\\right>$, we can use the quantum regression theorem to write\n$$\n \\left<A(t+\\tau)B(t)\\right> = {\\rm Tr}\\left[A V(t+\\tau, t)\\left\\{B\\rho(t)\\right\\}\\right]\n = {\\rm Tr}\\left[A V(t+\\tau, t)\\left\\{BV(t, 0)\\left\\{\\rho(0)\\right\\}\\right\\}\\right]\n$$\nWe therefore first calculate $\\rho(t)=V(t, 0)\\left\\{\\rho(0)\\right\\}$ using one of the QuTiP evolution solvers with $\\rho(0)$ as the initial state, and then use the same solver again to calculate $V(t+\\tau, t)\\left\\{B\\rho(t)\\right\\}$ using $B\\rho(t)$ as the initial state. Note that if the initial state is the steady state, then $\\rho(t)=V(t, 0)\\left\\{\\rho_{\\rm ss}\\right\\}=\\rho_{\\rm ss}$ and \n$$\n \\left<A(t+\\tau)B(t)\\right> = {\\rm Tr}\\left[A V(t+\\tau, t)\\left\\{B\\rho_{\\rm ss}\\right\\}\\right] \n = {\\rm Tr}\\left[A V(\\tau, 0)\\left\\{B\\rho_{\\rm ss}\\right\\}\\right] = \\left<A(\\tau)B(0)\\right>,\n$$\nwhich is independent of $t$, so that we only have one time coordinate $\\tau$.\nQuTiP provides a family of functions that assist in the process of calculating two-time correlation functions. The available functions and their usage are shown in the table below. Each of these functions can use one of the following evolution solvers: the master-equation, exponential-series, or Monte Carlo solver. The choice of solver is set by the optional argument solver. 
\n<table>\n <tr>\n <th>QuTiP Function</th>\n <th>Correlation Function Type</th>\n </tr>\n <tr>\n <td>`correlation` or `correlation_2op_2t`</td>\n <td>$\\left<A(t+\\tau)B(t)\\right>$ or $\\left<A(t)B(t+\\tau)\\right>$. </td>\n </tr>\n <tr>\n <td>`correlation_ss` or `correlation_2op_1t`</td>\n <td>$\\left<A(\\tau)B(0)\\right>$ or $\\left<A(0)B(\\tau)\\right>$.</td>\n </tr>\n <tr>\n <td>`correlation_3op_1t`</td>\n <td>$\\left<A(0)B(\\tau)C(0)\\right>$.</td>\n </tr>\n <tr>\n <td>`correlation_3op_2t`</td>\n <td>$\\left<A(t)B(t+\\tau)C(t)\\right>$.</td>\n </tr>\n <tr>\n <td>`correlation_4op_1t` <font color='red'>(Deprecated)</font></td>\n <td>$\\left<A(0)B(\\tau)C(\\tau)D(0)\\right>$</td>\n </tr>\n <tr>\n <td>`correlation_4op_2t` <font color='red'>(Deprecated)</font></td>\n <td style='min-width:200px'>$\\left<A(t)B(t+\\tau)C(t+\\tau)D(t)\\right>$ </td>\n </tr>\n</table>\n\nThe most common use-case is to calculate correlation functions of the kind $\\left<A(\\tau)B(0)\\right>$, in which case we use the correlation function solvers that start from the steady state, e.g., the correlation_2op_1t function. These correlation function solvers return a vector or matrix (in general complex) with the correlations as a function of the delay times. \n<a id='steady'></a>\nSteady State Correlation Function\nThe following code demonstrates how to calculate the $\\left<x(t)x(0)\\right>$ correlation for a leaky cavity with three different relaxation rates.",
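The two-step quantum-regression recipe described above can be sketched without QuTiP's helper functions. The following pure-NumPy code (a minimal illustration with assumed parameters: cavity frequency `w`, decay rate `kappa`, Fock truncation `N`) builds the Lindblad Liouvillian for a leaky cavity at zero temperature, takes the vacuum steady state, and computes $\left<x(\tau)x(0)\right> = {\rm Tr}[x\, V(\tau,0)\{x\rho_{\rm ss}\}]$ directly:

```python
import numpy as np

# Assumed parameters for this sketch
N, w, kappa = 6, 1.0, 0.5

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator (truncated)
ad = a.conj().T
H = w * ad @ a
x = a + ad
I = np.eye(N)

# Liouvillian on column-stacked vec(rho): vec(A rho B) = (B^T kron A) vec(rho)
spre = lambda A: np.kron(I, A)               # vec(A rho)
spost = lambda A: np.kron(A.T, I)            # vec(rho A)
L = (-1j * (spre(H) - spost(H))
     + kappa * (np.kron(a.conj(), a)         # a rho a^dag
                - 0.5 * spre(ad @ a) - 0.5 * spost(ad @ a)))

# Propagator V(tau, 0) = exp(L * tau) via eigendecomposition of L
evals, V = np.linalg.eig(L)
Vinv = np.linalg.inv(V)
propagate = lambda v, t: V @ (np.exp(evals * t) * (Vinv @ v))

rho_ss = np.zeros((N, N), dtype=complex)     # vacuum is the T=0 steady state
rho_ss[0, 0] = 1.0

def corr_xx(tau):
    """<x(tau) x(0)> = Tr[x * V(tau, 0){x rho_ss}] (quantum regression theorem)."""
    B_rho = (x @ rho_ss).flatten(order="F")  # use B*rho_ss as the "initial state"
    evolved = propagate(B_rho, tau).reshape((N, N), order="F")
    return np.trace(x @ evolved)

# For the vacuum this equals exp((-1j*w - kappa/2) * tau) analytically
print(corr_xx(1.0))
```

The same two steps (propagate to the steady state, then propagate $B\rho_{\rm ss}$ over the delay axis) are what QuTiP's correlation helpers perform internally with its own solvers.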
"times = np.linspace(0,10.0,200)\na = destroy(10)\nx = a.dag() + a\nH = a.dag() * a\ncorr1 = correlation_2op_1t(H, None, times, [np.sqrt(0.5) * a], x, x)\ncorr2 = correlation_2op_1t(H, None, times, [np.sqrt(1.0) * a], x, x)\ncorr3 = correlation_2op_1t(H, None, times, [np.sqrt(2.0) * a], x, x)\n\nplot(times, np.real(corr1), times, np.real(corr2), times, np.real(corr3))\nlegend(['0.5','1.0','2.0'])\nxlabel(r'Time $t$')\nylabel(r'Correlation $\\left<x(t)x(0)\\right>$')\nshow()",
"<a id='emission'></a>\nEmission Spectrum\nGiven a correlation function $\\left<A(\\tau)B(0)\\right>$ we can define the corresponding power spectrum as\n$$\nS(\\omega) = \\int_{-\\infty}^{\\infty} \\left<A(\\tau)B(0)\\right> e^{-i\\omega\\tau} d\\tau.\n$$\nIn QuTiP, we can calculate $S(\\omega)$ using either the spectrum function, which first calculates the correlation function using the essolve solver and then performs the Fourier transform semi-analytically, or the spectrum_correlation_fft function, which numerically computes the Fourier transform of given correlation data using the FFT. \nThe following example demonstrates how these two functions can be used to obtain the emission power spectrum.",
"N = 4 # number of cavity Fock states\nwc = wa = 1.0 * 2 * np.pi # cavity and atom frequency\ng = 0.1 * 2 * np.pi # coupling strength\nkappa = 0.75 # cavity dissipation rate\ngamma = 0.25 # atom dissipation rate\n\n# Jaynes-Cummings Hamiltonian\na = tensor(destroy(N), qeye(2))\nsm = tensor(qeye(N), destroy(2))\nH = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag())\n\n# collapse operators\nn_th = 0.25\nc_ops = [np.sqrt(kappa * (1 + n_th)) * a, \n np.sqrt(kappa * n_th) * a.dag(), np.sqrt(gamma) * sm]\n\n# calculate the correlation function using the mesolve solver, and then fft to\n# obtain the spectrum. Here we need to make sure to evaluate the correlation\n# function for a sufficiently long time and at a sufficiently high sampling rate\n# so that the discrete Fourier transform (FFT) captures all the features in the\n# resulting spectrum.\ntlist = np.linspace(0, 100, 5000)\ncorr = correlation_2op_1t(H, None, tlist, c_ops, a.dag(), a)\nwlist1, spec1 = spectrum_correlation_fft(tlist, corr)\n\n\n# calculate the power spectrum using spectrum, which internally uses essolve\n# to solve for the dynamics (by default)\nwlist2 = np.linspace(0.25, 1.75, 200) * 2 * np.pi\nspec2 = spectrum(H, wlist2, c_ops, a.dag(), a)\n\n# plot the spectra\nfig, ax = subplots(1, 1)\nax.plot(wlist1 / (2 * np.pi), spec1, 'b', lw=2, label='me+fft method')\nax.plot(wlist2 / (2 * np.pi), spec2, 'r--', lw=2, label='eseries method')\nax.legend()\nax.set_xlabel('Frequency')\nax.set_ylabel('Power spectrum')\nax.set_title('Vacuum Rabi splitting')\nax.set_xlim(wlist2[0]/(2*np.pi), wlist2[-1]/(2*np.pi))\nshow()",
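To see what the FFT route does numerically, here is a small self-contained sketch on synthetic data (our own illustration, assuming the one-sided convention $S(\omega) = 2\,{\rm Re}\int_0^\infty \left<a^\dagger(\tau)a(0)\right> e^{-i\omega\tau} d\tau$): a decaying correlation $e^{(i\omega_0-\gamma/2)\tau}$ should transform to a Lorentzian of width $\gamma$ centred at $\omega_0$.

```python
import numpy as np

# Synthetic one-sided correlation <a^dag(tau) a(0)> = exp((i*w0 - g/2) * tau)
w0, g = 2 * np.pi, 0.25        # assumed center frequency and linewidth
dt = 0.05
taus = np.arange(0, 200, dt)   # long enough that the correlation fully decays
corr = np.exp((1j * w0 - g / 2) * taus)

# S(w) = 2 Re \int_0^inf corr(tau) e^{-i w tau} dtau, discretized via the FFT
wlist = 2 * np.pi * np.fft.fftfreq(len(taus), dt)
S = 2 * np.real(dt * np.fft.fft(corr))

# Analytically S(w) = g / ((g/2)^2 + (w - w0)^2): a Lorentzian with peak 4/g at w0
w_peak = wlist[np.argmax(S)]
print(w_peak, S.max())
```

Undersampling or truncating the correlation before it has decayed smears or ring-distorts this peak, which is why the cell above stresses a long, finely sampled `tlist`.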
"<a id='nonsteady'></a>\nNon-Steady State Correlation Function\nMore generally, we can also calculate correlation functions of the kind $\\left<A(t_1+t_2)B(t_1)\\right>$, i.e., the correlation function of a system that is not in its steady state. In QuTiP, we can evaluate such correlation functions using the function correlation_2op_2t. The default behavior of this function is to return a matrix with the correlations as a function of the two time coordinates ($t_1$ and $t_2$).",
"times = np.linspace(0, 10.0, 200)\na = destroy(10)\nx = a.dag() + a\nH = a.dag() * a\nalpha = 2.5\nrho0 = coherent_dm(10, alpha)\ncorr = correlation_2op_2t(H, rho0, times, times, [np.sqrt(0.25) * a], x, x)\n\npcolor(np.real(corr))\ncolorbar()\nxlabel(r'Time $t_2$')\nylabel(r'Time $t_1$')\ntitle(r'Correlation $\\left<x(t)x(0)\\right>$')\nshow()",
"However, in some cases we might be interested in correlation functions of the form $\\left<A(t_1+t_2)B(t_1)\\right>$, but only as a function of the time coordinate $t_2$. In this case we can also use the correlation_2op_2t function, if we pass the density matrix at time $t_1$ as the second argument, and None as the third argument. The correlation_2op_2t function then returns a vector with the correlation values corresponding to the times in taulist (the fourth argument).\nExample: First-Order Optical Coherence Function\nThis example demonstrates how to calculate a correlation function of the form $\\left<A(\\tau)B(0)\\right>$ for a non-steady initial state. Consider an oscillator that is interacting with a thermal environment. If the oscillator initially is in a coherent state, it will gradually decay to a thermal (incoherent) state. The amount of coherence can be quantified using the first-order optical coherence function \n$$\ng^{(1)}(\\tau) = \\frac{\\left<a^\\dagger(\\tau)a(0)\\right>}{\\sqrt{\\left<a^\\dagger(\\tau)a(\\tau)\\right>\\left<a^\\dagger(0)a(0)\\right>}}.\n$$ \nFor a coherent state $|g^{(1)}(\\tau)| = 1$, and for a completely incoherent (thermal) state $g^{(1)}(\\tau) = 0$. The following code calculates and plots $g^{(1)}(\\tau)$ as a function of $\\tau$.",
"N = 15\ntaus = np.linspace(0,10.0,200)\na = destroy(N)\nH = 2 * np.pi * a.dag() * a\n\n# collapse operators (note: use a separate name for the rate so it is not\n# clobbered by the correlation function G1 computed below)\nkappa = 0.75\nn_th = 2.00 # bath temperature in terms of excitation number\nc_ops = [np.sqrt(kappa * (1 + n_th)) * a, np.sqrt(kappa * n_th) * a.dag()]\n\n# start with a coherent state\nrho0 = coherent_dm(N, 2.0)\n\n# first calculate the occupation number as a function of time\nn = mesolve(H, rho0, taus, c_ops, [a.dag() * a]).expect[0]\n\n# calculate the correlation function G1 and normalize with n to obtain g1\nG1 = correlation_2op_2t(H, rho0, None, taus, c_ops, a.dag(), a)\ng1 = G1 / np.sqrt(n[0] * n)\n\nplot(taus, np.real(g1), 'b')\nplot(taus, n, 'r')\ntitle('Decay of a coherent state to an incoherent (thermal) state')\nxlabel(r'$\\tau$')\nlegend((r'First-order coherence function $g^{(1)}(\\tau)$', \n r'occupation number $n(\\tau)$'))\nshow()",
"For convenience, the steps for calculating the first-order coherence function have been collected in the function coherence_function_g1.\nExample: Second-Order Optical Coherence Function\nThe second-order optical coherence function, with time-delay $\\tau$, is defined as\n$$\n\\displaystyle g^{(2)}(\\tau) = \\frac{\\langle a^\\dagger(0)a^\\dagger(\\tau)a(\\tau)a(0)\\rangle}{\\langle a^\\dagger(0)a(0)\\rangle^2}\n$$\nFor a coherent state $g^{(2)}(\\tau) = 1$; for a thermal state $g^{(2)}(\\tau=0) = 2$ and it decreases as a function of time (bunched photons, they tend to appear together); and for a Fock state with $n$ photons $g^{(2)}(\\tau = 0) = n(n - 1)/n^2 < 1$ and it increases with time (anti-bunched photons, more likely to arrive separated in time). \nTo calculate this type of correlation function with QuTiP, we could use correlation_4op_1t, which computes a correlation function of the form $\\left<A(0)B(\\tau)C(\\tau)D(0)\\right>$ (four operators, one delay-time vector). However, the middle pair of operators are evaluated at the same time $\\tau$, and thus can be simplified to a single operator $E(\\tau)=B(\\tau)C(\\tau)$, and we can instead call the correlation_3op_1t function to compute $\\left<A(0)E(\\tau)D(0)\\right>$. This simplification is done automatically inside the deprecated correlation_4op_1t function, which calls correlation_3op_1t internally.\nThe following code calculates and plots $g^{(2)}(\\tau)$ as a function of $\\tau$ for coherent, thermal, and Fock states.",
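The zero-delay values quoted above follow directly from $g^{(2)}(0) = \langle n(n-1)\rangle/\langle n\rangle^2$, and can be checked from the photon-number distributions alone (a quick NumPy sanity check, independent of QuTiP):

```python
import numpy as np

def g2_zero(pn):
    """g2(0) = <n(n-1)> / <n>^2 computed from a photon-number distribution pn."""
    n = np.arange(len(pn))
    mean_n = np.sum(pn * n)
    return np.sum(pn * n * (n - 1)) / mean_n**2

N, nbar = 120, 2.0          # truncation and mean photon number (assumed)
n = np.arange(N)

# coherent state: Poissonian p(n), built iteratively to avoid large factorials
poisson = np.zeros(N)
poisson[0] = np.exp(-nbar)
for k in range(1, N):
    poisson[k] = poisson[k - 1] * nbar / k

# thermal state: geometric (Bose-Einstein) distribution
thermal = nbar**n / (1 + nbar) ** (n + 1)

# Fock state with 2 photons
fock = np.zeros(N)
fock[2] = 1.0

print(g2_zero(poisson))   # -> 1.0 (coherent)
print(g2_zero(thermal))   # -> 2.0 (thermal)
print(g2_zero(fock))      # -> 0.5 (= n(n-1)/n^2 for n = 2)
```

The full QuTiP calculation below reproduces exactly these values at $\tau = 0$ before dissipation drives every state toward $g^{(2)} = 1$... well, toward the thermal bath's statistics at long delay.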
"N = 25\ntaus = np.linspace(0, 25.0, 200)\na = destroy(N)\nH = 2 * np.pi * a.dag() * a\n\nkappa = 0.25\nn_th = 2.0 # bath temperature in terms of excitation number\nc_ops = [np.sqrt(kappa * (1 + n_th)) * a, np.sqrt(kappa * n_th) * a.dag()]\n\nstates = [{'state': coherent_dm(N, np.sqrt(2)), 'label': \"coherent state\"},\n {'state': thermal_dm(N, 2), 'label': \"thermal state\"},\n {'state': fock_dm(N, 2), 'label': \"Fock state\"}]\n\nfig, ax = subplots(1, 1)\n\nfor state in states:\n rho0 = state['state']\n\n # first calculate the occupation number as a function of time\n n = mesolve(H, rho0, taus, c_ops, [a.dag() * a]).expect[0]\n\n # calculate the correlation function G2 and normalize with n(0)n(t) to\n # obtain g2\n G2 = correlation_3op_1t(H, rho0, taus, c_ops, a.dag(), a.dag() * a, a)\n g2 = G2 / (n[0] * n)\n\n ax.plot(taus, np.real(g2), label=state['label'], lw=2)\n\nax.legend(loc=0)\nax.set_xlabel(r'$\\tau$')\nax.set_ylabel(r'$g^{(2)}(\\tau)$')\nshow()",
"For convenience, the steps for calculating the second-order coherence function have been collected in the function coherence_function_g2.",
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/guide.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
phoebe-project/phoebe2-docs
|
2.3/examples/legacy.ipynb
|
gpl-3.0
|
[
"Comparing PHOEBE 2 vs PHOEBE Legacy\nNOTE: PHOEBE 1.0 legacy is an alternate backend and is not installed with PHOEBE 2. In order to run this backend, you'll need to have PHOEBE 1.0 installed and manually build the python bindings in the phoebe-py directory.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.3,<2.4\"",
"As always, let's do imports and initialize a logger and a new bundle.",
"import phoebe\nfrom phoebe import u\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nphoebe.devel_on() # needed to use WD-style meshing, which isn't fully supported yet\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()\nb['q'] = 0.7\nb['requiv@secondary'] = 0.7",
"Adding Datasets and Compute Options",
"b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')\nb.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvdyn')\nb.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvnum')",
"Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.",
"b.add_compute(compute='phoebe2marching', irrad_method='none', mesh_method='marching')\n\nb.add_compute(compute='phoebe2wd', irrad_method='none', mesh_method='wd', eclipse_method='graham')",
"Now we add compute options for the 'legacy' backend.",
"b.add_compute('legacy', compute='phoebe1', irrad_method='none')",
"And set the two RV datasets to use the correct methods (for both compute options)",
"b.set_value_all('rv_method', dataset='rvdyn', value='dynamical')\n\nb.set_value_all('rv_method', dataset='rvnum', value='flux-weighted')",
"Let's use the external atmospheres available for both phoebe1 and phoebe2",
"b.set_value_all('atm', 'extern_planckint')",
"Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize",
"b.set_value_all('gridsize', 30)",
"Let's also disable other special effects, such as heating, gravitational redshift, and light-travel time effects.",
"b.set_value_all('ld_mode', 'manual')\nb.set_value_all('ld_func', 'logarithmic')\nb.set_value_all('ld_coeffs', [0.,0.])\n\nb.set_value_all('rv_grav', False)\n\nb.set_value_all('ltte', False)",
"Finally, let's compute all of our models",
"b.run_compute(compute='phoebe2marching', model='phoebe2marchingmodel')\n\nb.run_compute(compute='phoebe2wd', model='phoebe2wdmodel')\n\nb.run_compute(compute='phoebe1', model='phoebe1model')",
"Plotting\nLight Curve",
"colors = {'phoebe2marchingmodel': 'g', 'phoebe2wdmodel': 'b', 'phoebe1model': 'r'}\nafig, mplfig = b['lc01'].plot(c=colors, legend=True, show=True)",
"Now let's plot the residuals between these two models",
"artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2marchingmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'g-')\nartist, = plt.plot(b.get_value('fluxes@lc01@phoebe2wdmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'b-')\nartist = plt.axhline(0.0, linestyle='dashed', color='k')\nylim = plt.ylim(-0.003, 0.003)",
"Dynamical RVs\nSince the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. Here we'll just choose one to plot.",
"afig, mplfig = b.filter(dataset='rvdyn', model=['phoebe2wdmodel', 'phoebe1model']).plot(c=colors, legend=True, show=True)",
"And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)",
"artist, = plt.plot(b.get_value('rvs@rvdyn@primary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@primary@phoebe1model'), color='b', ls=':')\nartist, = plt.plot(b.get_value('rvs@rvdyn@secondary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@secondary@phoebe1model'), color='b', ls='-.')\nartist = plt.axhline(0.0, linestyle='dashed', color='k')\nylim = plt.ylim(-1.5e-12, 1.5e-12)",
"Numerical (flux-weighted) RVs",
"afig, mplfig = b.filter(dataset='rvnum').plot(c=colors, show=True)\n\nartist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2marchingmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='g', ls=':')\nartist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2marchingmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='g', ls='-.')\n\nartist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2wdmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='b', ls=':')\nartist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2wdmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='b', ls='-.')\n\nartist = plt.axhline(0.0, linestyle='dashed', color='k')\nylim = plt.ylim(-1e-2, 1e-2)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
brockk/clintrials
|
tutorials/matchpoint/Utility.ipynb
|
gpl-3.0
|
[
"Implementing the EffTox Dose-Finding Design in the Matchpoint Trials\nThis tutorial complements the manuscript Implementing the EffTox Dose-Finding Design in the Matchpoint Trial (Brock et al., in submission). Please consult the paper for the clinical background, the methodology details, and a full explanation of the terminology.\nPosterior Utility\nIn this notebook, we illustrate posterior utility plots of selected doses using the EffTox design in the seamless phase I/II dose-finding clinical trial, Matchpoint.",
"import numpy as np\nfrom scipy.stats import norm\n\nfrom clintrials.dosefinding.efftox import EffTox, LpNormCurve\n\n%matplotlib inline\n\nreal_doses = [7.5, 15, 30, 45]\ntrial_size = 30\ncohort_size = 3\nfirst_dose = 3\nprior_tox_probs = (0.025, 0.05, 0.1, 0.25)\nprior_eff_probs = (0.2, 0.3, 0.5, 0.6)\ntox_cutoff = 0.40\neff_cutoff = 0.45\ntox_certainty = 0.05\neff_certainty = 0.03\n\nmu_t_mean, mu_t_sd = -5.4317, 2.7643\nbeta_t_mean, beta_t_sd = 3.1761, 2.7703\nmu_e_mean, mu_e_sd = -0.8442, 1.9786\nbeta_e_1_mean, beta_e_1_sd = 1.9857, 1.9820\nbeta_e_2_mean, beta_e_2_sd = 0, 0.2\npsi_mean, psi_sd = 0, 1\nefftox_priors = [\n norm(loc=mu_t_mean, scale=mu_t_sd),\n norm(loc=beta_t_mean, scale=beta_t_sd),\n norm(loc=mu_e_mean, scale=mu_e_sd),\n norm(loc=beta_e_1_mean, scale=beta_e_1_sd),\n norm(loc=beta_e_2_mean, scale=beta_e_2_sd),\n norm(loc=psi_mean, scale=psi_sd),\n ]",
"The above parameters are explained in the manuscript.",
"hinge_points = [(0.4, 0), (1, 0.7), (0.5, 0.4)]\nmetric = LpNormCurve(hinge_points[0][0], hinge_points[1][1], hinge_points[2][0], hinge_points[2][1])\n\net = EffTox(real_doses, efftox_priors, tox_cutoff, eff_cutoff, tox_certainty, eff_certainty, metric, trial_size,\n first_dose)",
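For intuition, the LpNormCurve metric above parametrises the Thall & Cook trade-off contour. Under the usual EffTox desirability definition (our own sketch following Thall & Cook 2004, not the clintrials implementation itself), $u(\pi_E, \pi_T) = 1 - [((1-\pi_E)/(1-\pi_E^*))^p + (\pi_T/\pi_T^*)^p]^{1/p}$, with the exponent $p$ chosen so that the third hinge point lies on the $u = 0$ contour:

```python
# Sketch of the trade-off metric behind LpNormCurve (assumed formula, following
# Thall & Cook 2004; not the clintrials source code).
eff_star, tox_star = 0.4, 0.7      # from hinge points (0.4, 0) and (1, 0.7)
hinge_eff, hinge_tox = 0.5, 0.4    # third hinge point, on the u = 0 contour

def utility(prob_eff, prob_tox, p):
    r_eff = (1 - prob_eff) / (1 - eff_star)
    r_tox = prob_tox / tox_star
    return 1 - (r_eff**p + r_tox**p) ** (1 / p)

# Solve for p by bisection: utility at the third hinge increases with p
lo, hi = 1.0, 20.0
for _ in range(200):
    mid = (lo + hi) / 2
    if utility(hinge_eff, hinge_tox, mid) < 0:
        lo = mid
    else:
        hi = mid
p = (lo + hi) / 2

print(p)                        # ~2.07 for the Matchpoint hinge points
print(utility(0.4, 0.0, p))     # -> 0.0 (first hinge lies on the contour)
print(utility(1.0, 0.7, p))     # -> 0.0 (second hinge too)
```

Doses with larger $u$ sit on more desirable contours (higher efficacy for the same toxicity), which is what the posterior utility densities below are distributions over.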
"The EffTox class is an object-oriented implementation of the trial design by Thall & Cook (Thall, P. F., & Cook, J. D. (2004). Dose-Finding Based on Efficacy-Toxicity Trade-Offs. Biometrics, 60(3), 684–693.)\nAfter observing outcomes 3NTE\nOutcomes for a patient are represented by a three-item tuple, where:\n\nthe first item is the 1-based index of the dose given (i.e., 3 is dose-level 3);\nthe second item is 1 if toxicity happened, else 0;\nthe third item is 1 if efficacy happened, else 0.\n\nOutcomes for several patients are represented as lists:",
"outcomes1 = [(3, 0, 0), (3, 1, 0), (3, 0, 1)]\n\nnp.random.seed(123)\net.update(outcomes1, n=10**6)",
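As a quick sanity check on this encoding, plain Python can tally patients, toxicities, and efficacies per dose-level (a hypothetical helper for illustration, not part of clintrials):

```python
from collections import defaultdict

def tally(outcomes):
    """Count patients, toxicities and efficacies per 1-based dose-level."""
    counts = defaultdict(lambda: {"n": 0, "tox": 0, "eff": 0})
    for dose, tox, eff in outcomes:
        counts[dose]["n"] += 1
        counts[dose]["tox"] += tox
        counts[dose]["eff"] += eff
    return dict(counts)

outcomes1 = [(3, 0, 0), (3, 1, 0), (3, 0, 1)]   # 3NTE: one N, one T, one E at dose 3
print(tally(outcomes1))   # -> {3: {'n': 3, 'tox': 1, 'eff': 1}}
```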
"In this instance, escalation to dose-level 4 is recommended.",
"et.tabulate()",
"We see that all doses are admissible in this instance, and that the utilities of dose-levels 3 and 4 are very similar. Dose Ambivalence is the likely result, i.e. after observing 3NTE in the Matchpoint trial, the design would have recommended dose 3 or dose 4. The reason is made plain by the plot below.",
"et.plot_posterior_utility_density(include_doses=[3,4], boot_samps=1000)",
"The posterior distributions of the utility of doses 3 and 4 largely occupy the same space so picking between them is difficult. In the Ambivalence.ipynb tutorial, we demonstrate a method for dealing with dose ambivalence.\nThe plot above is similar (but not identical) to Figure 2 in the publication. I used the R package ggplot2 to produce the plots for the paper because the R package is more mature than the Python version. For instance, I could not get a legend to appear in Python. \nAfter observing outcomes 2NNN 3ENN 4EBE 3TEE 4NEE",
"outcomes2 = [\n (2, 0, 0), (2, 0, 0), (2, 0, 0),\n (3, 0, 1), (3, 0, 0), (3, 0, 0),\n (4, 0, 1), (4, 1, 1), (4, 0, 1),\n (3, 1, 0), (3, 0, 1), (3, 0, 1),\n (4, 0, 0), (4, 0, 1), (4, 0, 1),\n ]\n\net.reset()\net.update(outcomes2, n=10**6)\n\net.tabulate()",
"Dose 4 is now clearly the preferable dose.",
"et.plot_posterior_utility_density(include_doses=[3,4], boot_samps=1000)",
"That is reflected in the estimates of the posterior utility curves.\nThe plot above is similar (but not identical) to Figure 3 in the publication."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n
|
site/ko/guide/keras/customizing_what_happens_in_fit.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Customizing what happens in Model.fit\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">View on TensorFlow.org</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/customizing_what_happens_in_fit.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/customizing_what_happens_in_fit.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a>\n</td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/keras/customizing_what_happens_in_fit.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a></td>\n</table>\n\nIntroduction\nWhen you're doing supervised learning, you can use fit() and everything works smoothly.\nWhen you need to write your own training loop from scratch, you can use GradientTape and take control of every little detail.\nBut what if you need a custom training algorithm, but you still want to benefit from the convenient features of fit(), such as callbacks, built-in distribution support, or step fusing?\nA core principle of Keras is progressive disclosure of complexity. You should always be able to get into lower-level workflows in a gradual way. You shouldn't fall off a cliff if the high-level functionality doesn't exactly match your use case. You should be able to gain more control over the small details while retaining a commensurate amount of high-level convenience.\nWhen you need to customize what fit() does, you should override the training step function of the Model class. This is the function that is called by fit() for every batch of data. You will then be able to call fit() as usual, and it will be running your own learning algorithm.\nNote that this pattern does not prevent you from building models with the Functional API. You can do this whether you're building Sequential models, Functional API models, or subclassed models.\nLet's see how that works.\nSetup\nRequires TensorFlow 2.2 or later.",
"import tensorflow as tf\nfrom tensorflow import keras",
"A first simple example\nLet's start from a simple example:\n\nWe create a new class that subclasses keras.Model.\nWe just override the method train_step(self, data).\nWe return a dictionary mapping metric names (including the loss) to their current value.\n\nThe input argument data is what gets passed to fit as training data:\n\nIf you pass NumPy arrays, by calling fit(x, y, ...), then data will be the tuple (x, y).\nIf you pass a tf.data.Dataset, by calling fit(dataset, ...), then data will be what gets yielded by dataset at each batch.\n\nIn the body of the train_step method, we implement a regular training update, similar to what you are already familiar with. Importantly, we compute the loss via self.compiled_loss, which wraps the loss function(s) that were passed to compile().\nSimilarly, we call self.compiled_metrics.update_state(y, y_pred) to update the state of the metrics that were passed in compile(), and we query results from self.metrics at the end to retrieve their current value.",
"class CustomModel(keras.Model):\n def train_step(self, data):\n # Unpack the data. Its structure depends on your model and\n # on what you pass to `fit()`.\n x, y = data\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True) # Forward pass\n # Compute the loss value\n # (the loss function is configured in `compile()`)\n loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)\n\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n # Update weights\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n # Update metrics (includes the metric that tracks the loss)\n self.compiled_metrics.update_state(y, y_pred)\n # Return a dict mapping metric names to current value\n return {m.name: m.result() for m in self.metrics}\n",
"Let's try this out:",
"import numpy as np\n\n# Construct and compile an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"mae\"])\n\n# Just use `fit` as usual\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nmodel.fit(x, y, epochs=3)",
"Going lower-level\nNaturally, you could just skip passing a loss function in compile(), and instead do everything <em>manually</em> in <code>train_step</code>. Likewise for metrics.\nHere's a lower-level example that only uses compile() to configure the optimizer:\n\nWe start by creating Metric instances to track our loss and an MAE score.\nWe implement a custom train_step() that updates the state of these metrics (by calling update_state() on them), then queries them (via result()) to return their current average value, to be displayed by the progress bar and to be passed to any callback.\nNote that we would need to call reset_states() on our metrics between each epoch! Otherwise calling result() would return an average since the start of training, whereas we usually work with per-epoch averages. Thankfully, the framework can do that for us: just list any metric you want to reset in the metrics property of the model. The model will call reset_states() on any object listed here at the beginning of each fit() epoch or at the beginning of a call to evaluate().",
"loss_tracker = keras.metrics.Mean(name=\"loss\")\nmae_metric = keras.metrics.MeanAbsoluteError(name=\"mae\")\n\n\nclass CustomModel(keras.Model):\n def train_step(self, data):\n x, y = data\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True) # Forward pass\n # Compute our own loss\n loss = keras.losses.mean_squared_error(y, y_pred)\n\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n\n # Update weights\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n\n # Compute our own metrics\n loss_tracker.update_state(loss)\n mae_metric.update_state(y, y_pred)\n return {\"loss\": loss_tracker.result(), \"mae\": mae_metric.result()}\n\n @property\n def metrics(self):\n # We list our `Metric` objects here so that `reset_states()` can be\n # called automatically at the start of each epoch\n # or at the start of `evaluate()`.\n # If you don't implement this property, you have to call\n # `reset_states()` yourself at the time of your choosing.\n return [loss_tracker, mae_metric]\n\n\n# Construct an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\n\n# We don't pass a loss or metrics here.\nmodel.compile(optimizer=\"adam\")\n\n# Just use `fit` as usual -- you can use callbacks, etc.\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nmodel.fit(x, y, epochs=5)\n",
"Supporting sample_weight & class_weight\nYou may have noticed that our first basic example didn't make any mention of sample weighting. If you want to support the fit() arguments sample_weight and class_weight, you'd simply do the following:\n\nUnpack sample_weight from the data argument.\nPass it to compiled_loss & compiled_metrics (of course, you could also just apply it manually if you don't rely on compile() for losses & metrics).\n\nThat's it. That's the list.",
"class CustomModel(keras.Model):\n def train_step(self, data):\n # Unpack the data. Its structure depends on your model and\n # on what you pass to `fit()`.\n if len(data) == 3:\n x, y, sample_weight = data\n else:\n sample_weight = None\n x, y = data\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True) # Forward pass\n # Compute the loss value.\n # The loss function is configured in `compile()`.\n loss = self.compiled_loss(\n y,\n y_pred,\n sample_weight=sample_weight,\n regularization_losses=self.losses,\n )\n\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n\n # Update weights\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n\n # Update the metrics.\n # Metrics are configured in `compile()`.\n self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight)\n\n # Return a dict mapping metric names to current value.\n # Note that it will include the loss (tracked in self.metrics).\n return {m.name: m.result() for m in self.metrics}\n\n\n# Construct and compile an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"mae\"])\n\n# You can now use sample_weight argument\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nsw = np.random.random((1000, 1))\nmodel.fit(x, y, sample_weight=sw, epochs=3)",
"Providing your own evaluation step\nWhat if you want to do the same for calls to model.evaluate()? Then you would override test_step in exactly the same way. Here's what it looks like:",
"class CustomModel(keras.Model):\n def test_step(self, data):\n # Unpack the data\n x, y = data\n # Compute predictions\n y_pred = self(x, training=False)\n # Updates the metrics tracking the loss\n self.compiled_loss(y, y_pred, regularization_losses=self.losses)\n # Update the metrics.\n self.compiled_metrics.update_state(y, y_pred)\n # Return a dict mapping metric names to current value.\n # Note that it will include the loss (tracked in self.metrics).\n return {m.name: m.result() for m in self.metrics}\n\n\n# Construct an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\nmodel.compile(loss=\"mse\", metrics=[\"mae\"])\n\n# Evaluate with our custom test_step\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nmodel.evaluate(x, y)",
"Wrapping up: an end-to-end GAN example\nLet's walk through an end-to-end example that leverages everything you just learned.\nLet's consider:\n\nA generator network meant to generate 28x28x1 images.\nA discriminator network meant to classify 28x28x1 images into two classes (\"fake\" and \"real\").\nOne optimizer for each.\nA loss function to train the discriminator.",
"from tensorflow.keras import layers\n\n# Create the discriminator\ndiscriminator = keras.Sequential(\n [\n keras.Input(shape=(28, 28, 1)),\n layers.Conv2D(64, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(128, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.GlobalMaxPooling2D(),\n layers.Dense(1),\n ],\n name=\"discriminator\",\n)\n\n# Create the generator\nlatent_dim = 128\ngenerator = keras.Sequential(\n [\n keras.Input(shape=(latent_dim,)),\n # We want to generate 128 coefficients to reshape into a 7x7x128 map\n layers.Dense(7 * 7 * 128),\n layers.LeakyReLU(alpha=0.2),\n layers.Reshape((7, 7, 128)),\n layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(1, (7, 7), padding=\"same\", activation=\"sigmoid\"),\n ],\n name=\"generator\",\n)",
"Here's a feature-complete GAN class, overriding compile() to use its own signature, and implementing the entire GAN algorithm in 17 lines in train_step:",
"class GAN(keras.Model):\n def __init__(self, discriminator, generator, latent_dim):\n super(GAN, self).__init__()\n self.discriminator = discriminator\n self.generator = generator\n self.latent_dim = latent_dim\n\n def compile(self, d_optimizer, g_optimizer, loss_fn):\n super(GAN, self).compile()\n self.d_optimizer = d_optimizer\n self.g_optimizer = g_optimizer\n self.loss_fn = loss_fn\n\n def train_step(self, real_images):\n if isinstance(real_images, tuple):\n real_images = real_images[0]\n # Sample random points in the latent space\n batch_size = tf.shape(real_images)[0]\n random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))\n\n # Decode them to fake images\n generated_images = self.generator(random_latent_vectors)\n\n # Combine them with real images\n combined_images = tf.concat([generated_images, real_images], axis=0)\n\n # Assemble labels discriminating real from fake images\n labels = tf.concat(\n [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0\n )\n # Add random noise to the labels - important trick!\n labels += 0.05 * tf.random.uniform(tf.shape(labels))\n\n # Train the discriminator\n with tf.GradientTape() as tape:\n predictions = self.discriminator(combined_images)\n d_loss = self.loss_fn(labels, predictions)\n grads = tape.gradient(d_loss, self.discriminator.trainable_weights)\n self.d_optimizer.apply_gradients(\n zip(grads, self.discriminator.trainable_weights)\n )\n\n # Sample random points in the latent space\n random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))\n\n # Assemble labels that say \"all real images\"\n misleading_labels = tf.zeros((batch_size, 1))\n\n # Train the generator (note that we should *not* update the weights\n # of the discriminator)!\n with tf.GradientTape() as tape:\n predictions = self.discriminator(self.generator(random_latent_vectors))\n g_loss = self.loss_fn(misleading_labels, predictions)\n grads = tape.gradient(g_loss, self.generator.trainable_weights)\n 
self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))\n return {\"d_loss\": d_loss, \"g_loss\": g_loss}\n",
"Let's test-drive it:",
"# Prepare the dataset. We use both the training & test MNIST digits.\nbatch_size = 64\n(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()\nall_digits = np.concatenate([x_train, x_test])\nall_digits = all_digits.astype(\"float32\") / 255.0\nall_digits = np.reshape(all_digits, (-1, 28, 28, 1))\ndataset = tf.data.Dataset.from_tensor_slices(all_digits)\ndataset = dataset.shuffle(buffer_size=1024).batch(batch_size)\n\ngan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim)\ngan.compile(\n d_optimizer=keras.optimizers.Adam(learning_rate=0.0003),\n g_optimizer=keras.optimizers.Adam(learning_rate=0.0003),\n loss_fn=keras.losses.BinaryCrossentropy(from_logits=True),\n)\n\n# To limit the execution time, we only train on 100 batches. You can train on\n# the entire dataset. You will need about 20 epochs to get nice results.\ngan.fit(dataset.take(100), epochs=1)",
"The basic ideas behind deep learning are simple, so there is no reason their implementation should be painful."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n
|
site/en-snapshot/quantum/tutorials/gradients.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Calculate gradients\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n  <td>\n    <a target=\"_blank\" href=\"https://www.tensorflow.org/quantum/tutorials/gradients\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n  </td>\n  <td>\n    <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n  </td>\n  <td>\n    <a target=\"_blank\" href=\"https://github.com/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n  </td>\n  <td>\n    <a href=\"https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n  </td>\n</table>\n\nThis tutorial explores gradient calculation algorithms for the expectation values of quantum circuits.\nCalculating the gradient of the expectation value of an observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not always have analytic gradient formulas that are easy to write down. As a result, different quantum gradient calculation methods come in handy in different scenarios. This tutorial compares and contrasts two different differentiation schemes.\nSetup",
"!pip install tensorflow==2.7.0",
"Install TensorFlow Quantum:",
"!pip install tensorflow-quantum\n\n# Update package resources to account for version changes.\nimport importlib, pkg_resources\nimportlib.reload(pkg_resources)",
"Now import TensorFlow and the module dependencies:",
"import tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit",
"1. Preliminary\nLet's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:",
"qubit = cirq.GridQubit(0, 0)\nmy_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))\nSVGCircuit(my_circuit)",
"Along with an observable:",
"pauli_x = cirq.X(qubit)\npauli_x",
"Looking at this operator you know that $⟨Y(\\alpha)| X | Y(\\alpha)⟩ = \\sin(\\pi \\alpha)$",
"def my_expectation(op, alpha):\n \"\"\"Compute ⟨Y(alpha)| `op` | Y(alpha)⟩\"\"\"\n params = {'alpha': alpha}\n sim = cirq.Simulator()\n final_state_vector = sim.simulate(my_circuit, params).final_state_vector\n return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real\n\n\nmy_alpha = 0.3\nprint(\"Expectation=\", my_expectation(pauli_x, my_alpha))\nprint(\"Sin Formula=\", np.sin(np.pi * my_alpha))",
"and if you define $f_{1}(\\alpha) = ⟨Y(\\alpha)| X | Y(\\alpha)⟩$ then $f_{1}^{'}(\\alpha) = \\pi \\cos(\\pi \\alpha)$. Let's check this:",
"def my_grad(obs, alpha, eps=0.01):\n grad = 0\n f_x = my_expectation(obs, alpha)\n f_x_prime = my_expectation(obs, alpha + eps)\n return ((f_x_prime - f_x) / eps).real\n\n\nprint('Finite difference:', my_grad(pauli_x, my_alpha))\nprint('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))",
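As a side note, the forward difference used in `my_grad` above has O(eps) truncation error; a symmetric (central) difference reduces this to O(eps²). A minimal sketch using the closed form sin(πα) in place of the simulator, to keep it self-contained:

```python
import numpy as np

def central_diff(f, x, eps=0.01):
    # Symmetric difference quotient: truncation error is O(eps**2) instead of O(eps).
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Check against the analytic derivative of f(alpha) = sin(pi * alpha).
f = lambda a: np.sin(np.pi * a)
alpha = 0.3
approx = central_diff(f, alpha)
exact = np.pi * np.cos(np.pi * alpha)
print("Central difference:", approx)
print("Cosine formula:    ", exact)
```

The same trick carries over to `my_grad` by evaluating at `alpha - eps` as well as `alpha + eps`.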
"2. The need for a differentiator\nWith larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the tfq.differentiators.Differentiator class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:",
"expectation_calculation = tfq.layers.Expectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nexpectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=[[my_alpha]])",
"However, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate:",
"sampled_expectation_calculation = tfq.layers.SampledExpectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nsampled_expectation_calculation(my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=[[my_alpha]])",
"This can quickly compound into a serious accuracy problem when it comes to gradients:",
"# Make input_points = [batch_size, 1] array.\ninput_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)\nexact_outputs = expectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=input_points)\nimperfect_outputs = sampled_expectation_calculation(my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=input_points)\nplt.title('Forward Pass Values')\nplt.xlabel('$x$')\nplt.ylabel('$f(x)$')\nplt.plot(input_points, exact_outputs, label='Analytic')\nplt.plot(input_points, imperfect_outputs, label='Sampled')\nplt.legend()\n\n# Gradients are a much different story.\nvalues_tensor = tf.convert_to_tensor(input_points)\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n exact_outputs = expectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\nanalytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n imperfect_outputs = sampled_expectation_calculation(\n my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\nsampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)\n\nplt.title('Gradient Values')\nplt.xlabel('$x$')\nplt.ylabel('$f^{\\'}(x)$')\nplt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')\nplt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')\nplt.legend()",
"Here you can see that although the finite difference formula is fast at computing gradients in the analytical case, for the sampling-based methods it is far too noisy. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but does perform much better in the real-world sample-based case:",
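To see why finite differences amplify sampling noise, consider this toy simulation. The Gaussian noise model on each sampled expectation is an assumption for illustration only; under it, the standard deviation of a forward-difference estimate scales like √2·σ/ε, so shrinking ε amplifies the noise:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.02  # assumed std of shot noise on each sampled expectation value
f = lambda a: np.sin(np.pi * a)  # closed form of the expectation from earlier

def noisy_fd_grad(alpha, eps, n_trials=2000):
    # Forward difference with independent noise on each of the two evaluations.
    noise = rng.normal(0.0, sigma, size=(n_trials, 2))
    return (f(alpha + eps) + noise[:, 0] - f(alpha) - noise[:, 1]) / eps

for eps in (0.1, 0.01):
    g = noisy_fd_grad(0.3, eps)
    print("eps=%g: grad std ~ %.3f (predicted %.3f)"
          % (eps, g.std(), np.sqrt(2) * sigma / eps))
```

A 10x smaller step size yields a roughly 10x noisier gradient estimate, which is exactly the compounding effect visible in the plot above.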
"# A smarter differentiation scheme.\ngradient_safe_sampled_expectation = tfq.layers.SampledExpectation(\n differentiator=tfq.differentiators.ParameterShift())\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n imperfect_outputs = gradient_safe_sampled_expectation(\n my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nsampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)\n\nplt.title('Gradient Values')\nplt.xlabel('$x$')\nplt.ylabel('$f^{\\'}(x)$')\nplt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')\nplt.plot(input_points, sampled_param_shift_gradients, label='Sampled')\nplt.legend()",
"From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more \"real world\" setting. Faster methods like finite difference are great when you are doing analytical calculations and want higher throughput, but aren't yet concerned with the device viability of your algorithm.\n3. Multiple observables\nLet's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.",
"pauli_z = cirq.Z(qubit)\npauli_z",
"If this observable is used with the same circuit as before, then you have $f_{2}(\\alpha) = ⟨Y(\\alpha)| Z | Y(\\alpha)⟩ = \\cos(\\pi \\alpha)$ and $f_{2}^{'}(\\alpha) = -\\pi \\sin(\\pi \\alpha)$. Perform a quick check:",
"test_value = 0.\n\nprint('Finite difference:', my_grad(pauli_z, test_value))\nprint('Sin formula: ', -np.pi * np.sin(np.pi * test_value))",
"It's a match (close enough).\nNow if you define $g(\\alpha) = f_{1}(\\alpha) + f_{2}(\\alpha)$ then $g'(\\alpha) = f_{1}^{'}(\\alpha) + f^{'}_{2}(\\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$.\nThis means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).",
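Using the closed forms $f_1(\alpha) = \sin(\pi\alpha)$ and $f_2(\alpha) = \cos(\pi\alpha)$ from above, the sum rule can be checked numerically with plain numpy, independently of TFQ:

```python
import numpy as np

alpha = 0.0
# g = f1 + f2, with the closed forms from the tutorial.
g = lambda a: np.sin(np.pi * a) + np.cos(np.pi * a)

# Analytic gradient: f1'(a) + f2'(a) = pi*cos(pi a) - pi*sin(pi a).
grad_sum = np.pi * np.cos(np.pi * alpha) - np.pi * np.sin(np.pi * alpha)

# Central-difference check.
eps = 1e-6
numeric = (g(alpha + eps) - g(alpha - eps)) / (2 * eps)
print("Numeric: ", numeric)
print("Analytic:", grad_sum)
```

At α = 0 both evaluate to π, matching the sum of the individual gradients printed by the TFQ cell below.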
"sum_of_outputs = tfq.layers.Expectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nsum_of_outputs(my_circuit,\n operators=[pauli_x, pauli_z],\n symbol_names=['alpha'],\n symbol_values=[[test_value]])",
"Here you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. Now when you take the gradient:",
"test_value_tensor = tf.convert_to_tensor([[test_value]])\n\nwith tf.GradientTape() as g:\n g.watch(test_value_tensor)\n outputs = sum_of_outputs(my_circuit,\n operators=[pauli_x, pauli_z],\n symbol_names=['alpha'],\n symbol_values=test_value_tensor)\n\nsum_of_gradients = g.gradient(outputs, test_value_tensor)\n\nprint(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))\nprint(sum_of_gradients.numpy())",
"Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow.\n4. Advanced usage\nAll differentiators that exist inside of TensorFlow Quantum subclass tfq.differentiators.Differentiator. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement get_gradient_circuits, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload differentiate_analytic and differentiate_sampled; the class tfq.differentiators.Adjoint takes this route.\nThe following uses TensorFlow Quantum to implement the gradient of a circuit. You will use a small example of parameter shifting.\nRecall the circuit you defined above, $|\\alpha⟩ = Y^{\\alpha}|0⟩$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\\alpha) = ⟨\\alpha|X|\\alpha⟩$. Using parameter shift rules, for this circuit, you can find that the derivative is\n$$\\frac{\\partial}{\\partial \\alpha} f(\\alpha) = \\frac{\\pi}{2} f\\left(\\alpha + \\frac{1}{2}\\right) - \\frac{ \\pi}{2} f\\left(\\alpha - \\frac{1}{2}\\right)$$\nThe get_gradient_circuits function returns the components of this derivative.",
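Before coding the differentiator, the shift rule is worth checking against the closed form $f(\alpha) = \sin(\pi\alpha)$ from earlier. For this circuit the parameter shift rule is exact, not a finite-difference approximation:

```python
import numpy as np

f = lambda a: np.sin(np.pi * a)  # closed form of <alpha|X|alpha> from the tutorial

def param_shift_grad(alpha):
    # (pi/2) * [f(a + 1/2) - f(a - 1/2)] -- the rule stated above.
    return np.pi / 2 * (f(alpha + 0.5) - f(alpha - 0.5))

for a in (0.0, 0.3, 0.7):
    print(a, param_shift_grad(a), np.pi * np.cos(np.pi * a))
```

Since sin(πα ± π/2) = ±cos(πα), the two shifted evaluations combine to exactly π cos(πα), the analytic derivative.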
"class MyDifferentiator(tfq.differentiators.Differentiator):\n \"\"\"A Toy differentiator for <Y^alpha | X |Y^alpha>.\"\"\"\n\n def __init__(self):\n pass\n\n def get_gradient_circuits(self, programs, symbol_names, symbol_values):\n \"\"\"Return circuits to compute gradients for given forward pass circuits.\n \n Every gradient on a quantum computer can be computed via measurements\n of transformed quantum circuits. Here, you implement a custom gradient\n for a specific circuit. For a real differentiator, you will need to\n implement this function in a more general way. See the differentiator\n implementations in the TFQ library for examples.\n \"\"\"\n\n # The two terms in the derivative are the same circuit...\n batch_programs = tf.stack([programs, programs], axis=1)\n\n # ... with shifted parameter values.\n shift = tf.constant(1/2)\n forward = symbol_values + shift\n backward = symbol_values - shift\n batch_symbol_values = tf.stack([forward, backward], axis=1)\n \n # Weights are the coefficients of the terms in the derivative.\n num_program_copies = tf.shape(batch_programs)[0]\n batch_weights = tf.tile(tf.constant([[[np.pi/2, -np.pi/2]]]),\n [num_program_copies, 1, 1])\n\n # The index map simply says which weights go with which circuits.\n batch_mapper = tf.tile(\n tf.constant([[[0, 1]]]), [num_program_copies, 1, 1])\n\n return (batch_programs, symbol_names, batch_symbol_values,\n batch_weights, batch_mapper)",
"The Differentiator base class uses the components returned from get_gradient_circuits to calculate the derivative, as in the parameter shift formula you saw above. This new differentiator can now be used with existing tfq.layer objects:",
"custom_dif = MyDifferentiator()\ncustom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)\n\n# Now let's get the gradients with finite diff.\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n exact_outputs = expectation_calculation(my_circuit,\n operators=[pauli_x],\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nanalytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)\n\n# Now let's get the gradients with custom diff.\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n my_outputs = custom_grad_expectation(my_circuit,\n operators=[pauli_x],\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nmy_gradients = g.gradient(my_outputs, values_tensor)\n\nplt.subplot(1, 2, 1)\nplt.title('Exact Gradient')\nplt.plot(input_points, analytic_finite_diff_gradients.numpy())\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.subplot(1, 2, 2)\nplt.title('My Gradient')\nplt.plot(input_points, my_gradients.numpy())\nplt.xlabel('x')",
"This new differentiator can now be used to generate differentiable ops.\nKey Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.",
"# Create a noisy sample based expectation op.\nexpectation_sampled = tfq.get_sampled_expectation_op(\n    cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))\n\n# Make it differentiable with your differentiator:\n# Remember to refresh the differentiator before attaching the new op\ncustom_dif.refresh()\ndifferentiable_op = custom_dif.generate_differentiable_op(\n    sampled_op=expectation_sampled)\n\n# Prep op inputs.\ncircuit_tensor = tfq.convert_to_tensor([my_circuit])\nop_tensor = tfq.convert_to_tensor([[pauli_x]])\nsingle_value = tf.convert_to_tensor([[my_alpha]])\nnum_samples_tensor = tf.convert_to_tensor([[5000]])\n\nwith tf.GradientTape() as g:\n    g.watch(single_value)\n    forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,\n                                       op_tensor, num_samples_tensor)\n\nmy_gradients = g.gradient(forward_output, single_value)\n\nprint('---TFQ---')\nprint('Forward: ', forward_output.numpy())\nprint('Gradient:', my_gradients.numpy())\nprint('---Original---')\nprint('Forward: ', my_expectation(pauli_x, my_alpha))\nprint('Gradient:', my_grad(pauli_x, my_alpha))",
"Success: Now you can use all the differentiators that TensorFlow Quantum has to offer—and define your own."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
peterdalle/mij
|
3 News robot/Earthquake news robot.ipynb
|
gpl-3.0
|
[
"News robot\nA simple and stupid news robot written in Python that takes some earthquake data (which we will just assume) and writes a news article based on that data.\n1. Input data\nLet's say we have some earthquake data that comes from somewhere.",
"# Import datetime to use dates.\nfrom datetime import *\n\n# The data comes in a dictionary (key-value pairs). Note that it looks just like JSON!\ndata = {\n \"Richter\": 7.5,\n \"Latitud\": 12,\n \"Longitud\": 12,\n \"City\": \"Gothenburg\",\n \"Country\": \"Sweden\",\n \"Datetime\": \"2017-02-01 22:15:43\"\n }",
"2. Check for errors\nDon't assume that the data is always good. Create tests to make sure.",
"# We start with the assumption that the data is news worthy.\nIsNewsWorthy = True\n\n# Are there errors in the data?\nif data[\"Richter\"] > 100:\n    # An earthquake with a magnitude of 100 seems unlikely, so we ignore it.\n    IsNewsWorthy = False\n\n# We are only interested in earthquakes in Sweden.\nif data[\"Country\"] != \"Sweden\":\n    IsNewsWorthy = False",
"3. Create text\nBasically, it's just a lot of if-statements.\nHow to make the code easier to read:\n\nDon't nest if-statements inside each other too much. Use elif instead.\nUse multiline strings, with \"\"\" around them.\nUse .format on strings.\n\ntext = \"My name is {0} and I am {1} years old\"\ntext = text.format(Name, Age)\nWhich is the same as this:\ntext = \"My name is \" + Name + \" and I am \" + str(Age) + \" years old\"",
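A quick sanity check that the two styles produce identical strings (Name and Age here are just example values):

```python
Name, Age = "Ada", 36  # hypothetical values for illustration

# .format style
a = "My name is {0} and I am {1} years old".format(Name, Age)

# concatenation style
b = "My name is " + Name + " and I am " + str(Age) + " years old"

print(a == b)  # the two strings are identical
```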
"# If the earthquake is deemed news worthy, then create a journalistic text.\ntext = \"\"\nif IsNewsWorthy:\n    if (data[\"Richter\"] > 6):\n        # Text for a large quake (magnitude above 6).\n        text = \"\"\"BREAKING: Major earthquake in {0}\n \nToday at {1} there was a severe earthquake in {2}, {3}, with a magnitude of {4} on the Richter scale.\n\n\"\"\"\n        text = text.format(data[\"City\"], data[\"Datetime\"], data[\"City\"], data[\"Country\"], data[\"Richter\"])\n    elif (data[\"Richter\"] >= 3 and data[\"Richter\"] <= 6):\n        # Text for a medium quake (magnitude 3-6).\n        text = \"\"\"Earthquake in {0}\n \nToday at {1} there was an earthquake in {2}, {3}, with a magnitude of {4} on the Richter scale.\n\n\"\"\"\n        text = text.format(data[\"City\"], data[\"Datetime\"], data[\"City\"], data[\"Country\"], data[\"Richter\"])\n\n    # Add this at the end of all texts.\n    text = text + \"Published \" + datetime.now().strftime(\"%Y-%m-%d %H:%M\") + \" by Ada the news robot\"\n\n# Look at the text\nprint(text)",
"4. Save the results to a text file\nLet's create a function that saves the text to a file.",
"# Function to save the text to a file.\ndef savefile(filename, text):\n f = open(filename, mode=\"w\") # Open file for writing (w = writing, a = append, r = reading)\n f.write(text)\n f.close()\n\n# Only save as text file if there is some text.\nif text != \"\":\n savefile(\"newsrobot-earthquake.txt\", text)\n print(\"Text published!\")\nelse:\n print(\"Text is not published.\")",
"5. Present the results (read from text file)\nLet's create a function that reads the text from the text file.",
"# Function that reads text from a file.\ndef readfile(filename):\n f = open(filename, mode=\"r\") # Open file for reading (w = writing, a = append, r = reading)\n lines = f.read()\n f.close()\n return(lines)\n\n# Read the file created earlier by the news robot.\ntext = readfile(\"newsrobot-earthquake.txt\")\n\n# Look at the text from the file.\nprint(text)",
"Exercise\nModify the code so that the robot writes a text based on these decisions:\n\nRichter under 0.5 - Don't write anything.\nRichter 0.5 to 2.0 - Small print on the news site.\nRichter 2.0 to 4.0 - Front page.\nRichter above 4.0 - Front page, breaking news."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
geektoni/shogun
|
doc/ipython-notebooks/structure/multilabel_structured_prediction.ipynb
|
bsd-3-clause
|
[
"Multi-Label Classification with Shogun Machine Learning Toolbox\nAbinash Panda (github: abinashpanda)\nThanks to Thoralf Klein for taking the time to help me on this project! ;)\nThis notebook presents training of multi-label classification using the structured SVM implemented in shogun. We will be using MultilabelModel for multi-label classification.\nWe begin with a brief introduction to Multi-Label Structured Prediction [1] followed by the corresponding API in Shogun. Then we implement a toy example (for illustration) before getting to the real one. Finally, we evaluate multi-label classification on well-known datasets [2]. We show that SHOGUN's [3] implementation delivers the same accuracy as scikit-learn with the same or better training time.\nIntroduction\nMulti-Label Structured Prediction\nMulti-Label Structured Prediction combines the aspects of multi-label prediction and structured output. Structured prediction typically involves an input $\\mathbf{x}$ (can be structured) and a structured output $\\mathbf{y}$. Given a training set ${(x^i, y^i)}_{i=1,...,n} \\subset \\mathcal{X} \\times \\mathbb{P}(\\mathcal{Y})$ where $\\mathcal{Y}$ is a structured output set of potentially very large size (in this case $\\mathcal{Y} = {y_1, y_2, ...., y_q}$ where $q$ is the total number of possible classes), a joint feature map $\\psi(x, y)$ is defined to incorporate structure information into the labels. 
\nThe joint feature map $\\psi(x, y)$ for MultilabelModel is defined as $\\psi(x, y) \\rightarrow x \\otimes y$ where $\\otimes$ is the tensor product.\nWe formulate the prediction as: \n$h(x) = {y \\in \\mathcal{Y} : f(x, y) > 0}$\nThe compatibility function, $f(x, y)$, acts on individual inputs and outputs, as in single-label prediction, but the prediction step consists of collecting all outputs of positive scores instead of finding the outputs of maximal score.\nMulti-Label Models\nIn this notebook, we are going to compare the performance of two multi-label models:\n* MultilabelModel model : with a constant entry of $0$ in the joint feature vector, so that no bias term is modeled.\n* MultilabelModel model_with_bias : with a constant entry of $1$ in the joint feature vector to model a bias term.\nThe joint feature vectors are:\n* model$\\leftrightarrow \\psi(x, y) = [x || 0] \\otimes y$.\n* model_with_bias$\\leftrightarrow \\psi(x, y) = [x || 1] \\otimes y$.\nFor comparison of the two models, we are going to evaluate them on datasets with binary labels. \nExperiment 1 : Binary Label Data\nGeneration of some synthetic data\nFirst of all, we create some synthetic data for our toy example. We add a static offset to the data to compare the models with/without threshold.",
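The tensor-product joint feature map can be sketched in a few lines of numpy, assuming for illustration that the label set $y$ is encoded as a 0/1 indicator vector over the $q$ classes (this encoding is a hypothetical choice for the sketch, not Shogun's internal layout):

```python
import numpy as np

def joint_feature_map(x, y_indicator):
    # psi(x, y) = x (tensor) y: stack a copy of x into the block of each
    # class that y contains, and zeros into the blocks of absent classes.
    return np.kron(y_indicator, x)

x = np.array([0.5, -1.0])       # a 2-dimensional input
y = np.array([1, 0, 1])         # example belongs to classes 0 and 2 (q = 3)
print(joint_feature_map(x, y))  # copies of x in blocks 0 and 2, zeros in block 1
```

A single weight vector over this q·d-dimensional space then encodes one linear scoring function per class, which is exactly what the compatibility function $f(x, y)$ needs.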
"import os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n\ntry:\n from sklearn.datasets import make_classification\nexcept ImportError:\n !pip install --user scikit-learn\n from sklearn.datasets import make_classification\n \nimport numpy as np\nimport shogun as sg\n\nX, Y = make_classification(n_samples=1000,\n n_features=2,\n n_informative=2,\n n_redundant=0,\n n_clusters_per_class=2)\n\n# adding some static offset to the data\nX = X + 1",
"Preparation of data and model\nTo create a multi-label model in shogun, we'll first create an instance of MultilabelModel and initialize it with the features and labels. The labels should be MultilabelSOLabels, initialized with n_labels (the number of examples) and n_classes (the total number of classes); each label is then added individually using the set_sparse_label() method.",
"def create_features(X, constant):\n feats = sg.create_features(\n np.c_[X, constant * np.ones(X.shape[0])].T)\n \n return feats\n\ndef create_labels(Y, n_classes):\n try:\n n_samples = Y.shape[0]\n except AttributeError:\n n_samples = len(Y)\n \n labels = sg.MultilabelSOLabels(n_samples, n_classes)\n for i, sparse_label in enumerate(Y):\n try:\n sparse_label = sorted(sparse_label)\n except TypeError:\n sparse_label = [sparse_label]\n labels.set_sparse_label(i, np.array(sparse_label, dtype=np.int32))\n \n return labels\n\ndef split_data(X, Y, ratio):\n num_samples = X.shape[0]\n train_samples = int(ratio * num_samples)\n return (X[:train_samples], Y[:train_samples],\n X[train_samples:], Y[train_samples:])\n\nX_train, Y_train, X_test, Y_test = split_data(X, Y, 0.9)\n\nfeats_0 = create_features(X_train, 0)\nfeats_1 = create_features(X_train, 1)\nlabels = create_labels(Y_train, 2)\n\nmodel = sg.structured_model(\"MultilabelModel\", features=feats_0, labels=labels)\nmodel_with_bias = sg.structured_model(\"MultilabelModel\", features=feats_1, labels=labels)",
"Training and Evaluation of Structured Machines with/without Threshold\nIn Shogun, several solvers and online solvers have been implemented for SO-Learning. Let's try to train the model using an online solver StochasticSOSVM.",
"import time\n\nsgd = sg.create_machine(\"StochasticSOSVM\", model=model, labels=labels)\nsgd_with_bias = sg.create_machine(\"StochasticSOSVM\", model=model_with_bias, labels=labels)\n\nstart = time.process_time()\nsgd.train()\nprint(\">>> Time taken for SGD *without* threshold tuning = %f\" % (time.process_time() - start))\nstart = time.process_time()\nsgd_with_bias.train()\nprint(\">>> Time taken for SGD *with* threshold tuning = %f\" % (time.process_time() - start))",
"Accuracy\nFor measuring accuracy in multi-label classification, the Jaccard similarity coefficient $\\big(J(A, B) = \\frac{|A \\cap B|}{|A \\cup B|}\\big)$ is used:\n$Accuracy = \\frac{1}{p}\\sum_{i=1}^{p}\\frac{ |Y_i \\cap h(x_i)|}{|Y_i \\cup h(x_i)|}$\nThis is available in MultilabelAccuracy for MultilabelLabels and StructuredAccuracy for MultilabelSOLabels.",
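The same accuracy measure can be sketched in plain Python over label sets; the predictions and ground truth below are hypothetical values chosen for illustration:

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty label sets agree perfectly
    return len(a & b) / len(a | b)

# Hypothetical predictions vs. ground truth for three examples.
Y_true = [[0, 1], [1], [0]]
Y_pred = [[0, 1], [0, 1], []]
accuracy = sum(jaccard(t, p) for t, p in zip(Y_true, Y_pred)) / len(Y_true)
print(accuracy)  # (1 + 0.5 + 0) / 3
```

Note how both over-prediction (example 2) and predicting nothing (example 3) are penalized, which is why the white and orange regions in the decision-surface plots below hurt the score.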
"def evaluate_machine(machine,\n X_test,\n Y_test,\n n_classes,\n bias):\n if bias:\n feats_test = create_features(X_test, 1)\n else:\n feats_test = create_features(X_test, 0)\n \n test_labels = create_labels(Y_test, n_classes)\n \n out_labels = machine.apply(feats_test)\n evaluator = sg.create_evaluation(\"StructuredAccuracy\")\n jaccard_similarity_score = evaluator.evaluate(out_labels, test_labels)\n \n return jaccard_similarity_score \n\nprint(\">>> Accuracy of SGD *without* threshold tuning = %f \" % evaluate_machine(sgd, X_test, Y_test, 2, False))\nprint(\">>> Accuracy of SGD *with* threshold tuning = %f \" %evaluate_machine(sgd_with_bias, X_test, Y_test, 2, True))",
"Plotting the Data along with the Boundary",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\ndef get_parameters(weights):\n return -weights[0]/weights[1], -weights[2]/weights[1]\n\ndef scatter_plot(X, y):\n zeros_class = np.where(y == 0)\n ones_class = np.where(y == 1)\n plt.scatter(X[zeros_class, 0], X[zeros_class, 1], c='b', label=\"Negative Class\")\n plt.scatter(X[ones_class, 0], X[ones_class, 1], c='r', label=\"Positive Class\")\n \ndef plot_hyperplane(machine_0,\n machine_1,\n label_0,\n label_1,\n title,\n X, y):\n scatter_plot(X, y)\n x_min, x_max = np.min(X[:, 0]) - 0.5, np.max(X[:, 0]) + 0.5\n y_min, y_max = np.min(X[:, 1]) - 0.5, np.max(X[:, 1]) + 0.5\n xx = np.linspace(x_min, x_max, 1000)\n \n m_0, c_0 = get_parameters(machine_0.get(\"w\")) \n m_1, c_1 = get_parameters(machine_1.get(\"w\"))\n yy_0 = m_0 * xx + c_0\n yy_1 = m_1 * xx + c_1\n plt.plot(xx, yy_0, \"k--\", label=label_0)\n plt.plot(xx, yy_1, \"g-\", label=label_1)\n \n plt.xlim((x_min, x_max))\n plt.ylim((y_min, y_max))\n plt.grid()\n plt.legend(loc=\"best\")\n plt.title(title)\n plt.show()\n\nfig = plt.figure(figsize=(10, 10))\nplot_hyperplane(sgd, sgd_with_bias,\n \"Boundary for machine *without* bias for class 0\",\n \"Boundary for machine *with* bias for class 0\",\n \"Binary Classification using SO-SVM with/without threshold tuning\",\n X, Y)",
"As we can see from the above plot, sgd_with_bias produces a better classification boundary. The model without threshold tuning crosses the origin of the space, while the one with threshold tuning crosses $(1,1)$ (the constant offset we added earlier).",
"from shogun import SparseMultilabel_obtain_from_generic\n\ndef plot_decision_plane(machine,\n title,\n X, y, bias):\n plt.figure(figsize=(24, 8))\n plt.suptitle(title)\n plt.subplot(1, 2, 1)\n x_min, x_max = np.min(X[:, 0]) - 0.5, np.max(X[:, 0]) + 0.5\n y_min, y_max = np.min(X[:, 1]) - 0.5, np.max(X[:, 1]) + 0.5\n xx = np.linspace(x_min, x_max, 200)\n yy = np.linspace(y_min, y_max, 200)\n x_mesh, y_mesh = np.meshgrid(xx, yy)\n\n if bias:\n feats = create_features(np.c_[x_mesh.ravel(), y_mesh.ravel()], 1)\n else:\n feats = create_features(np.c_[x_mesh.ravel(), y_mesh.ravel()], 0)\n out_labels = machine.apply_structured(feats)\n print(out_labels)\n z = []\n for i in range(out_labels.get_num_labels()):\n label = SparseMultilabel_obtain_from_generic(out_labels.get(\"labels\")[i]).get_data()\n if label.shape[0] == 1:\n # predicted a single label\n z.append(label[0])\n elif label.shape[0] == 2:\n # predicted both the classes\n z.append(2)\n elif label.shape[0] == 0:\n # predicted none of the class\n z.append(3)\n z = np.array(z)\n z = z.reshape(x_mesh.shape)\n c = plt.pcolor(x_mesh, y_mesh, z, cmap=plt.cm.gist_heat)\n scatter_plot(X, y)\n plt.xlim((x_min, x_max))\n plt.ylim((y_min, y_max))\n plt.colorbar(c)\n plt.title(\"Decision Surface\")\n plt.legend(loc=\"best\")\n\n plt.subplot(1, 2, 2)\n weights = machine.get_w()\n m_0, c_0 = get_parameters(weights[:3])\n m_1, c_1 = get_parameters(weights[3:])\n yy_0 = m_0 * xx + c_0\n yy_1 = m_1 * xx + c_1\n plt.plot(xx, yy_0, \"r--\", label=\"Boundary for class 0\")\n plt.plot(xx, yy_1, \"g-\", label=\"Boundary for class 1\")\n plt.title(\"Hyper planes for different classes\")\n plt.legend(loc=\"best\")\n plt.xlim((x_min, x_max))\n plt.ylim((y_min, y_max))\n \n plt.show()\n\nsgd\n\nplot_decision_plane(sgd,\"Model *without* Threshold Tuning\", X, Y, False)\nplot_decision_plane(sgd_with_bias,\"Model *with* Threshold Tuning\", X, Y, True)",
"As we can see from the above plots of decision surface, the black region corresponds to the region of negative (label = $0$) class, where as the red region corresponds to the positive (label = $1$). But along with that there are some regions (although very small) of white surface and orange surface. The white surface corresponds to the region not classified to any label, whereas the orange region correspond to the region classified to both the labels. The reason for existence of these type of surface is that the above boundaries for both the class don't overlap exactly with each other (illustrated above). So, there are some regions for which both the compatibility function $f(x, 0) > 0$ as well as $f(x, 1) > 0$ (predicted both the labels) and there are some regions where both the compatibility function $f(x, 0) < 0$ and $f(x, 1) < 0$ (predicted none of the labels).\nExperiment 2 : Multi-Label Data\nLoading of data from LibSVM File",
"def load_data(file_name):\n input_file = open(file_name)\n lines = input_file.readlines()\n n_samples = len(lines)\n n_features = len(lines[0].split()) - 1\n Y = []\n X = []\n for line in lines:\n data = line.split()\n Y.append(map(int, data[0].split(\",\")))\n feats = []\n for feat in data[1:]:\n feats.append(float(feat.split(\":\")[1]))\n X.append(feats)\n X = np.array(X)\n n_classes = max(max(label) for label in Y) + 1\n return X, Y, n_samples, n_features, n_classes",
"Training and Evaluation of Structured Machines with/without Threshold",
"def test_multilabel_data(train_file,\n test_file):\n X_train, Y_train, n_samples, n_features, n_classes = load_data(train_file)\n\n X_test, Y_test, n_samples, n_features, n_classes = load_data(test_file)\n\n # create features and labels\n multilabel_feats_0 = create_features(X_train, 0)\n multilabel_feats_1 = create_features(X_train, 1)\n multilabel_labels = create_labels(Y_train, n_classes)\n\n # create multi-label model\n multilabel_model = MultilabelModel(multilabel_feats_0, multilabel_labels)\n multilabel_model_with_bias = MultilabelModel(multilabel_feats_1, multilabel_labels)\n \n # initializing machines for SO-learning\n multilabel_sgd = StochasticSOSVM(multilabel_model, multilabel_labels)\n multilabel_sgd_with_bias = StochasticSOSVM(multilabel_model_with_bias, multilabel_labels)\n \n start = time()\n multilabel_sgd.train()\n t1 = time() - start\n multilabel_sgd_with_bias.train()\n t2 = time() - start - t1\n \n return (evaluate_machine(multilabel_sgd,\n X_test, Y_test,\n n_classes, False), t1,\n evaluate_machine(multilabel_sgd_with_bias,\n X_test, Y_test,\n n_classes, True), t2)\n ",
"Comparision with scikit-learn's implementation",
"from sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import jaccard_similarity_score\nfrom sklearn.preprocessing import LabelBinarizer\n\ndef sklearn_implementation(train_file,\n test_file):\n label_binarizer = LabelBinarizer()\n\n X_train, Y_train, n_samples, n_features, n_classes = load_data(train_file)\n X_test, Y_test, n_samples, n_features, n_classes = load_data(test_file)\n\n clf = OneVsRestClassifier(SVC(kernel='linear'))\n start = time()\n clf.fit(X_train, label_binarizer.fit_transform(Y_train))\n t1 = time() - start\n return (jaccard_similarity_score(label_binarizer.fit_transform(Y_test),\n clf.predict(X_test)), t1)\n\ndef print_table(train_file,\n test_file,\n caption):\n acc_0, t1, acc_1, t2 = test_multilabel_data(train_file,\n test_file)\n sk_acc, sk_t1 = sklearn_implementation(train_file,\n test_file)\n result = '''\n \\t\\t%s\n Machine\\t\\t\\t\\tAccuracy\\tTrain-time\\n\n SGD *without* threshold tuning \\t%f \\t%f\n SGD *with* threshold tuning \\t%f \\t%f\n scikit-learn's implementation \\t%f \\t%f\n ''' % (caption, acc_0, t1, acc_1, t2,\n sk_acc, sk_t1)\n print(result)",
"Yeast Multi-Label Data [2]",
"print_table(os.path.join(SHOGUN_DATA_DIR, \"multilabel/yeast_train.svm\"),\n os.path.join(SHOGUN_DATA_DIR, \"multilabel/yeast_test.svm\"),\n \"Yeast dataset\")",
"Scene Multi-Label Data [2]",
"print_table(os.path.join(SHOGUN_DATA_DIR, \"multilabel/scene_train\"),\n os.path.join(SHOGUN_DATA_DIR, \"multilabel/scene_test\"),\n \"Scene dataset\")",
"As we can see that the accuracy of the machine with threshold tuning is comparable to that of scikit-learn's implementation. A possible explanation of that is : for multi-label classification using scikit-learn, we have used OneVsRestClassifier strategy. This strategy fits one classifier per class. It also support multi-label classification. It is initiated using an estimator, for eg. in our case:\n<pre><code>\nclf = OneVsRestClassifier(SVC(kernel='linear'))\n</code></pre>\nthe estimator is SVC(kernel=\"linear\") a support vector machine for classification using linear kernel. So, the OneVsRestClassifier would train a number of estimator (one for each class). The SVC estimator learns the weight ($w$) as well as the thresholds/bias($b$). \nIn the shogun implementation, the structured machines only learn the weights($w$) and there is no threshold or bias. So, to model the threshold to we have to add an constant entry to the joint feature vector. \nThus the machines with constant entry have the same accuracy as that of scikit-learn implementation.\nReferences\n[1] C. Lampert. Maximum Margin Multi-Label Structured Prediction, NIPS 2011\n[2] LIBSVM Data: Multi-label Classification\n[3] Shogun Machine Learning Toolbox"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rajuniit/udacity
|
language-translation/dlnd_language_translation.ipynb
|
mit
|
[
"Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.\nYou can get the <EOS> word id by doing:\npython\ntarget_vocab_to_int['<EOS>']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.",
"def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n \n # TODO: Implement Function\n \n source_ids = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\\n')]\n \n target_ids = [[target_vocab_to_int[word] for word in line.split()] for line in target_text.split('\\n')]\n \n eos = target_vocab_to_int['<EOS>']\n \n target_ids = [line + [eos] for line in target_ids]\n \n return source_ids, target_ids\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()",
"Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoding_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n\nReturn the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)",
"def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate, keep probability)\n \"\"\"\n \n inputs = tf.placeholder(tf.int32, [None, None], name = 'input')\n \n targets = tf.placeholder(tf.int32, [None, None], name = 'targets')\n \n learning_rate = tf.placeholder(tf.float32, name = 'learning_rate')\n \n keep_prob = tf.placeholder(tf.float32, name = 'keep_prob')\n \n return inputs, targets, learning_rate, keep_prob\n \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)",
"Process Decoding Input\nImplement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.",
"def process_decoding_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for dencoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n \n end = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n \n pre_data = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), end], 1)\n \n return pre_data\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_decoding_input(process_decoding_input)",
"Encoding\nImplement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().",
"def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :return: RNN state\n \"\"\"\n # TODO: Implement Function\n \n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n \n drop_out = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n encoder_cell = tf.contrib.rnn.MultiRNNCell([drop_out] * num_layers)\n _, output_rnn = tf.nn.dynamic_rnn(encoder_cell, rnn_inputs, dtype=tf.float32)\n \n return output_rnn\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)",
"Decoding - Training\nCreate training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.",
"def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param sequence_length: Sequence Length\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Train Logits\n \"\"\"\n # TODO: Implement Function\n \n drop_out = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)\n \n decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)\n \n dynamic_rnn_decoder, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(drop_out, decoder_fn, \n dec_embed_input, sequence_length, \n scope=decoding_scope)\n \n \n train_logits = output_fn(dynamic_rnn_decoder)\n \n return train_logits\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)",
"Decoding - Inference\nCreate inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().",
"def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,\n maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param maximum_length: The maximum allowed time steps to decode\n :param vocab_size: Size of vocabulary\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Inference Logits\n \"\"\"\n #fixing length issue \n size = maximum_length-1\n # TODO: Implement Function\n \n decoder_fn_inference = tf.contrib.seq2seq.simple_decoder_fn_inference(\n output_fn, \n encoder_state, \n dec_embeddings, \n start_of_sequence_id, \n end_of_sequence_id, \n size, \n vocab_size\n )\n \n inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(\n dec_cell, \n decoder_fn_inference, \n scope=decoding_scope\n )\n \n return inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)",
"Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nCreate RNN cell for decoding using rnn_size and num_layers.\nCreate the output fuction using lambda to transform it's input, logits, to class logits.\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.",
"def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob):\n \"\"\"\n Create decoding layer\n :param dec_embed_input: Decoder embedded input\n :param dec_embeddings: Decoder embeddings\n :param encoder_state: The encoded state\n :param vocab_size: Size of vocabulary\n :param sequence_length: Sequence Length\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param keep_prob: Dropout keep probability\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n \n rnn_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) \n\n \n with tf.variable_scope(\"decoding\") as decoding_scope:\n output_fully_connected = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)\n training_logits = decoding_layer_train(\n encoder_state, \n rnn_cell, \n dec_embed_input, \n sequence_length, \n decoding_scope, \n output_fully_connected, \n keep_prob\n )\n\n \n with tf.variable_scope(\"decoding\", reuse=True) as decoding_scope:\n inference_logits = decoding_layer_infer(\n encoder_state, \n rnn_cell, \n dec_embeddings, \n target_vocab_to_int['<GO>'], \n target_vocab_to_int['<EOS>'], \n sequence_length, \n vocab_size, \n decoding_scope, \n output_fully_connected, \n keep_prob\n )\n\n return training_logits, inference_logits\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)",
"Build the Neural Network\nApply the functions you implemented above to:\n\nApply embedding to the input data for the encoder.\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).\nProcess target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.\nApply embedding to the target data for the decoder.\nDecode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).",
"def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param sequence_length: Sequence Length\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n \n encoder_embedd_input = tf.contrib.layers.embed_sequence(\n input_data, \n source_vocab_size, \n enc_embedding_size\n )\n \n encoder_layer = encoding_layer(\n encoder_embedd_input, \n rnn_size, \n num_layers, \n keep_prob\n )\n \n \n decoder_input = process_decoding_input(\n target_data, \n target_vocab_to_int, \n batch_size\n )\n \n \n decoder_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))\n \n decoder_embedding_input = tf.nn.embedding_lookup(decoder_embeddings, decoder_input)\n \n\n training_logits, inference_logits = decoding_layer(\n decoder_embedding_input, \n decoder_embeddings, \n encoder_layer, \n target_vocab_size, \n sequence_length, \n rnn_size, \n num_layers, \n target_vocab_to_int, \n keep_prob\n )\n \n return training_logits, inference_logits\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability",
"# Number of Epochs\nepochs = 5\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 512\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 196\ndecoding_embedding_size = 196\n# Learning Rate\nlearning_rate = 0.005\n# Dropout Keep Probability\nkeep_probability = 0.9",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_source_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob = model_inputs()\n sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n \n train_logits, inference_logits = seq2seq_model(\n tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),\n encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)\n\n tf.identity(inference_logits, 'logits')\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([input_shape[0], sequence_length]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport time\n\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1]), (0,0)],\n 'constant')\n\n return np.mean(np.equal(target, np.argmax(logits, 2)))\n\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\n\nvalid_source = helper.pad_sentence_batch(source_int_text[:batch_size])\nvalid_target = helper.pad_sentence_batch(target_int_text[:batch_size])\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n start_time = time.time()\n \n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n sequence_length: target_batch.shape[1],\n keep_prob: keep_probability})\n \n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch, keep_prob: 1.0})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source, keep_prob: 1.0})\n \n train_acc = get_accuracy(target_batch, batch_train_logits)\n valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)\n end_time = time.time()\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')",
"Save Parameters\nSave the batch_size and save_path parameters for inference.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()",
"Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the <UNK> word id.",
"def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n \n int_sentence = [vocab_to_int.get(w.lower(), vocab_to_int['<UNK>']) for w in sentence.split()]\n return int_sentence\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)",
"Translate\nThis will translate translate_sentence from English to French.",
"translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))",
"Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
blue-yonder/tsfresh
|
notebooks/examples/05 Timeseries Forecasting.ipynb
|
mit
|
[
"Timeseries Forecasting\nThis notebook explains how to use tsfresh in time series foreacasting.\nMake sure you also read through the documentation to learn more on this feature.\nWe will use the stock price of Apple for this.\nIn this notebook we will only showcase how to work with a single time series at a time (one stock).\nThere exist another notebook in the advanced folder, which treats several stocks at the same time.\nBasically the same - but a bit more complex when it comes to pandas multi-indexing.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pylab as plt\n\nfrom tsfresh import extract_features, select_features\nfrom tsfresh.utilities.dataframe_functions import roll_time_series, make_forecasting_frame\nfrom tsfresh.utilities.dataframe_functions import impute\n\ntry:\n import pandas_datareader.data as web\nexcept ImportError:\n print(\"You need to install the pandas_datareader. Run pip install pandas_datareader.\")\n\nfrom sklearn.linear_model import LinearRegression",
"Reading the data\nWe download the data from \"stooq\" and only store the High value.\nPlease note: this notebook is for showcasing tsfreshs feature extraction - not to predict stock market prices :-)",
"df = web.DataReader(\"AAPL\", 'stooq')[\"High\"]\ndf.head()\n\nplt.figure(figsize=(15, 6))\ndf.plot(ax=plt.gca())\nplt.show()",
"We want to make the time dependency a bit clearer and add an identifier to each of the stock values (in this notebook we only have Google though).",
"df_melted = pd.DataFrame({\"high\": df.copy()})\ndf_melted[\"date\"] = df_melted.index\ndf_melted[\"Symbols\"] = \"AAPL\"\n\ndf_melted.head()",
"Create training data sample\nForecasting typically involves the following steps:\n* take all data up to today\n* do feature extraction (e.g. by running extract_features)\n* run a prediction model (e.g. a regressor, see below)\n* use the result as the forecast for tomorrow\nIn training however, we need multiple examples to train.\nIf we would only use the time series until today (and wait for the value of tomorrow to have a target), we would only have a single training example.\nTherefore we use a trick: we replay the history.\nImagine you have a cut-out window sliding over your data.\nAt each time step $t$, you treat the data as it would be today. \nYou extract the features with everything you know until today (which is all data until and including $t$).\nThe target for the features until time $t$ is the time value of time $t + 1$ (which you already know, because everything has already happened).\nThe process of window-sliding is implemented in the function roll_time_series.\nOur window size will be 20 (we look at max 20 days in the past) and we disregard all windows which are shorter than 5 days.",
"df_rolled = roll_time_series(df_melted, column_id=\"Symbols\", column_sort=\"date\",\n max_timeshift=20, min_timeshift=5)\n\ndf_rolled.head()",
"The resulting dataframe now consists of these \"windows\" stamped out of the original dataframe.\nFor example, all data with the id = (AAPL, 2020-07-14 00:00:00) comes from the original data of stock AAPL, including the last 20 days up to 2020-07-14:",
"df_rolled[df_rolled[\"id\"] == (\"AAPL\", pd.to_datetime(\"2020-07-14\"))]\n\ndf_melted[(df_melted[\"date\"] <= pd.to_datetime(\"2020-07-14\")) & \n (df_melted[\"date\"] >= pd.to_datetime(\"2020-06-15\")) & \n (df_melted[\"Symbols\"] == \"AAPL\")]",
"If you now group by the new id column, each of the groups will be a certain stock symbol until and including the data until a certain day (and including the last 20 days in the past).\nWhereas we started with 1259 data samples:",
"len(df_melted)",
"we now have 1254 unique windows (identified by stock symbol and ending date):",
"df_rolled[\"id\"].nunique()",
"We \"lost\" 5 windows, as we required a minimum history of more than 5 days.",
"df_rolled.groupby(\"id\").size().agg([np.min, np.max])",
"The process is also shown in this image (please note that the window size is smaller for better visibility):\n<img src=\"./stocks.png\"/>\nExtract Features\nThe rolled (windowed) data sample is now in the correct format to use for tsfresh's feature extraction.\nAs usual, features will be extracted using all data for a given id, which in our case is all data of a given window and a given id (one colored box in the graph above).\nIf the feature extraction returns a row with the index (AAPL, 2020-07-14 00:00:00), you know it has been calculated using the AAPL data up to and including 2020-07-14 (and 20 days of history).",
"X = extract_features(df_rolled.drop(\"Symbols\", axis=1), \n column_id=\"id\", column_sort=\"date\", column_value=\"high\", \n impute_function=impute, show_warnings=False)\n\nX.head()",
"We make the data a bit easier to work with by removing the tuple-index",
"X = X.set_index(X.index.map(lambda x: x[1]), drop=True)\nX.index.name = \"last_date\"\nX.head()",
"Our (AAPL, 2020-07-14 00:00:00) is also in the data again:",
"X.loc['2020-07-14']",
"Just to repeat: the features in this row were calculated using only the AAPL time series values up to and including 2020-07-14 (and at most the last 20 days).\nPrediction\nWe can now use the extracted features to train a regressor.\nBut what will be our targets?\nThe target for the row 2020-07-13 is the value of the next time step (that would be 2020-07-14 in this case).\nSo all we need to do is go back to our original dataframe and take the stock value of the following day.\nThis is done with shift:",
"y = df_melted.set_index(\"date\").sort_index().high.shift(-1)",
"Quick consistency test:",
"y[\"2020-07-13\"], df[\"2020-07-14\"].iloc[0]",
"However, we need to be a bit careful here: X is missing the first 5 dates (as our minimum window size was 5) and y is missing the last date (as we do not know tomorrow's value for the last day).\nSo let's make sure we have a consistent view on the data.",
"y = y[y.index.isin(X.index)]\nX = X[X.index.isin(y.index)]",
"We can now train a normal regressor (here a LinearRegression) to predict the next time step.\nLet's split the data into a training and testing sample (but make sure to keep temporal consistency).\nWe take everything up to the end of 2018 as training data and the rest as test:",
"X[:\"2018\"]\n\nX_train = X[:\"2018\"]\nX_test = X[\"2019\":]\n\ny_train = y[:\"2018\"]\ny_test = y[\"2019\":]",
"and do feature selection before training",
"X_train_selected = select_features(X_train, y_train)\n\nada = LinearRegression()\n\nada.fit(X_train_selected, y_train)",
"Now let's check how good our prediction is:",
"X_test_selected = X_test[X_train_selected.columns]\n\ny_pred = pd.Series(ada.predict(X_test_selected), index=X_test_selected.index)",
"The prediction is for the next day, so for drawing we need to shift 1 step back:",
"plt.figure(figsize=(15, 6))\n\ny.plot(ax=plt.gca())\ny_pred.plot(ax=plt.gca(), legend=None, marker=\".\")",
"Well, clearly not perfect ;-)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jtwhite79/pyemu
|
examples/errvarexample_henry.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline\nimport os\nimport sys\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport plot_domain\nfig = plot_domain.henry_domain()",
"Model background\nHere is an example based on the Henry saltwater intrusion problem. The synthetic model is a 2-dimensional SEAWAT model (X-Z domain) with 1 row, 120 columns and 20 layers. The left boundary is a specified flux of freshwater, the right boundary is a specified head and concentration saltwater boundary. The model has two stress periods: an initial steady state (calibration) period, then a transient period with less flux (forecast). \nThe inverse problem has 603 parameters: 600 hydraulic conductivity pilot points, 1 global hydraulic conductivity, 1 specified flux multiplier for history matching and 1 specified flux multiplier for forecast conditions. The inverse problem has 36 observations (21 heads and 15 concentrations) measured at the end of the steady-state calibration period. The forecasts of interest are the distance from the left model edge to the 10% seawater concentration in the basal model layer and the concentration at location 10. Both of these forecasts are \"measured\" at the end of the forecast stress period. The forecasts are both in the Jacobian matrix as zero-weight observations named pd_ten and C_obs10_2. I previously calculated the Jacobian matrix, which is in the henry/ folder, along with the PEST control file.\nUnlike the Schur's complement example notebook, here we will examine the consequences of not adjusting the specified flux multiplier parameters (mult1 and mult2) during inversion, since these types of model inputs are not typically considered for adjustment.\nUsing pyemu",
"import pyemu",
"First create a linear_analysis object. We will use the ErrVar derived type, which replicates the behavior of the PREDVAR suite of PEST as well as the ident_par utility. We pass it the name of the Jacobian matrix file. Since we don't pass an explicit argument for parcov or obscov, pyemu attempts to build them from the parameter bounds and observation weights in a PEST control file (.pst) with the same base case name as the Jacobian. Since we are interested in forecast uncertainty as well as parameter uncertainty, we also pass the names of the forecast sensitivity vectors we are interested in, which are stored in the Jacobian as well. Note that the forecasts argument can be a mixed list of observation names, other Jacobian files or PEST-compatible ASCII matrix files. Remember you can pass a filename to the verbose argument to write a log file.\nSince most groundwater model history-matching analyses focus on adjusting heterogeneous hydraulic properties and not boundary condition elements, let's identify the mult1 and mult2 parameters as omitted in the error variance analysis. We can conceptually think of this action as excluding the mult1 and mult2 parameters from the history-matching process. Later we will explicitly calculate the penalty for not adjusting these parameters.",
"la = pyemu.ErrVar(jco=os.path.join(\"henry\", \"pest.jcb\"),\n omitted_parameters=[\"mult1\",\"mult2\"])\nprint(la.jco.shape) #without the omitted parameter or the prior info\nla.forecast_names",
"Parameter identifiability\nThe errvar derived type exposes a method to get a pandas dataframe of parameter identifiability information. Recall that parameter identifiability is expressed as $d_i = \Sigma(\mathbf{V}_{1i})^2$, where $d_i$ is the parameter identifiability, which ranges from 0 (not identified by the data) to 1 (fully identified by the data), and $\mathbf{V}_1$ are the right singular vectors corresponding to non-(numerically) zero singular values. First let's look at the singular spectrum of $\mathbf{Q}^{\frac{1}{2}}\mathbf{J}$, where $\mathbf{Q}$ is the cofactor matrix and $\mathbf{J}$ is the Jacobian:",
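The identifiability computation itself is just a truncated SVD. A minimal numpy sketch with a made-up weighted Jacobian (not the Henry model's, and not pyemu's internal implementation) looks like this:

```python
import numpy as np

# Hypothetical weighted Jacobian Q^(1/2) J: 4 observations, 3 parameters.
qhalf_jac = np.array([[1.0, 0.0, 0.1],
                      [0.0, 1.0, 0.1],
                      [1.0, 1.0, 0.0],
                      [0.5, 0.5, 0.0]])

# SVD; rows of vt are the right singular vectors.
_, s, vt = np.linalg.svd(qhalf_jac, full_matrices=False)

# Truncate to the first 2 singular vectors (V_1), mimicking a call
# like get_identifiability_dataframe(2).
v1 = vt[:2].T                   # shape (n_par, 2)

# d_i = sum over the retained columns of the squared entries for parameter i.
ident = np.sum(v1**2, axis=1)   # each value lies in [0, 1]
```

Because the columns of `v1` are orthonormal, the identifiabilities always sum to the number of retained singular vectors, which is why truncating harder spreads less "identifiability mass" over the parameters.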
"s = la.qhalfx.s\n\nimport pylab as plt\nfigure = plt.figure(figsize=(10, 5))\nax = plt.subplot(111)\nax.plot(s.x)\nax.set_title(\"singular spectrum\")\nax.set_ylabel(\"power\")\nax.set_xlabel(\"singular value\")\nax.set_xlim(0,20)\nplt.show()",
"We see that the singular spectrum decays rapidly (not uncommon) and that we can really only support about 3 right singular vectors even though we have 600+ parameters in the inverse problem. \nLet's get the identifiability dataframe at 3 singular vectors:",
"ident_df = la.get_identifiability_dataframe(3) # the method is passed the number of singular vectors to include in V_1\nident_df.sort_values(by=\"ident\").iloc[0:10]",
"Plot the identifiability:\nWe see that the global_k parameter has a much higher identifiability than any one of the 600 pilot points.\nForecast error variance\nNow let's explore the error variance of the forecasts we are interested in. We will use an extended version of the forecast error variance equation: \n$$\sigma_{s - \hat{s}}^2 = \underbrace{\mathbf{y}_i^T(\mathbf{I} - \mathbf{R})\boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}(\mathbf{I} - \mathbf{R})^T\mathbf{y}_i}_{1} + \underbrace{\mathbf{y}_i^T\mathbf{G}\boldsymbol{\Sigma}_{\boldsymbol{\epsilon}}\mathbf{G}^T\mathbf{y}_i}_{2} + \underbrace{\mathbf{p}\boldsymbol{\Sigma}_{\boldsymbol{\theta}_o}\mathbf{p}^T}_{3}$$\nWhere term 1 is the null-space contribution, term 2 is the solution-space contribution and term 3 is the model error term (the penalty for not adjusting uncertain parameters). Remember the mult1 and mult2 parameters that we marked as omitted? The consequences of that action can now be explicitly evaluated. See Moore and Doherty (2005) and White and others (2014) for more explanation of these terms. Note that if you don't have any omitted_parameters, then only terms 1 and 2 contribute to the error variance.\nFirst we need to create a list (or numpy ndarray) of the singular values we want to test. Since we have $\lt 40$ observations, we only need to test up to $40$ singular values because that is where the action is:",
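The structure of the null-space term can be illustrated with a toy resolution matrix built from a truncated SVD. All values here are made up for illustration; pyemu's get_errvar_dataframe does the real computation:

```python
import numpy as np

# Toy Jacobian: 2 observations, 3 parameters (assumed values).
jac = np.array([[1.0, 0.0, 0.2],
                [0.0, 1.0, 0.2]])

_, _, vt = np.linalg.svd(jac, full_matrices=True)
n_sv = 1                        # truncation level (number of singular values)
v1 = vt[:n_sv].T
R = v1 @ v1.T                   # resolution matrix for truncated SVD

sigma_theta = 0.5 * np.eye(3)   # prior parameter covariance (assumed)
y = np.array([1.0, 0.5, 0.0])   # forecast sensitivity vector (assumed)

eye = np.eye(3)
null_term = y @ (eye - R) @ sigma_theta @ (eye - R).T @ y
# the null-space term shrinks as n_sv grows and more of y is resolved
```

Retaining more singular vectors enlarges the range of $\mathbf{R}$, so the projector $(\mathbf{I} - \mathbf{R})$ removes more of the forecast sensitivity vector and term 1 decreases, while (with noisy data) term 2 grows.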
"sing_vals = np.arange(40)",
"The errvar derived type exposes a convience method to get a multi-index pandas dataframe with each of the terms of the error variance equation:",
"errvar_df = la.get_errvar_dataframe(sing_vals)\nerrvar_df.iloc[0:10]",
"Plot the error variance components for each forecast:",
"fig = plt.figure(figsize=(10, 10))\nax_1, ax_2 = plt.subplot(211), plt.subplot(212)\naxes = [ax_1,ax_2]\n\ncolors = {\"first\": 'g', \"second\": 'b', \"third\": 'c'}\nmax_idx = 19\nidx = sing_vals[:max_idx]\nfor ipred, pred in enumerate(la.forecast_names):\n pred = pred.lower()\n ax = axes[ipred]\n ax.set_title(pred)\n first = errvar_df[(\"first\", pred)][:max_idx]\n second = errvar_df[(\"second\", pred)][:max_idx]\n third = errvar_df[(\"third\", pred)][:max_idx]\n ax.bar(idx, first, width=1.0, edgecolor=\"none\", facecolor=colors[\"first\"], label=\"first\",bottom=0.0)\n ax.bar(idx, second, width=1.0, edgecolor=\"none\", facecolor=colors[\"second\"], label=\"second\", bottom=first)\n ax.bar(idx, third, width=1.0, edgecolor=\"none\", facecolor=colors[\"third\"], label=\"third\", bottom=second+first)\n ax.set_xlim(-1,max_idx+1)\n ax.set_xticks(idx+0.5)\n ax.set_xticklabels(idx)\n if ipred == len(axes) - 1: # only label the x-axis on the bottom panel\n ax.set_xlabel(\"singular value\")\n \n ax.set_ylabel(\"error variance\")\n ax.legend(loc=\"upper right\")\nplt.show()\n",
"Here we see the trade-off between getting a good fit to push down the null-space (1st) term and the penalty for overfitting (the rise of the solution-space (2nd) term). The sum of the first two terms is the \"apparent\" error variance (e.g. the uncertainty that standard analyses would yield) without considering the contribution from the omitted parameters. You can verify this by checking the prior uncertainty from the Schur's complement notebook against the zero-singular-value result using only terms 1 and 2.\nWe also see the added penalty for not adjusting the mult1 and mult2 parameters (3rd term). The ability to forecast the distance from the left edge of the model to the 10% saltwater concentration and to forecast the concentration at location 10 has been compromised by not adjusting mult1 and mult2 during calibration. \nLet's check the errvar results against the results from schur. This is simple with pyemu: we simply cast the errvar type to a schur type:",
"schur = la.get(astype=pyemu.Schur)\nschur_prior = schur.prior_forecast\nschur_post = schur.posterior_forecast\nprint(\"{0:10s} {1:>12s} {2:>12s} {3:>12s} {4:>12s}\"\n .format(\"forecast\",\"errvar prior\",\"errvar min\",\n \"schur prior\", \"schur post\"))\nfor ipred, pred in enumerate(la.forecast_names):\n first = errvar_df[(\"first\", pred)][:max_idx]\n second = errvar_df[(\"second\", pred)][:max_idx] \n min_ev = np.min(first + second)\n prior_ev = first[0] + second[0]\n prior_sh = schur_prior[pred]\n post_sh = schur_post[pred]\n print(\"{0:12s} {1:12.6f} {2:12.6f} {3:12.6} {4:12.6f}\"\n .format(pred,prior_ev,min_ev,prior_sh,post_sh))",
"We see that the prior from the schur class matches the two-term errvar result at zero singular values. We also see, as expected, that the posterior from schur is slightly lower than the minimum two-term errvar result. This shows us that the \"apparent\" uncertainty in these predictions, as found through application of Bayes equation, is being underestimated because of the ill effects of the omitted mult1 and mult2 parameters."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
igabr/Metis_Projects_Chicago_2017
|
05-project-kojack/Final_Notebook.ipynb
|
mit
|
[
"from sklearn.metrics import r2_score\n%run helper_functions.py\n%run prophet_helper.py #this runs the TS models for features\n%run regression_ts_model.py #nested TS script \n%run btc_info_df.py #helps load new BTC data into the notebook\n%autosave 120\n%matplotlib inline\nplt.style.use('fivethirtyeight')\nplt.rcParams[\"figure.figsize\"] = (15,10)\nplt.rcParams[\"xtick.labelsize\"] = 16\nplt.rcParams[\"ytick.labelsize\"] = 16\nplt.rcParams[\"axes.labelsize\"] = 20\nplt.rcParams['legend.fontsize'] = 20\npd.set_option('display.max_colwidth', -1)",
"Notebook Overview\nIn this notebook, I will construct:\n- A naive model of bitcoin price prediction\n\nA nested time series model.\n\nWhat do I mean by a nested time series model?\nI will illustrate with a simple example.\nLet's say that I wish to predict the mkt_price on 2016-10-30. I could fit a Linear Regression on all the features from 2016-10-26 to 2016-10-29. However, in order to predict the mkt_price on 2016-10-30 I need to have values for the features on 2016-10-30. This presents a problem as all my features are time series! That is, I cannot simply plug in a value for all the features because I don't know what their values would be on this future date!\nOne possible remedy for this is to simply use the values of all the features on 2016-10-29. In fact, it is well known that the best predictor of a variable tomorrow is its current state today. However, I wish to be more rigorous.\nInstead of simply plugging in t-1 values for the features at time t, I construct a time series model for each feature in order to predict its value at time t based on the entire history of data that I have for the features!\nThese predicted values are then passed as inputs to our linear regression models!\nThus, if I have N features, I am creating N time series models in order to do a single prediction with Linear Regression for the mkt_price variable.\nNaive Baseline Model\nI will construct a naive baseline model that will most likely outperform any other model I build below.\nThe model will work as follows:\nWhen predicting the price on Day 91, I will take the average daily price change between Day 0 and Day 90. Let's call this average price change alpha.\nI will then take the price of Day 90 and add alpha to it. This will serve as the 'predicted' price for Day 91.",
"df = unpickle_object(\"FINAL_DATAFRAME_PROJ_5.pkl\")\ndf.head()\n\ndef linear_extrapolation(df, window):\n pred_lst = []\n true_lst = []\n\n cnt = 0\n\n all_rows = df.shape[0]\n\n while cnt < window:\n start = df.iloc[cnt:all_rows-window+cnt, :].index[0].date()\n end = df.iloc[cnt:all_rows-window+cnt, :].index[-1].date()\n predicting = df.iloc[all_rows-window+cnt, :].name.date()\n\n print(\"---- Running model from {} to {} and predicting on {} ----\".format(start,end,predicting))\n\n training_df = df.iloc[cnt:all_rows-window+cnt, :]\n\n testing_df = df.iloc[all_rows-window+cnt, :]\n \n true_val = testing_df[-1]\n \n first_row_value = training_df.iloc[0, :]['mkt_price']\n first_row_date = training_df.iloc[0, :].name\n \n last_row_value = training_df.iloc[-1, :]['mkt_price']\n last_row_date = training_df.iloc[-1, :].name\n \n alpha = (last_row_value-first_row_value)/90\n \n prediction = last_row_value + alpha\n \n pred_lst.append(prediction)\n \n true_lst.append(true_val)\n \n \n cnt += 1\n \n return pred_lst, true_lst\n\npred_lst, true_lst = linear_extrapolation(df, 30)\n\nr2_score(true_lst, pred_lst)",
"Naïve Model Caveats\nWe can see above that we can use this extremely basic model to obtain an $R^2$ of 0.86. In fact, this should be the baseline model score that we need to beat!\nLet me mention some caveats to this result:\n\n\nI only have 4 months of Bitcoin data. It should be obvious to the reader that such a naive model is NOT the appropriate way to forecast bitcoin price in general. For if it were this simple, we would all be millionaires.\n\n\nSince I have 120 days worth of data, I am choosing to subset my data in 90 day periods; as such, I will produce 30 predictions. The variability of bitcoin prices around these 30 days will significantly impact the $R^2$ score. Again, more data is needed.\n\n\nWhile bitcoin data itself is not hard to come by, twitter data is! It is the twitter data that is limiting a deeper analysis. I hope that this notebook serves as a starting point for further investigation into the relationship between tweets and bitcoin price fluctuations.\n\n\nLastly, I made this notebook in Sept. 2017. The data for this project spans Oct 2016 - Feb 2017. Since that timeframe, bitcoin grew to unprecedented highs of \\$4k/coin. Furthermore, media sound bites of CEOs such as James Dimon of JPMorgan have sent bitcoin prices tumbling by as much as \\$1k/coin. For me, this is what truly lies at the crux of the difficulty of cryptocurrency forecasting. I searched at great length for a free, searchable NEWS API; however, I could not find one. I think a great next step for this project would be to incorporate sentiment of news headlines concerning bitcoin!\n\n\nFurthermore, within the aforementioned timeframe, the overall bitcoin trend was upward. That is, there was not that much volatility in the price - as such, it is expected that the Naïve Model would outperform the nested time series model. The next step would again be to collect more data and re-run all the models.\n\n\nNested Time Series Model",
"df = unpickle_object(\"FINAL_DATAFRAME_PROJ_5.pkl\")\ndf.head()\n\ndf.corr()\n\nplot_corr_matrix(df)\n\nbeta_values, pred, true = master(df, 30)\n\nr2_score(true, pred)#blows our Prophet TS only model away!",
"Nested TS VS. FB Prophet TS\nWe see from the above that our model has an $R^2$ of 0.75! This greatly outperforms our baseline model of just using Facebook Prophet to forecast the price of bitcoin! The RMSE is 1.40.\nThis is quite impressive given that we only have 3 months of training data and are testing on one month!\nThe output above also shows regression output from statsmodels!\nThe following features were significant in all 30 models:\n\n\nGold Price\n\n\nEthereum Price\n\n\nPositive Sentiment (Yay!)\n\n\nAverage Transactions Per Block\n\n\nIt is important, yet again, to note that this data does NOT take into account the wild fluctuations in price that bitcoin later experienced. We would need more data to affirm the significance of the above variables.",
"plt.plot(pred)\nplt.plot(true)\nplt.legend([\"Prediction\", 'Actual'], loc='upper left')\nplt.xlabel(\"Prediction #\")\nplt.ylabel(\"Price\")\nplt.title(\"Nested TS - Price Prediction\");\n\nfig, ax = plt.subplots()\nax.scatter(true, pred, edgecolors=(0, 0, 0))\nax.plot([min(true), max(true)], [min(true), max(true)], 'k--', lw=3)\nax.set_xlabel('Actual')\nax.set_ylabel('Predicted')\n\nplotting_dict_1 = {\"eth_price\": [], \"pos_sent\": [], \"neg_sent\": [], \"unique_addr\": [], \"gold_price\": [], \"tot_num_trans\": [], \"mempool_trans\":[], \"hash_rate\": [], \"avg_trans_per_block\":[]}\n\nfor index, sub_list in enumerate(beta_values):\n for tup in sub_list:\n plotting_dict_1[tup[0]].append(tup[1])\n\nplot_key(plotting_dict_1, \"pos_sent\")# here we say the effect of positive sentiment through time!\nplt.title(\"Positive Sentiment Effect on BTC Price\")\nplt.ylabel(\"Beta Value\")\nplt.xlabel(\"Model #\")\nplt.tight_layout()\n\nplot_key(plotting_dict_1, \"gold_price\")\nplt.title(\"Gold Price Effect on BTC Price\")\nplt.ylabel(\"Beta Value\")\nplt.xlabel(\"Model #\")\nplt.tight_layout()\n\nplot_key(plotting_dict_1, \"avg_trans_per_block\")\nplt.title(\"Avg. Trans per Block Effect on BTC Price\")\nplt.ylabel(\"Beta Value\")\nplt.xlabel(\"Model #\")\nplt.tight_layout()",
"Percent change model!\nI will now run the same nested TS model as above; however, I will now make my 'target' variable the percent change in bitcoin price. In order to make this a log-log style model, I will use the percentage change of all features as inputs into the TS models and thus the linear regression!\nSince percent change will 'shift' our dataframe by one row, I omit the first row (which is all NaN's).\nThus, if we were to predict a percent change of $0.008010$ on 28-10-2017, then the predicted price would be the price on 27-10-2017 multiplied by $(1 + 0.008010)$.",
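The back-conversion from predicted percent change to price is a one-liner; here is a tiny sketch with made-up numbers (not the notebook's actual predictions):

```python
import pandas as pd

# Hypothetical predicted percent changes and the corresponding
# last known prices on the day before each prediction.
pred_pct = pd.Series([0.01, -0.02, 0.005])
last_known = pd.Series([100.0, 101.0, 98.98])

# price_tomorrow = price_today * (1 + predicted_percent_change)
forecast = last_known * (1 + pred_pct)
```

This is exactly the multiplication applied further below when the percent-change predictions are turned back into prices for scoring.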
"df_pct = df.copy(deep=True)\ndf_pct = df_pct.pct_change()\ndf_pct.rename(columns={\"mkt_price\": \"percent_change\"}, inplace=True)\ndf_pct = df_pct.iloc[1:, :] #first row is all NaN's\ndf_pct.head()\n\nbeta_values_p, pred_p, true_p = master(df_pct, 30)\n\nr2_score(true_p, pred_p) # this is expected due to the range of values on the y-axis!\n\n#very good!\nplt.plot(pred_p)\nplt.plot(true_p)\nplt.legend([\"Prediction\", 'Actual'], loc='upper left')\nplt.xlabel(\"Prediction #\")\nplt.ylabel(\"Price\")\nplt.title(\"Nested TS - % Change Prediction\");",
"From the above, it seems that our model is not tuned well enough to anticipate the large dip shown above. This is due to a lack of training data. However, while our model might not be the best at predicting percent change, how does it fare when we turn the percent changes back into prices?",
"fig, ax = plt.subplots()\nax.scatter(true_p, pred_p, edgecolors=(0, 0, 0))\nax.plot([min(true_p), max(true_p)], [min(true_p), max(true_p)], 'k--', lw=3)\nax.set_xlabel('Actual')\nax.set_ylabel('Predicted');\n\ndf.set_index('date', inplace=True)\nprices_to_be_multiplied = df.loc[pd.date_range(start=\"2017-01-23\", end=\"2017-02-21\"), \"mkt_price\"]\nforecast_price_lst = []\nfor index, price in enumerate(prices_to_be_multiplied):\n predicted_percent_change = 1+float(pred_p[index])\n forecasted_price = (predicted_percent_change)*price\n forecast_price_lst.append(forecasted_price)\nground_truth_prices = df.loc[pd.date_range(start=\"2017-01-24\", end=\"2017-02-22\"), \"mkt_price\"]\nground_truth_prices = list(ground_truth_prices)\nr2_score(ground_truth_prices, forecast_price_lst)",
"We have an $R^2$ of 0.87!\nThis surpasses the baseline model and the nested TS model!\nThe caveats of the baseline model also apply here; however, it seems that the additional variables have helped us improve slightly with regard to the $R^2$.",
"plt.plot(forecast_price_lst)\nplt.plot(ground_truth_prices)\nplt.legend([\"Prediction\", 'Actual'], loc='upper left')\nplt.xlabel(\"Prediction #\")\nplt.ylabel(\"Price\")\nplt.title(\"Nested TS - % Change Prediction\");"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
anhiga/poliastro
|
docs/source/examples/Propagation using Cowell's formulation.ipynb
|
mit
|
[
"Cowell's formulation\nFor cases where we only study the gravitational forces, solving Kepler's equation is enough to propagate the orbit forward in time. However, when we want to take into account perturbations that deviate from Keplerian forces, we need a more complex method to solve our initial value problem: one of them is Cowell's formulation.\nIn this formulation we write the two-body differential equation separating the Keplerian and the perturbation accelerations:\n$$\ddot{\mathbb{r}} = -\frac{\mu}{|\mathbb{r}|^3} \mathbb{r} + \mathbb{a}_d$$\n<div class=\"alert alert-info\">For an in-depth exploration of this topic, still to be integrated in poliastro, check out https://github.com/Juanlu001/pfc-uc3m</div>\n\nFirst example\nLet's set up a very simple example with constant acceleration to visualize the effects on the orbit.",
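The equation above translates almost directly into code. Here is a minimal sketch of the right-hand side of Cowell's formulation (this is an illustration, not poliastro's func_twobody itself; the gravitational parameter and initial state are assumed values):

```python
import numpy as np

def two_body_perturbed(t, state, k, a_d):
    """du/dt for Cowell's formulation: Keplerian acceleration plus a_d."""
    r, v = state[:3], state[3:]
    r_norm = np.linalg.norm(r)
    accel = -k * r / r_norm**3 + a_d(t, state, k)
    return np.concatenate([v, accel])

# With a zero perturbation we recover the pure two-body problem.
no_pert = lambda t, u, k: np.zeros(3)
k_earth = 398600.4418                       # km^3 / s^2 (assumed value)
state0 = np.array([7000.0, 0.0, 0.0,        # r in km
                   0.0, 7.546, 0.0])        # v in km/s
deriv = two_body_perturbed(0.0, state0, k_earth, no_pert)
```

Any standard ODE integrator can then march this state vector forward in time, which is exactly what the cells below do with scipy.integrate.ode.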
"import numpy as np\nfrom astropy import units as u\n\nfrom matplotlib import ticker\nfrom matplotlib import pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nplt.ion()\n\nfrom scipy.integrate import ode\n\nfrom poliastro.bodies import Earth\nfrom poliastro.twobody import Orbit\nfrom poliastro.examples import iss\n\nfrom poliastro.twobody.propagation import func_twobody\n\nfrom poliastro.util import norm\n\nfrom ipywidgets.widgets import interact, fixed\n\ndef state_to_vector(ss):\n r, v = ss.rv()\n x, y, z = r.to(u.km).value\n vx, vy, vz = v.to(u.km / u.s).value\n return np.array([x, y, z, vx, vy, vz])\n\nu0 = state_to_vector(iss)\nu0\n\nt = np.linspace(0, 10 * iss.period, 500).to(u.s).value\nt[:10]\n\ndt = t[1] - t[0]\ndt\n\nk = Earth.k.to(u.km**3 / u.s**2).value",
"To provide an acceleration depending on an extra parameter, we can use closures like this one:",
"def constant_accel_factory(accel):\n def constant_accel(t0, u, k):\n v = u[3:]\n norm_v = (v[0]**2 + v[1]**2 + v[2]**2)**.5\n return accel * v / norm_v\n\n return constant_accel\n\nconstant_accel_factory(accel=1e-5)(t[0], u0, k)\n\nhelp(func_twobody)",
"Now we set up the integrator manually using scipy.integrate.ode. We cannot provide the Jacobian since we don't know the form of the acceleration in advance.",
"res = np.zeros((t.size, 6))\nres[0] = u0\nii = 1\n\naccel = 1e-5\n\nrr = ode(func_twobody).set_integrator('dop853') # All parameters by default\nrr.set_initial_value(u0, t[0])\nrr.set_f_params(k, constant_accel_factory(accel))\n\nwhile rr.successful() and rr.t + dt < t[-1]:\n rr.integrate(rr.t + dt)\n res[ii] = rr.y\n ii += 1\n\nres[:5]",
"And we plot the results:",
"fig = plt.figure(figsize=(10, 10))\nax = fig.add_subplot(111, projection='3d')\n\nax.plot(*res[:, :3].T)\n\nax.view_init(14, 70)",
"Interactivity\nThis is the last time we used scipy.integrate.ode directly. Instead, we can now import a convenient function from poliastro:",
"from poliastro.twobody.propagation import cowell\n\ndef plot_iss(thrust=0.1, mass=2000.):\n r0, v0 = iss.rv()\n k = iss.attractor.k\n t = np.linspace(0, 10 * iss.period, 500).to(u.s).value\n u0 = state_to_vector(iss)\n\n res = np.zeros((t.size, 6))\n res[0] = u0\n\n accel = thrust / mass\n\n # Perform the whole integration\n r0 = r0.to(u.km).value\n v0 = v0.to(u.km / u.s).value\n k = k.to(u.km**3 / u.s**2).value\n ad = constant_accel_factory(accel)\n r, v = r0, v0\n for ii in range(1, len(t)):\n r, v = cowell(k, r, v, t[ii] - t[ii - 1], ad=ad)\n x, y, z = r\n vx, vy, vz = v\n res[ii] = [x, y, z, vx, vy, vz]\n\n fig = plt.figure(figsize=(8, 6))\n ax = fig.add_subplot(111, projection='3d')\n\n ax.set_xlim(-20e3, 20e3)\n ax.set_ylim(-20e3, 20e3)\n ax.set_zlim(-20e3, 20e3)\n\n ax.view_init(14, 70)\n\n return ax.plot(*res[:, :3].T)\n\ninteract(plot_iss, thrust=(0.0, 0.2, 0.001), mass=fixed(2000.))",
"Error checking",
"rtol = 1e-13\nfull_periods = 2\n\nu0 = state_to_vector(iss)\ntf = ((2 * full_periods + 1) * iss.period / 2).to(u.s).value\n\nu0, tf\n\niss_f_kep = iss.propagate(tf * u.s, rtol=1e-18)\n\nr0, v0 = iss.rv()\nr, v = cowell(k, r0.to(u.km).value, v0.to(u.km / u.s).value, tf, rtol=rtol)\n\niss_f_num = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s, iss.epoch + tf * u.s)\n\niss_f_num.r, iss_f_kep.r\n\nassert np.allclose(iss_f_num.r, iss_f_kep.r, rtol=rtol, atol=1e-08 * u.km)\nassert np.allclose(iss_f_num.v, iss_f_kep.v, rtol=rtol, atol=1e-08 * u.km / u.s)\n\n#assert np.allclose(iss_f_num.a, iss_f_kep.a, rtol=rtol, atol=1e-08 * u.km)\n#assert np.allclose(iss_f_num.ecc, iss_f_kep.ecc, rtol=rtol)\n#assert np.allclose(iss_f_num.inc, iss_f_kep.inc, rtol=rtol, atol=1e-08 * u.rad)\n#assert np.allclose(iss_f_num.raan, iss_f_kep.raan, rtol=rtol, atol=1e-08 * u.rad)\n#assert np.allclose(iss_f_num.argp, iss_f_kep.argp, rtol=rtol, atol=1e-08 * u.rad)\n#assert np.allclose(iss_f_num.nu, iss_f_kep.nu, rtol=rtol, atol=1e-08 * u.rad)",
"Too bad I cannot access the internal state of the solver. I will have to do it in a black-box way.",
"u0 = state_to_vector(iss)\nfull_periods = 4\n\ntof_vector = np.linspace(0, ((2 * full_periods + 1) * iss.period / 2).to(u.s).value, num=100)\nrtol_vector = np.logspace(-3, -12, num=30)\n\nres_array = np.zeros((rtol_vector.size, tof_vector.size))\nfor jj, tof in enumerate(tof_vector):\n rf, vf = iss.propagate(tof * u.s, rtol=1e-12).rv()\n for ii, rtol in enumerate(rtol_vector):\n rr = ode(func_twobody).set_integrator('dop853', rtol=rtol, nsteps=1000)\n rr.set_initial_value(u0, 0.0)\n rr.set_f_params(k, constant_accel_factory(0.0)) # Zero acceleration\n\n rr.integrate(rr.t + tof)\n\n if rr.successful():\n uf = rr.y\n\n r, v = uf[:3] * u.km, uf[3:] * u.km / u.s\n\n res = max(norm((r - rf) / rf), norm((v - vf) / vf))\n else:\n res = np.nan\n\n res_array[ii, jj] = res\n\nfig, ax = plt.subplots(figsize=(16, 6))\n\nxx, yy = np.meshgrid(tof_vector, rtol_vector)\n\ncs = ax.contourf(xx, yy, res_array, levels=np.logspace(-12, -1, num=12),\n locator=ticker.LogLocator(), cmap=plt.cm.Spectral_r)\nfig.colorbar(cs)\n\nfor nn in range(full_periods + 1):\n lf = ax.axvline(nn * iss.period.to(u.s).value, color='k', ls='-')\n lh = ax.axvline((2 * nn + 1) * iss.period.to(u.s).value / 2, color='k', ls='--')\n\nax.set_yscale('log')\n\nax.set_xlabel(\"Time of flight (s)\")\nax.set_ylabel(\"Relative tolerance\")\n\nax.set_title(\"Maximum relative difference\")\n\nax.legend((lf, lh), (\"Full period\", \"Half period\"))",
"Numerical validation\nAccording to [Edelbaum, 1961], a coplanar, semimajor-axis change with tangent thrust is defined by:\n$$\frac{\operatorname{d}\!a}{a_0} = 2 \frac{F}{m V_0}\operatorname{d}\!t, \qquad \frac{\Delta{V}}{V_0} = \frac{1}{2} \frac{\Delta{a}}{a_0}$$\nSo let's create a new circular orbit and perform the necessary checks, assuming constant mass and thrust (i.e. constant acceleration):",
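The two relations are consistent with each other: substituting the first into the second gives $\Delta V = (F/m)\,\Delta t$ for constant acceleration. A quick numeric check with assumed orbit values (not the orbit used below):

```python
# Edelbaum sanity check with assumed values, pure arithmetic.
a0 = 6878.0      # km, initial semimajor axis
v0 = 7.6127      # km/s, approximate circular speed at a0
accel = 1e-7     # km/s^2, constant tangential acceleration F/m
dt = 1.0e5       # s, thrust duration

da = 2 * accel / v0 * dt * a0     # predicted semimajor-axis change
dv = 0.5 * (da / a0) * v0         # implied delta-V

# dv reduces algebraically to accel * dt
```

This is the same identity the cells below verify numerically against the Cowell propagation.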
"ss = Orbit.circular(Earth, 500 * u.km)\ntof = 20 * ss.period\n\nad = constant_accel_factory(1e-7)\n\nr0, v0 = ss.rv()\nr, v = cowell(k, r0.to(u.km).value, v0.to(u.km / u.s).value,\n tof.to(u.s).value, ad=ad)\n\nss_final = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s, ss.epoch + tof)\n\nda_a0 = (ss_final.a - ss.a) / ss.a\nda_a0\n\ndv_v0 = abs(norm(ss_final.v) - norm(ss.v)) / norm(ss.v)\n2 * dv_v0\n\nnp.allclose(da_a0, 2 * dv_v0, rtol=1e-2)\n\ndv = abs(norm(ss_final.v) - norm(ss.v))\ndv\n\naccel_dt = 1e-7 * u.km / u.s**2 * tof.to(u.s)\naccel_dt\n\nnp.allclose(dv, accel_dt, rtol=1e-2, atol=1e-8 * u.km / u.s)",
"This means we successfully validated the model against an extremely simple orbit transfer with approximate analytical solution. Notice that the final eccentricity, as originally noticed by Edelbaum, is nonzero:",
"ss_final.ecc",
"References\n\n[Edelbaum, 1961] \"Propulsion requirements for controllable satellites\""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
turbomanage/training-data-analyst
|
courses/machine_learning/deepdive2/text_classification/labs/automl_for_text_classification.ipynb
|
apache-2.0
|
[
"AutoML for Text Classification\nLearning Objectives\n\nLearn how to create a text classification dataset for AutoML using BigQuery\nLearn how to train AutoML to build a text classification model\nLearn how to evaluate a model trained with AutoML\nLearn how to predict on new test data with AutoML\n\nIntroduction\nIn this notebook, we will use AutoML for Text Classification to train a text model to recognize the source of article titles: New York Times, TechCrunch or GitHub. \nIn a first step, we will query a public dataset on BigQuery taken from hacker news ( it is an aggregator that displays tech related headlines from various sources) to create our training set.\nIn a second step, use the AutoML UI to upload our dataset, train a text model on it, and evaluate the model we have just trained.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.",
"import os\n\nfrom google.cloud import bigquery\nimport pandas as pd\n\n%load_ext google.cloud.bigquery",
"Replace the variable values in the cell below:",
"PROJECT = \"cloud-training-demos\" # Replace with your PROJECT\nBUCKET = PROJECT # defaults to PROJECT\nREGION = \"us-central1\" # Replace with your REGION\nSEED = 0\n\n%%bash\ngsutil mb gs://$BUCKET",
"Create a Dataset from BigQuery\nHacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015. \nLab Task 1a:\nComplete the query below to create a sample dataset containing the url, title, and score of articles from the public dataset bigquery-public-data.hacker_news.stories. Use a WHERE clause to restrict to only those articles with\n* title length greater than 10 characters\n* score greater than 10\n* url length greater than 0 characters",
"%%bigquery --project $PROJECT\n\nSELECT\n # TODO: Your code goes here.\nFROM\n # TODO: Your code goes here.\nWHERE\n # TODO: Your code goes here.\n # TODO: Your code goes here.\n # TODO: Your code goes here.\nLIMIT 10",
"Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>\nLab task 1b:\nComplete the query below to count the number of titles within each 'source' category. Note that to grab the 'source' of the article we use the a regex command on the url of the article. To count the number of articles you'll use a GROUP BY in sql, and we'll also restrict our attention to only those articles whose title has greater than 10 characters.",
"%%bigquery --project $PROJECT\n\nSELECT\n ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,\n # TODO: Your code goes here.\nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')\n # TODO: Your code goes here.\nGROUP BY\n # TODO: Your code goes here.\nORDER BY num_articles DESC\n LIMIT 100",
"Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.",
"regex = '.*://(.[^/]+)/'\n\n\nsub_query = \"\"\"\nSELECT\n title,\n ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source\n \nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')\n AND LENGTH(title) > 10\n\"\"\".format(regex)\n\n\nquery = \"\"\"\nSELECT \n LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,\n source\nFROM\n ({sub_query})\nWHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')\n\"\"\".format(sub_query=sub_query)\n\nprint(query)",
"For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.",
"bq = bigquery.Client(project=PROJECT)\ntitle_dataset = bq.query(query).to_dataframe()\ntitle_dataset.head()",
"AutoML for text classification requires that\n* the dataset be in csv form with \n* the first column being the texts to classify or a GCS path to the text \n* the last colum to be the text labels\nThe dataset we pulled from BiqQuery satisfies these requirements.",
"print(\"The full dataset contains {n} titles\".format(n=len(title_dataset)))",
"Let's make sure we have roughly the same number of labels for each of our three labels:",
"title_dataset.source.value_counts()",
"Finally we will save our data, which is currently in-memory, to disk.\nWe will create a csv file containing the full dataset and another containing only 1000 articles for development.\nNote: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.",
"DATADIR = './data/'\n\nif not os.path.exists(DATADIR):\n os.makedirs(DATADIR)\n\nFULL_DATASET_NAME = 'titles_full.csv'\nFULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)\n\n# Let's shuffle the data before writing it to disk.\ntitle_dataset = title_dataset.sample(n=len(title_dataset))\n\ntitle_dataset.to_csv(\n FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')",
"Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).\nLab Task 1c:\nUse .sample to create a sample dataset of 1,000 articles from the full dataset. Use .value_counts to see how many articles are contained in each of the three source categories?",
"sample_title_dataset = # TODO: Your code goes here.\n# TODO: Your code goes here.",
"Let's write the sample datatset to disk.",
"SAMPLE_DATASET_NAME = 'titles_sample.csv'\nSAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)\n\nsample_title_dataset.to_csv(\n SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')\n\nsample_title_dataset.head()\n\n%%bash\ngsutil cp data/titles_sample.csv gs://$BUCKET",
"Train a Model with AutoML for Text Classification\nLab Task 2:\nComplete Steps 1-3 below to train a text classification model using AutoML.\nStep 1: Launch AutoML\nGo the GCP console, and click on the Natural Language service in the console menu.\nClick on 'ENABLE API' if the API is not enable.\nThen click on \"Get started\" in the \"AutoML Text & Documentation Classification\" tile.\n\nStep 2: Create a Dataset\nSelect \"New Dataset\"\n\nThen\n\nGive the new dataset a name\nChoose \"Multi-label classification\"\nHit \"Create Dataset\"\n\n\nThen\n\nIn 'Select files to import' section, choose 'Select a CSV file on Cloud Storage'.\nClick on 'Browse' and upload titles_sample.csv we created above.\nHit \"Import\"\n\n\nThis step may take a while. You should see the following screen:\n\nStep 3: Train a AutoML text model\nWhen the dataset has been imported, you can inspect it in the AutoML UI, and get statistics about the label distribution. If you are happy with what you see, proceed to train a text model from this dataset:\n\nThen\n\nSwitch to 'Train' tab.\nClick on 'START TRAINING' and confirm again by clicking on 'START TRAINING'.\n\nThe training step may last a few hours, while AutoML is searching for the best model to crush this dataset. \nYou should see the following screen:\n\nLab Task 3:\nComplete Step 4 below to evaluate the AutoML model.\nStep 4: Evaluate the model\nOnce the model is trained, click on \"Evaluate\" to undertand how the model performed. You'll be able to see the averall presicion and recall, as well as drill down to preformances at the individual label level.\nAutoML UI will also show you examples where the model made a mistake for each of the labels.\n\nLab Task 4:\nComplete Step 5 below to call prediction on your AutoML text classification model.\nStep 5: Predict with the trained AutoML model\nNow you can test your model directly by entering new text in the UI and having AutoML predicts the source of your snippet:\n\nCopyright 2019 Google Inc. 
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gaufung/Data_Analytics_Learning_Note
|
DesignPattern/CommandPattern.ipynb
|
mit
|
[
"命令模式(Command Pattern) \n1 代码\n又是一个点餐系统。不过这次的点餐系统是个饭店的点餐系统。饭店的点餐系统有什么不同嘛?大伙想想看,在大多数饭店中,当服务员已经接到顾客的点单,录入到系统中后,根据不同的菜品,会有不同的后台反应。比如,饭店有凉菜间、热菜间、主食间,那当服务员将菜品录入到系统中后,凉菜间会打印出顾客所点的凉菜条目,热菜间会打印出顾客所点的热菜条目,主食间会打印出主食条目。那这个系统的后台模式该如何设计?当然,直接在场景代码中加if…else…语句判断是个方法,可这样做又一次加重了系统耦合,违反了单一职责原则,遇到系统需求变动时,又会轻易违反开闭原则。所以,我们需要重新组织一下结构。\n可以将该系统设计成前台服务员系统和后台系统,后台系统进一步细分成主食子系统,凉菜子系统,热菜子系统。后台三个子系统设计如下:",
"class backSys():\n def cook(self,dish):\n pass\nclass mainFoodSys(backSys):\n def cook(self,dish):\n print (\"MAINFOOD:Cook %s\"%dish)\nclass coolDishSys(backSys):\n def cook(self,dish):\n print (\"COOLDISH:Cook %s\"%dish)\nclass hotDishSys(backSys):\n def cook(self,dish):\n print (\"HOTDISH:Cook %s\"%dish)",
"前台服务员系统与后台系统的交互,我们可以通过命令的模式来实现,服务员将顾客的点单内容封装成命令,直接对后台下达命令,后台完成命令要求的事,即可。前台系统构建如下:",
"class waiterSys():\n def __init__(self):\n self.menu_map=dict()\n self.commandList=[]\n def setOrder(self,command):\n print (\"WAITER:Add dish\")\n self.commandList.append(command)\n\n def cancelOrder(self,command):\n print (\"WAITER:Cancel order...\")\n self.commandList.remove(command)\n\n def notify(self):\n print (\"WAITER:Nofify...\")\n for command in self.commandList:\n command.execute()",
"前台系统中的notify接口直接调用命令中的execute接口,执行命令。命令类构建如下:",
"class Command():\n receiver = None\n def __init__(self, receiver):\n self.receiver = receiver\n def execute(self):\n pass\nclass foodCommand(Command):\n dish=\"\"\n def __init__(self,receiver,dish):\n self.receiver=receiver\n self.dish=dish\n def execute(self):\n self.receiver.cook(self.dish)\n\nclass mainFoodCommand(foodCommand):\n pass\nclass coolDishCommand(foodCommand):\n pass\nclass hotDishCommand(foodCommand):\n pass",
"Command类是个比较通用的类,foodCommand类是本例中涉及的类,相比于Command类进行了一定的改造。由于后台系统中的执行函数都是cook,因而在foodCommand类中直接将execute接口实现,如果后台系统执行函数不同,需要在三个子命令系统中实现execute接口。这样,后台三个命令类就可以直接继承,不用进行修改了。(这里子系统没有变动,可以将三个子系统的命令废弃不用,直接用foodCommand吗?当然可以,各有利蔽。请读者结合自身开发经验,进行思考相对于自己业务场景的使用,哪种方式更好。)\n为使场景业务精简一些,我们再加一个菜单类来辅助业务,菜单类在本例中直接写死。",
"class menuAll:\n menu_map=dict()\n def loadMenu(self):#加载菜单,这里直接写死\n self.menu_map[\"hot\"] = [\"Yu-Shiang Shredded Pork\", \"Sauteed Tofu, Home Style\", \"Sauteed Snow Peas\"]\n self.menu_map[\"cool\"] = [\"Cucumber\", \"Preserved egg\"]\n self.menu_map[\"main\"] = [\"Rice\", \"Pie\"]\n def isHot(self,dish):\n if dish in self.menu_map[\"hot\"]:\n return True\n return False\n def isCool(self,dish):\n if dish in self.menu_map[\"cool\"]:\n return True\n return False\n def isMain(self,dish):\n if dish in self.menu_map[\"main\"]:\n return True\n return False\n\ndish_list=[\"Yu-Shiang Shredded Pork\",\"Sauteed Tofu, Home Style\",\"Cucumber\",\"Rice\"]#顾客点的菜\nwaiter_sys=waiterSys()\nmain_food_sys=mainFoodSys()\ncool_dish_sys=coolDishSys()\nhot_dish_sys=hotDishSys()\nmenu=menuAll()\nmenu.loadMenu()\nfor dish in dish_list:\n if menu.isCool(dish):\n cmd=coolDishCommand(cool_dish_sys,dish)\n elif menu.isHot(dish):\n cmd=hotDishCommand(hot_dish_sys,dish)\n elif menu.isMain(dish):\n cmd=mainFoodCommand(main_food_sys,dish)\n else:\n continue\n waiter_sys.setOrder(cmd)\nwaiter_sys.notify()",
"2 Discriptions\n命令模式的定义为:将一个请求封装成一个对象,从而可以使用不同的请求将客户端参数化,对请求排队或者记录请求日志,可以提供命令的撤销和恢复功能。命令模式中通常涉及三类对象的抽象:Receiver,Command,Invoker(本例中的waiterSys)。\n只有一个Invoker的命令模式也可以抽象成一个类似的“星形网络”,但与之前介绍的中介者模式不同,单纯的命令模式更像是一个辐射状的结构,由Invoker直接对Receiver传递命令,而一般不反向传递,中介者模式“星形网络”的中心,是个协调者,抽象结节间的信息流全部或者部分是双向的。\n另外,命令模式的定义中提到了“撤销和恢复功能”,也给了各位开发人员一个命令模式使用过程中的建议:各个Receiver中可以设计一个回滚接口,支持命令的“撤销”。\n3 Advantages\n\n低耦合:调用者和接收者之间没有什么直接关系,二者通过命令中的execute接口联系;\n扩展性好:新命令很容易加入,也很容易拼出“组合命令”。\n\n4 Usages\n触发-反馈机制的系统,都可以使用命令模式思想。如基于管道结构的命令系统(如SHELL),可以直接套用命令模式;此外,GUI系统中的操作反馈(如点击、键入等),也可以使用命令模式思想。\n5 Disadvantages\n如果业务场景中命令比较多,那么对应命令类和命令对象的数量也会增加,这样系统会膨胀得很大。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ealogar/curso-python
|
advanced/5_decorators.ipynb
|
apache-2.0
|
[
"# Exercise:\n# Evaluate with time the execution time of factorial function\nfrom time import sleep\nimport simcache\n\n\ndef factorial(x):\n sleep(0.1) # This sleep can not be removed!!\n if x < 2:\n return 1\n return x * factorial(x - 1)\n\nimport time\nt_start = time.time()\nprint factorial(100)\nt_end = time.time()\nprint \"Ellapsed time: {}\".format(t_end-t_start)\n\n# Exercise:\n# Evaluate with timeit the execution time of new factorial function\ndef factorial(x):\n sleep(0.1) # This sleep can not be removed!!\n if x < 2:\n return 1\n res = simcache.get_key(x - 1)\n if not res:\n res = factorial(x - 1)\n simcache.set_key(x - 1, res)\n return x * res\n\n# How to evaluate the time with timeit: this module is __main__\nprint __name__\n#import timeit\n#print timeit.timeit(stmt='factorial(20)',\n# setup='from __main__ import factorial',\n# number=10)\n\n# Exercise: check fibonaccci execution time\ndef fibonacci(n):\n \"\"\"Return the nth fibonacci number\"\"\"\n if n < 2:\n return n\n return fibonacci(n - 1) + fibonacci(n - 2)\n\n# Use this cell to measure",
"Remember DRY: Don't Repeat Yourself!\n\nLet's try to apply memoization in a generic way to not modified functions\nLet's do a bit of magic to apply memoization easily",
"real_fibonacci = fibonacci\ndef fibonacci(n):\n res = simcache.get_key(n)\n if not res:\n res = real_fibonacci(n)\n simcache.set_key(n, res)\n return res\n\nt1_start = time.time()\nprint fibonacci(30)\nt1_elapsed = time.time() - t1_start\nprint \"fibonacci time {}\".format(t1_elapsed)\nt1_start = time.time()\nprint real_fibonacci(30)\nt1_elapsed = time.time() - t1_start\nprint \"fibonacci_real time {}\".format(t1_elapsed)",
"Let's explain the trick in slow motion",
"simcache.clear_keys() # Let's clean the cache\n# Let's define the real fibonacci computation function\ndef fibonacci(n):\n if n < 2:\n return n\n print \"Real fibonacci func, calling recursively to\", fibonacci, n\n # Once the trick is done globals will contain a different function binded to 'fibonacci'\n return fibonacci(n - 1) + fibonacci(n - 2)\n\nprint fibonacci\n\nprint fibonacci(5)\n\n# Call graph of fibonacci for n=5\n#\n# __ 4 ---- 3 ----------- 2 ---- 1\n# 5 __/ \\__ 2 ---- 1 \\__ 1 \\__ 0\n# | \\__ 0\n# \\__ 3 ---- 2 ---- 1\n# \\__ 1 \\__ 0\n#\n\n# Let's save a reference to the real function\nreal_fibonacci = fibonacci\n\nprint real_fibonacci # Points to real fibonacci calculation function\n\n# Let's create a new function which will use memoization\ndef memoized_fibonacci(n):\n # Try to retrieve value from cache\n res = simcache.get_key(n)\n if not res:\n # If failed, call real fibonacci func\n print \"Memoized fibonacci func, proceeding to call real func\",\\\n real_fibonacci, n\n res = real_fibonacci(n)\n # Store real result\n simcache.set_key(n, res)\n return res\n\nprint memoized_fibonacci # This is the new function with memoization\n\n# Let's replace the real function by the memoized version in module globals\nfibonacci = memoized_fibonacci\n\nprint fibonacci(5) # Let's see what happens now\n\nprint fibonacci(5) # Let's try again\n\nprint fibonacci(10) # Let's try with a bigger number",
"We have applied our first hand-crafted decorator\nHow would you memoize any function, not just fibonacci?\n\nDo you remember functions are first class objects? They can be used as arguments or return values...\nDo you remember we can declare functions inside other functions?\nLet's apply these concepts to find a generic method to use memoization",
"def fibonacci(n):\n if n < 2:\n return n\n return fibonacci(n - 1) + fibonacci(n - 2)\n\ndef memoize_any_function(func_to_memoize):\n \"\"\"Function to return a wrapped version of input function using memoization\n \"\"\"\n print \"Called memoize_any_function\"\n\n def memoized_version_of_func(n):\n \"\"\"Wrapper using memoization\n \"\"\"\n res = simcache.get_key(n)\n if not res:\n res = func_to_memoize(n) # Call the real function\n simcache.set_key(n, res)\n return res\n return memoized_version_of_func\n\nfibonacci = memoize_any_function(fibonacci)\n\nprint fibonacci(35)\n\n# Much nice if we do:\n@memoize_any_function # This is the simplest decorator syntax\ndef fibonacci(n):\n if n < 2:\n return n\n return fibonacci(n - 1) + fibonacci(n - 2)\n\nprint fibonacci(150)",
"Python decorators:\n\nA callable which receives a funtion as only argument and returns another function. Typically the resulting function wrapps the first function executing some code before and/or after the first is called.\nUsed with the at @ symbol before a function or method\nDon't forget to deal with 'self' as first argument of methods\nThe decoration is done at import / evaluation time",
"def timing_decorator(decorated_func):\n print \"Called timing_decorator\"\n\n def wrapper(*args): # Use variable arguments to be compatible with any function\n \"\"\"Wrapper for time executions\n \"\"\"\n start = time.time()\n res = decorated_func(*args) # Call the real function\n elapsed = time.time() - start\n print \"Execution of '{0}{1}' took {2} seconds\".format(decorated_func.__name__, args, elapsed)\n return res\n return wrapper\n\n@timing_decorator\n@memoize_any_function # We can accumulate decorators\ndef fibonacci(n):\n if n < 2:\n return n\n return fibonacci(n - 1) + fibonacci(n - 2)\n\nsimcache.clear_keys()\nprint fibonacci(5)",
"It is possible to accumulate decorators\nOrder matters, they are run in strict top - down order",
"print fibonacci\n# Why is the wrapper? Can we maintain the original name ?\n\nimport functools\ndef memoize_any_function(decorated_func):\n \"\"\"Function to return a wrapped version of input function using memoization\n \"\"\"\n @functools.wraps(decorated_func) # Use functools.wraps to smooth the decoration\n def memoized_version_of_f(*args):\n \"\"\"Wrapper using memoization\n \"\"\"\n res = simcache.get_key(args)\n if not res:\n res = decorated_func(*args) # Call the real function\n simcache.set_key(args, res)\n return res\n return memoized_version_of_f\n\ndef timing_decorator(decorated_func):\n @functools.wraps(decorated_func)\n def wrapper(*args): # Use variable arguments to be compatible with any function\n \"\"\"Wrapper for time executions\n \"\"\"\n start = time.time()\n res = decorated_func(*args) # Call the real function\n elapsed = time.time() - start\n print \"Execution of '{0}{1}' took {2} seconds\".format(decorated_func.__name__, args, elapsed)\n return res\n return wrapper\n\n@timing_decorator\n@memoize_any_function # We can accumulate decorators, and they are run in strict top-down order\ndef fibonacci(n):\n if n < 2:\n return n\n return fibonacci(n - 1) + fibonacci(n - 2)\n\nprint fibonacci(100)",
"functools.wraps copies name, module and docstring of wrapped function to its wrapper\nUse variable number of positional and keyword arguments for higher compatibility\n\nPython decorators:\n\n\nA callable which receives a funtion as only argument and returns another function. Typically the resulting function wrapps the first function executing some code before and/or after the first is called.\n\n\nNew in Python 2.4, they are the pythonic implementation of Decorator Pattern\n\n\nUsed with the at @ symbol before a function or method\n\nDon't forget to deal with 'self' as first argument of methods\n\n\nThe decoration is done at import / evaluation time\n\nIt is possible to accumulate decorators\n\nOrder matters, they are run in strict top - down order\n\n\n\nfunctools.wraps copies name, module and docstring of wrapped function to its wrapper\n\n\nUse variable number of positional and keyword arguments for higher compatibility\n\n\nDecorators are executed each time the decorated function is called\n\nPotential performance loss\n\n\n\nTypical uses:\n\nMemoization\nTiming, profiling, logging, stats...\nOverriding arguments, pre / post conditions\nRetries\nException handling"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Olsthoorn/IHE-python-course-2017
|
exercises/Mar14/shapesGetPrecDataForYourModelFromChirps.ipynb
|
gpl-2.0
|
[
"<figure>\n <IMG SRC=\"../../logo/logo.png\" WIDTH=250 ALIGN=\"right\">\n</figure>\n\nIHE Python course, 2017\nCHIRPS data for precipitation (worldwide between 50S and 50N latitude)\nNever again find yourself without appropriate precipitation data\nThisi is what we've leaned from the presentation by Tim Hessels on March 14.\nTo put this in practice, we'll download precipitaion data for a groundwater model in Morocco in the Tafilat area near Erfoud (find it in GoogleMaps.",
"import numpy as np\nfrom pprint import pprint \n\n\ndef prar(A, ncol=8, maxsize=1000):\n \"\"\"prints 2D arrays the Matlab 2 (more readable)\"\"\"\n if A.size>1000: # don't try to print a million values, or your pc will hang. \n print(A)\n return\n n = A.shape[1]\n # print columns in formatted chunks that fit on on line\n for i, Asub in enumerate(np.split(A, range(ncol, n, ncol), axis=1)):\n if Asub.size == 0: Asub=A\n print(\"columns[{}:{}]\".format(i * ncol, i * ncol +Asub.shape[1]))\n for L in Asub:\n print((\" {:10.5g}\" * len(L)).format(*L))\n print()\n",
"CHIRPS (Climate Hazards group Infrared Precipitation with Stations\nDownload the data files for the desired periods for the whole of Africa from CHIRPS. You can do this with FileZilla a free app for this purpose.\nFor access to the CHIRPTS data see\nhttp://chg.ucsb.edu/data/chirps/\nNext to tiff files one can find png (images) on the sight that can be directly viewed in your browser or imported into any application, whithout any processing. But of course a pictures does not have the original data.\nglob (unix-like file handling for python)\nAssuming that you have downloaded some files, use glob to get a list of them on your computer.",
"import glob \n\nchirps_files = glob.glob('../**/*/*.tif')\npprint(chirps_files)\nfname = chirps_files[0]",
"gdal (working with tiff files among others, GIS)\nimport gdal and check if the file is present by opening it.",
"import gdal\n\ntry: # is the file present?\n g = gdal.Open(fname)\nexcept:\n exception(FileExistsError(\"Can't open file <{}>\".fname))",
"Get some basic information from the tiff file\nOk, now with g the successfully opended CHIRPS file, get some basic information from that file.",
"print(\"\\nBasic information on file <{}>\\n\".format(fname))\nprint(\"Driver: \", g.GetDriver().ShortName, '/', g.GetDriver().LongName)\nprint(\"Size : \", g.RasterXSize, 'x', g.RasterYSize, 'x', g.RasterCount)\nprint(\"Projection :\\n\", g.GetProjection())\nprint()\nprint(\"\\nGeotransform information:\\n\")\ngt = g.GetGeoTransform()\nprint(\"Geotransform :\", gt)\n\n# assign the individual fields to more recognizable variables\nxUL, dx, xRot, yUL, yRot, dy = gt\n\n# get the size of the data and the number of bands in the tiff file (is 1)\nNx, Ny, Nband = g.RasterXSize, g.RasterYSize, g.RasterCount\n\n# show what we've got:\nprint('Nx = {}\\nNy = {}\\nxUL = {}\\nyUL = {}\\ndx = {}\\ndy = {} <--- Negative !'.format(Nx, Ny, xUL, yUL, dx, dy))",
"This projection says that it's WGS1984 (same as GoogleEarth and GoogleMaps. Therefore it is in longitude (x) and latitute (y) coordinates. This allows to immediately compute the WGS coordinates (lat/lon) from it, for instance for each pixel/cell center. It's also straightforward to compute the bounding box of this array and plot it in QGIS for instance:",
"# Bounding box around the tiff data set\ntbb = [xUL, yUL + Ny * dy, xUL + Nx * dx, yUL]\nprint(\"Boudning box of data in tiff file :\", tbb)\n\n# Generate coordinates for tiff pixel centers\nxm = 0.5 * dx + np.linspace(xUL, xUL + Nx * dx, Nx) \nym = 0.5 * dy + np.linspace(yUL, yUL + Ny * dy, Ny)",
"Generate a shapefile with a polyline that represents the model boundary\nThe contour coordinates of the Erfoud/Tafilalet groundwater model happen to be the file ErfoudModelContour.kml. Kml files come from GoogleEarth and are in WGS84 coordinates. It was obtained by digitizing the line directly in Google Earth.\nWe extract the coordinates from that HTML file and put them in a list of lists, the form needed to inject the coordinates into the shapefile.\nExtraction can be done in several ways, for instance with one of the HTML parsers that are available on the internet. However if you look at this file in the internet, it's clear that we may do this in a simple way. Read the file line by line until we find the word \"coordinates\". Then read the next line, which contains all the coordinates. Then clean that line form tabs, put a comma between each tripple of coordinate values and turn it into a list of lists with each list the x, y an z values of one vertice of the model boundary:",
"with open('ErfoudModelContour.kml', 'r') as f:\n for s in f: # read lines from this file\n if s.find('coord') > 0: # word \"coord\" bound?\n \n # Then the next line has all coordinates. Read it and clean up.\n pnts_as_str = f.readline().replace(' ',',').replace('\\t','').split(',')\n \n # Use a comprehension to put these coordinates in a list, where list[i] has\n # a sublist of the three x, y and z coordinates.\n points = [ [float(p) for p in p3]\n for p3 in [pnts_as_str[i:i+3]\n for i in range(0, len(pnts_as_str), 3)] ]\n break;\n\n# The points\npnts = np.array(points)\n\n# The bounding box\nmbb = [np.min(pnts[:,0]), np.min(pnts[:,1]), np.max(pnts[:,0]), np.max(pnts[:,1])]\n\n#pprint(points)\n#print(mbb)",
"Generate the shapefile holding a 3 polygons a) The bonding box around the data in the tiff file, b) the bounding box of around the model contour. 3) the model contour",
"import shapefile as shp\n\ntb = lambda indices: [tbb[i] for i in indices] # convenience for selecting from tiff bounding box\nmb = lambda indices: [mbb[i] for i in indices] # same for selecting from model bounding box\n\n# open a shape file writer objetc\nw = shp.Writer(shapeType=shp.POLYGON)\n\n# add the three polylines to w.shapes\n# each shape has parts of of which can contain a polyline. We have one polyline, i.e. one part\n# in each chape. Therfore parts is a list of one item, which is a list of points of the polyline.\nw.poly(parts=[points]) # only one part, therefore, put points inbetween brackets.\nw.poly(parts=[[ tb([0, 1]), tb([2, 1]), tb([2, 3]), tb([0, 3]), tb([0, 1])]]) # bbox of tiff file\nw.poly(parts=[[ mb([0, 1]), mb([2, 1]), mb([2, 3]), mb([0, 3]), mb([0, 1])]]) # bbox of model\n\nw.field(\"Id\",\"C\", 20) # Add one field\nw.field(\"Id2\", \"N\") # Add another field, just to see if it works and how\n\n# Aadd three records to w.records (one for eache shape\nw.record(\"model contour\", 1) # each record has two values, a string and a nuber, see fields\nw.record(\"model bbox\", 2)\nw.record(\"Tiff bbox\", 3)\n\n# save this to a new shapefile\nw.save(\"ErfoudModelContour\")\n\n# Change False to True so see the coordinates and the records\nif False:\n print()\n for i, sh in enumerate(w.shapes()):\n pprint(sh.points)\n print()\n #w.shapes()[0].points # check if w knows about these points\n\n for r in w.records:\n print(r)\n\n# To verify what's been saved read the saved file and show what's in it:\nif False:\n s = shp.Reader(\"ErfoudModelContour\")\n\n for sh in s.shapeRecords():\n pprint(sh.shape.points)\n print(sh.record)",
"Show shapefile in QGIS\nFire up QGIS and load the shape file. Set its CRS to WGS84 (same coordinates as GoogleMaps, most general LatLon)\nHere are the pictures taken van de screen of QGIS after the shapefile was loaded and the label under properties was set tot transparent with solid contour line.\nTo get the GoogleMaps image, look for it it under web in the main menu.\nThe first image is zoomed out, so that the location of the model can be seen in the south east of this image. It's in Morocco.\n<figure>\n <IMG SRC=\"./EfoudModelContour2.png\" WIDTH=750 ALIGN=\"center\">\n</figure>\n\nThe more detailed image shows the contour of the model and its bounding box. It proves that it works.\n<figure>\n <IMG SRC=\"./EfoudModelContour1.png\" WIDTH=750 ALIGN=\"center\">\n</figure>\n\nThe next step is to select the appropriage precipitation data from the CHIRPS file.\nGet the precipitation data from the CHIRPS tiff file\nThe actual data are stored in rasterbands. We saw from the size above, that this file has only one rasterband. Rasterband information is obtained one band at a time. So here we pass band number 1.",
"A = g.GetRasterBand(1).ReadAsArray()\n\nA[A <- 9000] = 0. # replace no-dta values by 0\n\nprint()\nprint(\"min precipitation in mm \", np.min(A))\nprint(\"max precipitation in mm \", np.max(A))",
"Select a subarea equal to the bbox of the model contour.",
"# define a function to get the indices of the center points between the bounding box extents of the model\n\ndef between(x, a, b):\n \"\"\"returns indices of ponts between a and b\"\"\"\n I = np.argwhere(np.logical_and(min(a, b) < x, x < max(a, b)))\n return [i[0] for i in I] \n\nix = between(xm, mbb[0], mbb[2])\niy = between(ym, mbb[1], mbb[3])\n\nprint(ix)\nprint(iy)",
"Read the data again, but now only the part that covers the model in Marocco:",
"A = g.GetRasterBand(1).ReadAsArray(xoff=int(ix[0]), yoff=int(iy[0]), win_xsize=len(ix), win_ysize=len(iy))\n\nprint(\"Preciptation on the Erfoud model area in Marocco from file\\n{}:\\n\".format(fname))\nprar(A)",
"Just for curiosity, show the size of the area covered and the size resolution of the precipitation data.",
"# The extent of this area can be obtained from the latiture and longitude together with the radius of the earth.\nR = 6371 # km\nEWN = R * np.cos(np.pi/180 * mbb[1]) * np.pi/180. *(mbb[2] - mbb[0])\nEWS = R * np.cos(np.pi/180 * mbb[3]) * np.pi/180. *(mbb[2] - mbb[0])\nNS = R * np.pi/180 * (mbb[3] - mbb[1])\n\nprint(\"The size of the bounding box in km:\")\nprint(\"EW along the north boundary : \",EWN)\nprint(\"EW along the south boundary : \",EWS)\nprint(\"NS : \",NS)\nprint(\"Size of each tile (the resolution) = {:.3f} x {:.3f} km: \".format(EWN/A.shape[1], NS/A.shape[0]))",
"It should be clear that the EW resolution depends on the latitude while the NS resolution is constant.\nConclusion\nIt is now straightforward to get the data of an arbitrary number of periods from the CHIRPS website for the model, in fact for any location covered by CHIRPS on a 5 by b km resolution."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
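The bounding-box cell in the CHIRPS notebook above converts degrees of longitude and latitude to kilometres using Earth's radius, shrinking the east-west extent by cos(latitude). A minimal standalone sketch of that conversion (the helper name `bbox_size_km` is ours, not from the notebook):

```python
import numpy as np

R_EARTH = 6371.0  # mean Earth radius in km

def bbox_size_km(lon_min, lat_min, lon_max, lat_max, radius=R_EARTH):
    """Approximate size of a lon/lat bounding box in km.

    East-west extent shrinks with cos(latitude); the north-south
    extent of a degree of latitude is roughly constant.
    """
    deg = np.pi / 180.0
    ew_south = radius * np.cos(deg * lat_min) * deg * (lon_max - lon_min)
    ew_north = radius * np.cos(deg * lat_max) * deg * (lon_max - lon_min)
    ns = radius * deg * (lat_max - lat_min)
    return ew_south, ew_north, ns

# One degree of latitude is ~111.2 km; at 60 deg N a degree of
# longitude spans only about half of that.
ew_s, ew_n, ns = bbox_size_km(0.0, 60.0, 1.0, 61.0)
```

Dividing these extents by the raster's pixel counts, as the notebook does, gives the per-tile resolution.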
francesco-mannella/neunet-basics
|
course/hopfield-MNIST-simulation.ipynb
|
mit
|
[
"Storing and recalling MNIST digits with Hopfield nets",
"%matplotlib inline\nfrom pylab import *",
"Let us implement a Hopfield network using images from the MNIST dataset as patterns.\nInitialize the dataset\nFirst we initialize the dataset:",
"#### Download the dataset \n# Get the script from the internet\n! wget https://raw.githubusercontent.com/sorki/python-mnist/master/get_data.sh > /dev/null 2>&1 \n\n# Run it to download all files in a local dir named 'data'\n! bash get_data.sh >/dev/null 2>&1\n\n# We do not need the script anymore, remove it\n! rm get_data.sh* > /dev/null 2>&1\n\n# Initialize the dataset variables\n%run utils",
"We now fill an array with the patterns. We only need a few samples, so we take them from the training set.\nWe take samples 2 and 5, representing a '4' and a '2' respectively.",
"# Take two rows\npatterns = array(mndata.train_images)[[2,5],]\nlabels = array(mndata.train_labels)[[2,5],]\n\n# We need only the sign (transform to binary input)\npatterns = sign(patterns/255.0 - 0.5)\n\n# Set the number of patterns (two in our case)\nn_patterns = patterns.shape[0]\n\n# Number of units of the network\nn = img_side*img_side",
"Let us visualize our two patterns:",
"fig = figure(figsize = (8, 4))\nfor i in xrange(n_patterns):\n plot_img( to_mat(patterns[i]), \n fig, i+1, windows = 2 )",
"Learning the weights\nLearning of the weight happens offline at the beginning, in one shot:",
"# Initialize weights to zero values\nW = zeros([n,n])\n\n# Accumulate outer products \nfor pattern in patterns :\n    W += outer(pattern, pattern)\n\n# Divide by the number of patterns\nW /= float(n_patterns)\n\n# Exclude the autoconnections\nW *= 1.0 - eye(n, n)",
"Recall: Iterating the timesteps\nNow we implement the recall part, in which we give an initial activation to the network and iterate the timesteps until it relaxes to a steady state.",
"# Number of timesteps\nstime = 1000\n\n# Number of samples to store as long\n# as spreading goes on\nsamples = 100\n\n# store data at each sampling interval\nsample_interval = stime/samples\n\n# Init the history of spreading as a zero array,\n# we will fill it in at each sampling step and we will\n# plot it at the end\nstore_images = zeros([n_patterns, n, samples])\n\n# Init the history of energy as a zero array,\n# we will fill it in at each sampling step and we will\n# plot it at the end\nstore_energy = zeros([n_patterns, samples])\n\n# We simulate two iterations, each one starting\n# with a corrupted version of one of our two patterns\nfor target_index in xrange(n_patterns) :\n\n    # Copy the original pattern\n    target = patterns[target_index]\n    x = target.copy()\n\n    # Then modify the second half of the image\n    # putting random binary values\n    x[(n/2):] = sign(randn(n/2))\n\n    # During the iterations we need to pick\n    # one unit at random. Thus we must prepare\n    # a random sequence of indices:\n    # we get the sequence of indices\n    # of the network units\n    x_indices = arange(n)\n\n    # and we shuffle it\n    shuffle(x_indices)\n\n    # the iterations\n    for t in xrange(stime) :\n        # Get the current index browsing\n        # the random sequence\n        current_x = x_indices[t%n]\n\n        # Activation of a unit\n        x[current_x] = sign(dot(W[current_x,:], x))\n\n        # Store current activations (only once per sampling interval)\n        if t%sample_interval == 0 :\n            # Energy of the current state of the network\n            store_energy[target_index, t/sample_interval] = -0.5*dot(x, dot(W, x))\n\n            # array containing samples of network activation\n            store_images[target_index,:,t/sample_interval] = x\n",
"Here you can see two animations showing the network being initially activated with one of the two patterns. The initial activation is corrupted with a lot of noise so that the bottom half of the figure is completely obscured. \nThe network moves from this initial activation to the correct attractor state (the original uncorrupted figure). During this process the energy of the network lowers until it reaches a steady state.\n<img src=\"mnist-hopfield_4.gif\" width=100%>\n<img src=\"mnist-hopfield_2.gif\" width=100%>\nAppendix: How to build the animation\nWe use the matplotlib.animation package for animations and the gridspec class to customize the layout of subplots.",
"# The matplotlib object to do animations\nfrom matplotlib import animation\n\n# This grid allows to layout subplots in a more\n# flexible way\nimport matplotlib.gridspec as gridspec",
"To plot the two animations we need a function to initialize a figure with three plots: the first showing the target digit, the second showing the current activity of the network and the third showing the sum of squared errors.",
"def init_figure(fig) :\n\n # Init the grid and the figure\n gs = gridspec.GridSpec(6, 20)\n\n #-------------------------------------------------\n # Plot 1 - plot the target digit\n\n # Create subplot\n ax1 = fig.add_subplot(gs[:4,:4])\n\n title(\"target\")\n\n # Create the imshow and save the handler\n im_target = ax1.imshow(to_mat(patterns[0]), \n interpolation = 'none', \n aspect = 'auto',\n cmap = cm.binary) \n axis('off')\n\n\n #-------------------------------------------------\n # Plot 2 - plot the current state of the network\n\n # Create subplot\n ax2 = fig.add_subplot(gs[:4,6:10])\n\n title(\"recalling\")\n\n # Create the imshow and save the handler\n im_activation = ax2.imshow(to_mat(store_images[0,:,0]), \n interpolation = 'none', \n aspect = 'auto',\n cmap = cm.binary) \n axis('off')\n\n #-------------------------------------------------\n # Plot 3 - plot the current history of energy\n\n # Create subplot\n ax3 = fig.add_subplot(gs[:4,12:])\n\n title(\"Energy\")\n\n # Create the line plot and save the handler\n im_energy, = ax3.plot(store_energy[0,])\n\n # Only bottom-left axes - no tics\n ax3.spines['top'].set_visible(False)\n ax3.spines['right'].set_visible(False)\n ax3.set_xticks([])\n ax3.set_yticks([]) \n \n # return plot handlers\n return im_target, im_activation, im_energy",
"We also need another function that updates the figure at\neach animation timestep with a new sample",
"# Updates images at each frame of the animation\n# data : list of tuples Each row contains the\n# arguments of update for \n# a frame\n# returns : tuple The handlers of the \n# images \ndef update(data) :\n\n # unpack plot handlers and data \n im_A, im_B, im_C, A, B, C = data\n \n # Update data of plot 1, plot 2 and 3\n im_A.set_array(to_mat(A))\n im_B.set_array(to_mat(B))\n im_C.set_data(arange( len(C)), C) \n \n # return plot handlers\n return im_A, im_B, im_C",
"Finally we use the FuncAnimation class. We first build a data list where each row is a tuple containing the plot handlers and the data for the plot updates.",
"for target_index in xrange(n_patterns):\n\n # Init the figure\n fig = figure(figsize=(8, 3.5)) \n \n im_target, im_activation, im_energy = init_figure(fig)\n \n # Build the sequence of update arguments.\n # each row of the list contains:\n # 1 the target plot handler\n # 2 the activation plot handler\n # 3 the energy plot handler\n # 4 the target update data\n # 5 the activation update data\n # 6 the energy update data\n data = [(\n im_target, \n im_activation, \n im_energy,\n patterns[target_index],\n squeeze(store_images[target_index,:,t]), \n store_energy[target_index, :t] ) \n for t in xrange(samples ) ]\n\n # Create and render the animation\n anim = animation.FuncAnimation(fig, func = update, frames = data )\n # save it to file\n anim.save(\"mnist-hopfield_{:d}.gif\".format(labels[target_index]),\n fps = 10, writer='imagemagick')\n",
"<br><br><br><br><br><br><br><br><br><br><br><br><br><br>\n<br><br><br><br><br><br><br><br><br><br><br><br><br><br>\n<br><br><br><br><br><br><br><br><br><br><br><br><br><br>\nNext cell is just for styling",
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../style/ipybn.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
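The Hopfield notebook above stores patterns with a one-shot Hebbian rule (accumulated outer products with a zeroed diagonal) and recalls them by repeated sign updates of single units. A tiny self-contained sketch of the same scheme on 8-unit patterns (function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    """One-shot Hebbian learning: sum of outer products, no self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    W /= len(patterns)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, sweeps=5):
    """Asynchronous recall: update one unit at a time to sign(W[i] @ x)."""
    x = x.copy()
    order = rng.permutation(len(x))
    for _ in range(sweeps):
        for i in order:
            s = np.sign(W[i] @ x)
            if s != 0:          # keep the old value on a zero field
                x[i] = s
    return x

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
W = train_hopfield(patterns)

noisy = patterns[0].copy()
noisy[:2] *= -1                  # corrupt two units
restored = recall(W, noisy)      # relaxes back to the stored pattern
```

Each update can only lower (never raise) the energy -0.5 x W x, which is why the network settles into the stored attractor.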
FluVigilanciaBR/fludashboard
|
Notebooks/historical_estimated_values.ipynb
|
gpl-3.0
|
[
"Table of Contents\n\n Detailed panel \n Weekly incidence curve with typical intensity and thresholds \n Function for incidence plot: \n State example \n Regional example \n Example with state where estimates are not available: \n\n\n Obtaining the most probable activity level at selected week \n Age distribution \n Function for age distribution plot: \n\n\n Incidence table information \n Summary panel \n Season level categorization: \n Function to calculate seasonal level \n Example applying to a given entry \n Applying to the whole dataset \n\n\n Seasonal age distribution \n Incidence table information \n Displaying data for user selected week\n\nDetailed panel<a name=\"_detailed panel\"></a>\nWeekly incidence curve with typical intensity and thresholds<a name=\"_weekly incidence curve with typical intensity and thresholds\"></a>",
"# local\nfrom fludashboard.libs.flu_data import prepare_keys_name\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np",
"In this example, we show the current year incidence up to given week.<br>\nAlong with the current incidence, we present the following intensity thresholds:<br>\n\n\nLow activity threshold: estimated epidemic threshold based on historical levels. Minimum: incidence equivalent to 5 cases.\n\n\nHigh activity threshold: incidence considered high based on historical levels. Minimum: incidence equivalent to 10 cases.\n\n\nVery high activity threshold: incidence considered very high based on historical levels. Minimum: incidence equivalent to 20 cases.",
"df_hist = pd.read_csv('../data/historical_estimated_values.csv', encoding='utf-8')\ndf_inci = pd.read_csv('../data/current_estimated_values.csv', encoding='utf-8')\ndf_typi = pd.read_csv('../data/mem-typical.csv', encoding='utf-8')\ndf_thre = pd.read_csv('../data/mem-report.csv', encoding='utf-8')\n\nprepare_keys_name(df_hist)\nprepare_keys_name(df_inci)\nprepare_keys_name(df_typi)\nprepare_keys_name(df_thre)\n\nlevel_dict = {\n 'L0': 'Baixa', \n 'L1': 'Epidêmica',\n 'L2': 'Alta', \n 'L3': 'Muito alta'\n}\n\ndf_inci.columns",
"UF: locality code (includes UFs, Regions and Country)\nTipo: locality type (Estado, Regional or País)\nmean: estimated mean incidence\n50%: estimated median\n2.5%: estimation lower 95% confidence interval\n97.5%: estimation upper 95% confidence interval\nL0: probability of being below epi. threshold (low level)\nL1: probability of being above epi. threshold and below high activity (epidemic level)\nL2: prob. of being above high activity and below very high (high level)\nL3: prob. of being above very high activity threshold (very high level)\nSituation:\nstable: might suffer minor changes in the future. Reliable as is;\nestimated: data estimated based on opportunity (i.e. notification delay) profile. Reliable within confidence interval;\nunknown: might suffer significant changes in the coming weeks. This is the case for locations where estimation is not possible and data is still \"fresh\". Unreliable.",
"df_inci.head(5)\n\ndf_typi.head(5)\n\ndf_thre.tail(5)",
"Entries with df_thre['se típica do inicio do surto'] = NaN have activity too low for a proper epidemic threshold definition",
"k = ['epiyear', 'epiweek', 'base_epiyear', 'base_epiweek']\ndf_inci2017 = df_inci[\n    (df_inci.epiyear == 2017) &\n    # (df_inci.epiweek >= 15) &\n    (df_inci.dado == 'srag') &\n    (df_inci.escala == 'incidência') &\n    (df_inci.uf == 'BR')\n].copy()\n\ndf_inci2017.sort_values(['epiyear', 'epiweek'], inplace=True)\ndf_inci_chart = df_inci2017.copy()\ndf_inci_chart.index = df_inci_chart.epiweek\n\nk = ['epiyear', 'epiweek', 'base_epiyear', 'base_epiweek']\ndf_hist2017 = df_hist[\n    (df_hist.base_epiyear == 2017) &\n    (df_hist.base_epiweek == 23) &\n    (df_hist.dado == 'srag') &\n    (df_hist.escala == 'incidência') &\n    (df_hist.uf == 'BR')\n].copy()\n\ndf_hist2017.sort_values(['epiyear', 'epiweek'], inplace=True)\n\ndf_hist_chart = df_hist2017.copy()\ndf_hist_chart.index = df_hist_chart.epiweek\n\n# 50% estimated cases\n\ndf_inci_chart[['srag', '50%', '2.5%', '97.5%']].plot()\nplt.title('Incidence')\nplt.grid(True)\nplt.show()\n\ndf_hist_chart[['srag', '50%', '2.5%', '97.5%']].plot()\nplt.title('Historical')\nplt.grid(True)\nplt.show()\n\ndf_hist2017['estimated_cases'] = df_hist2017['50%']\n\ndf = pd.merge(\n    df_inci2017[['epiweek', 'srag', '2.5%', '97.5%']],\n    df_hist2017[['epiweek', 'estimated_cases']],\n    on='epiweek', how='outer'\n)\n\ndf.set_index('epiweek', inplace=True)\n\ndf.plot()\nplt.grid(True)\nplt.title('Incidence vs Historical')\nplt.show()",
"Displaying data for user selected week w<a name=\"_historical data display\"></a>\nFor each week w selected by the user, the notification curve will always be the one found in df_inci, while the estimates will be those stored in df_hist. df_inci only has the most recent estimates, which are based on the most recent week with data. The estimates obtained at each week are stored in df_hist.\nSo, first of all, we will slice the historical data to week w, and limit the current data to week <= w.\nIf w=23, the historical dataset is already correctly sliced in df_hist2017, so we just have to limit the current data for the proper plot:",
"df_hist[\n    (df_hist.base_epiyear == 2017) &\n    (df_hist.dado == 'srag') &\n    (df_hist.escala == 'incidência') &\n    (df_hist.uf == 'BR')\n].base_epiweek.unique()\n\n# First, keep only the stable weeks for the notification curve:\ndf_inci2017.loc[(df_inci2017.situation != 'stable'), 'srag'] = np.nan\n\n# Adapt historical dataset:\ndf_hist.sort_values(['epiyear', 'epiweek'], inplace=True)\ndf_hist['estimated_cases'] = df_hist['50%']\n\n# User selected week:\ny = 2017\nw = 23\n\ndef week_data(y, w):\n    df_week_inci = df_inci2017[(df_inci2017.epiweek <= w)]\n\n    df_week_hist = df_hist[\n        (df_hist.base_epiyear == y) &\n        (df_hist.base_epiweek == w) &\n        (df_hist.dado == 'srag') &\n        (df_hist.escala == 'incidência') &\n        (df_hist.uf == 'BR')\n    ].copy()\n\n    df = pd.merge(\n        df_week_inci[['epiweek', 'srag']],\n        df_week_hist[['epiweek', 'estimated_cases', '2.5%', '97.5%']],\n        on='epiweek', how='outer'\n    )\n\n    df.set_index('epiweek', inplace=True)\n    return df\n\ndf = week_data(y, w)\ndf.plot()\nplt.grid(True)\nplt.show()\n\nw = 28\ndf = week_data(y, w)\ndf.plot()\nplt.grid(True)\nplt.show()\n\nw = 33\ndf = week_data(y, w)\ndf.plot()\nplt.grid(True)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
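The week-selection logic in the flu notebook above (observed counts up to week w, merged with the estimate snapshot produced at week w) can be sketched with toy data; the frames and values below are simplified stand-ins for df_inci and df_hist, not the real files:

```python
import numpy as np
import pandas as pd

# Toy stand-ins: observed weekly counts and per-week estimate snapshots
df_inci = pd.DataFrame({'epiweek': [1, 2, 3, 4],
                        'srag': [10.0, 12.0, 15.0, 9.0],
                        'situation': ['stable', 'stable', 'estimated', 'unknown']})
df_hist = pd.DataFrame({'base_epiweek': [3, 3, 4, 4],
                        'epiweek': [3, 4, 4, 5],
                        'estimated_cases': [14.5, 13.0, 16.0, 11.0]})

def week_view(w):
    """Observed data up to week w, merged with the estimates made at week w."""
    obs = df_inci[df_inci.epiweek <= w].copy()
    obs.loc[obs.situation != 'stable', 'srag'] = np.nan  # hide unreliable points
    est = df_hist[df_hist.base_epiweek == w]
    return obs[['epiweek', 'srag']].merge(
        est[['epiweek', 'estimated_cases']], on='epiweek', how='outer')

df = week_view(3)
```

The outer merge keeps weeks that appear only in the estimate snapshot (forecast weeks beyond the observed data), exactly as in the notebook's `week_data`.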
ToqueWillot/M2DAC
|
FDMS/TME4/TME4_FiltrageCollaboratif.ipynb
|
gpl-2.0
|
[
"from random import random\n\nimport math",
"Collect data",
"def loadMovieLens(path='./data/movielens'):\n #Get movie titles\n movies={}\n for line in open(path+'/u.item'):\n id,title=line.split('|')[0:2]\n movies[id]=title\n\n # Load data\n prefs={}\n for line in open(path+'/u.data'):\n (user,movieid,rating,ts)=line.split('\\t')\n prefs.setdefault(user,{})\n prefs[user][movies[movieid]]=float(rating)\n \n return prefs\n\ndata = loadMovieLens(\"data/ml-100k\")",
"Explore data",
"data['3']",
"Creation of train set and test set\nWe want to split the data in two sets (train and test).\nCurrently:\n    train = 80% of the total dataset\n    test = 20% of the total dataset",
"def split_train_test(data,percent_test):\n test={}\n train={}\n movie={}\n for u in data.keys():\n test.setdefault(u,{})\n train.setdefault(u,{})\n for movie in data[u]:\n #print(data[u][movie])\n if (random()<percent_test):\n test[u][movie]=data[u][movie]\n else:\n train[u][movie]=data[u][movie]\n return train, test\n\npercent_test=0.2\ntrain,test=split_train_test(data,percent_test)",
"This part allows us to clean the train and test sets\nWe don't want users in the test set that are not in the train set (and the same for the movies), so we delete them",
"\ndef get_moove(data):\n moove = {}\n for u in data:\n for m in data[u]:\n moove[m]=0\n return moove\n \ndef get_youser(data):\n youser = {}\n for u in data:\n youser[u]=0\n return youser \n\ndef clean(d1,d2):\n to_erase = {}\n for i in d1:\n try:\n d2[i]\n except KeyError:\n to_erase[i]=0\n for i in d2:\n try:\n d1[i]\n except KeyError:\n to_erase[i]=0\n return to_erase\ndef _remove_users(test,rem):\n for i in rem:\n try:\n del test[i]\n except KeyError:\n pass\ndef _remove_movies(test,rem):\n for i in test:\n for j in rem:\n try:\n del test[i][j]\n except KeyError:\n pass\n\n\nmooveToRemoove = clean(get_moove(train),get_moove(test))\nyouserToRemoove = clean(get_youser(train),get_youser(test))\n_remove_users(test,youserToRemoove)\n_remove_movies(test,mooveToRemoove)",
"Collaborative Filtering classes",
"class BaselineMeanUser:\n    def __init__(self):\n        self.users={}\n        self.movies={}\n    def fit(self,train):\n        for user in train:\n            note=0\n            for movie in train[user]:\n                note+=train[user][movie]\n            note=note/len(train[user])\n            self.users[user]=round(note)\n\n    def predict(self,user,movie):\n        return self.users[user]\n    def score(self,X):\n        # fraction of exactly predicted ratings, so we divide\n        # by the total number of ratings in X\n        nb_ratings = sum(len(X[user]) for user in X)\n        score = 0\n        for user in X:\n            for movie in X[user]:\n                if(self.predict(user,movie)==X[user][movie]):\n                    score+=1\n        return float(score)/nb_ratings\n\n\nclass BaselineMeanMovie:\n    def __init__(self):\n        self.users={}\n        self.movies={}\n    def fit(self,train):\n        movies = get_moove(train)\n        for movie in movies:\n            note=0\n            cpt=0\n            for user in train:\n                try:\n                    note+=train[user][movie]\n                    cpt+=1\n                except KeyError:\n                    pass\n            note=note/cpt\n            self.movies[movie]=round(note)\n\n    def predict(self,user,movie):\n        return self.movies[movie]\n    def score(self,X):\n        # same normalisation as above: total number of ratings in X\n        nb_ratings = sum(len(X[user]) for user in X)\n        score = 0\n        for user in X:\n            for movie in X[user]:\n                if(self.predict(user,movie)==X[user][movie]):\n                    score+=1\n        return float(score)/nb_ratings\n\nbaseline_mu= BaselineMeanUser()\nbaseline_mm= BaselineMeanMovie()\n\nbaseline_mu.fit(train)\nbaseline_mm.fit(train)\n\nprint(\"score baseline mean user \",baseline_mu.score(test))\nprint(\"score baseline mean movie \",baseline_mm.score(test))",
"",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ntag_headers = ['user_id', 'movie_id', 'tag', 'timestamp']\ntags = pd.read_table('data/ml-10M/tags.dat', sep='::', header=None, names=tag_headers)\n\nrating_headers = ['user_id', 'movie_id', 'rating', 'timestamp']\nratings = pd.read_table('data/ml-10M/ratings.dat', sep='::', header=None, names=rating_headers)\n\nmovie_headers = ['movie_id', 'title', 'genres']\nmovies = pd.read_table('data/ml-10M/movies.dat',\n sep='::', header=None, names=movie_headers)\nmovie_titles = movies.title.tolist()\n\nmovies.head()\n\nratings.head()\n\ntags.head()\n\ndf = movies.join(ratings, on=['movie_id'], rsuffix='_r').join(tags, on=['movie_id'], rsuffix='_t')\ndel df['movie_id_r']\ndel df['user_id_t']\ndel df['movie_id_t']\ndel df['timestamp_t']\n\ndf.head()\n\nrp = df.pivot_table(columns=['movie_id'],index=['user_id'],values='rating')\nrp.head()\n\nrp = rp.fillna(0); # Replace NaN\nrp.head()\n\nQ = rp.values\n\nQ\n\nQ.shape\n\nW = Q>0.5\nW[W == True] = 1\nW[W == False] = 0\n# To be consistent with our Q matrix\nW = W.astype(np.float64, copy=False)\n\nW\n\nlambda_ = 0.1\nn_factors = 100\nm, n = Q.shape\nn_iterations = 20\n\nX = 5 * np.random.rand(m, n_factors) \nY = 5 * np.random.rand(n_factors, n)\n\ndef get_error(Q, X, Y, W):\n return np.sum((W * (Q - np.dot(X, Y)))**2)\n\nerrors = []\nfor ii in range(n_iterations):\n X = np.linalg.solve(np.dot(Y, Y.T) + lambda_ * np.eye(n_factors), \n np.dot(Y, Q.T)).T\n Y = np.linalg.solve(np.dot(X.T, X) + lambda_ * np.eye(n_factors),\n np.dot(X.T, Q))\n if ii % 100 == 0:\n print('{}th iteration is completed'.format(ii))\n errors.append(get_error(Q, X, Y, W))\nQ_hat = np.dot(X, Y)\nprint('Error of rated movies: {}'.format(get_error(Q, X, Y, W)))\n\n%matplotlib inline\n\nplt.plot(errors);\nplt.ylim([0, 20000]);\n\ndef print_recommendations(W=W, Q=Q, Q_hat=Q_hat, movie_titles=movie_titles):\n #Q_hat -= np.min(Q_hat)\n #Q_hat[Q_hat < 1] *= 5\n Q_hat -= np.min(Q_hat)\n Q_hat 
*= float(5) / np.max(Q_hat)\n movie_ids = np.argmax(Q_hat - 5 * W, axis=1)\n for jj, movie_id in zip(range(m), movie_ids):\n #if Q_hat[jj, movie_id] < 0.1: continue\n print('User {} liked {}\\n'.format(jj + 1, ', '.join([movie_titles[ii] for ii, qq in enumerate(Q[jj]) if qq > 3])))\n print('User {} did not like {}\\n'.format(jj + 1, ', '.join([movie_titles[ii] for ii, qq in enumerate(Q[jj]) if qq < 3 and qq != 0])))\n print('\\n User {} recommended movie is {} - with predicted rating: {}'.format(\n jj + 1, movie_titles[movie_id], Q_hat[jj, movie_id]))\n print('\\n' + 100 * '-' + '\\n')\n#print_recommendations()\n\nweighted_errors = []\nfor ii in range(n_iterations):\n for u, Wu in enumerate(W):\n X[u] = np.linalg.solve(np.dot(Y, np.dot(np.diag(Wu), Y.T)) + lambda_ * np.eye(n_factors),\n np.dot(Y, np.dot(np.diag(Wu), Q[u].T))).T\n for i, Wi in enumerate(W.T):\n Y[:,i] = np.linalg.solve(np.dot(X.T, np.dot(np.diag(Wi), X)) + lambda_ * np.eye(n_factors),\n np.dot(X.T, np.dot(np.diag(Wi), Q[:, i])))\n weighted_errors.append(get_error(Q, X, Y, W))\n print('{}th iteration is completed'.format(ii))\nweighted_Q_hat = np.dot(X,Y)\n#print('Error of rated movies: {}'.format(get_error(Q, X, Y, W)))\n\nweighted_Q_hat = np.dot(X,Y)\n\nplt.plot(weighted_errors);\nplt.xlabel('Iteration Number');\nplt.ylabel('Mean Squared Error');\n\nprint_recommendations(Q_hat=weighted_Q_hat)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
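The last cells of the recommender notebook above alternate exact least-squares solves for the user factors X and item factors Y, weighting out unobserved ratings. A compact, self-contained version of that weighted ALS loop on a toy rating matrix (names are ours; 0 marks an unrated entry):

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_als(Q, W, n_factors=3, lambda_=0.1, n_iterations=30):
    """Weighted ALS: alternately solve the ridge-regularized least-squares
    problems for user factors X and item factors Y over observed entries."""
    m, n = Q.shape
    X = rng.random((m, n_factors))
    Y = rng.random((n_factors, n))
    I = lambda_ * np.eye(n_factors)
    for _ in range(n_iterations):
        for u, Wu in enumerate(W):
            X[u] = np.linalg.solve(Y @ np.diag(Wu) @ Y.T + I,
                                   Y @ (Wu * Q[u]))
        for i, Wi in enumerate(W.T):
            Y[:, i] = np.linalg.solve(X.T @ np.diag(Wi) @ X + I,
                                      X.T @ (Wi * Q[:, i]))
    return X, Y

Q = np.array([[5, 3, 0, 1, 4],
              [4, 0, 0, 1, 5],
              [1, 1, 0, 5, 2],
              [1, 0, 4, 4, 1]], dtype=float)
W = (Q > 0).astype(float)             # weight matrix: 1 where rated

X, Y = weighted_als(Q, W)
err = np.sum((W * (Q - X @ Y)) ** 2)  # error on rated entries only
```

Because each per-row solve is exact, the weighted squared error decreases monotonically over the iterations; the zero-weight entries of `X @ Y` are the predicted ratings.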
ryo8128/study_python
|
s10_logistic_regression.ipynb
|
mit
|
[
"Logistic Regression",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LogisticRegression\n%matplotlib inline",
"$$y=\\frac{1}{1+\\exp(-x^T \\beta)}$$",
"x = np.linspace(-5.0,5.0,200)\n\ny = -x\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = np.exp(-x)\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = 1.0+np.exp(-x)\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)",
"The logistic sigmoid function",
"y = 1/(1.0+np.exp(-x))\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = 1 - 1/(1.0+np.exp(-x))\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)",
"Odds",
"y = np.exp(x)\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)",
"Log odds",
"y = x\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)",
"Differentiation\nDerivative of a quotient\n\\begin{eqnarray}\n\\left(\\frac{1}{f(x)}\\right)'&=&\\lim_{h \\rightarrow 0}\\frac{\\frac{1}{f(x+h)}-\\frac{1}{f(x)}}{h}\\\\\n&=&\\lim_{h \\rightarrow 0}\\frac{f(x)-f(x+h)}{hf(x)f(x+h)}\\\\\n&=&\\lim_{h \\rightarrow 0}-\\frac{1}{f(x)f(x+h)}\\frac{f(x+h)-f(x)}{h}\\\\\n&=&-\\frac{f'(x)}{f(x)^2}\n\\end{eqnarray}\n$$\\left(1+\\exp(-x)\\right)'=-\\exp(-x)$$\nDerivative of the logistic function\n$$f(x)=\\frac{1}{1+\\exp(-x)}$$\n\\begin{eqnarray}\nf'(x)&=&-\\frac{\\left(1+\\exp(-x)\\right)'}{\\left(1+\\exp(-x)\\right)^2}\\\\\n&=&\\frac{\\exp(-x)}{\\left(1+\\exp(-x)\\right)^2}\\\\\n&=&\\frac{1}{1+\\exp(-x)}\\frac{\\exp(-x)}{1+\\exp(-x)}\\\\\n&=&\\frac{1}{1+\\exp(-x)}\\left(\\frac{1+\\exp(-x)}{1+\\exp(-x)}-\\frac{1}{1+\\exp(-x)}\\right)\\\\\n&=&\\frac{1}{1+\\exp(-x)}\\left(1-\\frac{1}{1+\\exp(-x)}\\right)\\\\\n&=&f(x)(1-f(x))\n\\end{eqnarray}",
"y = 1/(1.0+np.exp(-x)) * (1 - 1/(1.0+np.exp(-x)))\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)",
"The shape is close to the probability density function of the normal distribution, with a maximum of 0.25.\nIntuitively, this is close to the linear combination following a normal distribution.\nThe normal distribution\n$$f(x)= \\frac{1}{\\sqrt{2 \\pi \\sigma^2}} \\exp \\left(- \\frac{(x-\\mu)^2 }{2 \\sigma^2}\\right) $$",
"y = x\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = x*x\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = -x*x\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = np.exp(x)\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = np.exp(-x)\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = np.exp(-x*x)\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = np.exp(-x*x/2)\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)\n\ny = np.exp(-x*x/2)/np.sqrt(2*np.pi)\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y)",
"Comparison",
"y1 = np.exp(-x*x/2)/np.sqrt(2*np.pi)\ny2 = 1/(1.0+np.exp(-x)) * (1 - 1/(1.0+np.exp(-x)))\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y1)\nplt.plot(x,y2)\n\nsigma = 1.6\ny1 = np.exp(-x*x/2/sigma)/np.sqrt(2*np.pi)/sigma\ny2 = 1/(1.0+np.exp(-x)) * (1 - 1/(1.0+np.exp(-x)))\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.plot(x,y1)\nplt.plot(x,y2)",
"Random data",
"t = np.random.randint(low=0,high=2,size=50)\nt\n\nfeature = np.random.normal(loc=0.0,scale=1.0,size=(50,1))\nfeature\n\nm = LogisticRegression(penalty='l2',C=10000,fit_intercept=True)\n\nm.fit(feature,t)\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.scatter(feature,t)\n\npredict = m.predict(x.reshape((200,1)))\n\ny = 1/(1.0+np.exp(-x))\n\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.scatter(feature,t)\nplt.scatter(x,predict)\nplt.plot(x,y)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
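The derivation in the logistic-regression notebook above gives f'(x) = f(x)(1 - f(x)) for the logistic sigmoid. A quick numerical check of that closed form against a central finite difference (helper names ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    # closed form derived above: f'(x) = f(x) * (1 - f(x))
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.linspace(-5.0, 5.0, 101)
h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)  # central difference
max_abs_err = np.max(np.abs(numeric - sigmoid_prime(x)))
```

The maximum of the derivative, 0.25 at x = 0, matches the bell-shaped curve plotted in the notebook.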
JasonNK/udacity-dlnd
|
batch-norm/Batch_Normalization_Lesson.ipynb
|
mit
|
[
"Batch Normalization – Lesson\n\nWhat is it?\nWhat are its benefits?\nHow do we add it to a network?\nLet's see it work!\nWhat are you hiding?\n\nWhat is Batch Normalization?<a id='theory'></a>\nBatch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called \"batch\" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.\nWhy might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.\nFor example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network. \nLikewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.\nWhen you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).\nBeyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning, a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 
Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.\nBenefits of Batch Normalization<a id=\"benefits\"></a>\nBatch normalization optimizes network training. It has been shown to have several benefits:\n1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall. \n2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train. \n3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.\n4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearities that don't seem to work well in deep networks actually become viable again.\n5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.\n6. 
Provides a bit of regularization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network. \n7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.\nBatch Normalization in TensorFlow<a id=\"implementation_1\"></a>\nThis section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow. \nThe following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.",
"# Import necessary packages\nimport tensorflow as tf\nimport tqdm\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Import MNIST data so we have something for our experiments\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)",
"Neural network classes for testing\nThe following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.\nAbout the code:\n\nThis class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.\nIt's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.",
"class NeuralNet:\n def __init__(self, initial_weights, activation_fn, use_batch_norm):\n \"\"\"\n Initializes this object, creating a TensorFlow graph using the given parameters.\n \n :param initial_weights: list of NumPy arrays or Tensors\n Initial values for the weights for every layer in the network. We pass these in\n so we can create multiple networks with the same starting weights to eliminate\n training differences caused by random initialization differences.\n The number of items in the list defines the number of layers in the network,\n and the shapes of the items in the list define the number of nodes in each layer.\n e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would \n create a network with 784 inputs going into a hidden layer with 256 nodes,\n followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.\n :param activation_fn: Callable\n The function used for the output of each hidden layer. The network will use the same\n activation function on every hidden layer and no activation function on the output layer.\n e.g. 
Pass tf.nn.relu to use ReLU activations on your hidden layers.\n :param use_batch_norm: bool\n Pass True to create a network that uses batch normalization; False otherwise\n Note: this network will not use batch normalization on layers that do not have an\n activation function.\n \"\"\"\n # Keep track of whether or not this network uses batch normalization.\n self.use_batch_norm = use_batch_norm\n self.name = \"With Batch Norm\" if use_batch_norm else \"Without Batch Norm\"\n\n # Batch normalization needs to do different calculations during training and inference,\n # so we use this placeholder to tell the graph which behavior to use.\n self.is_training = tf.placeholder(tf.bool, name=\"is_training\")\n\n # This list is just for keeping track of data we want to plot later.\n # It doesn't actually have anything to do with neural nets or batch normalization.\n self.training_accuracies = []\n\n # Create the network graph, but it will not actually have any real values until after you\n # call train or test\n self.build_network(initial_weights, activation_fn)\n \n def build_network(self, initial_weights, activation_fn):\n \"\"\"\n Build the graph. The graph still needs to be trained via the `train` method.\n \n :param initial_weights: list of NumPy arrays or Tensors\n See __init__ for description. \n :param activation_fn: Callable\n See __init__ for description. \n \"\"\"\n self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])\n layer_in = self.input_layer\n for weights in initial_weights[:-1]:\n layer_in = self.fully_connected(layer_in, weights, activation_fn) \n self.output_layer = self.fully_connected(layer_in, initial_weights[-1])\n \n def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. 
If `self.use_batch_norm` is True, this\n layer will include batch normalization, otherwise it will not. \n \n :param layer_in: Tensor\n The Tensor that feeds into this layer. It's either the input to the network or the output\n of a previous layer.\n :param initial_weights: NumPy array or Tensor\n Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n outputs. \n :param activation_fn: Callable or None (default None)\n The non-linearity used for the output of the layer. If None, this layer will not include \n batch normalization, regardless of the value of `self.use_batch_norm`. \n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n \"\"\"\n # Since this class supports both options, only use batch normalization when\n # requested. However, do not use it on the final layer, which we identify\n # by its lack of an activation function.\n if self.use_batch_norm and activation_fn:\n # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n # its calculations include gamma and beta variables that make the bias term unnecessary.\n # (See later in the notebook for more details.)\n weights = tf.Variable(initial_weights)\n linear_output = tf.matmul(layer_in, weights)\n\n # Apply batch normalization to the linear combination of the inputs and weights\n batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\n\n # Now apply the activation function, *after* the normalization.\n return activation_fn(batch_normalized_output)\n else:\n # When not using batch normalization, create a standard layer that multiplies\n # the inputs and weights, adds a bias, and optionally passes the result \n # through an activation function. 
\n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n\n def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):\n \"\"\"\n Trains the model on the MNIST training dataset.\n \n :param session: Session\n Used to run training graph operations.\n :param learning_rate: float\n Learning rate used during gradient descent.\n :param training_batches: int\n Number of batches to train.\n :param batches_per_sample: int\n How many batches to train before sampling the validation accuracy.\n :param save_model_as: string or None (default None)\n Name to use if you want to save the trained model.\n \"\"\"\n # This placeholder will store the target labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define loss and optimizer\n cross_entropy = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))\n \n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n if self.use_batch_norm:\n # If we don't include the update ops as dependencies on the train step, the \n # tf.layers.batch_normalization layers won't update their population statistics,\n # which will cause the model to fail at inference time\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n else:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n \n # Train for the appropriate number of batches. 
(tqdm is only for a nice timing display)\n for i in tqdm.tqdm(range(training_batches)):\n # We use batches of 60 just because the original paper did. You can use any size batch you like.\n batch_xs, batch_ys = mnist.train.next_batch(60)\n session.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\n \n # Periodically test accuracy against the 5k validation images and store it for plotting later.\n if i % batches_per_sample == 0:\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n labels: mnist.validation.labels,\n self.is_training: False})\n self.training_accuracies.append(test_accuracy)\n\n # After training, report accuracy against the validation data\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n labels: mnist.validation.labels,\n self.is_training: False})\n print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))\n\n # If you want to use this model later for inference instead of having to retrain it,\n # just construct it with the same parameters and then pass this file to the 'test' function\n if save_model_as:\n tf.train.Saver().save(session, save_model_as)\n\n def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):\n \"\"\"\n Tests a trained model on the MNIST testing dataset.\n\n :param session: Session\n Used to run the testing graph operations.\n :param test_training_accuracy: bool (default False)\n If True, perform inference with batch normalization using batch mean and variance;\n if False, perform inference with batch normalization using estimated population mean and variance.\n Note: in real life, *always* perform inference using the population mean and variance.\n This parameter exists just to support demonstrating what happens if you don't.\n :param include_individual_predictions: bool (default False)\n This function 
always performs an accuracy test against the entire test set. But if this parameter\n is True, it performs an extra test, doing 200 predictions one at a time, and displays the results\n and accuracy.\n :param restore_from: string or None (default None)\n Name of a saved model if you want to test with previously saved weights.\n \"\"\"\n # This placeholder will store the true labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n # If provided, restore from a previously saved model\n if restore_from:\n tf.train.Saver().restore(session, restore_from)\n\n # Test against all of the MNIST test data\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,\n labels: mnist.test.labels,\n self.is_training: test_training_accuracy})\n print('-'*75)\n print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))\n\n # If requested, perform tests predicting individual values rather than batches\n if include_individual_predictions:\n predictions = []\n correct = 0\n\n # Do 200 predictions, 1 at a time\n for i in range(200):\n # This is a normal prediction using an individual test case. However, notice\n # we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.\n # Remember that will tell it whether it should use the batch mean & variance or\n # the population estimates that were calculated while training the model.\n pred, corr = session.run([tf.argmax(self.output_layer,1), accuracy],\n feed_dict={self.input_layer: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],\n self.is_training: test_training_accuracy})\n correct += corr\n\n predictions.append(pred[0])\n\n print(\"200 Predictions:\", predictions)\n print(\"Accuracy on 200 samples:\", correct/200)\n",
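As an aside, the population statistics that `tf.layers.batch_normalization` maintains for inference (the ones kept current by the `UPDATE_OPS` dependency in `train` above) are exponential moving averages of each batch's mean and variance. Here is a minimal plain-Python sketch of that update rule, assuming the layer's default momentum of 0.99; `update_population_stats` is a hypothetical helper for illustration, not part of the class above:

```python
def update_population_stats(pop_mean, pop_var, batch_mean, batch_var, momentum=0.99):
    # Exponential moving averages: each new batch nudges the running
    # estimates a little toward that batch's statistics.
    new_mean = momentum * pop_mean + (1 - momentum) * batch_mean
    new_var = momentum * pop_var + (1 - momentum) * batch_var
    return new_mean, new_var
```

At inference time (when we feed `self.is_training: False`), the layer normalizes with these running estimates instead of the current batch's own mean and variance.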
"There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.\nWe add batch normalization to layers inside the fully_connected function. Here are some important points about that code:\n1. Layers with batch normalization do not include a bias term.\n2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)\n3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.\n4. We add the normalization before calling the activation function.\nIn addition to that code, the training step is wrapped in the following with statement:\npython\nwith tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\nThis line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.\nFinally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\nWe'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.\nBatch Normalization Demos<a id='demos'></a>\nThis section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier. \nWe'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. 
That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.\nCode to support testing\nThe following two functions support the demos we run in the notebook. \nThe first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.\nThe second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.",
"def plot_training_accuracies(*args, **kwargs):\n \"\"\"\n Displays a plot of the accuracies calculated during training to demonstrate\n how many iterations it took for the model(s) to converge.\n \n :param args: One or more NeuralNet objects\n You can supply any number of NeuralNet objects as unnamed arguments \n and this will display their training accuracies. Be sure to call `train` on \n the NeuralNets before calling this function.\n :param kwargs: \n You can supply any named parameters here, but `batches_per_sample` is the only\n one we look for. It should match the `batches_per_sample` value you passed\n to the `train` function.\n \"\"\"\n fig, ax = plt.subplots()\n\n batches_per_sample = kwargs['batches_per_sample']\n \n for nn in args:\n ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),\n nn.training_accuracies, label=nn.name)\n ax.set_xlabel('Training steps')\n ax.set_ylabel('Accuracy')\n ax.set_title('Validation Accuracy During Training')\n ax.legend(loc=4)\n ax.set_ylim([0,1])\n plt.yticks(np.arange(0, 1.1, 0.1))\n plt.grid(True)\n plt.show()\n\ndef train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):\n \"\"\"\n Creates two networks, one with and one without batch normalization, then trains them\n with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.\n \n :param use_bad_weights: bool\n If True, initialize the weights of both networks to wildly inappropriate weights;\n if False, use reasonable starting weights.\n :param learning_rate: float\n Learning rate used during gradient descent.\n :param activation_fn: Callable\n The function used for the output of each hidden layer. The network will use the same\n activation function on every hidden layer and no activation function on the output layer.\n e.g. 
Pass tf.nn.relu to use ReLU activations on your hidden layers.\n :param training_batches: (default 50000)\n Number of batches to train.\n :param batches_per_sample: (default 500)\n How many batches to train before sampling the validation accuracy.\n \"\"\"\n # Use identical starting weights for each network to eliminate differences in\n # weight initialization as a cause for differences seen in training performance\n #\n # Note: The networks will use these weights to define the number of and shapes of\n # its layers. The original batch normalization paper used 3 hidden layers\n # with 100 nodes in each, followed by a 10 node output layer. These values\n # build such a network, but feel free to experiment with different choices.\n # However, the input size should always be 784 and the final output should be 10.\n if use_bad_weights:\n # These weights should be horrible because they have such a large standard deviation\n weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,10), scale=5.0).astype(np.float32)\n ]\n else:\n # These weights should be good because they have such a small standard deviation\n weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n ]\n\n # Just to make sure the TensorFlow's default graph is empty before we start another\n # test, because we don't bother using different graphs or scoping and naming \n # elements carefully in this sample code.\n tf.reset_default_graph()\n\n # build two versions of same network, 1 without and 1 with batch normalization\n nn = NeuralNet(weights, activation_fn, False)\n bn = NeuralNet(weights, activation_fn, True)\n 
\n # train and test the two models\n with tf.Session() as sess:\n tf.global_variables_initializer().run()\n\n nn.train(sess, learning_rate, training_batches, batches_per_sample)\n bn.train(sess, learning_rate, training_batches, batches_per_sample)\n \n nn.test(sess)\n bn.test(sess)\n \n # Display a graph of how validation accuracies changed during training\n # so we can compare how the models trained and when they converged\n plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)\n",
"Comparisons between identical networks, with and without batch normalization\nThe next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.\nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.",
"train_and_test(False, 0.01, tf.nn.relu)",
"As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.\nIf you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time overall. (We only trained for 50 thousand batches here so we could plot the comparison.)\nThe following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.",
"train_and_test(False, 0.01, tf.nn.relu, 2000, 50)",
"As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)\nIn the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.\nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.",
"train_and_test(False, 0.01, tf.nn.sigmoid)",
"With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches. \nThe following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.",
"train_and_test(False, 1, tf.nn.relu)",
"Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.\nThe next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.",
"train_and_test(False, 1, tf.nn.relu)",
"In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.",
"train_and_test(False, 1, tf.nn.sigmoid)",
"In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.\nThe cell below shows a similar pair of networks trained for only 2000 iterations.",
"train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)",
"As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.",
"train_and_test(False, 2, tf.nn.relu)",
"With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.",
"train_and_test(False, 2, tf.nn.sigmoid)",
"Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.\nHowever, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.",
"train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)",
"In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. \nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.",
"train_and_test(True, 0.01, tf.nn.relu)",
"As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. \nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.",
"train_and_test(True, 0.01, tf.nn.sigmoid)",
"Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all. \nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id=\"successful_example_lr_1\"></a>",
"train_and_test(True, 1, tf.nn.relu)",
"The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.",
"train_and_test(True, 1, tf.nn.sigmoid)",
"Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id=\"successful_example_lr_2\"></a>",
"train_and_test(True, 2, tf.nn.relu)",
"We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.",
"train_and_test(True, 2, tf.nn.sigmoid)",
"In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.\nFull Disclosure: Batch Normalization Doesn't Fix Everything\nBatch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.\nThis section includes two examples that show runs when batch normalization did not help at all.\nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.",
"train_and_test(True, 1, tf.nn.relu)",
"When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.",
"train_and_test(True, 2, tf.nn.relu)",
"When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning. \nNote: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.\nBatch Normalization: A Detailed Look<a id='implementation_2'></a>\nThe layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization. \nIn order to normalize the values, we first need to find the average value for the batch. 
If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.\nWe represent the average as $\\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$ \n$$\n\\mu_B \\leftarrow \\frac{1}{m}\\sum_{i=1}^m x_i\n$$\nWe then need to calculate the variance, or mean squared deviation, represented as $\\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\\mu_B$), which gives us what's called the \"deviation\" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.\n$$\n\\sigma_{B}^{2} \\leftarrow \\frac{1}{m}\\sum_{i=1}^m (x_i - \\mu_B)^2\n$$\nOnce we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)\n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAbove, we said \"(almost) standard deviation\". That's because the real standard deviation for the batch is calculated by $\\sqrt{\\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch. 
\nWhy increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account. \nAt this point, we have a normalized value, represented as $\\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\\gamma$, and then add a beta value, $\\beta$. Both $\\gamma$ and $\\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate. \n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.\nIn NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nThe next section shows you how to implement the math directly. 
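Before that, the four equations above can be sketched in a few lines of plain Python for a single batch of scalar values. This is a toy illustration of the math only, using the epsilon of 0.001 mentioned earlier; `gamma` and `beta` are fixed arguments here, whereas in a real network they are learned:

```python
import math

def batch_norm(x, gamma=1.0, beta=0.0, epsilon=0.001):
    # mu_B: mean of the mini-batch values
    m = len(x)
    mu = sum(x) / m
    # sigma^2_B: mean squared deviation of the batch
    var = sum((x_i - mu) ** 2 for x_i in x) / m
    # Normalize with the (almost) standard deviation, then scale by
    # gamma and shift by beta
    return [gamma * (x_i - mu) / math.sqrt(var + epsilon) + beta for x_i in x]
```

For example, `batch_norm([1.0, 2.0, 3.0])` returns values centered on zero with (almost) unit variance, which would then be passed through the activation function.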
\nBatch normalization without the tf.layers package\nOur implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.\nHowever, if you would like to implement batch normalization at a lower level, the following code shows you how.\nIt uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.\n1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.",
"def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. If `self.use_batch_norm` is True, this\n layer will include batch normalization, otherwise it will not. \n \n :param layer_in: Tensor\n The Tensor that feeds into this layer. It's either the input to the network or the output\n of a previous layer.\n :param initial_weights: NumPy array or Tensor\n Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n outputs. \n :param activation_fn: Callable or None (default None)\n The non-linearity used for the output of the layer. If None, this layer will not include \n batch normalization, regardless of the value of `self.use_batch_norm`. \n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n \"\"\"\n if self.use_batch_norm and activation_fn:\n # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n # its calculations include gamma and beta variables that make the bias term unnecessary.\n weights = tf.Variable(initial_weights)\n linear_output = tf.matmul(layer_in, weights)\n\n num_out_nodes = initial_weights.shape[-1]\n\n # Batch normalization adds additional trainable variables: \n # gamma (for scaling) and beta (for shifting).\n gamma = tf.Variable(tf.ones([num_out_nodes]))\n beta = tf.Variable(tf.zeros([num_out_nodes]))\n\n # These variables will store the mean and variance for this layer over the entire training set,\n # which we assume represents the general population distribution.\n # By setting `trainable=False`, we tell TensorFlow not to modify these variables during\n # back propagation. 
Instead, we will assign values to these variables ourselves. \n pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)\n pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)\n\n # Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.\n # This is the default value TensorFlow uses.\n epsilon = 1e-3\n\n def batch_norm_training():\n # Calculate the mean and variance for the data coming out of this layer's linear-combination step.\n # The [0] defines an array of axes to calculate over.\n batch_mean, batch_variance = tf.nn.moments(linear_output, [0])\n\n # Calculate a moving average of the training data's mean and variance while training.\n # These will be used during inference.\n # Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter\n # \"momentum\" to accomplish this and defaults it to 0.99\n decay = 0.99\n train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))\n train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))\n\n # The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean' \n # and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.\n # This is necessary because those two operations are not actually in the graph\n # connecting the linear_output and batch_normalization layers, \n # so TensorFlow would otherwise just skip them.\n with tf.control_dependencies([train_mean, train_variance]):\n return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\n \n def batch_norm_inference():\n # During inference, use our estimated population mean and variance to normalize the layer\n return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\n\n # Use `tf.cond` as a sort of if-check. 
When self.is_training is True, TensorFlow will execute \n # the operation returned from `batch_norm_training`; otherwise it will execute the graph\n # operation returned from `batch_norm_inference`.\n batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)\n \n # Pass the batch-normalized layer output through the activation function.\n # The literature states there may be cases where you want to perform the batch normalization *after*\n # the activation function, but it is difficult to find any uses of that in practice.\n return activation_fn(batch_normalized_output)\n else:\n # When not using batch normalization, create a standard layer that multiplies\n # the inputs and weights, adds a bias, and optionally passes the result \n # through an activation function. \n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n",
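The `pop_mean * decay + batch_mean * (1 - decay)` update above is an exponential moving average. As a standalone NumPy sketch (illustrative values, independent of the class above), it shows how the running estimates converge toward the true population statistics over many batches:

```python
import numpy as np

decay = 0.99  # same role as the "momentum" parameter of tf.layers.batch_normalization

pop_mean = np.zeros(3)  # starts at zero, like the tf.Variable above
pop_var = np.ones(3)    # starts at one

# Simulate many batches whose true per-node mean is 5.0 and variance is 4.0.
rng = np.random.RandomState(0)
for _ in range(2000):
    batch = rng.normal(loc=5.0, scale=2.0, size=(64, 3))
    pop_mean = pop_mean * decay + batch.mean(axis=0) * (1 - decay)
    pop_var = pop_var * decay + batch.var(axis=0) * (1 - decay)

# After enough batches the running estimates sit close to (5.0, 4.0),
# which is what inference then uses in place of per-batch statistics.
```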
"This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:\n\nIt explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.\nIt initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \\leftarrow \\gamma \\hat{x_i} + \\beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.\nUnlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.\nTensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block. \nThe actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.\ntf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.\nWe use the tf.nn.moments function to calculate the batch mean and variance.\n\n2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. 
However, it uses these lines to ensure population statistics are updated when using batch normalization: \npython\nif self.use_batch_norm:\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nelse:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nOur new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:\npython\ntrain_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:\npython\nreturn tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAnd replace this line in batch_norm_inference:\npython\nreturn tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAs you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. 
The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\\hat{x_i}$: \n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAnd the second line is a direct translation of the following equation:\n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you. \nWhy the difference between training and inference?\nIn the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nAnd that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\nIf you looked at the low-level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?\nFirst, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).",
"def batch_norm_test(test_training_accuracy):\n \"\"\"\n :param test_training_accuracy: bool\n If True, perform inference with batch normalization using batch mean and variance;\n if False, perform inference with batch normalization using estimated population mean and variance.\n \"\"\"\n\n weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n ]\n\n tf.reset_default_graph()\n\n # Train the model\n bn = NeuralNet(weights, tf.nn.relu, True)\n \n # First train the network\n with tf.Session() as sess:\n tf.global_variables_initializer().run()\n\n bn.train(sess, 0.01, 2000, 2000)\n\n bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)",
"In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.",
"batch_norm_test(True)",
"As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The \"batches\" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer. \nNote: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.\nTo overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it \"normalize\" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training. \nSo in the following example, we pass False for test_training_accuracy, which tells the network that we want it to perform inference with the population statistics it calculates during training.",
"batch_norm_test(False)",
"As you can see, now that we're using the estimated population mean and variance, we get a 97% accuracy. That means it guessed correctly on 194 of the 200 samples – not too bad for something that trained in under 4 seconds. :)\nConsiderations for other network types\nThis notebook demonstrates batch normalization in a standard neural network with fully connected layers. You can also use batch normalization in other types of networks, but there are some special considerations.\nConvNets\nConvolution layers consist of multiple feature maps. (Remember, the depth of a convolutional layer refers to its number of feature maps.) And the weights for each feature map are shared across all the inputs that feed into the layer. Because of these differences, batch normalizing convolutional layers requires batch/population mean and variance per feature map rather than per node in the layer.\nWhen using tf.layers.batch_normalization, be sure to pay attention to the order of your convolutional dimensions.\nSpecifically, you may want to set a different value for the axis parameter if your layers have their channels first instead of last. \nIn our low-level implementations, we used the following line to calculate the batch mean and variance:\npython\nbatch_mean, batch_variance = tf.nn.moments(linear_output, [0])\nIf we were dealing with a convolutional layer, we would calculate the mean and variance with a line like this instead:\npython\nbatch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)\nThe second parameter, [0,1,2], tells TensorFlow to calculate the batch mean and variance over each feature map. (The three axes are the batch, height, and width.) And setting keep_dims to False tells tf.nn.moments not to return values with the same size as the inputs. 
Specifically, it ensures we get one mean/variance pair per feature map.\nRNNs\nBatch normalization can work with recurrent neural networks, too, as shown in the 2016 paper Recurrent Batch Normalization. It's a bit more work to implement, but basically involves calculating the means and variances per time step instead of per layer. You can find an example where someone extended tf.nn.rnn_cell.RNNCell to include batch normalization in this GitHub repo."
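To make the `[0, 1, 2]` axes concrete, here is a NumPy sketch of per-feature-map moments for an NHWC-shaped convolutional output. The shapes are illustrative, not taken from any particular network:

```python
import numpy as np

# A fake convolutional layer output: batch of 8, 10x10 spatial, 16 feature maps (NHWC).
conv_layer = np.random.RandomState(0).normal(size=(8, 10, 10, 16))

# Equivalent of tf.nn.moments(conv_layer, [0, 1, 2]): average over batch, height,
# and width, leaving exactly one mean/variance pair per feature map.
batch_mean = conv_layer.mean(axis=(0, 1, 2))
batch_variance = conv_layer.var(axis=(0, 1, 2))
```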
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
phoebe-project/phoebe2-docs
|
2.3/tutorials/requiv.ipynb
|
gpl-3.0
|
[
"Equivalent Radius\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.3,<2.4\"",
"As always, let's do imports and initialize a logger and a new Bundle.",
"import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()",
"Now let's add a mesh dataset at a few different times so that we can see how the potentials affect the surfaces of the stars.",
"b.add_dataset('mesh', times=np.linspace(0,1,11), dataset='mesh01')",
"Relevant Parameters\nThe 'requiv' parameter defines the stellar surface to have a constant volume of 4./3 pi requiv^3.",
"print(b['requiv@component'])",
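As a quick numerical illustration of that volume relation — plain NumPy, independent of PHOEBE, with function names of our own choosing:

```python
import numpy as np

def equiv_volume(requiv):
    """Volume of the equivalent sphere: 4/3 * pi * requiv**3."""
    return 4.0 / 3.0 * np.pi * requiv**3

def requiv_from_volume(volume):
    """Invert the relation: radius of a sphere with the given volume."""
    return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)

# For the default requiv of 1 solar radius, the equivalent volume is 4*pi/3
# (in solar radii cubed), and the relation round-trips exactly.
vol = equiv_volume(1.0)
```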
"Critical Potentials and System Checks\nAdditionally, for each detached component, there is an requiv_max Parameter which shows the critical value at which the Roche surface will overflow. Setting requiv to a larger value will fail system checks and raise a warning.",
"print(b['requiv_max@primary@component'])\n\nprint(b['requiv_max@primary@constraint'])\n\nb.set_value('requiv@primary@component', 3)",
"At this time, if you were to call run_compute, an error would be thrown. An error isn't immediately thrown when setting requiv, however, since the overflow can be rectified by changing any of the other relevant parameters. For instance, let's change sma to be large enough to account for this value of requiv and you'll see that the error does not occur again.",
"b.set_value('sma@binary@component', 10)",
"These logger warnings are handy when running phoebe interactively, but in a script it's also handy to be able to check whether the system is currently computable /before/ running run_compute.\nThis can be done by calling run_checks which returns a boolean (whether the system passes all checks) and a message (a string describing the first failed check).",
"print(b.run_checks())\n\nb.set_value('sma@binary@component', 5)\n\nprint(b.run_checks())",
"Semi-Detached and Contact Systems\nSemi-detached systems are implemented by constraining the value of requiv to be the same as requiv_max by applying the 'semidetached' constraint on the 'primary' component. For more information see the critical radii: semidetached systems tutorial.\nContact systems are implemented by constraining the value of requiv of both stars to correspond to the potential of the contact envelope. For more information see the critical radii: contact systems tutorial."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
srippa/nn_deep
|
assignment1/svm.ipynb
|
mit
|
[
"Multiclass Support Vector Machine exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nIn this exercise you will:\n\nimplement a fully-vectorized loss function for the SVM\nimplement the fully-vectorized expression for its analytic gradient\ncheck your implementation using numerical gradient\nuse a validation set to tune the learning rate and regularization strength\noptimize the loss function with SGD\nvisualize the final learned weights",
"# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\n# This is a bit of magic to make matplotlib figures appear inline in the\n# notebook rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2",
"CIFAR-10 Data Loading and Preprocessing",
"# Load the raw CIFAR-10 data.\ncifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint 'Training data shape: ', X_train.shape\nprint 'Training labels shape: ', y_train.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape\n\n# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()\n\n# Subsample the data for more efficient code execution in this exercise.\nnum_training = 49000\nnum_validation = 1000\nnum_test = 1000\n\n# Our validation set will be num_validation points from the original\n# training set.\nmask = range(num_training, num_training + num_validation)\nX_val = X_train[mask]\ny_val = y_train[mask]\n\n# Our training set will be the first num_train points from the original\n# training set.\nmask = range(num_training)\nX_train = X_train[mask]\ny_train = y_train[mask]\n\n# We use the first num_test points of the original test set as our\n# test set.\nmask = range(num_test)\nX_test = X_test[mask]\ny_test = y_test[mask]\n\nprint 'Train data shape: ', X_train.shape\nprint 'Train labels shape: ', y_train.shape\nprint 'Validation data shape: ', X_val.shape\nprint 'Validation labels shape: ', y_val.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape\n\n# Preprocessing: reshape the image data into 
rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_val = np.reshape(X_val, (X_val.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\n\n# As a sanity check, print out the shapes of the data\nprint 'Training data shape: ', X_train.shape\nprint 'Validation data shape: ', X_val.shape\nprint 'Test data shape: ', X_test.shape\n\n# Preprocessing: subtract the mean image\n# first: compute the image mean based on the training data\nmean_image = np.mean(X_train, axis=0)\nprint mean_image[:10] # print a few of the elements\nplt.figure(figsize=(4,4))\nplt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image\n\n# second: subtract the mean image from train and test data\nX_train -= mean_image\nX_val -= mean_image\nX_test -= mean_image\n\n# third: append the bias dimension of ones (i.e. bias trick) so that our SVM\n# only has to worry about optimizing a single weight matrix W.\n# Also, lets transform both data matrices so that each image is a column.\nX_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))]).T\nX_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))]).T\nX_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))]).T\n\nprint X_train.shape, X_val.shape, X_test.shape",
"SVM Classifier\nYour code for this section will all be written inside cs231n/classifiers/linear_svm.py. \nAs you can see, we have prefilled the function svm_loss_naive which uses for loops to evaluate the multiclass SVM loss function.",
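For intuition before diving into the assignment code, here is a standalone NumPy sketch of the multiclass SVM (hinge) loss with the column-major layout set up above — W of shape (C, D), X of shape (D, N). This is an illustration, not the graded implementation:

```python
import numpy as np

def svm_loss_sketch(W, X, y, reg, delta=1.0):
    """Fully vectorized multiclass SVM loss.

    W: (C, D) weight matrix, X: (D, N) data columns, y: (N,) correct labels.
    """
    num_train = X.shape[1]
    scores = W.dot(X)                               # (C, N) class scores
    correct = scores[y, np.arange(num_train)]       # (N,) score of the true class
    margins = np.maximum(0, scores - correct + delta)
    margins[y, np.arange(num_train)] = 0            # the true class contributes no margin
    return margins.sum() / num_train + reg * np.sum(W * W)

# Tiny example: 3 classes, 2 features, 4 points.
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
X = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 2.0]])
y = np.array([0, 1, 0, 1])
loss = svm_loss_sketch(W, X, y, reg=0.0)  # -> 0.25 for these hand-picked values
```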
"# Evaluate the naive implementation of the loss we provided for you:\nfrom cs231n.classifiers.linear_svm import svm_loss_naive\nimport time\n\n# generate a random SVM weight matrix of small numbers\nW = np.random.randn(10, 3073) * 0.0001 \nloss, grad = svm_loss_naive(W, X_train, y_train, 0.00001)\nprint 'loss: %f' % (loss, )",
"The grad returned from the function above is right now all zero. Derive the gradient for the SVM cost function and implement it inline inside the function svm_loss_naive. You will find it helpful to interleave your new code inside the existing function.\nTo check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you:",
"# Once you've implemented the gradient, recompute it with the code below\n# and gradient check it with the function we provided for you\n\n# Compute the loss and its gradient at W.\nloss, grad = svm_loss_naive(W, X_train, y_train, 0.0)\n\n# Numerically compute the gradient along several randomly chosen dimensions, and\n# compare them with your analytically computed gradient. The numbers should match\n# almost exactly along all dimensions.\nfrom cs231n.gradient_check import grad_check_sparse\nf = lambda w: svm_loss_naive(w, X_train, y_train, 0.0)[0]\ngrad_numerical = grad_check_sparse(f, W, grad, 10)",
"Inline Question 1:\nIt is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? Hint: the SVM loss function is not strictly speaking differentiable\nYour Answer: fill this in.",
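To see concretely what a numerical gradient estimate does, here is a generic central-difference sketch (our own helper, not the course's grad_check_sparse):

```python
import numpy as np

def numeric_grad(f, x, h=1e-5):
    """Estimate df/dx at x with central differences, one coordinate at a time."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        old = x.flat[i]
        x.flat[i] = old + h
        fp = f(x)              # f evaluated at x + h*e_i
        x.flat[i] = old - h
        fm = f(x)              # f evaluated at x - h*e_i
        x.flat[i] = old        # restore the original value
        grad.flat[i] = (fp - fm) / (2 * h)
    return grad

# Sanity check against an analytic gradient: f(x) = sum(x**2) has gradient 2x.
x = np.array([1.0, -2.0, 3.0])
g = numeric_grad(lambda v: np.sum(v**2), x)
```

Near a kink of a non-differentiable function (like the hinge in the SVM loss), the two one-sided slopes disagree, which is exactly why an occasional dimension of the gradcheck can mismatch.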
"# Next implement the function svm_loss_vectorized; for now only compute the loss;\n# we will implement the gradient in a moment.\ntic = time.time()\nloss_naive, grad_naive = svm_loss_naive(W, X_train, y_train, 0.00001)\ntoc = time.time()\nprint 'Naive loss: %e computed in %fs' % (loss_naive, toc - tic)\n\nfrom cs231n.classifiers.linear_svm import svm_loss_vectorized\ntic = time.time()\nloss_vectorized, _ = svm_loss_vectorized(W, X_train, y_train, 0.00001)\ntoc = time.time()\nprint 'Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)\n\n# The losses should match but your vectorized implementation should be much faster.\nprint 'difference: %f' % (loss_naive - loss_vectorized)\n\n# Complete the implementation of svm_loss_vectorized, and compute the gradient\n# of the loss function in a vectorized way.\n\n# The naive implementation and the vectorized implementation should match, but\n# the vectorized version should still be much faster.\ntic = time.time()\n_, grad_naive = svm_loss_naive(W, X_train, y_train, 0.00001)\ntoc = time.time()\nprint 'Naive loss and gradient: computed in %fs' % (toc - tic)\n\ntic = time.time()\n_, grad_vectorized = svm_loss_vectorized(W, X_train, y_train, 0.00001)\ntoc = time.time()\nprint 'Vectorized loss and gradient: computed in %fs' % (toc - tic)\n\n# The loss is a single number, so it is easy to compare the values computed\n# by the two implementations. The gradient on the other hand is a matrix, so\n# we use the Frobenius norm to compare them.\ndifference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')\nprint 'difference: %f' % difference",
"Stochastic Gradient Descent\nWe now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss.",
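The gradient-descent update itself is just `W -= learning_rate * grad`. A toy sketch minimizing a simple quadratic (illustrative only, separate from the LinearSVM.train you will implement):

```python
import numpy as np

def gradient_descent(grad_fn, w0, learning_rate, num_iters):
    """Repeatedly step against the gradient; return final weights and loss history."""
    w = w0.copy()
    history = []
    for _ in range(num_iters):
        loss, grad = grad_fn(w)
        history.append(loss)
        w -= learning_rate * grad
    return w, history

# Minimize f(w) = ||w - target||^2, whose unique minimum is w = target.
target = np.array([3.0, 3.0])
f = lambda w: (np.sum((w - target) ** 2), 2 * (w - target))
w, history = gradient_descent(f, np.zeros(2), learning_rate=0.1, num_iters=100)
```

SGD differs from this only in that each `grad_fn` call uses a random minibatch, so the loss history is noisy rather than monotonic.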
"# Now implement SGD in LinearSVM.train() function and run it with the code below\nfrom cs231n.classifiers import LinearSVM\n\nlearning_rates = [1e-7, 5e-5]\nregularization_strengths = [5e4, 1e5]\n\nsvm = LinearSVM()\ntic = time.time()\nloss_hist = svm.train(X_train, y_train, learning_rate=1e-5, reg=5e4,\n num_iters=1500, verbose=True)\ntoc = time.time()\nprint 'That took %fs' % (toc - tic)\n\n# A useful debugging strategy is to plot the loss as a function of\n# iteration number:\nplt.plot(loss_hist)\nplt.xlabel('Iteration number')\nplt.ylabel('Loss value')\n\n# Write the LinearSVM.predict function and evaluate the performance on both the\n# training and validation set\ny_train_pred = svm.predict(X_train)\nprint 'training accuracy: %f' % (np.mean(y_train == y_train_pred), )\ny_val_pred = svm.predict(X_val)\nprint 'validation accuracy: %f' % (np.mean(y_val == y_val_pred), )\n\n# Use the validation set to tune hyperparameters (regularization strength and\n# learning rate). You should experiment with different ranges for the learning\n# rates and regularization strengths; if you are careful you should be able to\n# get a classification accuracy of about 0.4 on the validation set.\nlearning_rates = [1e-7, 5e-5]\nregularization_strengths = [5e4, 1e5]\n\n# results is dictionary mapping tuples of the form\n# (learning_rate, regularization_strength) to tuples of the form\n# (training_accuracy, validation_accuracy). The accuracy is simply the fraction\n# of data points that are correctly classified.\nresults = {}\nbest_val = -1 # The highest validation accuracy that we have seen so far.\nbest_svm = None # The LinearSVM object that achieved the highest validation rate.\n\n################################################################################\n# TODO: #\n# Write code that chooses the best hyperparameters by tuning on the validation #\n# set. 
For each combination of hyperparameters, train a linear SVM on the #\n# training set, compute its accuracy on the training and validation sets, and #\n# store these numbers in the results dictionary. In addition, store the best #\n# validation accuracy in best_val and the LinearSVM object that achieves this #\n# accuracy in best_svm. #\n# #\n# Hint: You should use a small value for num_iters as you develop your #\n# validation code so that the SVMs don't take much time to train; once you are #\n# confident that your validation code works, you should rerun the validation #\n# code with a larger value for num_iters. #\n################################################################################\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n \n# Print out results.\nfor lr, reg in sorted(results):\n train_accuracy, val_accuracy = results[(lr, reg)]\n print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (\n lr, reg, train_accuracy, val_accuracy)\n \nprint 'best validation accuracy achieved during cross-validation: %f' % best_val\n\n# Visualize the cross-validation results\nimport math\nx_scatter = [math.log10(x[0]) for x in results]\ny_scatter = [math.log10(x[1]) for x in results]\n\n# plot training accuracy\nsz = [results[x][0]*1500 for x in results] # default size of markers is 20\nplt.subplot(1,2,1)\nplt.scatter(x_scatter, y_scatter, sz)\nplt.xlabel('log learning rate')\nplt.ylabel('log regularization strength')\nplt.title('CIFAR-10 training accuracy')\n\n# plot validation accuracy\nsz = [results[x][1]*1500 for x in results] # default size of markers is 20\nplt.subplot(1,2,2)\nplt.scatter(x_scatter, y_scatter, sz)\nplt.xlabel('log learning rate')\nplt.ylabel('log regularization strength')\nplt.title('CIFAR-10 validation accuracy')\n\n# Evaluate the best svm on test set\ny_test_pred = 
best_svm.predict(X_test)\ntest_accuracy = np.mean(y_test == y_test_pred)\nprint 'linear SVM on raw pixels final test set accuracy: %f' % test_accuracy\n\n# Visualize the learned weights for each class.\n# Depending on your choice of learning rate and regularization strength, these may\n# or may not be nice to look at.\nw = best_svm.W[:,:-1] # strip out the bias\nw = w.reshape(10, 32, 32, 3)\nw_min, w_max = np.min(w), np.max(w)\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nfor i in xrange(10):\n plt.subplot(2, 5, i + 1)\n \n # Rescale the weights to be between 0 and 255\n wimg = 255.0 * (w[i].squeeze() - w_min) / (w_max - w_min)\n plt.imshow(wimg.astype('uint8'))\n plt.axis('off')\n plt.title(classes[i])",
"Inline question 2:\nDescribe what your visualized SVM weights look like, and offer a brief explanation for why they look the way they do.\nYour answer: fill this in"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs
|
site/en/guide/migrate/evaluator.ipynb
|
apache-2.0
|
[
"Copyright 2021 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Migrate evaluation\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/migrate/evaluator\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/evaluator.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/evaluator.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/evaluator.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nEvaluation is a critical part of measuring and benchmarking models.\nThis guide demonstrates how to migrate evaluator tasks from TensorFlow 1 to TensorFlow 2. In TensorFlow 1, this functionality is implemented by tf.estimator.train_and_evaluate when the API runs in a distributed manner. In TensorFlow 2, you can use the built-in tf.keras.utils.SidecarEvaluator, or a custom evaluation loop on the evaluator task.\nThere are simple serial evaluation options in both TensorFlow 1 (tf.estimator.Estimator.evaluate) and TensorFlow 2 (Model.fit(..., validation_data=(...)) or Model.evaluate). The evaluator task is preferable when you do not want your workers to switch between training and evaluation, and built-in evaluation in Model.fit is preferable when you would like your evaluation to be distributed.\nSetup",
"import tensorflow.compat.v1 as tf1\nimport tensorflow as tf\nimport numpy as np\nimport tempfile\nimport time\nimport os\n\nmnist = tf.keras.datasets.mnist\n\n(x_train, y_train),(x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0",
"TensorFlow 1: Evaluating using tf.estimator.train_and_evaluate\nIn TensorFlow 1, you can configure a tf.estimator to evaluate the estimator using tf.estimator.train_and_evaluate.\nIn this example, start by defining the tf.estimator.Estimator and specifying training and evaluation specifications:",
"feature_columns = [tf1.feature_column.numeric_column(\"x\", shape=[28, 28])]\n\nclassifier = tf1.estimator.DNNClassifier(\n feature_columns=feature_columns,\n hidden_units=[256, 32],\n optimizer=tf1.train.AdamOptimizer(0.001),\n n_classes=10,\n dropout=0.2\n)\n\ntrain_input_fn = tf1.estimator.inputs.numpy_input_fn(\n x={\"x\": x_train},\n y=y_train.astype(np.int32),\n num_epochs=10,\n batch_size=50,\n shuffle=True,\n)\n\ntest_input_fn = tf1.estimator.inputs.numpy_input_fn(\n x={\"x\": x_test},\n y=y_test.astype(np.int32),\n num_epochs=10,\n shuffle=False\n)\n\ntrain_spec = tf1.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10)\neval_spec = tf1.estimator.EvalSpec(input_fn=test_input_fn,\n steps=10,\n throttle_secs=0)",
"Then, train and evaluate the model. Because this notebook is limited to a local run, evaluation runs synchronously with training: the process alternates between training and evaluation. However, if the estimator is used in a distributed setting, the evaluator runs as a dedicated evaluator task. For more information, check the migration guide on distributed training.",
"tf1.estimator.train_and_evaluate(estimator=classifier,\n train_spec=train_spec,\n eval_spec=eval_spec)",
"TensorFlow 2: Evaluating a Keras model\nIn TensorFlow 2, if you use the Keras Model.fit API for training, you can evaluate the model with tf.keras.utils.SidecarEvaluator. You can also visualize the evaluation metrics in TensorBoard which is not shown in this guide.\nTo help demonstrate this, let's first start by defining and training the model:",
"def create_model():\n return tf.keras.models.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28)),\n tf.keras.layers.Dense(512, activation='relu'),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10)\n ])\n\nloss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\nmodel = create_model()\nmodel.compile(optimizer='adam',\n loss=loss,\n metrics=['accuracy'],\n steps_per_execution=10,\n run_eagerly=True)\n\nlog_dir = tempfile.mkdtemp()\nmodel_checkpoint = tf.keras.callbacks.ModelCheckpoint(\n filepath=os.path.join(log_dir, 'ckpt-{epoch}'),\n save_weights_only=True)\n\nmodel.fit(x=x_train,\n y=y_train,\n epochs=1,\n callbacks=[model_checkpoint])",
"Then, evaluate the model using tf.keras.utils.SidecarEvaluator. In real training, it's recommended to use a separate job to conduct the evaluation to free up worker resources for training.",
"data = tf.data.Dataset.from_tensor_slices((x_test, y_test))\ndata = data.batch(64)\n\ntf.keras.utils.SidecarEvaluator(\n model=model,\n data=data,\n checkpoint_dir=log_dir,\n max_evaluations=1\n).start()",
"Next steps\n\nTo learn more about sidecar evaluation, consider reading the tf.keras.utils.SidecarEvaluator API docs.\nTo alternate training and evaluation in Keras, consider reading about other built-in methods."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jdhp-docs/python-notebooks
|
ai_ml_multilayer_perceptron_fr.ipynb
|
mit
|
[
"Multilayer Perceptron\nTODO\nShallow and deep learning, to read:\n- https://www.miximum.fr/blog/introduction-au-deep-learning-2/\n- https://sciencetonnante.wordpress.com/2016/04/08/le-deep-learning/\n- https://www.technologies-ebusiness.com/enjeux-et-tendances/le-deep-learning-pas-a-pas\n- http://scholar.google.fr/scholar_url?url=https://arxiv.org/pdf/1404.7828&hl=fr&sa=X&scisig=AAGBfm07Y2UDlPpbninerh4gxHUj2SJfDQ&nossl=1&oi=scholarr&sqi=2&ved=0ahUKEwjfxMu7jKnUAhUoCsAKHR_RDlkQgAMIKygAMAA\nMain implementations in Python\n\nScikit-learn: http://scikit-learn.org/stable/modules/neural_networks_supervised.html\n...\n\nNotes from the Dunod book\nNotation\nThe following notation is detailed throughout the document:\n$\newcommand{\cur}{i}$\n$\cur$: current layer\n$\newcommand{\prev}{j}$\n$\newcommand{\prevcur}{{\cur\prev}}$\n$\prev$: layer immediately upstream of the current layer (i.e. toward the input layer of the network)\n$\newcommand{\next}{k}$\n$\newcommand{\curnext}{{\next\cur}}$\n$\next$: layer immediately downstream of the current layer (i.e. toward the output layer of the network)\n$\newcommand{\ex}{\eta}$\n$\ex$: current example (sample or feature), i.e. the vector of the network's current inputs\n$\newcommand{\pot}{x}$\n$\pot_\cur$: Activation potential of neuron $i$ for the current example\n$\newcommand{\weight}{w}$\n$\newcommand{\wcur}{{\weight_{\cur\prev}}}$\n$\wcur$: Weight of the connection between neuron $j$ and neuron $i$\n$\newcommand{\activthres}{\theta}$\n$\activthres_\cur$: Activation threshold of neuron $i$\n$\newcommand{\activfunc}{f}$\n$\activfunc_\cur$: Activation function of neuron $i$\n$\newcommand{\errfunc}{E}$\n$\errfunc$: Objective function (or error function)\n$\newcommand{\learnrate}{\epsilon}$\n$\learnrate$: Learning step (or learning rate)\n$\newcommand{\learnit}{n}$\n$\learnit$: Iteration (or cycle, or epoch) number of the learning process\n$\newcommand{\sigout}{y}$\n$\sigout_\cur$: Output signal of neuron $i$ for the current example\n$\newcommand{\sigoutdes}{d}$\n$\sigoutdes_\cur$: Desired output (label) of neuron $i$ for the current example\n$\newcommand{\weights}{\boldsymbol{W}}$\n$\weights$: Weight matrix of the network (in practice there is one matrix, of potentially different size, per layer)\n$\newcommand{\errsig}{\Delta}$\n$\errsig_i$: Error signal of neuron $i$ for the current example",
"STR_CUR = r\"i\" # Couche courante\nSTR_PREV = r\"j\" # Couche immédiatement en amont de la courche courrante (i.e. vers la couche d'entrée du réseau)\nSTR_NEXT = r\"k\" # Couche immédiatement en aval de la courche courrante (i.e. vers la couche de sortie du réseau)\nSTR_EX = r\"\\eta\" # Exemple (*sample* ou *feature*) courant (i.e. le vecteur des entrées courantes du réseau)\nSTR_POT = r\"x\" # *Potentiel d'activation* du neurone $i$ pour l'exemple $\\ex$\nSTR_POT_CUR = r\"x_i\" # *Potentiel d'activation* du neurone $i$ pour l'exemple $\\ex$\nSTR_WEIGHT = r\"w\"\nSTR_WEIGHT_CUR = r\"w_{ij}\" # Poids de la connexion entre le neurone $j$ et le neurone $i$\nSTR_ACTIVTHRES = r\"\\theta\" # *Seuil d'activation* du neurone $i$\nSTR_ACTIVFUNC = r\"f\" # *Fonction d'activation* du neurone $i$\nSTR_ERRFUNC = r\"E\" # *Fonction objectif* ou *fonction d'erreur*\nSTR_LEARNRATE = r\"\\epsilon\" # *Pas d'apprentissage* ou *Taux d'apprentissage*\nSTR_LEARNIT = r\"n\" # Numéro d'itération (ou cycle ou époque) du processus d'apprentissage\nSTR_SIGOUT = r\"y\" # Signal de sortie du neurone $i$ pour l'exemple $\\ex$\nSTR_SIGOUT_CUR = r\"y_i\"\nSTR_SIGOUT_PREV = r\"y_j\"\nSTR_SIGOUT_DES = r\"d\" # Sortie désirée (*étiquette*) du neurone $i$ pour l'exemple $\\ex$\nSTR_SIGOUT_DES_CUR = r\"d_i\"\nSTR_WEIGHTS = r\"W\" # Matrice des poids du réseau (en réalité il y a une matrice de taille potentiellement différente par couche)\nSTR_ERRSIG = r\"\\Delta\" # *Signal d'erreur* du neurone $i$ pour l'exemple $\\ex$\n\ndef tex(tex_str):\n return r\"$\" + tex_str + r\"$\"\n\n%matplotlib inline\n\nimport nnfigs\n\n# https://github.com/jeremiedecock/neural-network-figures.git\nimport nnfigs.core as nnfig\nimport matplotlib.pyplot as plt\n\nfig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nnnfig.draw_synapse(ax, (0, -6), (10, 0))\nnnfig.draw_synapse(ax, (0, -2), (10, 0))\nnnfig.draw_synapse(ax, (0, 2), (10, 0))\nnnfig.draw_synapse(ax, (0, 6), (10, 0), label=tex(STR_WEIGHT_CUR), 
label_position=0.5, fontsize=14)\n\nnnfig.draw_synapse(ax, (10, 0), (12, 0))\n\nnnfig.draw_neuron(ax, (0, -6), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 6), 0.5, empty=True)\nplt.text(x=0, y=7.5, s=tex(STR_PREV), fontsize=14)\nplt.text(x=10, y=1.5, s=tex(STR_CUR), fontsize=14)\nplt.text(x=0, y=0, s=r\"$\\vdots$\", fontsize=14)\nplt.text(x=-2.5, y=0, s=tex(STR_SIGOUT_PREV), fontsize=14)\nplt.text(x=13, y=0, s=tex(STR_SIGOUT_CUR), fontsize=14)\nplt.text(x=9.2, y=-1.8, s=tex(STR_POT_CUR), fontsize=14)\n\nnnfig.draw_neuron(ax, (10, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\nplt.show()",
"$$\n\\pot_\\cur = \\sum_\\prev \\wcur \\sigout_\\prev\n$$\n$$\n\\sigout_\\cur = \\activfunc(\\pot_\\cur)\n$$\n$$\n\\weights = \\begin{pmatrix}\n    \\weight_{11} & \\cdots & \\weight_{1m} \\\n    \\vdots & \\ddots & \\vdots  \\\n    \\weight_{n1} & \\cdots & \\weight_{nm}\n\\end{pmatrix}\n$$\nMiscellaneous\nThe MLP can approximate any continuous function with arbitrary precision, depending on the number of neurons in the hidden layer.\nWeight initialization: usually small random values\nTODO: what is the difference between:\n* feedback network\n* recurrent network\nObjective function (or error function)\nObjective function: $\\errfunc \\left( \\weights \\left( \\learnit \\right) \\right)$\n$\\learnit$: current learning iteration $(1, 2, ...)$\nTypically, the objective (error) function is the sum of the squared errors of the output neurons.\n$$\n\\errfunc = \\frac12 \\sum_{\\cur \\in \\Omega} \\left[ \\sigout_\\cur - \\sigoutdes_\\cur \\right]^2\n$$\n$\\Omega$: the set of output neurons\nIs the $\\frac12$ just there to simplify the derivative computations?",
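The two formulas above (potentials $\pot_\cur = \sum_\prev \wcur \sigout_\prev$, then outputs $\sigout_\cur = \activfunc(\pot_\cur)$) amount to a matrix-vector product followed by an element-wise activation. A minimal numpy sketch, with layer sizes chosen arbitrarily for illustration:

```python
import numpy as np

def forward(W, y_prev, f=np.tanh):
    """One layer: potentials x_i = sum_j w_ij * y_j, then outputs y_i = f(x_i).
    W has shape (n_out, n_in), matching the weight matrix in the text."""
    x = W @ y_prev    # activation potentials, one per neuron i
    return f(x)       # output signals y_i

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))    # 4 upstream neurons -> 3 neurons
y_prev = rng.normal(size=4)    # outputs of the upstream layer
y = forward(W, y_prev)
```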
"%matplotlib inline\n\nimport nnfigs\n\n# https://github.com/jeremiedecock/neural-network-figures.git\nimport nnfigs.core as nnfig\nimport matplotlib.pyplot as plt\n\nfig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nnnfig.draw_synapse(ax, (0, -6), (10, 0))\nnnfig.draw_synapse(ax, (0, -2), (10, 0))\nnnfig.draw_synapse(ax, (0, 2), (10, 0))\nnnfig.draw_synapse(ax, (0, 6), (10, 0))\n\nnnfig.draw_synapse(ax, (0, -6), (10, -4))\nnnfig.draw_synapse(ax, (0, -2), (10, -4))\nnnfig.draw_synapse(ax, (0, 2), (10, -4))\nnnfig.draw_synapse(ax, (0, 6), (10, -4))\n\nnnfig.draw_synapse(ax, (0, -6), (10, 4))\nnnfig.draw_synapse(ax, (0, -2), (10, 4))\nnnfig.draw_synapse(ax, (0, 2), (10, 4))\nnnfig.draw_synapse(ax, (0, 6), (10, 4))\n\nnnfig.draw_synapse(ax, (10, -4), (12, -4))\nnnfig.draw_synapse(ax, (10, 0), (12, 0))\nnnfig.draw_synapse(ax, (10, 4), (12, 4))\n\nnnfig.draw_neuron(ax, (0, -6), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 6), 0.5, empty=True)\n\nnnfig.draw_neuron(ax, (10, -4), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (10, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (10, 4), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\nplt.text(x=0, y=7.5, s=tex(STR_PREV), fontsize=14)\nplt.text(x=10, y=7.5, s=tex(STR_CUR), fontsize=14)\n\nplt.text(x=0, y=0, s=r\"$\\vdots$\", fontsize=14)\nplt.text(x=9.7, y=-6.1, s=r\"$\\vdots$\", fontsize=14)\nplt.text(x=9.7, y=5.8, s=r\"$\\vdots$\", fontsize=14)\n\nplt.text(x=12.5, y=4, s=tex(STR_SIGOUT + \"_1\"), fontsize=14)\nplt.text(x=12.5, y=0, s=tex(STR_SIGOUT + \"_2\"), fontsize=14)\nplt.text(x=12.5, y=-4, s=tex(STR_SIGOUT + \"_3\"), fontsize=14)\n\nplt.text(x=16, y=4, s=tex(STR_ERRFUNC + \"_1 = \" + STR_SIGOUT + \"_1 - \" + STR_SIGOUT_DES + \"_1\"), fontsize=14)\nplt.text(x=16, y=0, s=tex(STR_ERRFUNC + \"_2 = \" + STR_SIGOUT + \"_2 - \" + STR_SIGOUT_DES + \"_2\"), fontsize=14)\nplt.text(x=16, y=-4, 
s=tex(STR_ERRFUNC + \"_3 = \" + STR_SIGOUT + \"_3 - \" + STR_SIGOUT_DES + \"_3\"), fontsize=14)\n\nplt.text(x=16, y=-8, s=tex(STR_ERRFUNC + \" = 1/2 ( \" + STR_ERRFUNC + \"^2_1 + \" + STR_ERRFUNC + \"^2_2 + \" + STR_ERRFUNC + \"^2_3 + \\dots )\"), fontsize=14)\n\nplt.show()",
"Learning\nWeight update\n$$\n\\weights(\\learnit + 1) = \\weights(\\learnit) \\underbrace{- \\learnrate \\nabla \\errfunc \\left( \\weights(\\learnit) \\right)}\n$$\n$- \\learnrate \\nabla \\errfunc \\left( \\weights(\\learnit) \\right)$: moves in the direction opposite to the gradient (steepest descent)\nwith $\\nabla \\errfunc \\left( \\weights(\\learnit) \\right)$: gradient of the objective function at the point $\\weights$\n$\\learnrate > 0$: learning step (or rate)\n$$\n\\begin{align}\n\\delta_{\\wcur} & = \\wcur(\\learnit + 1) - \\wcur(\\learnit) \\\n         & = - \\learnrate \\frac{\\partial \\errfunc}{\\partial \\wcur}\n\\end{align}\n$$\n$$\n\\Leftrightarrow \\wcur(\\learnit + 1) = \\wcur(\\learnit) - \\learnrate \\frac{\\partial \\errfunc}{\\partial \\wcur}\n$$\nEach presentation of the full set of examples = one learning cycle (or epoch)\nStopping criterion for learning: when the value of the objective function stabilizes (or when the problem is solved to the desired precision)\n\n\"generally there is only one local minimum\" (proof???)\n\"otherwise, the simplest approach is to restart the learning several times with different initial weights and to keep the best matrix $\\weights$ (the one that minimizes $\\errfunc$)\"",
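The update rule $\weights(\learnit + 1) = \weights(\learnit) - \learnrate \nabla \errfunc(\weights(\learnit))$ can be illustrated on a simple quadratic error surface with a single scalar weight (the choice $E(w) = (w - 20)^2 + 2$ is arbitrary, for illustration only):

```python
def grad_descent(dE, w0, lr=0.1, n_iters=100):
    """Iterate w(n+1) = w(n) - lr * dE/dw, the update rule above."""
    w = w0
    for _ in range(n_iters):
        w = w - lr * dE(w)
    return w

# E(w) = (w - 20)**2 + 2, so dE/dw = 2 * (w - 20); the minimum is at w = 20.
w_star = grad_descent(lambda w: 2.0 * (w - 20.0), w0=10.0)
```

With `lr=0.1` the distance to the minimum shrinks by a factor 0.8 per step, so `w_star` converges to 20.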
"fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(4, 4))\n\nx = np.arange(10, 30, 0.1)\ny = (x - 20)**2 + 2\n\nax.set_xlabel(r\"Poids $\" + STR_WEIGHTS + \"$\", fontsize=14)\nax.set_ylabel(r\"Fonction objectif $\" + STR_ERRFUNC + \"$\", fontsize=14)\n\n# See http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.tick_params\nax.tick_params(axis='both', # changes apply to the x and y axis\n which='both', # both major and minor ticks are affected\n bottom='on', # ticks along the bottom edge are on\n top='off', # ticks along the top edge are off\n left='on', # ticks along the left edge are on\n right='off', # ticks along the right edge are off\n labelbottom='off', # labels along the bottom edge are off\n labelleft='off') # labels along the lefleft are off\n\nax.set_xlim(left=10, right=25)\nax.set_ylim(bottom=0, top=5)\n\nax.plot(x, y);",
"Incremental (or partial) learning:\nthe weights $\\weights$ are adjusted after the presentation of a single example\n(\"this is not a true gradient descent\").\nThis is better for avoiding local minima, especially if the examples are\nshuffled at the beginning of each iteration\nBatch (deferred) learning:\nTODO...\nIs the objective function $\\errfunc$ a multivariate function,\nor is it an aggregation of the errors of each example?\nTODO: delta rule / generalized delta rule\nGradient backpropagation\nGradient backpropagation:\na method to compute the gradient of the objective function $\\errfunc$ efficiently.\nIntuition:\nGradient backpropagation is only one method among others for solving the weight optimization problem. This optimization problem could just as well be solved with evolutionary algorithms, for example.\nIn fact, the appeal of the gradient backpropagation method (and what explains its fame) is that it formulates the weight optimization problem with a particularly efficient analytic expression that cleverly eliminates a large number of redundant computations (somewhat in the manner of dynamic programming): when the weights are optimized via gradient descent, certain terms (the error signals $\\errsig$) appear many times in the full analytic expression of the gradient. The gradient backpropagation method ensures that these terms are computed only once.\nNote that the problem could also have been solved with a gradient descent where the gradient $\\frac{\\partial \\errfunc}{\\partial\\wcur(\\learnit)}$ is computed via a numerical approximation (e.g. the finite difference method), but this would be much slower and much less efficient...\nPrinciple:\nthe weights are modified using the error signals $\\errsig$.\n$$\n\\wcur(\\learnit + 1) = \\wcur(\\learnit) \\underbrace{- \\learnrate \\frac{\\partial \\errfunc}{\\partial \\wcur(\\learnit)}}_{\\delta_\\prevcur}\n$$\n$$\n\\begin{align}\n\\delta_\\prevcur & = - \\learnrate \\frac{\\partial \\errfunc}{\\partial \\wcur(\\learnit)} \\\n                 & = - \\learnrate \\errsig_\\cur \\sigout_\\prev\n\\end{align}\n$$\n\nIn the case of batch learning, the corresponding error is computed for each example. Their individual contributions to the weight modifications are summed.\nSupervised learning works better with linear output neurons (activation function $\\activfunc$ = identity function) \"because the error signals propagate better\".\nBinary input data should be chosen in ${-1,1}$ rather than ${0,1}$, because a zero signal does not contribute to learning.\n\nVocabulary:\n- marginal error: TODO\nError signals $\\errsig_\\cur$ for output neurons $(\\cur \\in \\Omega)$\n$$\n\\errsig_\\cur = \\activfunc'(\\pot_\\cur)[\\sigout_\\cur - \\sigoutdes_\\cur]\n$$\nError signals $\\errsig_\\cur$ for hidden neurons $(\\cur \\not\\in \\Omega)$\n$$\n\\errsig_\\cur = \\activfunc'(\\pot_\\cur) \\sum_\\next \\weight_\\curnext \\errsig_\\next\n$$",
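The two error-signal formulas (output neurons: $\errsig_\cur = \activfunc'(\pot_\cur)[\sigout_\cur - \sigoutdes_\cur]$; hidden neurons: $\errsig_\cur = \activfunc'(\pot_\cur) \sum_\next \weight_\curnext \errsig_\next$) translate directly into code. A minimal sketch for a network with one hidden layer, assuming tanh units so that $f'(x) = 1 - \tanh^2(x)$:

```python
import numpy as np

def backprop_deltas(W_out, x_hidden, x_out, y_out, d_out):
    """Error signals for a one-hidden-layer network of tanh units.
    W_out[k, i] is the weight from hidden neuron i to output neuron k."""
    d_f = lambda x: 1.0 - np.tanh(x) ** 2               # f'(x) for tanh
    delta_out = d_f(x_out) * (y_out - d_out)            # output-layer rule
    delta_hidden = d_f(x_hidden) * (W_out.T @ delta_out)  # hidden-layer rule
    return delta_out, delta_hidden

# Arbitrary values for illustration: 3 hidden neurons -> 2 output neurons.
rng = np.random.default_rng(1)
W_out = rng.normal(size=(2, 3))
x_hidden = rng.normal(size=3)          # hidden potentials
x_out = W_out @ np.tanh(x_hidden)      # output potentials
y_out = np.tanh(x_out)                 # output signals
d_out = np.array([0.1, -0.2])          # desired outputs (labels)
delta_o, delta_h = backprop_deltas(W_out, x_hidden, x_out, y_out, d_out)
```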
"%matplotlib inline\n\nimport nnfigs\n\n# https://github.com/jeremiedecock/neural-network-figures.git\nimport nnfigs.core as nnfig\nimport matplotlib.pyplot as plt\n\nfig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nnnfig.draw_synapse(ax, (0, -2), (10, 0))\nnnfig.draw_synapse(ax, (0, 2), (10, 0), label=tex(STR_WEIGHT + \"_{\" + STR_NEXT + STR_CUR + \"}\"), label_position=0.5, fontsize=14)\n\nnnfig.draw_synapse(ax, (10, 0), (12, 0))\n\nnnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)\n\nplt.text(x=0, y=3.5, s=tex(STR_CUR), fontsize=14)\nplt.text(x=10, y=3.5, s=tex(STR_NEXT), fontsize=14)\nplt.text(x=0, y=-0.2, s=r\"$\\vdots$\", fontsize=14)\n\nnnfig.draw_neuron(ax, (10, 0), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\nplt.show()",
"More detail: computing $\\errsig_\\cur$\nIn the following example, to simplify the demonstration, we focus only on the weights $\\weight_1$, $\\weight_2$, $\\weight_3$, $\\weight_4$ and $\\weight_5$.",
"%matplotlib inline\n\nimport nnfigs\n\n# https://github.com/jeremiedecock/neural-network-figures.git\nimport nnfigs.core as nnfig\nimport matplotlib.pyplot as plt\n\nfig, ax = nnfig.init_figure(size_x=8, size_y=4)\n\nHSPACE = 6\nVSPACE = 4\n\n# Synapse #####################################\n\n# Layer 1-2\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_1\"), label_position=0.4)\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, VSPACE), color=\"lightgray\")\n\nnnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, -VSPACE), color=\"lightgray\")\nnnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, -VSPACE), color=\"lightgray\")\n\n# Layer 2-3\nnnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, VSPACE), label=tex(STR_WEIGHT + \"_2\"), label_position=0.4)\nnnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, VSPACE), color=\"lightgray\")\n\nnnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, -VSPACE), label=tex(STR_WEIGHT + \"_3\"), label_position=0.4)\nnnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, -VSPACE), color=\"lightgray\")\n\n# Layer 3-4\nnnfig.draw_synapse(ax, (2*HSPACE, VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + \"_4\"), label_position=0.4)\nnnfig.draw_synapse(ax, (2*HSPACE, -VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + \"_5\"), label_position=0.4, label_offset_y=-0.8)\n\n# Neuron ######################################\n\n# Layer 1 (input)\nnnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)\nnnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True, line_color=\"lightgray\")\n\n# Layer 2\nnnfig.draw_neuron(ax, (HSPACE, VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (HSPACE, -VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\", line_color=\"lightgray\")\n\n# Layer 3\nnnfig.draw_neuron(ax, (2*HSPACE, VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\nnnfig.draw_neuron(ax, (2*HSPACE, -VSPACE), 1, ag_func=\"sum\", tr_func=\"sigmoid\")\n\n# Layer 4\nnnfig.draw_neuron(ax, (3*HSPACE, 0), 1, ag_func=\"sum\", 
tr_func=\"sigmoid\")\n\n# Text ########################################\n\n# Layer 1 (input)\nplt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + \"_i\"), fontsize=12)\n\n# Layer 2\nplt.text(x=HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + \"_1\"), fontsize=12)\nplt.text(x=HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + \"_1\"), fontsize=12)\n\n# Layer 3\nplt.text(x=2*HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + \"_2\"), fontsize=12)\nplt.text(x=2*HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + \"_2\"), fontsize=12)\n\nplt.text(x=2*HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + \"_3\"), fontsize=12)\nplt.text(x=2*HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + \"_3\"), fontsize=12)\n\n# Layer 4\nplt.text(x=3*HSPACE-1.25, y=1.5, s=tex(STR_POT + \"_o\"), fontsize=12)\nplt.text(x=3*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + \"_o\"), fontsize=12)\n\nplt.text(x=3*HSPACE+2, y=-0.3,\n s=tex(STR_ERRFUNC + \" = (\" + STR_SIGOUT + \"_o - \" + STR_SIGOUT_DES + \"_o)^2/2\"),\n fontsize=12)\n\nplt.show()",
"Note: $\\weight_1$ influences $\\pot_2$ and $\\pot_3$ in addition to $\\pot_1$ and $\\pot_o$.\nComputing $\\frac{\\partial \\errfunc}{\\partial \\weight_4}$\nrecall:\n$$\n\\begin{align}\n\\errfunc &= \\frac12 \\left( \\sigout_o - \\sigoutdes_o \\right)^2  \\tag{1} \\\n\\sigout_o &= \\activfunc(\\pot_o) \\tag{2} \\\n\\pot_o &= \\sigout_2 \\weight_4 + \\sigout_3 \\weight_5 \\tag{3} \\\n\\end{align}\n$$\nthat is:\n$$\n\\errfunc = \\frac12 \\left( \\activfunc \\left( \\sigout_2 \\weight_4 + \\sigout_3 \\weight_5 \\right) - \\sigoutdes_o \\right)^2\n$$\nhence, applying the chain rule for composite functions, we get:\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_4} =\n\\frac{\\partial \\pot_o}{\\partial \\weight_4}\n\\underbrace{\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n}_{\\errsig_o}\n$$\nfrom (1), (2) and (3) we deduce:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_o}{\\partial \\weight_4} &= \\sigout_2 \\\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o} &= \\activfunc'(\\pot_o) \\\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o} &= \\sigout_o - \\sigoutdes_o \\\n\\end{align}\n$$\nthe error signal is therefore:\n$$\n\\begin{align}\n\\errsig_o &=\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o} \\\n&= \\activfunc'(\\pot_o) [\\sigout_o - \\sigoutdes_o]\n\\end{align}\n$$\nComputing $\\frac{\\partial \\errfunc}{\\partial \\weight_5}$\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_5} =\n\\frac{\\partial \\pot_o}{\\partial \\weight_5}\n\\underbrace{\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n}_{\\errsig_o}\n$$\nwith:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_o}{\\partial \\weight_5} &= \\sigout_3 \\\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o} &= \\activfunc'(\\pot_o) \\\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o} &= \\sigout_o - \\sigoutdes_o \\\n\\errsig_o &=\n\\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n\\frac{\\partial \\errfunc}{\\partial \\sigout_o} \\\n&= \\activfunc'(\\pot_o) [\\sigout_o - \\sigoutdes_o]\n\\end{align}\n$$\nComputing $\\frac{\\partial \\errfunc}{\\partial \\weight_2}$\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_2} =\n\\frac{\\partial \\pot_2}{\\partial \\weight_2}\n%\n\\underbrace{\n  \\frac{\\partial \\sigout_2}{\\partial \\pot_2}\n  \\frac{\\partial \\pot_o}{\\partial \\sigout_2}\n  \\underbrace{\n    \\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n    \\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n  }_{\\errsig_o}\n}_{\\errsig_2}\n$$\nwith:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_2}{\\partial \\weight_2} &= \\sigout_1 \\\n\\frac{\\partial \\sigout_2}{\\partial \\pot_2} &= \\activfunc'(\\pot_2) \\\n\\frac{\\partial \\pot_o}{\\partial \\sigout_2} &= \\weight_4 \\\n\\errsig_2 &=\n\\frac{\\partial \\sigout_2}{\\partial \\pot_2}\n\\frac{\\partial \\pot_o}{\\partial \\sigout_2}\n\\errsig_o \\\n&= \\activfunc'(\\pot_2) \\weight_4 \\errsig_o\n\\end{align}\n$$\nComputing $\\frac{\\partial \\errfunc}{\\partial \\weight_3}$\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_3} =\n\\frac{\\partial \\pot_3}{\\partial \\weight_3}\n%\n\\underbrace{\n  \\frac{\\partial \\sigout_3}{\\partial \\pot_3}\n  \\frac{\\partial \\pot_o}{\\partial \\sigout_3}\n  \\underbrace{\n    \\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n    \\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n  }_{\\errsig_o}\n}_{\\errsig_3}\n$$\nwith:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_3}{\\partial \\weight_3} &= \\sigout_1 \\\n\\frac{\\partial \\sigout_3}{\\partial \\pot_3} &= \\activfunc'(\\pot_3) \\\n\\frac{\\partial \\pot_o}{\\partial \\sigout_3} &= \\weight_5 \\\n\\errsig_3 &=\n\\frac{\\partial \\sigout_3}{\\partial \\pot_3}\n\\frac{\\partial \\pot_o}{\\partial \\sigout_3}\n\\errsig_o \\\n&= \\activfunc'(\\pot_3) \\weight_5 \\errsig_o\n\\end{align}\n$$\nComputing $\\frac{\\partial \\errfunc}{\\partial \\weight_1}$\n$$\n\\frac{\\partial \\errfunc}{\\partial \\weight_1} =\n\\frac{\\partial \\pot_1}{\\partial \\weight_1}\n%\n\\underbrace{\n  \\frac{\\partial \\sigout_1}{\\partial \\pot_1}\n  \\left(\n  \\frac{\\partial \\pot_2}{\\partial \\sigout_1}\n  \\underbrace{\n    \\frac{\\partial \\sigout_2}{\\partial \\pot_2}\n    \\frac{\\partial \\pot_o}{\\partial \\sigout_2}\n    \\underbrace{\n      \\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n      \\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n    }_{\\errsig_o}\n  }_{\\errsig_2}\n  +\n  \\frac{\\partial \\pot_3}{\\partial \\sigout_1}\n  \\underbrace{\n    \\frac{\\partial \\sigout_3}{\\partial \\pot_3}\n    \\frac{\\partial \\pot_o}{\\partial \\sigout_3}\n    \\underbrace{\n      \\frac{\\partial \\sigout_o}{\\partial \\pot_o}\n      \\frac{\\partial \\errfunc}{\\partial \\sigout_o}\n    }_{\\errsig_o}\n  }_{\\errsig_3}\n  \\right)\n}_{\\errsig_1}\n$$\nwith:\n$$\n\\begin{align}\n\\frac{\\partial \\pot_1}{\\partial \\weight_1} &= \\sigout_i \\\n\\frac{\\partial \\sigout_1}{\\partial \\pot_1} &= \\activfunc'(\\pot_1) \\\n\\frac{\\partial \\pot_2}{\\partial \\sigout_1} &= \\weight_2 \\\n\\frac{\\partial \\pot_3}{\\partial \\sigout_1} &= \\weight_3 \\\n\\errsig_1 &=\n\\frac{\\partial \\sigout_1}{\\partial \\pot_1}\n\\left(\n\\frac{\\partial \\pot_2}{\\partial \\sigout_1}\n\\errsig_2\n+\n\\frac{\\partial \\pot_3}{\\partial \\sigout_1}\n\\errsig_3\n\\right) \\\n&=\n\\activfunc'(\\pot_1) \\left( \\weight_2 \\errsig_2 + \\weight_3 \\errsig_3 \\right)\n\\end{align}\n$$\nActivation functions: sigmoid functions (\"S\"-shaped)\nThe sigmoid (\"S\"-shaped) function is defined by:\n$$f(x) = \\frac{1}{1 + e^{-x}}$$\nfor every real $x$.\nIt can be generalized to any function of the form:\n$$f(x) = \\frac{1}{1 + e^{-\\lambda x}}$$",
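The hand derivation above can be checked numerically: for the small five-weight network of the figure, the analytic gradient $\frac{\partial \errfunc}{\partial \weight_4} = \sigout_2 \errsig_o$ should agree with a central finite-difference estimate. A sketch for $\weight_4$ alone, with arbitrary input, label, and weight values and tanh as the activation function:

```python
import numpy as np

f = np.tanh
d_f = lambda x: 1.0 - np.tanh(x) ** 2   # f'(x) for tanh

def error(w, y_in=0.7, d_o=0.3):
    """E = (y_o - d_o)^2 / 2 for the five-weight network: w = [w1..w5]."""
    w1, w2, w3, w4, w5 = w
    y1 = f(w1 * y_in)                    # first hidden neuron
    y2, y3 = f(w2 * y1), f(w3 * y1)      # second hidden layer
    y_o = f(w4 * y2 + w5 * y3)           # output neuron
    return 0.5 * (y_o - d_o) ** 2

w = np.array([0.1, 0.2, -0.3, 0.4, 0.5])

# Analytic gradient: dE/dw4 = y2 * delta_o, delta_o = f'(x_o) (y_o - d_o)
y1 = f(w[0] * 0.7)
y2, y3 = f(w[1] * y1), f(w[2] * y1)
x_o = w[3] * y2 + w[4] * y3
delta_o = d_f(x_o) * (f(x_o) - 0.3)
analytic = y2 * delta_o

# Central finite-difference estimate of the same derivative
eps = 1e-6
w_p, w_m = w.copy(), w.copy()
w_p[3] += eps
w_m[3] -= eps
numeric = (error(w_p) - error(w_m)) / (2.0 * eps)
```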
"def sigmoid(x, _lambda=1.):\n y = 1. / (1. + np.exp(-_lambda * x))\n return y\n\n%matplotlib inline\n\nx = np.linspace(-5, 5, 300)\n\ny1 = sigmoid(x, 1.)\ny2 = sigmoid(x, 5.)\ny3 = sigmoid(x, 0.5)\n\nplt.plot(x, y1, label=r\"$\\lambda=1$\")\nplt.plot(x, y2, label=r\"$\\lambda=5$\")\nplt.plot(x, y3, label=r\"$\\lambda=0.5$\")\n\nplt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')\nplt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')\n\nplt.legend()\n\nplt.title(\"Fonction sigmoïde\")\nplt.axis([-5, 5, -0.5, 2]);",
"Derivative:\n$$\nf'(x) = \\frac{\\lambda e^{-\\lambda x}}{(1+e^{-\\lambda x})^{2}}\n$$\nwhich can also be written as\n$$\n\\frac{\\mathrm{d} y}{\\mathrm{d} x} = \\lambda y (1-y)\n$$\nwhere $y$ ranges from 0 to 1.",
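The second form of the derivative, $\frac{\mathrm{d} y}{\mathrm{d} x} = \lambda y (1-y)$, is handy in backpropagation because it reuses the already-computed output $y$. A quick numeric check that both expressions agree (the sample points and $\lambda$ values are arbitrary):

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

x = np.linspace(-5.0, 5.0, 101)
for lam in (0.5, 1.0, 5.0):
    y = sigmoid(x, lam)
    d_direct = lam * np.exp(-lam * x) / (1.0 + np.exp(-lam * x)) ** 2  # f'(x)
    d_reuse = lam * y * (1.0 - y)                                      # lambda*y*(1-y)
    assert np.allclose(d_direct, d_reuse)
```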
"def d_sigmoid(x, _lambda=1.):\n e = np.exp(-_lambda * x)\n y = _lambda * e / np.power(1 + e, 2)\n return y\n\n%matplotlib inline\n\nx = np.linspace(-5, 5, 300)\n\ny1 = d_sigmoid(x, 1.)\ny2 = d_sigmoid(x, 5.)\ny3 = d_sigmoid(x, 0.5)\n\nplt.plot(x, y1, label=r\"$\\lambda=1$\")\nplt.plot(x, y2, label=r\"$\\lambda=5$\")\nplt.plot(x, y3, label=r\"$\\lambda=0.5$\")\n\nplt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')\nplt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')\n\nplt.legend()\n\nplt.title(\"Fonction dérivée de la sigmoïde\")\nplt.axis([-5, 5, -0.5, 2]);",
"Hyperbolic tangent",
"def tanh(x):\n y = np.tanh(x)\n return y\n\nx = np.linspace(-5, 5, 300)\ny = tanh(x)\n\nplt.plot(x, y)\n\nplt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')\nplt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')\n\nplt.title(\"Fonction tangente hyperbolique\")\nplt.axis([-5, 5, -2, 2]);",
"Derivative:\n$$\n\\tanh' = \\frac{1}{\\cosh^{2}} = 1-\\tanh^{2}\n$$",
"def d_tanh(x):\n y = 1. - np.power(np.tanh(x), 2)\n return y\n\nx = np.linspace(-5, 5, 300)\ny = d_tanh(x)\n\nplt.plot(x, y)\n\nplt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')\nplt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')\n\nplt.title(\"Fonction dérivée de la tangente hyperbolique\")\nplt.axis([-5, 5, -2, 2]);",
"Logistic function\nFunctions of the form\n$$\nf(t) = K \\frac{1}{1+ae^{-rt}}\n$$\nwhere $K$ and $r$ are positive reals and $a$ is any real number.\nSigmoid functions are a special case of logistic functions with $a > 0$.\nPython implementation",
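A minimal sketch of the generalised logistic function defined above (parameter names `K`, `a`, `r` follow the formula; with $K=1$ and $a=1$ it reduces to the sigmoid with $\lambda = r$):

```python
import math

def logistic(t, K=1.0, a=1.0, r=1.0):
    # f(t) = K / (1 + a * exp(-r * t)), the generalised form given above
    return K / (1.0 + a * math.exp(-r * t))

# with K = 1 and a = 1 this reduces to the sigmoid with lambda = r
print(logistic(0.0), logistic(0.0, K=2.0))
```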
"# Define the activation function and its derivative\nactivation_function = tanh\nd_activation_function = d_tanh\n\ndef init_weights(num_input_cells, num_output_cells, num_cell_per_hidden_layer, num_hidden_layers=1):\n    \"\"\"\n    The returned `weights` object is a list of weight matrices,\n    where the weight matrix at index $i$ represents the weights between\n    layer $i$ and layer $i+1$.\n    \n    Numpy array shapes for e.g. num_input_cells=2, num_output_cells=2,\n    num_cell_per_hidden_layer=3 (without taking the bias into account):\n    - in:       (2,)\n    - in+bias:  (3,)\n    - w[0]:      (3,3)\n    - w[0]+bias: (3,4)\n    - w[1]:      (3,2)\n    - w[1]+bias: (4,2)\n    - out:      (2,)\n    \"\"\"\n    \n    # TODO:\n    # - should the weights w_ij be positive?\n    # - is a normal distribution more appropriate than a uniform one?\n    # - what sigma is recommended?\n    \n    W = []\n    \n    # Weights between the input layer and the first hidden layer\n    W.append(np.random.uniform(low=0., high=1., size=(num_input_cells + 1, num_cell_per_hidden_layer + 1)))\n    \n    # Weights between hidden layers (if there is more than one hidden layer)\n    for layer in range(num_hidden_layers - 1):\n        W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_cell_per_hidden_layer + 1)))\n    \n    # Weights between the last hidden layer and the output layer\n    W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_output_cells)))\n    \n    return W\n\ndef evaluate_network(weights, input_signal):   # TODO: find a better name\n    \n    # Add the bias on the input layer\n    input_signal = np.concatenate([input_signal, [-1]])\n    \n    assert input_signal.ndim == 1\n    assert input_signal.shape[0] == weights[0].shape[0]\n    \n    # Compute the output of the first hidden layer\n    p = np.dot(input_signal, weights[0])\n    output_hidden_layer = activation_function(p)\n    \n    # Compute the output of the intermediate hidden layers\n    # TODO: check this\n    num_layers = len(weights)\n    for n in range(num_layers - 2):\n        p = np.dot(output_hidden_layer, weights[n + 1])\n        output_hidden_layer = activation_function(p)\n    \n    # Compute the output of the output layer\n    p = np.dot(output_hidden_layer, weights[-1])\n    output_signal = activation_function(p)\n    \n    return output_signal\n\ndef compute_gradient():\n    # TODO\n    pass\n\nweights = init_weights(num_input_cells=2, num_output_cells=2, num_cell_per_hidden_layer=3, num_hidden_layers=1)\nprint(weights)\n#print(weights[0].shape)\n#print(weights[1].shape)\n\ninput_signal = np.array([.1, .2])\ninput_signal\n\nevaluate_network(weights, input_signal)",
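`compute_gradient` is left as a TODO in the cell above. A finite-difference reference gradient is a common way to bootstrap (and later verify) a backpropagation implementation. The sketch below assumes a simplified forward pass (tanh layers, no bias terms), not the exact `evaluate_network` above:

```python
import numpy as np

def forward(W, x):
    # single hidden layer, tanh activations (a simplified stand-in for
    # evaluate_network above; bias handling is omitted for brevity)
    h = np.tanh(x @ W[0])
    return np.tanh(h @ W[1])

def loss(W, x, target):
    # squared-error loss between network output and target
    out = forward(W, x)
    return 0.5 * np.sum((out - target) ** 2)

def numerical_gradient(W, x, target, eps=1e-6):
    # finite-difference gradient of the loss w.r.t. every weight --
    # far too slow for training, but a reliable reference for backprop
    grads = []
    for w in W:
        g = np.zeros_like(w)
        for idx in np.ndindex(w.shape):
            old = w[idx]
            w[idx] = old + eps
            up = loss(W, x, target)
            w[idx] = old - eps
            down = loss(W, x, target)
            w[idx] = old  # restore the original weight
            g[idx] = (up - down) / (2 * eps)
        grads.append(g)
    return grads
```

A small step against this gradient should decrease the loss, which makes it a handy sanity check before writing the analytic version.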
"Notes from the sklearn documentation\n\nfeatures: the network's input data (i.e. the inputs of the network's first layer)\n\"number of features\" = size of the input vector\nloss function: objective function (or error function)\nfitting: the learning process (training)\nsample: example\n\nThe biases are stored in a list of vectors rather than a list of scalars... why?\nAdvantages of MLPs:\n- able to learn non-linear models\n- able to learn models in real time (on-line learning)\nDrawbacks of MLPs:\n- MLPs with one or more hidden layers have a non-convex objective function with local minima. As a consequence, the outcome of the learning process can vary from one run to the next depending on the initial weights, and reaching an optimal network is not guaranteed\n- to obtain a satisfactory result, it is often necessary to tune (more or less empirically) many hyperparameters (number of hidden layers, number of neurons per hidden layer, number of iterations, ...)\n- poor normalization of the input data has a very negative impact on the quality of the result (\"ill-conditioned\"?)\nCross-Entropy Loss Function: ...\nSoftmax: ...\nMulti-label classification: ... a classifier model that allows an example to belong to several classes\nNotes from the book by Jean-Philippe Rennard"
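On the normalisation point above: z-score standardisation (what `sklearn.preprocessing.StandardScaler` does) can be sketched with plain NumPy as follows:

```python
import numpy as np

def standardize(X):
    # z-score each feature column: zero mean, unit variance
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # leave constant features unscaled
    return (X - mu) / sigma

# toy data with wildly different feature scales
X = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
Xs = standardize(X)
print(Xs)
```

In practice the scaling parameters must be fit on the training set only and reused on the test set, which is what the sklearn transformer API enforces.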
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mathias-gibson/ps239t-final-project
|
01_collect-data.ipynb
|
mit
|
[
"To collect my data I used GET requests to retrieve information from two ProPublica APIs in .json format, and exported the data into two separate .csv files.",
"# Import required libraries\nfrom __future__ import division  # must precede all other imports in Python 2\nimport requests\nimport urllib\nimport json\nimport math\nimport time",
"ProPublica Campaign Finance API\nhttps://propublica.github.io/campaign-finance-api-docs/#candidates",
"# set key\nkey=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n\n# set base url\nbase_url=\"https://api.propublica.org/campaign-finance/v1/\"\n\n# set headers\nheaders = {'X-API-Key': key}\n\n# set url parameters\ncycle = \"2014/\"\nmethod = \"candidates/\"\nfile_format = \".json\"\n\n# create a list of FEC IDs from http://www.fec.gov/data/DataCatalog.do to run the API request on more than one ID\nfec_id_list = []\nwith open('fecid2014.txt') as file:\n for line in file:\n fec_id_list.append(line.strip())\n\n# make request, build list of results for each FEC ID\ndata = []\nfor fec_id in fec_id_list:\n r = requests.get(base_url+cycle+method+fec_id+file_format, headers=headers)\n candidate = r.json()['results']\n data.append(candidate)\n time.sleep(3)\nprint(data)\n\n# format data for export\ndata = [v for sublist in data for v in sublist]\ndata_keys = data[0].keys()\n\n\n# export to csv\nimport csv\nwith open('ppcampfin.csv', 'w') as file:\n dict_writer = csv.DictWriter(file, data_keys)\n dict_writer.writeheader()\n dict_writer.writerows(data)",
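One caveat in the cell above: `data_keys = data[0].keys()` assumes every candidate dict has exactly the same keys, and `csv.DictWriter` raises `ValueError` for rows containing fields not listed in `fieldnames`. A more defensive sketch (Python 3; the rows shown are hypothetical) takes the union of keys across all rows:

```python
import csv
import io

def rows_to_csv(rows):
    # Collect the union of keys across all rows: csv.DictWriter raises
    # ValueError if a row contains a field missing from `fieldnames`,
    # and `restval` fills in fields a row happens to lack.
    fieldnames = sorted({k for row in rows for k in row})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, restval='')
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# hypothetical rows with differing key sets
rows = [{'id': 'H4GA1', 'name': 'A'}, {'id': 'S2CO1', 'party': 'I'}]
print(rows_to_csv(rows))
```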
"ProPublica Congress API - list of all members\nhttps://propublica.github.io/congress-api-docs/?shell#lists-of-members",
"# set key\nkey=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n\n# set base url\nbase_url=\"https://api.propublica.org/congress/v1/\"\n\n# set url parameters\n\ncongress = \"114/\" #102-114 for House, 80-114 for Senate\nchamber = \"senate\" #house or senate\nmethod=\"/members\"\nfile_format = \".json\"\n\n#set headers\nheaders = {'X-API-Key': key}\n\n# make request\nr = requests.get(base_url+congress+chamber+method+file_format, headers=headers)\n\n# parse data for component nested dictionaries\ndata=(r.json())\n\nbio_keys = data['results'][0]['members'][0]\nbio_list = data['results'][0]['members']\n\n# export to csv\nimport csv\nwith open('bio.csv', 'w') as file:\n dict_writer = csv.DictWriter(file, bio_keys)\n dict_writer.writeheader()\n dict_writer.writerows(bio_list)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
amandersillinois/landlab
|
notebooks/tutorials/plotting/animate-landlab-output.ipynb
|
mit
|
[
"<a href=\"http://landlab.github.io\"><img style=\"float: left\"\nsrc=\"https://raw.githubusercontent.com/landlab/tutorials/release/landlab_header.png\"></a>\nAnimate Landlab output\n<hr>\n\n<p><small>More Landlab tutorials:\n<a href=\"https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html\">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small></p>\n\n<hr>\n\nWARNING: This tutorial is not tested. This is because it relies on FFmpeg and ImageMagick.\nIt has not been updated to work with Landlab 2.0.\nIntroduction\nThis tutorial presents a workflow to animate model output. The workflow is presented in two phases of a Landlab model.\nThis workflow requires software that might not be installed on your computer. The software is open-source and freely available for Linux, MacOS, and Windows. The following software is required for the model phases of this tutorial:\n* Phase 1: An mp4 video is produced using FFmpeg (use conda install ffmpeg, or the pip equivalent, or visit https://www.ffmpeg.org/download.html).\n* Phase 2: A gif video is produced using ImageMagick (use conda install imagemagick, or the pip equivalent, or visit https://www.imagemagick.org/script/download.php).\nPrepare the model for both phases\nImport the modules called by this tutorial.",
"from landlab import RasterModelGrid\nfrom landlab.components import FastscapeEroder, FlowAccumulator\nfrom landlab.plot import channel_profile, imshow_grid\nfrom IPython.display import HTML, Image\nimport matplotlib.animation as animation\nimport matplotlib.pylab as plt\nimport numpy as np",
"Create a grid with random elevation, set boundary conditions, and initialize components.",
"mg = RasterModelGrid((40, 40), 100)\nz = mg.add_zeros('topographic__elevation', at='node')\nz += np.random.rand(z.size)\n\noutlet_id = int(mg.number_of_node_columns * 0.5)\nmg.set_watershed_boundary_condition_outlet_id(outlet_id, z)\nmg.at_node['topographic__elevation'][outlet_id] = 0\n\nfr = FlowAccumulator(mg)\nsp = FastscapeEroder(mg, K_sp=3e-5, m_sp=0.5, n_sp=1)",
"Set model time and uplift parameters.",
"simulation_duration = 1e6\ndt = 1000\nn_timesteps = int(simulation_duration // dt) + 1\ntimesteps = np.linspace(0, simulation_duration, n_timesteps)\n\nuplift_rate = 0.001\nuplift_per_timestep = uplift_rate * dt",
"Phase 1: Animate elevation change using imshow_grid\nWe first prepare the animation movie file. The model is run and the animation frames are captured together.",
"# Create a matplotlib figure for the animation.\nfig, ax = plt.subplots(1, 1)\n\n# Initiate an animation writer using the matplotlib module, `animation`.\n# Set up to animate 6 frames per second (fps)\nwriter = animation.FFMpegWriter(fps=6)\n\n# Setup the movie file.\nwriter.setup(fig, 'first_phase.mp4')\n\nfor t in timesteps:\n # Uplift and erode.\n z[mg.core_nodes] += uplift_per_timestep\n fr.run_one_step()\n sp.run_one_step(dt)\n\n # Update the figure every 50,000 years.\n if t % 5e4 == 0:\n imshow_grid(mg, z, colorbar_label='elevation (m)')\n plt.title('{:.0f} kiloyears'.format(t * 1e-3))\n\n # Capture the state of `fig`.\n writer.grab_frame()\n\n # Remove the colorbar and clear the axis to reset the\n # figure for the next animation timestep.\n plt.gci().colorbar.remove()\n ax.cla()\n\nplt.close()",
"Finish the animation\nThe writer.finish method completes the processing of the movie and then saves it.",
"writer.finish()",
"This code loads the saved mp4 and presents it in a Jupyter Notebook.",
"HTML(\"\"\"<div align=\"middle\"> <video width=\"80%\" controls loop>\n <source src=\"first_phase.mp4\" type=\"video/mp4\"> </video></div>\"\"\")",
"Phase 2: Animate multiple visualizations of elevation change over time\nIn the second model phase, we will create an animation similar to the one above, although with the following differences:\n* The uplift rate is greater.\n* The animation file format is gif.\n* The figure has two subplots.\n* The data of one of the subplots is updated rather than recreating the plot from scratch for each frame.\n* The animation frame rate (fps) is lower.\nIncrease uplift rate prior to running the second phase of the model.",
"increased_uplift_per_timestep = 10 * uplift_per_timestep",
"Run the second phase of the model\nHere we layout the figure with a left and right subplot.\n* The left subplot will be an animation of the grid similar to phase 1. We will recreate the image of this subplot for each animation frame.\n* The right subplot will be a line plot of the mean elevation over time. We will layout the subplot elements (labels, limits) before running the model, and then extend the plot line at each animation frame.\naxes[0] and axes[1] refer to the left and right subplot, respectively.\nA gif formatted movie is created in this model phase using the software, ImageMagick.",
"# Create a matplotlib figure for the animation.\nfig2, axes = plt.subplots(1, 2, figsize=(9, 3))\nfig2.subplots_adjust(top=0.85, bottom=0.25, wspace=0.4)\n\n# Layout right subplot.\n\ntime = 0\n\nline, = axes[1].plot(time, z.mean(), 'k')\n\naxes[1].set_title('mean elevation over time')\naxes[1].set_xlim([0, 1000])\naxes[1].set_ylim([0, 1000])\naxes[1].set_xlabel('time (kyr)')\naxes[1].set_ylabel('elevation (m)')\n\n# Initiate a writer and set up a movie file.\nwriter = animation.ImageMagickWriter(fps=2)\nwriter.setup(fig2, 'second_phase.gif')\n\nfor t in timesteps:\n # Uplift and erode.\n z[mg.core_nodes] += increased_uplift_per_timestep\n fr.run_one_step()\n sp.run_one_step(dt)\n\n # Update the figure every 50,000 years.\n if t % 5e4 == 0:\n fig2.sca(axes[0])\n fig2.suptitle('{:.0f} kiloyears'.format(t * 1e-3))\n\n # Plot the left subplot.\n axes[0].set_title('topography')\n imshow_grid(mg, z, colorbar_label='elevation (m)')\n colorbar = plt.gci().colorbar\n\n # Update the right subplot.\n line.set_xdata(np.append(line.get_xdata(), t * 1e-3))\n line.set_ydata(np.append(line.get_ydata(), z.mean()))\n\n # Capture the state of `fig2`.\n writer.grab_frame()\n\n # Reset the figure for the next animation time step.\n plt.cla()\n colorbar.remove()\n\nwriter.finish()\n\nplt.close()",
"This code loads the saved gif and presents it in a Jupyter Notebook.",
"Image(filename='second_phase.gif')",
"Click here for more <a href=\"https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html\">Landlab tutorials</a>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
scottlittle/solar-sensors
|
.ipynb_checkpoints/all-datasets-together-checkpoint.ipynb
|
apache-2.0
|
[
"Summon any data\nI want to make a single query and have it return data across the datasets",
"from datetime import datetime,timedelta, time\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom data_helper_functions import *\nfrom IPython.display import display\npd.options.display.max_columns = 999\n%matplotlib inline\n\n\n\ndesired_channel = 'BAND_01'\ndesired_date = datetime(2014, 4, 1)\ndesired_timedelta = timedelta(hours = 15)\ndesired_datetime = desired_date + desired_timedelta\nsatellite_filefolder = '../../data/satellite/colorado/summer6months/data/'\nsensor_filefolder = '../../data/sensor_data/colorado6months/'\npvoutput_filefolder = '../../data/pvoutput/pvoutput6months/'\n\n#satellite data\nsatellite_filename = find_filename(desired_datetime, desired_channel, satellite_filefolder)\nlons, lats, data = return_satellite_data(satellite_filename, satellite_filefolder)\n\n\nplt.figure(figsize=(8, 8))\nimgplot = plt.imshow(data)\nimgplot.set_interpolation('none')\nplt.savefig('foo.png')\nplt.show()\n\n#sensor data\nsensor_filename = find_file_from_date(desired_date, sensor_filefolder)\ndf_sensor = return_sensor_data(sensor_filename, sensor_filefolder)\ndf_sensor[df_sensor.index == desired_datetime]\ndisplay(df_sensor[df_sensor.index == desired_datetime])\n\n#pvoutput data\npvoutput_filename = find_file_from_date(desired_date, pvoutput_filefolder)\ndf_pvoutput = return_pvoutput_data(pvoutput_filename, pvoutput_filefolder)\ndisplay(df_pvoutput[df_pvoutput.index == desired_datetime])\n\n#saving df to image\n\n# a = Image(data=df_sensor)\n# type(a)",
"Build up sensor to pvoutput model",
"from datetime import datetime,timedelta, time\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom data_helper_functions import *\nfrom IPython.display import display\npd.options.display.max_columns = 999\n%matplotlib inline\n\n#iterate over datetimes:\nmytime = datetime(2014, 4, 1, 13)\ntimes = make_time(mytime)\n\n# Now that we can call data up over any datetime and we have a list of interested datetimes,\n# we can finally construct an X matrix and y vector for regression.\n\nsensor_filefolder = 'data/sensor_data/colorado6months/'\npvoutput_filefolder = 'data/pvoutput/pvoutput6months/'\n\nX = []\ny = []\n\nfor desired_datetime in times:\n \n try: #something wrong with y on last day\n desired_date = (desired_datetime - timedelta(hours=6)).date() #make sure correct date\n desired_date = datetime.combine(desired_date, time.min) #get into datetime format\n\n sensor_filename = find_file_from_date(desired_date, sensor_filefolder)\n df_sensor = return_sensor_data(sensor_filename, sensor_filefolder).ix[:,-15:-1]\n df_sensor[df_sensor.index == desired_datetime]\n\n pvoutput_filename = find_file_from_date(desired_date, pvoutput_filefolder)\n df_pvoutput = return_pvoutput_data(pvoutput_filename, pvoutput_filefolder)\n \n y.append(df_pvoutput[df_pvoutput.index == desired_datetime].values[0][0])\n X.append(df_sensor[df_sensor.index == desired_datetime].values[0])\n except:\n pass\n\nX = np.array(X)\ny = np.array(y)\n\nprint X.shape\nprint y.shape",
"...finally ready to model!\nRandom Forest",
"from sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=99)\n\nfrom sklearn.ensemble import RandomForestRegressor\nrfr = RandomForestRegressor(oob_score = True)\n\nrfr.fit(X_train,y_train)\n\ny_pred = rfr.predict(X_test)\n\nrfr.score(X_test,y_test)\n\ndf_sensor.columns.values.shape\n\nsorted_mask = np.argsort(rfr.feature_importances_)\n\nfor i in zip(df_sensor.columns.values,rfr.feature_importances_[sorted_mask])[::-1]:\n print i",
"Linear model",
"#now do a linear model and compare:\nfrom sklearn.linear_model import LinearRegression\nlr = LinearRegression()\nlr.fit(X_train,y_train)\nlr.score(X_test,y_test)\n\nsorted_mask = np.argsort(lr.coef_)\n\nfor i in zip(df_sensor.columns.values,lr.coef_[sorted_mask])[::-1]:\n print i\n\ndf_sensor.ix[:,-15:-1].head() #selects photometer and AOD, \n# useful in next iteration of using sensor data to fit",
"When keeping only the photometer data, the random forest and the linear model perform fairly similarly. When I added all of the sensor instruments to the fit, rfr scored 0.87 and lr scored negative!\nAlso, I threw away the mysterious \"Research 2\" sensor, which was probably just a solar panel! I asked NREL what it is, so we'll see. If it turns out to be a solar panel, then I can do some feature engineering with the sensor data by simulating a solar panel!\nNeural Net Exploration",
"import pandas as pd\nimport numpy as np\nfrom sklearn.preprocessing import scale\nfrom lasagne import layers\nfrom lasagne.nonlinearities import softmax, rectify, sigmoid, linear, very_leaky_rectify, tanh\nfrom lasagne.updates import nesterov_momentum, adagrad, momentum\nfrom nolearn.lasagne import NeuralNet\nimport theano\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import StandardScaler\n\ny = y.astype('float32')\nx = X.astype('float32')\nscaler = StandardScaler()\nscaled_x = scaler.fit_transform(x)\nx_train, x_test, y_train, y_test = train_test_split(scaled_x, y, test_size = 0.2, random_state = 12)\n\nnn_regression = NeuralNet(layers=[('input', layers.InputLayer),\n# ('hidden1', layers.DenseLayer),\n# ('hidden2', layers.DenseLayer),\n ('output', layers.DenseLayer)\n ],\n\n # Input Layer\n input_shape=(None, x.shape[1]),\n\n # hidden Layer\n# hidden1_num_units=512,\n# hidden1_nonlinearity=softmax,\n \n # hidden Layer\n# hidden2_num_units=128,\n# hidden2_nonlinearity=linear,\n\n # Output Layer\n output_num_units=1,\n output_nonlinearity=very_leaky_rectify,\n\n # Optimization\n update=nesterov_momentum,\n update_learning_rate=0.03,#0.02\n update_momentum=0.8,#0.8\n max_epochs=600, #was 100\n\n # Others\n #eval_size=0.2,\n regression=True,\n verbose=0,\n )\n\nnn_regression.fit(x_train, y_train)\ny_pred = nn_regression.predict(x_test)\nnn_regression.score(x_test, y_test)\n\nval = 11\nprint y_pred[val][0]\nprint y_test[val]\n\nplt.plot(y_pred,'ro')\n\nplt.plot(y_test,'go')",
"Extra Trees!",
"from sklearn.ensemble import ExtraTreesRegressor\netr = ExtraTreesRegressor(oob_score=True, bootstrap=True,\n n_jobs=-1, n_estimators=1000) #nj_obs uses all cores!\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=99)\n\netr.fit(X_train, y_train)\n\nprint etr.score(X_test,y_test)\nprint etr.oob_score_\n\ny_pred = etr.predict(X_test)\n\nfrom random import randint\nval = randint(0,y_test.shape[0])\nprint y_pred[val]\nprint y_test[val]\n\nprint X.shape\nprint y.shape",
"Save this thing and try it out on the simulated sensors!",
"from sklearn.externals import joblib\njoblib.dump(etr, 'data/sensor-to-power-model/sensor-to-power-model.pkl') \n\nnp.savez_compressed('data/y.npz',y=y) #save y"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
claudiuskerth/PhDthesis
|
Data_analysis/SNP-indel-calling/dadi/dadiExercises/First_Steps_with_dadi.ipynb
|
mit
|
[
"import sys\n\nsys.path\n\nimport os\n\nos.getcwd()",
"I have cloned the $\\delta$a$\\delta$i repository into '/home/claudius/Downloads/dadi' and have compiled the code. Now I need to add that directory to the PYTHONPATH variable:",
"sys.path.insert(0, '/home/claudius/Downloads/dadi')\n\nsys.path",
"Now, I should be able to import $\\delta$a$\\delta$i",
"import dadi\n\ndir(dadi)\n\nimport pylab\n\n%matplotlib inline\n\nx = pylab.linspace(0, 4*pylab.pi, 1000)\n\npylab.plot(x, pylab.sin(x), '-r')\n\n%%sh \n# this allows me to execute a shell command\n\nls",
"I have turned the 1D folded SFSs from realSFS into $\\delta$a$\\delta$i format by hand, following the description in section 3.1 of the manual. I have left out the masking line from the input file.",
"fs_ery = dadi.Spectrum.from_file('ERY.FOLDED.sfs.dadi_format')\n\nfs_ery",
"$\\delta$a$\\delta$i detects that the spectrum is folded (as given in the input file), but it also automatically masks the 0th and 18th count categories. This is not good behaviour.",
"# number of segregating sites\n\nfs_ery.data[1:].sum()",
"Single population statistics\n$\\pi$",
"fs_ery.pi()",
"I have next added a masking line to the input file, setting it to '1' for the first position, i. e. the 0-count category.",
"fs_ery = dadi.Spectrum.from_file('ERY.FOLDED.sfs.dadi_format', mask_corners=False)",
"$\\delta$a$\\delta$i is issuing the following message when executing the above command:\nWARNING:Spectrum_mod:Creating Spectrum with data_folded = True, but mask is not True for all entries which are nonsensical for a folded Spectrum.",
"fs_ery",
"I do not understand this warning from $\\delta$a$\\delta$i. The 18-count category is meaningful for a folded spectrum with an even sample size, so it should not be masked. Anyway, I do not understand why $\\delta$a$\\delta$i is so reluctant to keep all positions, including the non-variable one.",
"fs_ery.pi()",
"The function that returns $\\pi$ produces the same output with or without the last count category masked ?! I think that is because even if the last count class (966.62...) is masked, it is still included in the calculation of $\\pi$. However, there is no obvious unmasking in the pi function. Strange!\nThere are (at least) two formulas that allow the calculation of $\\pi$ from a folded sample allele frequency spectrum. One is given in Wakeley2009, p.16, equation (1.4):\n$$\n\\pi = \\frac{1}{n \\choose 2} \\sum_{i=1}^{n/2} i(n-i)\\eta_{i}\n$$\nHere, $n$ is the number of sequences and $\\eta_{i}$ is the SNP count in the i'th minor sample allele frequency class.\nThe other formula is on p. 45 in Gillespie \"Population Genetics - A concise guide\":\n$$\n\\hat{\\pi} = \\frac{n}{n-1} \\sum_{i=1}^{S_{n}} 2 \\hat{p_{i}}(1-\\hat{p_{i}})\n$$\nThis is the formula that $\\delta$a$\\delta$i's pi function uses, with the modification that it multiplies each $\\hat{p_{i}}$ by the count in the i'th class of the SFS, i. e. the sum is not over all SNP's but over all SNP frequency classes.",
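Wakeley's formula can be sanity-checked on a toy sample by comparing it with the average number of pairwise differences computed directly from the sequences (a standalone sketch with made-up data, not the notebook's spectra):

```python
from itertools import combinations

def pi_from_folded_sfs(eta, n):
    # Wakeley 2009, eq. 1.4: pi = sum_i i*(n-i)*eta_i / C(n, 2),
    # where eta maps minor allele count i (1..n//2) to the number of SNPs
    pairs = n * (n - 1) / 2.0
    return sum(i * (n - i) * eta.get(i, 0) for i in range(1, n // 2 + 1)) / pairs

def pi_direct(seqs):
    # average number of pairwise differences between the sampled sequences
    diffs = [sum(a != b for a, b in zip(s, t)) for s, t in combinations(seqs, 2)]
    return sum(diffs) / float(len(diffs))

# 4 sequences, 2 segregating sites: minor allele counts 1 and 2
seqs = ["00", "00", "01", "11"]
eta = {1: 1, 2: 1}
print(pi_from_folded_sfs(eta, 4), pi_direct(seqs))
```

Both routes give $7/6$ here, which is a quick way to convince oneself that the SFS-based formula really is the average pairwise difference.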
"# Calculating pi with the formula from Wakeley2009\n\nn = 36 # 36 sequences sampled from 18 diploid individuals\npi_Wakeley = (sum( [i*(n-i)*fs_ery[i] for i in range(1, n/2+1)] ) * 2.0 / (n*(n-1)))/pylab.sum(fs_ery.data)\n# note fs_ery.data gets the whole fs_ery list, including masked entries\npi_Wakeley",
"This is the value of $\\pi_{site}$ that I calculated previously and included in the first draft of the thesis.",
"fs_ery.mask\n\nfs_ery.data # gets all data, including the masked one\n\n# Calculating pi with the formula from Gillespie:\n\nn = 18 \np = pylab.arange(0, n+1)/float(n)\np\n\n# Calculating pi with the formula from Gillespie:\n\nn / (n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p))",
"This is the same as the output of dadi's pi function on the same SFS.",
"# the sample size (n) that dadi stores in this spectrum object and uses as n in the pi function\nfs_ery.sample_sizes[0]\n\n# what is the total number of sites in the spectrum\npylab.sum(fs_ery.data)",
"So, 1.6 million sites went into the ery spectrum.",
"# pi per site\nn / (n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) / pylab.sum(fs_ery.data)",
"Apart from the incorrect small sample size correction by $\\delta$a$\\delta$i in case of folded spectra ($n$ refers to sampled sequences, not individuals), Gillespie's formula leads to a much higher estimate of $\\pi_{site}$ than Wakeley's. Why is that?",
"# with correct small sample size correction\n2 * n / (2* n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) / pylab.sum(fs_ery.data)\n\n# Calculating pi with the formula from Gillespie:\n\nn = 18 \np = pylab.arange(0, n+1)/float(n)\np = p/2 # with a folded spectrum, we are summing over minor allele freqs only\npi_Gillespie = 2*n / (2*n-1.0) * 2 * pylab.sum(fs_ery * p*(1-p)) / pylab.sum(fs_ery.data)\npi_Gillespie\n\npi_Wakeley - pi_Gillespie",
"As can be seen from the insignificant difference (which must be due to numerical inaccuracies) between the $\\pi_{Wakeley}$ and $\\pi_{Gillespie}$ estimates, the two formulas are equivalent once the calculation for folded spectra given above and the correct small sample size correction are used. Beware: $\\delta$a$\\delta$i does not handle folded spectra correctly.\nIt should be relatively easy to fix the pi function to work correctly with folded spectra. Care should be taken to also handle uneven sample sizes correctly.",
"fs_ery.folded",
"I think for now it would be best to import unfolded spectra from realSFS and fold them if necessary in dadi.",
"fs_par = dadi.Spectrum.from_file('PAR.FOLDED.sfs.dadi_format')\n\npylab.plot(fs_ery, 'r', label='ery')\npylab.plot(fs_par, 'g', label='par')\npylab.legend()",
"ML estimate of $\\theta$ from 1D folded spectrum\nI am trying to fit eq. 4.21 of Wakeley2009 to the observed 1D folded spectra.\n$$\nE[\\eta_i] = \\theta \\frac{\\frac{1}{i} + \\frac{1}{n-i}}{1+\\delta_{i,n-i}} \\qquad 1 \\le i \\le \\big[n/2\\big]\n$$\nEach frequency class, $\\eta_i$, provides an estimate of $\\theta$. However, I would like to find the value of $\\theta$ that minimizes the deviation of the above equation from all observed counts $\\eta_i$.\nI am following the example given here: https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#example-of-solving-a-fitting-problem\n$$\n\\frac{\\partial E[\\eta_i]}{\\partial \\theta} = \\frac{\\frac{1}{i} + \\frac{1}{n-i}}{1+\\delta_{i,n-i}} \\qquad 1 \\le i \\le \\big[n/2\\big]\n$$\nI have just one parameter to optimize.",
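Because the model is linear in its single parameter, the least-squares fit actually has a closed form: minimising $\sum_i (\theta c_i - \eta_i)^2$ with $c_i = \big(\frac{1}{i} + \frac{1}{n-i}\big)/(1+\delta_{i,n-i})$ gives $\hat{\theta} = \sum_i c_i \eta_i \big/ \sum_i c_i^2$. A quick sketch on a toy spectrum generated exactly from the model:

```python
def theta_ls(eta, n):
    # closed-form least squares for the one-parameter linear model
    # E[eta_i] = theta * c_i: theta_hat = sum(c_i * eta_i) / sum(c_i ** 2)
    num = den = 0.0
    for i, e in enumerate(eta, start=1):
        c = (1.0 / i + 1.0 / (n - i)) / (1 + (i == n - i))
        num += c * e
        den += c * c
    return num / den

# toy folded spectrum generated exactly from the model with theta = 100
n = 8
eta = [100.0 * (1.0 / i + 1.0 / (n - i)) / (1 + (i == n - i)) for i in range(1, n // 2 + 1)]
print(theta_ls(eta, n))
```

On noise-free data the closed form recovers $\theta$ exactly, so an iterative optimiser is only needed once bounds or a different loss come into play.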
"from scipy.optimize import least_squares\n\ndef model(theta, eta, n):\n    \"\"\"\n    theta: scaled population mutation rate parameter [scalar]\n    eta: the folded 1D spectrum, including 0-count cat. [list] \n    n: number of sampled gene copies, i. e. 2*num_ind [scalar]\n    \n    returns a numpy array\n    \"\"\"\n    i = pylab.arange(1, eta.size, dtype=float) # float avoids integer division below\n    delta = pylab.where(i == n-i, 1, 0)\n    return theta * (1/i + 1/(n-i)) / (1 + delta) # parenthesised as in the formula above\n\n?pylab.where\n\n# test\ni = pylab.arange(1, 19)\nn = 36\nprint i == n-i\n#\nprint pylab.where(i == n-i, 1, 0)\n# get a theta estimate from pi:\ntheta = pi_Wakeley * fs_ery.data.sum() \nprint theta\n#\nprint len(fs_ery)\n#\nmodel(theta, fs_ery, 36)\n\ndef fun(theta, eta, n):\n    \"\"\"\n    return residuals between model and data\n    \"\"\"\n    return model(theta, eta, n) - eta[1:]\n\ndef jac(theta, eta, n, test=False):\n    \"\"\"\n    creates a Jacobian matrix\n    \"\"\"\n    J = pylab.empty((eta.size-1, theta.size))\n    i = pylab.arange(1, eta.size, dtype=float)\n    delta = pylab.where(i == n-i, 1, 0)\n    num = 1/i + 1/(n-i)\n    den = 1 + delta\n    if test:\n        print i\n        print num\n        print den\n    J[:,0] = num / den\n    return J\n\n# test\njac(theta, fs_ery, 36, test=True)\n\n# starting value\ntheta0 = theta # pi_Wakeley from above\n\n# sum over unmasked entries, i. e. without the 0-count category, i. e. returns the number of variable sites\nfs_ery.sum()\n\n# optimize\nres = least_squares(fun, x0=theta0, jac=jac, bounds=(0,fs_ery.sum()), \n                    kwargs={'eta': fs_ery, 'n': 36}, verbose=1)\n\nres.success\n\n?least_squares\n\nprint res.x\nprint theta\n\npylab.rcParams['figure.figsize'] = [12.0, 8.0]\n\nimport matplotlib.pyplot as plt\n\nplt.rcParams['font.size'] = 14.0\n\ni = range(1, len(fs_ery))\neta_model = model(res.x, eta=fs_ery, n=36) # get predicted values with optimal theta\n\nplt.plot(i, fs_ery[1:], \"bo\", label=\"data from ery\") # plot observed spectrum\n\nymax = max( fs_ery[1:].max(), eta_model.max() )\nplt.axis([0, 19, 0, ymax*1.1]) # set axis range\n\nplt.xlabel(\"minor allele frequency (i)\")\nplt.ylabel(r'$\\eta_i$', fontsize='large', rotation='horizontal')\nplt.title(\"folded SFS of ery\")\n\nplt.plot(i, eta_model, \"go-\", \n         label=\"\\nneutral model\" \n         + \"\\n\"\n         + r'$\\theta_{opt} = $' + str(round(res.x, 1))\n        ) # plot model prediction with optimal theta\n\nplt.legend()",
"The counts in each frequency class should be Poisson distributed with rate equal to $E[\\eta_i]$ as given above. The lowest frequency class has the highest rate and therefore also the highest variance",
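Under this Poisson assumption the maximum-likelihood estimate of $\theta$ also has a closed form: setting the derivative of the log-likelihood $\sum_i \big(\eta_i \log(\theta c_i) - \theta c_i\big)$ to zero gives $\hat{\theta} = S/\sum_i c_i$, and since $\sum_i c_i$ equals the harmonic number $a_n = \sum_{i=1}^{n-1} 1/i$, this coincides with Watterson's estimator $S/a_n$. A quick check with toy counts (not the notebook's data):

```python
def theta_poisson_ml(eta, n):
    # MLE under eta_i ~ Poisson(theta * c_i): theta_hat = S / sum(c_i),
    # where S = sum(eta) is the number of segregating sites
    c = [(1.0 / i + 1.0 / (n - i)) / (1 + (i == n - i)) for i in range(1, n // 2 + 1)]
    return sum(eta) / sum(c)

n = 10
eta = [12, 6, 4, 3, 2]  # toy folded spectrum, classes i = 1..5
a_n = sum(1.0 / i for i in range(1, n))  # harmonic number
print(theta_poisson_ml(eta, n), sum(eta) / a_n)
```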
"#?plt.ylabel\n\n#print plt.rcParams\n\nfs_ery[1:].max()\n\n#?pylab\n\nos.getcwd()\n\n%%sh\n\nls",
"The following function will take the file name of a file containing the flat 1D folded frequency spectrum of one population and plots it together with the best fitting neutral expectation.",
"def plot_folded_sfs(filename, n, pop = ''):\n    # read in spectrum from file\n    data = open(filename, 'r')\n    sfs = pylab.array( data.readline().split(), dtype=float )\n    data.close() # should close connection to file\n    #return sfs\n    \n    # get starting value for theta from Watterson's theta\n    S = sfs[1:].sum()\n    T_total = sum([1.0/i for i in range(1, n)]) # one half the expected total length of the genealogy\n    theta0 = S / T_total # see eq. 4.7 in Wakeley2009\n    \n    # optimize\n    res = least_squares(fun, x0=theta0, jac=jac, bounds=(0, sfs.sum()), \n                    kwargs={'eta': sfs, 'n': n}, verbose=1) # use the function's n argument, not a hard-coded 36\n    #print \"Optimal theta per site is {0:.4f}\".format(res.x[0]/sfs.sum())\n    #print res.x[0]/sfs.sum()\n    \n    #return theta0, res\n    \n    # plot\n    plt.rcParams['font.size'] = 14.0\n\n    i = range(1, len(sfs))\n    eta_model = model(res.x, eta=sfs, n=n) # get predicted values with optimal theta\n\n    plt.plot(i, sfs[1:], \"rs\", label=\"data of \" + pop) # plot observed spectrum\n\n    ymax = max( sfs[1:].max(), eta_model.max() )\n    plt.axis([0, 19, 0, ymax*1.1]) # set axis range\n\n    plt.xlabel(\"minor allele frequency (i)\")\n    plt.ylabel(r'$\\eta_i$', fontsize='large', rotation='horizontal')\n    plt.title(\"folded SFS\")\n    plt.text(5, 10000, \n             r\"Optimal neutral $\\theta$ per site is {0:.4f}\".format(res.x[0]/sfs.sum()))\n\n    plt.plot(i, eta_model, \"go-\", \n         label=\"\\nneutral model\" \n         + \"\\n\"\n         + r'$\\theta_{opt} = $' + str(round(res.x, 1))\n        ) # plot model prediction with optimal theta\n\n    plt.legend()\n\nplot_folded_sfs('PAR.FOLDED.sfs', n=36, pop='par')\n\nplot_folded_sfs('ERY.FOLDED.sfs', n=36, pop='ery')",
"Univariate function minimizers or 1D scalar minimisation\nSince I only have one value to optimize, I can use a slightly simpler approach than used above:",
"from scipy.optimize import minimize_scalar\n\n?minimize_scalar\n\n# define cost function\ndef f(theta, eta, n):\n \"\"\"\n return sum of squared deviations between model and data\n \"\"\"\n return sum( (model(theta, eta, n) - eta[1:])**2 ) # see above for definition of the 'model' function",
"It would be interesting to know whether the cost function is convex or not.",
"theta = pylab.arange(0, fs_ery.data[1:].sum()) # specify range of theta\ncost = [f(t, fs_ery.data, 36) for t in theta]\nplt.plot(theta, cost, 'b-', label='ery')\nplt.xlabel(r'$\\theta$')\nplt.ylabel('cost')\nplt.title(\"cost function for ery\")\nplt.legend(loc='best')\n\n?plt.legend",
"Within the specified bounds (the observed $\\theta$, i. e. derived from the data, cannot lie outside these bounds), the cost function is convex. This is therefore an easy optimisation problem. See here for more details.",
"res = minimize_scalar(f, bounds = (0, fs_ery.data[1:].sum()), method = 'bounded', args = (fs_ery.data, 36))\n\nres\n\n# number of segregating sites\n\nfs_par.data[1:].sum()\n\nres = minimize_scalar(f, bounds = (0, fs_par.data[1:].sum()), method = 'bounded', args = (fs_par.data, 36))\n\nres",
"The fitted values of $\\theta$ are similar to the ones obtained above with the least_squares function. The estimates for ery deviate more than for par.",
"from sympy import *\n\nx0 , x1 = symbols('x0 x1')\n\ninit_printing(use_unicode=True)\n\ndiff(0.5*(1-x0)**2 + (x1-x0**2)**2, x0)\n\ndiff(0.5*(1-x0)**2 + (x1-x0**2)**2, x1)",
"Wow! Sympy is a replacement for Mathematica. There is also Sage, which may include even more functionality.",
"from scipy.optimize import curve_fit",
"curve_fit is another SciPy function that can be used for this optimisation. It performs non-linear least-squares curve fitting and also returns an estimate of the covariance matrix of the fitted parameters.",
"?curve_fit\n\ndef model(i, theta):\n \"\"\"\n i: independent variable, here minor SNP frequency classes\n theta: scaled population mutation rate parameter [scalar]\n \n returns a numpy array\n \"\"\"\n n = 36 # haploid sample size; note that len(i) would only count the folded frequency classes\n delta = pylab.where(i == n-i, 1, 0)\n return theta * (1.0/i + 1.0/(n-i)) / (1 + delta)\n\ni = pylab.arange(1, fs_ery.size)\n\npopt, pcov = curve_fit(model, i, fs_ery.data[1:])\n\n# optimal theta\nprint popt\n\n# standard errors are the square roots of the diagonal of the covariance matrix\nperr = pylab.sqrt(pylab.diag(pcov))\nperr\n\nprint str(int(popt[0] - 1.96*perr[0])) + ' < ' + str(int(popt[0])) + ' < ' + str(int(popt[0] + 1.96*perr[0]))\n\npopt, pcov = curve_fit(model, i, fs_par.data[1:])\nperr = pylab.sqrt(pylab.diag(pcov))\nprint str(int(popt[0] - 1.96*perr[0])) + ' < ' + str(int(popt[0])) + ' < ' + str(int(popt[0] + 1.96*perr[0]))",
"I am not sure whether these standard errors (perr) are correct. It may be that it is assumed that errors are normally distributed, which they are not exactly in this case. They should be close to Poisson distributed (see Fu1995), which should be fairly similar to normal with such high expected values as here.\nIf the standard errors are correct, then the large overlap of the 95% confidence intervals would indicate that the data do not provide significant support for a difference in $\\theta$ between par and ery.\nParametric bootstrap from the observed SFS",
"%pwd\n\n% ll\n\n! cat ERY.FOLDED.sfs.dadi_format\n\nfs_ery = dadi.Spectrum.from_file('ERY.FOLDED.sfs.dadi_format', mask_corners=False)\n\nfs_ery\n\nfs_ery.pop_ids = ['ery']\n\n# get a Poisson sample from the observed spectrum\n\nfs_ery_param_boot = fs_ery.sample()\n\nfs_ery_param_boot\n\nfs_ery_param_boot.data\n\n%psource fs_ery.sample",
"There must be a way to get more than one bootstrap sample per call.",
"fs_ery_param_boot = pylab.array([fs_ery.sample() for i in range(100)])\n\n# get the first 3 bootstrap samples from the doubleton class\n\nfs_ery_param_boot[:3, 2]",
"It would be good to get the 5% and 95% quantiles from the bootstrap samples of each frequency class and add those intervals to the plot of the observed frequency spectrum and the fitted neutral spectrum. This would require finding a quantile function and finding out how to add lines to a plot with matplotlib.\nIt would also be good to use the predicted counts from the neutral model above with the fitted $\\theta$ as parameters for the bootstrap with sample() and add 95% confidence intervals to the predicted neutral SFS. I have done this in R instead (see /data3/claudius/Big_Data/ANGSD/SFS/SFS.Rmd)\n\nUsing unfolded spectra\nI edited the 2D SFS created for estimating $F_{ST}$ by realSFS. I have convinced myself that realSFS outputs a flattened 2D matrix as expected by $\\delta$a$\\delta$i's Spectrum.from_file function (see section 3.1 of the manual with my comments). Note that, in the manual, \"samples\" stands for number of allele copies, so that the correct specification of dimensions for this 2D unfolded SFS of 18 diploid individuals in each of 2 populations is 37 x 37.",
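The quantile step described above can be done with `numpy.percentile` over the bootstrap axis. A minimal sketch (the `boot` array of bootstrap spectra is synthesised here with made-up expected counts, purely for illustration):

```python
import numpy as np

# synthetic bootstrap spectra: 100 Poisson samples around an expected spectrum;
# each row is one bootstrap sample, each column a frequency class (class 0 is invariant)
expected = np.array([0., 1000., 500., 250., 125., 60.])
boot = np.random.poisson(lam=expected, size=(100, len(expected)))

# 5% and 95% quantiles of each frequency class across the bootstrap samples
lo = np.percentile(boot, 5, axis=0)
hi = np.percentile(boot, 95, axis=0)
```

The intervals could then be drawn over the existing spectrum plot with, e.g., `plt.fill_between(range(1, len(lo)), lo[1:], hi[1:], alpha=0.3)`.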
"# read in the flattened 2D SFS\nEryPar_unfolded_2dsfs = dadi.Spectrum.from_file('EryPar.unfolded.2dsfs.dadi_format', mask_corners=True)\n\n# check dimension\nlen(EryPar_unfolded_2dsfs[0,])\n\nEryPar_unfolded_2dsfs.sample_sizes\n\n# add population labels\nEryPar_unfolded_2dsfs.pop_ids = [\"ery\", \"par\"]\n\nEryPar_unfolded_2dsfs.pop_ids",
"Marginalizing\n$\\delta$a$\\delta$i offers a function to get the marginal spectra from multidimensional spectra. Note that this marginalisation is nothing fancy. In R it would amount to taking either the rowSums or the colSums of the matrix.",
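The rowSums/colSums analogy can be sketched directly in NumPy (a toy matrix, not the real spectrum; the masking of the corner entries is deliberately ignored here):

```python
import numpy as np

sfs2d = np.arange(12, dtype=float).reshape(3, 4)  # toy 2-D spectrum

# marginal spectrum for the row population: sum over columns (R's rowSums)
row_marg = sfs2d.sum(axis=1)
# marginal spectrum for the column population: sum over rows (R's colSums)
col_marg = sfs2d.sum(axis=0)
```

Both marginals necessarily sum to the same total as the full 2-D spectrum, which is a useful sanity check.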
"# marginalise over par to get 1D SFS for ery\n\nfs_ery = EryPar_unfolded_2dsfs.marginalize([1]) \n# note the argument is an array with dimensions, one can marginalise over more than one dimension at the same time,\n# but that is only interesting for 3-dimensional spectra, which I don't have here\n\nfs_ery\n\n# marginalise over ery to get 1D SFS for par\nfs_par = EryPar_unfolded_2dsfs.marginalize([0])\n\nfs_par",
"Note that these marginalised 1D SFS's are not identical to the 1D SFS estimated directly with realSFS. This is because, for the estimation of the 2D SFS, realSFS has only taken sites that had data from at least 9 individuals in each population (see assembly.sh, lines 1423 onwards).\nThe SFS's of par and ery had conspicuous shape differences. It would therefore be good to plot them to see whether the above commands have done the correct thing.",
"# plot 1D spectra for each population\npylab.plot(fs_par, 'g', label=\"par\")\npylab.plot(fs_ery, 'r', label=\"ery\")\npylab.legend()",
"These marginal unfolded spectra look similar in shape to the 1D folded spectra of each subspecies (see above).",
"fs_ery.pi() / pylab.sum(fs_ery.data)\n\nfs_ery.data\n\nn = 36 # 36 sequences sampled from 18 diploid individuals\npi_Wakeley = (sum( [i*(n-i)*fs_ery[i] for i in range(1, n)] ) * 2.0 / (n*(n-1)))\npi_Wakeley = pi_Wakeley / pylab.sum(fs_ery.data)\npi_Wakeley",
"$\\delta$a$\\delta$i's pi function seems to calculate the correct value of $\\pi$ for this unfolded spectrum. However, it is worrying that $\\pi$ from this marginal spectrum is about 20 times larger than the one calculated from the directly estimated 1D folded spectrum (see above the $\\pi$ calculated from the folded 1D spectrum).",
"fs_par.pi() / pylab.sum(fs_par.data)\n\npylab.sum(fs_par.data)\n\npylab.sum(EryPar_unfolded_2dsfs.data)",
"<font color=\"red\">The sum over the marginalised 1D spectra should be the same as the sum over the 2D spectrum !</font>",
"# from dadi's marginalise function:\nfs_ery.data\n\nsfs2d = EryPar_unfolded_2dsfs.copy()\n\n# this should get the marginal spectrum for ery\nery_mar = [pylab.sum(sfs2d.data[i]) for i in range(0, len(sfs2d))]\nery_mar\n\n# this should get the marginal spectrum for ery and then take the sum over it\nsum([pylab.sum(sfs2d.data[i]) for i in range(0, len(sfs2d))])\n\n# look what happens if I include masking\nsum([pylab.sum(sfs2d[i]) for i in range(0, len(sfs2d))])\n\nfs_ery.data - ery_mar",
"So, during the marginalisation the masking of data in the fixed categories (0, 36) is the problem, producing incorrectly marginalised counts in those masked categories. This is shown in the following:",
"sfs2d[0]\n\npylab.sum(sfs2d[0])\n\n# from dadi's marginalise function:\nfs_ery.data\n\n# dividing by the correct number of sites to get pi per site:\nfs_ery.pi() / pylab.sum(sfs2d.data)",
"This is very close to the estimate of $\\pi$ derived from the folded 1D spectrum of ery! (see above)",
"fs_par.pi() / pylab.sum(sfs2d.data)",
"This is also nicely close to the estimate of $\\pi_{site}$ of par from its folded 1D spectrum.\n\nTajima's D",
"fs_ery.Watterson_theta() / pylab.sum(sfs2d.data)\n\nfs_ery.Tajima_D()\n\nfs_par.Tajima_D()",
"Now, I am calculating Tajima's D from the ery marginal spectrum by hand in order to check whether $\\delta$a$\\delta$i is doing the right thing.",
"n = 36\npi_Wakeley = (sum( [i*(n-i)*fs_ery.data[i] for i in range(1, n+1)] ) \n * 2.0 / (n*(n-1)))\n #/ pylab.sum(sfs2d.data)\npi_Wakeley\n\n# number of segregating sites\n# this sums over all unmasked positions in the array\npylab.sum(fs_ery)\n\nfs_ery.S()\n\nS = pylab.sum(fs_ery)\ntheta_Watterson = S / pylab.sum(1.0 / (pylab.arange(1, n)))\ntheta_Watterson\n\n# normalizing constant, see page 45 in Gillespie\na1 = pylab.sum(1.0 / pylab.arange(1, n))\n#print a1\na2 = pylab.sum(1.0 / pylab.arange(1, n)**2.0)\n#print a2\nb1 = (n+1.0)/(3.0*(n-1))\n#print b1\nb2 = 2.0*(n**2 + n + 3)/(9.0*n*(n-1))\n#print b2\nc1 = b1 - (1.0/a1)\n#print c1\nc2 = b2 - (n+2.0)/(a1*n) + a2/a1**2\n#print c2\nC = ((c1/a1)*S + (c2/(a1**2.0 + a2))*S*(S-1))\nC = C**(1/2.0)\n\nery_Tajimas_D = (pi_Wakeley - theta_Watterson) / C\nprint '{0:.6f}'.format(ery_Tajimas_D)\n\nery_Tajimas_D - fs_ery.Tajima_D()",
"$\\delta$a$\\delta$i seems to do the right thing. Note that the estimate of Tajima's D from this marginal spectrum of ery is slightly different from the estimate derived from the folded 1D spectrum of ery (see /data3/claudius/Big_Data/ANGSD/SFS/SFS.Rmd). The folded 1D spectrum resulted in a Tajima's D estimate of $\\sim$0.05, i. e. a difference of almost 0.1. Again, the 2D spectrum is based on only those sites for which there were at least 9 individuals with data in both populations, whereas the 1D folded spectrum of ery included all sites for which there were 9 ery individuals with data (see line 1571 onwards in assembly.sh).",
"fs_par.Tajima_D()",
"My estimate from the folded 1D spectrum of par was -0.6142268 (see /data3/claudius/Big_Data/ANGSD/SFS/SFS.Rmd). \nMulti-population statistics",
"EryPar_unfolded_2dsfs.S()",
"The 2D spectrum contains counts from 60k sites that are variable in par or ery or both.",
"EryPar_unfolded_2dsfs.Fst()",
"This estimate of $F_{ST}$ according to Weir and Cockerham (1984) is well below the estimate of $\\sim$0.3 from ANGSD according to Bhatia/Hudson (2013). Note, however, that this estimate showed a positive bias of around 0.025 in 100 permutations of population labels of individuals. Taking the positive bias into account, both estimates of $F_{ST}$ are quite similar.\nThe following function scramble_pop_ids should generate a 2D SFS with counts as if individuals were assigned to populations randomly. Theoretically, the $F_{ST}$ calculated from this SFS should be 0.",
"%psource EryPar_unfolded_2dsfs.scramble_pop_ids\n\n# plot the scrambled 2D SFS\n\ndadi.Plotting.plot_single_2d_sfs(EryPar_unfolded_2dsfs.scramble_pop_ids(), vmin=1)",
"So, this is what the 2D SFS would look like if ery and par were not genetically differentiated.",
"# get Fst for scrambled SFS\n\nEryPar_unfolded_2dsfs.scramble_pop_ids().Fst()",
"The $F_{ST}$ from the scrambled SFS is much lower than the $F_{ST}$ of the observed SFS. That should mean that there is significant population structure. However, the $F_{ST}$ from the scrambled SFS is not 0. I don't know why that is.",
"# folding\n\nEryPar_folded_2dsfs = EryPar_unfolded_2dsfs.fold()\n\nEryPar_folded_2dsfs\n\nEryPar_folded_2dsfs.mask",
"Plotting",
"dadi.Plotting.plot_single_2d_sfs(EryPar_unfolded_2dsfs, vmin=1)\n\ndadi.Plotting.plot_single_2d_sfs(EryPar_folded_2dsfs, vmin=1)",
"The folded 2D spectrum is not a minor allele frequency spectrum as are the 1D folded spectra of ery and par. This is because an allele that is minor in one population can be the major allele in the other. What is not counted are the alleles that are major in both populations, i. e. the upper right corner.\nFor the 2D spectrum to make sense it is crucial that allele frequencies are polarised the same way in both populations, either with an outgroup sequence or arbitrarily with respect to the reference sequence (as I did here).\nHow to fold a 1D spectrum",
"# unfolded spectrum from marginalisation of 2D unfolded spectrum\nfs_ery\n\nlen(fs_ery)\n\nfs_ery.fold()",
"Let's use the formula (1.2) from Wakeley2009 to fold the 1D spectrum manually:\n$$\n\\eta_{i} = \\frac{\\zeta_{i} + \\zeta_{n-i}}{1 + \\delta_{i, n-i}} \\qquad 1 \\le i \\le [n/2]\n$$\n$n$ is the number of gene copies sampled, i. e. haploid sample size. $[n/2]$ is the largest integer less than or equal to n/2 (to handle odd sample sizes). $\\zeta_{i}$ are the unfolded frequencies and $\\delta_{i, n-i}$ is Kronecker's $\\delta$ which is 1 if $i = n-i$ and zero otherwise (to avoid counting the unfolded n/2 frequency class twice with even sample sizes).",
"fs_ery_folded = fs_ery.copy() # make a copy of the UNfolded spectrum\nn = len(fs_ery)-1\nfor i in range(len(fs_ery)):\n fs_ery_folded[i] += fs_ery[n-i]\n if i == n/2.0:\n fs_ery_folded[i] /= 2\nfs_ery_folded[0:19]\n\nisinstance(fs_ery_folded, pylab.ndarray)\n\nmask = [True] \nmask.extend([False] * 18)\nmask.extend([True] * 18)\nprint mask\nprint sum(mask)\n\nmask = [True] * 37\nfor i in range(len(mask)):\n if i > 0 and i < 19:\n mask[i] = False\nprint mask\nprint sum(mask)",
"Here is how to flatten an array of arrays with list comprehension:",
"mask = [[True], [False] * 18, [True] * 18]\nprint mask\n\nprint [elem for a in mask for elem in a]",
"Set new mask for the folded spectrum:",
"fs_ery_folded.mask = mask\n\nfs_ery_folded.folded = True\n\nfs_ery_folded - fs_ery.fold()",
"The fold() function works correctly for 1D spectra, at least. How about 2D spectra?\n$$\n\\eta_{i,j} = \\frac{\\zeta_{i,j} + \\zeta_{n-i, m-j}}{1 + \\delta_{i, n-i; j, m-j}} \n \\qquad 1 \\le i+j \\le \\Big[\\frac{n+m}{2}\\Big]\n$$",
"EryPar_unfolded_2dsfs.sample_sizes\n\nEryPar_unfolded_2dsfs._total_per_entry()\n\n# copy the unfolded 2D spectrum for folding\nimport copy\nsfs2d_folded = copy.deepcopy(EryPar_unfolded_2dsfs)\n\nn = len(sfs2d_folded)-1\nm = len(sfs2d_folded[0])-1\nfor i in range(n+1):\n for j in range(m+1):\n # read from the untouched unfolded spectrum; reading from sfs2d_folded itself\n # would fold in entries that were already incremented in an earlier iteration\n sfs2d_folded[i,j] = EryPar_unfolded_2dsfs[i,j] + EryPar_unfolded_2dsfs[n-i, m-j]\n if i == n/2.0 and j == m/2.0:\n sfs2d_folded[i,j] /= 2\n\nmask = sfs2d_folded._total_per_entry() > (n+m)/2\nmask\n\nsfs2d_folded.mask = mask\nsfs2d_folded.folded = True\n\ndadi.Plotting.plot_single_2d_sfs(sfs2d_folded, vmin=1)",
"I am going to go through every step in the fold function of dadi:",
"# copy the unfolded 2D spectrum for folding\nimport copy\nsfs2d_unfolded = copy.deepcopy(EryPar_unfolded_2dsfs)\n\ntotal_samples = pylab.sum(sfs2d_unfolded.sample_sizes)\ntotal_samples\n\ntotal_per_entry = dadi.Spectrum(sfs2d_unfolded._total_per_entry(), pop_ids=['ery', 'par'])\n#total_per_entry.pop_ids = ['ery', 'par']\ndadi.Plotting.plot_single_2d_sfs(total_per_entry, vmin=1)\n\ntotal_per_entry = sfs2d_unfolded._total_per_entry()\ntotal_per_entry\n\nwhere_folded_out = total_per_entry > total_samples/2\nwhere_folded_out\n\noriginal_mask = sfs2d_unfolded.mask\noriginal_mask\n\npylab.logical_or([True, False, True], [False, False, True])\n\n# get the number of elements along each axis\nsfs2d_unfolded.shape\n\n[slice(None, None, -1) for i in sfs2d_unfolded.shape]\n\nmatrix = pylab.array([\n [1, 2, 3, 4],\n [5, 6, 7, 8],\n [9, 10, 11, 12]\n])\nreverse_slice = [slice(None, None, -1) for i in matrix.shape]\nreverse_slice\n\nmatrix[reverse_slice]\n\nmatrix[::-1,::-1]",
"With the variable length list of slice objects, one can generalise the reverse of arrays with any dimensions.",
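That generalisation can be written as a small helper, a sketch of what `dadi.Numerics.reverse_array` presumably does. Note the tuple conversion: recent NumPy versions no longer accept a plain list of slices as an index, so `matrix[reverse_slice]` above would need `matrix[tuple(reverse_slice)]` there.

```python
import numpy as np

def reverse_array(arr):
    """Reverse a numpy array along every axis, for any dimensionality."""
    return arr[tuple(slice(None, None, -1) for _ in arr.shape)]

m = np.arange(6).reshape(2, 3)
r = reverse_array(m)  # same as m[::-1, ::-1]
```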
"final_mask = pylab.logical_or(original_mask, dadi.Numerics.reverse_array(original_mask))\nfinal_mask",
"Here, folding doesn't mask new cells.",
"?pylab.where\n\npylab.where(matrix < 6, matrix, 0)\n\n# this takes the part of the spectrum that is non-sensical if the derived allele is not known\n# and sets the rest to 0\nprint pylab.where(where_folded_out, sfs2d_unfolded, 0)\n\n# let's plot the bit of the spectrum that we are going to fold onto the rest:\ndadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(pylab.where(where_folded_out, sfs2d_unfolded, 0)), vmin=1)\n\n# now let's reverse this 2D array, i. e. last row first and last element of each row first:\n_reversed = dadi.Numerics.reverse_array(pylab.where(where_folded_out, sfs2d_unfolded, 0))\n_reversed\n\ndadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(_reversed), vmin=1)",
"The transformation we have done with the upper-right diagonal 2D array above should be identical to projecting it across a vertical center line (creating an upper left triangular matrix) and then projecting it across a horizontal center line (creating the final lower left triangular matrix). Note that this is not like mirroring the upper-right triangular 2D array across the 36-36 diagonal!",
"# This shall now be added to the original unfolded 2D spectrum.\nsfs2d_folded = pylab.ma.masked_array(sfs2d_unfolded.data + _reversed) \n\ndadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(sfs2d_folded), vmin=1)\n\nsfs2d_folded.data\n\nsfs2d_folded.data[where_folded_out] = 0\nsfs2d_folded.data\n\ndadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(sfs2d_folded), vmin=1)\n\nsfs2d_folded.shape\n\nwhere_ambiguous = (total_per_entry == total_samples/2.0)\nwhere_ambiguous",
"SNP's with joint frequencies in the True cells are counted twice at the moment due to the folding and the fact that the sample sizes are even.",
"# this extracts the diagonal values from the UNfolded spectrum and sets the rest to 0\nambiguous = pylab.where(where_ambiguous, sfs2d_unfolded, 0)\ndadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(ambiguous), vmin=1)",
"These are the values in the diagonal before folding.",
"reversed_ambiguous = dadi.Numerics.reverse_array(ambiguous)\ndadi.Plotting.plot_single_2d_sfs(dadi.Spectrum(reversed_ambiguous), vmin=1)",
"These are the values that got added to the diagonal during folding. Comparing with the previous plot, one can see for instance that the value in the (0, 36) class got added to the value in the (36, 0) class and vice versa. The two frequency classes are equivalent, since it is arbitrary which allele we call minor in the total sample (of 72 gene copies). These SNP's are therefore counted twice.",
"a = -1.0*ambiguous + 0.5*ambiguous + 0.5*reversed_ambiguous\nb = -0.5*ambiguous + 0.5*reversed_ambiguous\na == b\n\nsfs2d_folded += -0.5*ambiguous + 0.5*reversed_ambiguous\n\nfinal_mask = pylab.logical_or(final_mask, where_folded_out)\nfinal_mask\n\nsfs2d_folded = dadi.Spectrum(sfs2d_folded, mask=final_mask, data_folded=True, pop_ids=['ery', 'par'])\n\npylab.rcParams['figure.figsize'] = [12.0, 8.0]\n\ndadi.Plotting.plot_single_2d_sfs(sfs2d_folded, vmin=1)",
"Model specification"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sthuggins/phys202-2015-work
|
assignments/assignment04/MatplotlibEx01.ipynb
|
mit
|
[
"Matplotlib Exercise 1\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Line plot of sunspot data\nDownload the .txt data for the \"Yearly mean total sunspot number [1700 - now]\" from the SILSO website. Upload the file to the same directory as this notebook.",
"import os\nassert os.path.isfile('yearssn.dat')",
"Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.",
"data = np.loadtxt(\"yearssn.dat\") # np.loadtxt already returns a NumPy array\ndata\n\nyears = data[:,0]\nyears\n\nssc = data[:,1]\nssc\n\nassert len(years)==315\nassert years.dtype==np.dtype(float)\nassert len(ssc)==315\nassert ssc.dtype==np.dtype(float)",
"Make a line plot showing the sunspot count as a function of year.\n\nCustomize your plot to follow Tufte's principles of visualizations.\nAdjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.\nCustomize the box, grid, spines and ticks to match the requirements of this data.",
"plt.figure(figsize=(10,8)) # figure size must be set via plt.figure; plt.figsize is not a function\nplt.plot(years, ssc)\nplt.xlim(1700,2015) #plot is scaled from 1700 to 2015 so that the data fill the graph.\n\nassert True # leave for grading",
"Describe the choices you have made in building this visualization and how they make it effective.\nYOUR ANSWER HERE\nNow make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:\n\nCustomize your plot to follow Tufte's principles of visualizations.\nAdjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.\nCustomize the box, grid, spines and ticks to match the requirements of this data.",
"fig, axes = plt.subplots(2, 2, figsize=(12, 6))\ncenturies = [(1700, 1800), (1800, 1900), (1900, 2000), (2000, 2100)]\nfor ax, (start, end) in zip(axes.flat, centuries):\n mask = (years >= start) & (years < end) # select the years of this century\n ax.plot(years[mask], ssc[mask])\n ax.set_xlim(start, end)\nplt.tight_layout()\n\nassert True # leave for grading"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cmcc/cmip6/models/cmcc-cm2-hr5/ocean.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: CMCC\nSource ID: CMCC-CM2-HR5\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:50\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr5', 'ocean')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Seawater Properties\n3. Key Properties --> Bathymetry\n4. Key Properties --> Nonoceanic Waters\n5. Key Properties --> Software Properties\n6. Key Properties --> Resolution\n7. Key Properties --> Tuning Applied\n8. Key Properties --> Conservation\n9. Grid\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Discretisation --> Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --> Tracers\n14. Timestepping Framework --> Baroclinic Dynamics\n15. Timestepping Framework --> Barotropic\n16. Timestepping Framework --> Vertical Physics\n17. Advection\n18. Advection --> Momentum\n19. Advection --> Lateral Tracers\n20. Advection --> Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --> Momentum --> Operator\n23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\n24. Lateral Physics --> Tracers\n25. Lateral Physics --> Tracers --> Operator\n26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\n27. Lateral Physics --> Tracers --> Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --> Boundary Layer Mixing --> Details\n30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n32. Vertical Physics --> Interior Mixing --> Details\n33. Vertical Physics --> Interior Mixing --> Tracers\n34. Vertical Physics --> Interior Mixing --> Momentum\n35. Uplow Boundaries --> Free Surface\n36. Uplow Boundaries --> Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --> Momentum --> Bottom Friction\n39. Boundary Forcing --> Momentum --> Lateral Friction\n40. Boundary Forcing --> Tracers --> Sunlight Penetration\n41. Boundary Forcing --> Tracers --> Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the ocean.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the ocean component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.2. Eos Functional Temp\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTemperature used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n",
"2.3. Eos Functional Salt\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSalinity used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n",
"2.4. Eos Functional Depth\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n",
"2.5. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.6. Ocean Specific Heat\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.7. Ocean Reference Density\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nReference date of bathymetry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Type\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Ocean Smoothing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Source\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe source of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how isolated seas is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. River Mouth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, e.g. 50 km (Equator)-100 km or 0.1-0.5 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.5. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.6. Is Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.7. Thickness Level 1\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThickness of first surface ocean level (in meters)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7. Key Properties --> Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBrief description of conservation methodology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Consistency Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Was Flux Correction Used\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes conservation involve flux correction ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of grid in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical coordinates in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Partial Steps\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUsing partial steps with Z or Z* vertical coordinate in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11. Grid --> Discretisation --> Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Staggering\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal grid staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Diurnal Cycle\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiurnal cycle type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Timestepping Framework --> Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracers time stepping scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTracers time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14. Timestepping Framework --> Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBaroclinic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Timestepping Framework --> Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime splitting method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBarotropic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Timestepping Framework --> Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDetails of vertical time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of advection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Advection --> Momentum\nProperties of lateral momentum advection scheme in ocean\n18.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of lateral momentum advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n",
"18.2. Scheme Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean momentum advection scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. ALE\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19. Advection --> Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19.3. Effective Order\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.5. Passive Tracers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPassive tracers advected",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.6. Passive Tracers Advection\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIs advection of passive tracers different from that of active tracers ? If so, describe.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Advection --> Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lateral physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transient eddy representation in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n",
"22. Lateral Physics --> Momentum --> Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics momentum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics momentum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics momentum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momentum scheme in the ocean\n23.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics momentum eddy viscosity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.4. Coeff Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24. Lateral Physics --> Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24.2. Submesoscale Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (e.g. Fox-Kemper) in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"25. Lateral Physics --> Tracers --> Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff\nProperties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics tracers eddy diffusivity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Coeff Background\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nDescribe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"27. Lateral Physics --> Tracers --> Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Constant Val\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.3. Flux Type\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV flux (advective or skew)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Added Diffusivity\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vertical physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Vertical Physics --> Boundary Layer Mixing --> Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30. Vertical Physics --> Boundary Layer Mixing --> Tracers\nProperties of boundary layer (BL) mixing on tracers in the ocean\n30.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of tracers, specify coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32. Vertical Physics --> Interior Mixing --> Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical convection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.2. Tide Induced Mixing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.3. Double Diffusion\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there double diffusion",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.4. Shear Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there interior shear mixing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33. Vertical Physics --> Interior Mixing --> Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"33.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Vertical Physics --> Interior Mixing --> Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"34.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"34.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35. Uplow Boundaries --> Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of free surface in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nFree surface scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35.3. Embeded Seaice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the sea-ice embedded in the ocean model (instead of levitating)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36. Uplow Boundaries --> Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Type Of Bbl\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.3. Lateral Mixing Coef\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"36.4. Sill Overflow\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any specific treatment of sill overflows",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of boundary forcing in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.2. Surface Pressure\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.3. Momentum Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.4. Tracers Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.5. Wave Effects\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.6. River Runoff Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.7. Geothermal Heating\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Boundary Forcing --> Momentum --> Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum bottom friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"39. Boundary Forcing --> Momentum --> Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum lateral friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40. Boundary Forcing --> Tracers --> Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of sunlight penetration scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40.2. Ocean Colour\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"40.3. Extinction Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Boundary Forcing --> Tracers --> Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmosphere\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. From Sea Ice\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.3. Forced Mode Restoring\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/bcc/cmip6/models/sandbox-2/landice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: BCC\nSource ID: SANDBOX-2\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:39\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'bcc', 'sandbox-2', 'landice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --> Mass Balance\n7. Ice --> Mass Balance --> Basal\n8. Ice --> Mass Balance --> Frontal\n9. Ice --> Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Ice Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify how ice albedo is modelled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Atmospheric Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Oceanic Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the ocean and ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs an adaptive grid being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Base Resolution\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThe base resolution (in metres), before any adaption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Resolution Limit\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Projection\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of glaciers in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of glaciers, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Dynamic Areal Extent\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes the model include a dynamic glacial extent?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Grounding Line Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.3. Ice Sheet\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice sheets simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.4. Ice Shelf\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice shelves simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Ice --> Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Ice --> Mass Balance --> Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Ocean\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Ice --> Mass Balance --> Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Melting\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Ice --> Dynamics\n**\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Approximation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nApproximation type used in modelling ice dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Adaptive Timestep\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.4. Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
paulcon/active_subspaces
|
tutorials/test_functions/otl_circuit/otlcircuit_example.ipynb
|
mit
|
[
"Active Subspaces Example Function: Circuit Voltage\nRyan Howard, CO School of Mines, ryhoward@mines.edu\nPaul Constantine, CO School of Mines, pconstan@mines.edu\n<br>\nIn this tutorial, we'll be applying active subspaces to the function\n$$\nV_m = \\frac{(V_{b1}+0.74)\\beta(R_{c2}+9)}{\\beta(R_{c2}+9)+R_f}+\\frac{11.35R_f}{\\beta(R_{c2}+9)+R_f}+\\frac{0.74R_f\\beta(R_{c2}+9)}{R_{c1}(\\beta(R_{c2}+9)+R_f)},\n$$\nwhere $V_{b1} = 12R_{b2}/(R_{b1}+R_{b2})$, as seen on http://www.sfu.ca/~ssurjano/otlcircuit.html. This function models the midpoint voltage of a transformerless push-pull circuit, and its inputs and their distributions are described in the table below.\nVariable|Symbol|Distribution (U(min, max))\n:-----|:-----:|:-----\nresistance b1|$R_{b1}$|U(50, 150)\nresistance b2|$R_{b2}$|U(25, 70)\nresistance f|$R_f$|U(.5, 3)\nresistance c1|$R_{c1}$|U(1.2, 2.5)\nresistance c2|$R_{c2}$|U(.25, 1.2)\ncurrent gain|$\\beta$|U(50, 300)",
"import active_subspaces as ac\nimport numpy as np\n%matplotlib inline\n\n# The otlcircuit_functions.py file contains two functions: the circuit function (circuit(xx))\n# and its gradient (circuit_grad(xx)). Each takes an Mx6 matrix (M is the number of data\n# points) with rows being normalized inputs; circuit returns a column vector of function\n# values at each row of the input and circuit_grad returns a matrix whose ith row is the\n# gradient of circuit at the ith row of xx with respect to the normalized inputs\nfrom otlcircuit_functions import *",
"First we draw M samples randomly from the input space.",
"M = 1000 #This is the number of data points to use\n\n#Sample the input space according to the distributions in the table above\nRb1 = np.random.uniform(50, 150, (M, 1))\nRb2 = np.random.uniform(25, 70, (M, 1))\nRf = np.random.uniform(.5, 3, (M, 1))\nRc1 = np.random.uniform(1.2, 2.5, (M, 1))\nRc2 = np.random.uniform(.25, 1.2, (M, 1))\nbeta = np.random.uniform(50, 300, (M, 1))\n\n#the input matrix\nx = np.hstack((Rb1, Rb2, Rf, Rc1, Rc2, beta))",
"Now we normalize the inputs, linearly scaling each to the interval $[-1, 1]$.",
"#Upper and lower limits for inputs\nxl = np.array([50, 25, .5, 1.2, .25, 50])\nxu = np.array([150, 70, 3, 2.5, 1.2, 300])\n\n#XX = normalized input matrix\nXX = ac.utils.misc.BoundedNormalizer(xl, xu).normalize(x)",
"Compute gradients to approximate the matrix on which the active subspace is based.",
"#output values (f) and gradients (df)\nf = circuit(XX)\ndf = circuit_grad(XX)",
"Now we use our data to compute the active subspace.",
"#Set up our subspace using the gradient samples\nss = ac.subspaces.Subspaces()\nss.compute(df=df, nboot=500)",
"We use plotting utilities to plot eigenvalues, subspace error, components of the first 2 eigenvectors, and 1D and 2D sufficient summary plots (plots of function values vs. active variable values).",
"#Component labels\nin_labels = ['Rb1', 'Rb2', 'Rf', 'Rc1', 'Rc2', 'beta']\n\n#plot eigenvalues, subspace errors\nac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br)\nac.utils.plotters.subspace_errors(ss.sub_br)\n\n#manually make the subspace 2D for the eigenvector and 2D summary plots\nss.partition(2)\n#Compute the active variable values\ny = XX.dot(ss.W1)\n\n#Plot eigenvectors, sufficient summaries\nac.utils.plotters.eigenvectors(ss.W1, in_labels=in_labels)\nac.utils.plotters.sufficient_summary(y, f)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
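The OTL-circuit notebook above normalizes each input linearly onto [-1, 1] with `BoundedNormalizer`. A minimal sketch of that transform, assuming only NumPy (the `normalize` helper below is illustrative, not the `active_subspaces` API):

```python
import numpy as np

def normalize(x, xl, xu):
    """Linearly map columns of x from the bounds [xl, xu] onto [-1, 1]."""
    xl = np.asarray(xl, dtype=float)
    xu = np.asarray(xu, dtype=float)
    return 2.0 * (x - xl) / (xu - xl) - 1.0

# the lower/upper bounds from the notebook's table land exactly on -1 and +1
xl = np.array([50, 25, 0.5, 1.2, 0.25, 50])
xu = np.array([150, 70, 3, 2.5, 1.2, 300])
x = np.vstack([xl, xu, (xl + xu) / 2])
XX = normalize(x, xl, xu)
```

Rows at the lower bound map to -1, rows at the upper bound to +1, and midpoints to 0, which is the normalized domain the gradient-based subspace computation assumes.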
AllenDowney/ModSim
|
python/soln/chap14.ipynb
|
gpl-2.0
|
[
"Chapter 14\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International",
"# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *\n\n# import code from previous notebooks\n\nfrom chap11 import make_system\nfrom chap11 import update_func\nfrom chap11 import run_simulation\n\nfrom chap12 import calc_total_infected\n\nfrom chap13 import sweep_beta\nfrom chap13 import sweep_parameters",
"In the previous chapters we used simulation to predict the effect of an infectious disease in a susceptible population and to design\ninterventions that would minimize the effect.\nIn this chapter we use analysis to investigate the relationship between the parameters, beta and gamma, and the outcome of the simulation.\nNondimensionalization\nThe figures in\nSection [sweepframe]{reference-type=\"ref\"\nreference=\"sweepframe\"} suggest that there is a relationship between the parameters of the SIR model, beta and gamma, that determines the outcome of the simulation, the fraction of students infected. Let's think what that relationship might be.\n\n\nWhen beta exceeds gamma, that means there are more contacts\n (that is, potential infections) than recoveries during each day (or other unit of time). The difference between beta and gamma might be called the \"excess contact rate\\\", in units of contacts per day.\n\n\nAs an alternative, we might consider the ratio beta/gamma, which\n is the number of contacts per recovery. Because the numerator and\n denominator are in the same units, this ratio is dimensionless, which means it has no units.\n\n\nDescribing physical systems using dimensionless parameters is often a\nuseful move in the modeling and simulation game. It is so useful, in\nfact, that it has a name: nondimensionalization (see\nhttp://modsimpy.com/nondim).\nSo we'll try the second option first.\nExploring the results\nSuppose we have a SweepFrame with one row for each value of beta and one column for each value of gamma. Each element in the SweepFrame is the fraction of students infected in a simulation with a given pair of parameters.\nWe can print the values in the SweepFrame like this:",
"beta_array = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 , 1.1]\ngamma_array = [0.2, 0.4, 0.6, 0.8]\nframe = sweep_parameters(beta_array, gamma_array)\nframe.head()\n\nfor gamma in frame.columns:\n column = frame[gamma]\n for beta in column.index:\n frac_infected = column[beta]\n print(beta, gamma, frac_infected)",
"This is the first example we've seen with one for loop inside another:\n\n\nEach time the outer loop runs, it selects a value of gamma from\n the columns of the DataFrame and extracts the corresponding\n column.\n\n\nEach time the inner loop runs, it selects a value of beta from the\n column and selects the corresponding element, which is the fraction\n of students infected.\n\n\nIn the example from the previous chapter, frame has 4 columns, one for\neach value of gamma, and 11 rows, one for each value of beta. So\nthese loops print 44 lines, one for each pair of parameters.\nThe following function encapulates the previous loop and plots the\nfraction infected as a function of the ratio beta/gamma:",
"from matplotlib.pyplot import plot\n\ndef plot_sweep_frame(frame):\n for gamma in frame.columns:\n series = frame[gamma]\n for beta in series.index:\n frac_infected = series[beta]\n plot(beta/gamma, frac_infected, 'o', \n color='C1', alpha=0.4)\n\nplot_sweep_frame(frame)\n\ndecorate(xlabel='Contact number (beta/gamma)',\n ylabel='Fraction infected')",
"The results fall on a single curve, at least approximately. That means that we can predict the fraction of students who will be infected based on a single parameter, the ratio beta/gamma. We don't need to know the values of beta and gamma separately.\nContact number\nFrom Section xxx, recall that the number of new infections in a\ngiven day is $\\beta s i N$, and the number of recoveries is\n$\\gamma i N$. If we divide these quantities, the result is\n$\\beta s / \\gamma$, which is the number of new infections per recovery\n(as a fraction of the population).\nWhen a new disease is introduced to a susceptible population, $s$ is\napproximately 1, so the number of people infected by each sick person is $\\beta / \\gamma$. This ratio is called the \"contact number\\\" or \"basic reproduction number\\\" (see http://modsimpy.com/contact). By convention it is usually denoted $R_0$, but in the context of an SIR model, this notation is confusing, so we'll use $c$ instead.\nThe results in the previous section suggest that there is a relationship between $c$ and the total number of infections. We can derive this relationship by analyzing the differential equations from\nSection xxx:\n$$\\begin{aligned}\n\\frac{ds}{dt} &= -\\beta s i \\\n\\frac{di}{dt} &= \\beta s i - \\gamma i\\\n\\frac{dr}{dt} &= \\gamma i\\end{aligned}$$ \nIn the same way we divided the\ncontact rate by the infection rate to get the dimensionless quantity\n$c$, now we'll divide $di/dt$ by $ds/dt$ to get a ratio of rates:\n$$\\frac{di}{ds} = -1 + \\frac{1}{cs}$$ \nDividing one differential equation by another is not an obvious move, but in this case it is useful because it gives us a relationship between $i$, $s$ and $c$ that does not depend on time. From that relationship, we can derive an equation that relates $c$ to the final value of $s$. In theory, this equation makes it possible to infer $c$ by observing the course of an epidemic.\nHere's how the derivation goes. 
We multiply both sides of the previous\nequation by $ds$: \n$$di = \\left( -1 + \\frac{1}{cs} \\right) ds$$ \nAnd then integrate both sides: \n$$i = -s + \\frac{1}{c} \\log s + q$$ \nwhere $q$ is a constant of integration. Rearranging terms yields:\n$$q = i + s - \\frac{1}{c} \\log s$$ \nNow let's see if we can figure out what $q$ is. At the beginning of an epidemic, if the fraction infected is small and nearly everyone is susceptible, we can use the approximations $i(0) = 0$ and $s(0) = 1$ to compute $q$:\n$$q = 0 + 1 - \\frac{1}{c} \\log 1$$ \nSince $\\log 1 = 0$, we get $q = 1$.\nNow, at the end of the epidemic, let's assume that $i(\\infty) = 0$, and $s(\\infty)$ is an unknown quantity, $s_{\\infty}$. Now we have:\n$$q = 1 = 0 + s_{\\infty}- \\frac{1}{c} \\log s_{\\infty}$$ \nSolving for $c$, we get $$c = \\frac{\\log s_{\\infty}}{s_{\\infty}- 1}$$ By relating $c$ and $s_{\\infty}$, this equation makes it possible to estimate $c$ based on data, and possibly predict the behavior of future epidemics.\nAnalysis and simulation\nLet's compare this analytic result to the results from simulation. I'll create an array of values for $s_{\\infty}$",
"from numpy import linspace\n\ns_inf_array = linspace(0.0001, 0.999, 31)",
"And compute the corresponding values of $c$:",
"from numpy import log\n\nc_array = log(s_inf_array) / (s_inf_array - 1)",
"To get the total infected, we compute the difference between $s(0)$ and\n$s(\\infty)$, then store the results in a Series:",
"frac_infected = 1 - s_inf_array",
"We can use make_series to put c_array\nand frac_infected in a Pandas Series.",
"frac_infected_series = make_series(c_array, frac_infected)",
"Now we can plot the results:",
"plot_sweep_frame(frame)\nfrac_infected_series.plot(label='analysis')\n\ndecorate(xlabel='Contact number (c)',\n ylabel='Fraction infected')",
"When the contact number exceeds 1, analysis and simulation agree. When\nthe contact number is less than 1, they do not: analysis indicates there should be no infections; in the simulations there are a small number of infections.\nThe reason for the discrepancy is that the simulation divides time into a discrete series of days, whereas the analysis treats time as a\ncontinuous quantity. In other words, the two methods are actually based on different models. So which model is better?\nProbably neither. When the contact number is small, the early progress\nof the epidemic depends on details of the scenario. If we are lucky, the original infected person, \"patient zero\", infects no one and there is no epidemic. If we are unlucky, patient zero might have a large number of close friends, or might work in the dining hall (and fail to observe safe food handling procedures).\nFor contact numbers near or less than 1, we might need a more detailed\nmodel. But for higher contact numbers the SIR model might be good\nenough.\nEstimating contact number\nFigure xxx shows that if we know the contact number, we can compute the fraction infected. But we can also read the figure the other way; that is, at the end of an epidemic, if we can estimate the fraction of the population that was ever infected, we can use it to estimate the contact number.\nWell, in theory we can. In practice, it might not work very well,\nbecause of the shape of the curve. When the contact number is near 2,\nthe curve is quite steep, which means that small changes in $c$ yield\nbig changes in the number of infections. If we observe that the total\nfraction infected is anywhere from 20% to 80%, we would conclude that\n$c$ is near 2.\nOn the other hand, for larger contact numbers, nearly the entire\npopulation is infected, so the curve is nearly flat. In that case we\nwould not be able to estimate $c$ precisely, because any value greater\nthan 3 would yield effectively the same results. 
Fortunately, this is\nunlikely to happen in the real world; very few epidemics affect anything close to 90% of the population.\nSo the SIR model has limitations; nevertheless, it provides insight into the behavior of infectious disease, especially the phenomenon of herd immunity. As we saw in Chapter xxx, if we know the parameters of the model, we can use it to evaluate possible interventions. And as we saw in this chapter, we might be able to use data from earlier outbreaks to estimate the parameters.\nExercises\nExercise: If we didn't know about contact numbers, we might have explored other possibilities, like the difference between beta and gamma, rather than their ratio.\nWrite a version of plot_sweep_frame, called plot_sweep_frame_difference, that plots the fraction infected versus the difference beta-gamma.\nWhat do the results look like, and what does that imply?",
"# Solution\n\ndef plot_sweep_frame_difference(frame):\n for gamma in frame.columns:\n column = frame[gamma]\n for beta in column.index:\n frac_infected = column[beta]\n plot(beta - gamma, frac_infected, 'ro', \n color='C1', alpha=0.4)\n\n# Solution\n\nplot_sweep_frame_difference(frame)\n\ndecorate(xlabel='Excess infection rate (infections-recoveries per day)',\n ylabel='Fraction infected')\n\n# Solution\n\n# The results don't fall on a line, which means that if we \n# know the difference between `beta` and `gamma`, \n# but not their ratio, that's not enough to predict \n# the fraction infected.",
"Exercise: Suppose you run a survey at the end of the semester and find that 26% of students had the Freshman Plague at some point.\nWhat is your best estimate of c?\nHint: if you print frac_infected_series, you can read off the answer.",
"# Solution\n\nfrac_infected_series\n\n# Solution\n\n# It looks like the fraction infected is 0.26 when the contact \n# number is about 1.16"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
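The ModSim chapter above derives the closed-form relationship c = log(s_inf) / (s_inf - 1) between the contact number and the final susceptible fraction. A minimal NumPy sketch of that relationship, including the chapter's 26%-infected exercise (function name is illustrative):

```python
import numpy as np

def contact_number(s_inf):
    """Contact number implied by the final susceptible fraction s_inf."""
    return np.log(s_inf) / (s_inf - 1)

# invert the exercise: if 26% were infected, s_inf = 1 - 0.26
s_inf = 1 - 0.26
c = contact_number(s_inf)  # roughly 1.16, matching the notebook's estimate

# the relationship is monotone: a larger contact number leaves fewer susceptibles
s_grid = np.linspace(0.0001, 0.999, 31)
c_grid = contact_number(s_grid)
```

As s_inf approaches 1 (almost nobody infected), c approaches 1, consistent with the epidemic threshold discussed in the chapter.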
kradams/MIDS-W261-2015-Adams
|
week11/MIDS-W261-2015-HW10-Week11-Adams.ipynb
|
mit
|
[
"DATASCI W261: Machine Learning at Scale\nWeek 11, Homework 10\nKatrina Adams\nkradams@ischool.berkeley.edu\n17 November 2015\n\nStart Spark",
"%cd ~/Documents/W261/hw10/\n\nimport os\nimport sys\n\nspark_home = os.environ['SPARK_HOME'] = \\\n '/Users/davidadams/packages/spark-1.5.1-bin-hadoop2.6/'\n\nif not spark_home:\n raise ValueError('SPARK_HOME enviroment variable is not set')\nsys.path.insert(0,os.path.join(spark_home,'python'))\nsys.path.insert(0,os.path.join(spark_home,'python/lib/py4j-0.8.2.1-src.zip'))\nexecfile(os.path.join(spark_home,'python/pyspark/shell.py'))",
"HW 10.0: Short answer questions\nWhat is Apache Spark and how is it different to Apache Hadoop? \nFill in the blanks:\nSpark API consists of interfaces to develop applications based on it in Java, ...... languages (list languages). \nUsing Spark, resource management can be done either in a single server instance or using a framework such as Mesos or ????? in a distributed manner.\nWhat is an RDD and show a fun example of creating one and bringing the first element back to the driver program.\nWhat is lazy evaluation and give an intuitoive example of lazy evaluation and comment on the massive computational savings to be had from lazy evaluation.\nAnswers\nApache Spark is a framework for parallel computations over big data with optimized genaral execution graphs over RDDs. It differs from Apache Hadoop by storing data in-memory instead of writing to disk so it is much faster. Spark also required 2-5 time less code than Hadoop. With Spark you can do read-eval-print loop, while Hadoop cannot. \nSpark API consists of interfaces to develop applications based on it in Java, <font color='green'>Scala, Python, and R</font> languages. \nUsing Spark, resource management can be done either in a single server instance or using a framework such as Mesos or <font color='green'>YARN</font> in a distributed manner. \nA Resilient distributed data set (RDD) is a distributed collection of elements, which are automatically distributed across the cluster for parallel computations. RDDs can also be recomputed from the execution graph providing fault tolerance. \nLazy evaluation means that transformations are not computed immediately, but only when an action is performed on the trandformed RDD. An example of lazy evaluation is reading the first line of a file. If creation of an RDD from a text file were not computed lazily, then the entire file would be read when the RDD was created. 
However, with lazy evaluation, if we then perform the action of examining the first line, only the first line needs to be read. Lazy evaluation means that values are only computed if they are required, potentially resulting in significant computational savings.",
"''' Example of creating an RDD and bringing the first element back to the driver'''\nimport numpy as np\n\ndataRDD = sc.parallelize(np.random.random_sample(1000)) \ndata2X= dataRDD.map(lambda x: x*2)\ndataGreaterThan1 = data2X.filter(lambda x: x > 1.0)\nprint dataGreaterThan1.take(1)\n",
"HW 10.1: \nIn Spark write the code to count how often each word appears in a text document (or set of documents). Please use this homework document as a the example document to run an experiment.\nReport the following: provide a sorted list of tokens in decreasing order of frequency of occurence.",
"def hw10_1():\n # create RDD from text file and split at spaces to get words\n rdd = sc.textFile(\"HW10-Public/MIDS-MLS-HW-10.txt\")\n words = rdd.flatMap(lambda x: x.strip().split(\" \"))\n # count words and sort\n sortedcounts = words.map(lambda x: (x, 1)) \\\n .reduceByKey(lambda x, y: x + y) \\\n .map(lambda (x,y): (y, x)) \\\n .sortByKey(False) \\\n .map(lambda (x,y): (y, x))\n\n for line in mysorted.collect():\n print line\n \n return None\n\nhw10_1()",
"HW 10.1.1\nModify the above word count code to count words that begin with lower case letters (a-z) and report your findings. Again sort the output words in decreasing order of frequency.",
"def hw10_1_1():\n \n def isloweraz(word):\n '''\n check if the word starts with a lower case letter\n '''\n lowercase = 'abcdefghijklmnopqrstuvwxyz'\n try:\n return word[0] in lowercase\n except IndexError:\n return False\n\n \n # create RDD from text file\n rdd = sc.textFile(\"HW10-Public/MIDS-MLS-HW-10.txt\")\n \n # get words and filter for those that start with a lowercase letter\n words = rdd.flatMap(lambda x: x.strip().split(\" \")) \\\n .filter(isloweraz)\n\n # count words and sort\n sortedcounts = words.map(lambda x: (x, 1)) \\\n .reduceByKey(lambda x, y: x + y) \\\n .map(lambda (x,y): (y, x)) \\\n .sortByKey(False) \\\n .map(lambda (x,y): (y, x))\n\n for line in sortedcounts.collect():\n print line\n \n return None\n\nhw10_1_1()\n",
"HW 10.2: KMeans a la MLLib\nUsing the MLlib-centric KMeans code snippet below\nNOTE: kmeans_data.txt is available here https://www.dropbox.com/s/q85t0ytb9apggnh/kmeans_data.txt?dl=0 \nRun this code snippet and list the clusters that your find and compute the Within Set Sum of Squared Errors for the found clusters. Comment on your findings.",
"from pyspark.mllib.clustering import KMeans, KMeansModel\nfrom numpy import array\nfrom math import sqrt\n\n# Load and parse the data\n# NOTE kmeans_data.txt is available here https://www.dropbox.com/s/q85t0ytb9apggnh/kmeans_data.txt?dl=0 \ndata = sc.textFile(\"HW10-Public/kmeans_data.txt\")\nparsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))\n\n\n\n# Build the model (cluster the data)\nclusters = KMeans.train(parsedData, 2, maxIterations=10,\n runs=10, initializationMode=\"random\")\n\n# Evaluate clustering by computing Within Set Sum of Squared Errors\ndef error(point):\n center = clusters.centers[clusters.predict(point)]\n return sqrt(sum([x**2 for x in (point - center)]))\n\nWSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)\nprint(\"Within Set Sum of Squared Error = \" + str(WSSSE))\n\n# Save and load model\nclusters.save(sc, \"myModelPath\")\nsameModel = KMeansModel.load(sc, \"myModelPath\")\n\n\n\nfor i,ctr in enumerate(clusters.centers):\n print(\"Cluster %i: %.1f, %.1f, %.1f\" % (i, ctr[0],ctr[1],ctr[2]))\n\nWSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)\nprint(\"Within Set Sum of Squared Error = \" + str(WSSSE))",
"HW 10.3:\nDownload the following KMeans notebook:\nhttps://www.dropbox.com/s/3nsthvp8g2rrrdh/EM-Kmeans.ipynb?dl=0 \nGenerate 3 clusters with 100 (one hundred) data points per cluster (using the code provided). Plot the data.\nThen run MLlib's Kmean implementation on this data and report your results as follows: \n-- plot the resulting clusters after 1 iteration, 10 iterations, after 20 iterations, after 100 iterations.\n -- in each plot please report the Within Set Sum of Squared Errors for the found clusters. Comment on the progress of this measure as the KMEans algorithms runs for more iterations",
"%matplotlib inline\nimport numpy as np\nimport pylab \nimport json\nsize1 = size2 = size3 = 100\nsamples1 = np.random.multivariate_normal([4, 0], [[1, 0],[0, 1]], size1)\ndata = samples1\nsamples2 = np.random.multivariate_normal([6, 6], [[1, 0],[0, 1]], size2)\ndata = np.append(data,samples2, axis=0)\nsamples3 = np.random.multivariate_normal([0, 4], [[1, 0],[0, 1]], size3)\ndata = np.append(data,samples3, axis=0)\n# Randomlize data\ndata = data[np.random.permutation(size1+size2+size3),]\nnp.savetxt('data.csv',data,delimiter = ',')\n\npylab.plot(samples1[:, 0], samples1[:, 1],'*', color = 'red')\npylab.plot(samples2[:, 0], samples2[:, 1],'o',color = 'blue')\npylab.plot(samples3[:, 0], samples3[:, 1],'+',color = 'green')\npylab.show()\n\n'''\nThen run MLlib's Kmean implementation on this data \nand report your results as follows:\n-- plot the resulting clusters after 1, 10, 20, and 100 iterations\n-- in each plot please report the Within Set Sum of Squared Errors \nfor the found clusters. 
Comment on the progress of this measure as \nthe KMeans algorithms runs for more iterations\n'''\n\n\nfrom pyspark.mllib.clustering import KMeans, KMeansModel\nfrom numpy import array\nfrom math import sqrt\n\n# Load and parse the data\ndata = sc.textFile(\"data.csv\")\nparsedData = data.map(lambda line: array([float(x) for x in line.split(',')]))\n\n\nimport numpy as np\n\n#Calculate which class each data point belongs to\ndef nearest_centroid(line):\n x = np.array([float(f) for f in line.split(',')])\n closest_centroid_idx = np.sum((x - centroids)**2, axis=1).argmin()\n return (closest_centroid_idx,(x,1))\n\n#plot centroids and data points for each iteration\ndef plot_iteration(means):\n pylab.plot(samples1[:, 0], samples1[:, 1], '.', color = 'blue')\n pylab.plot(samples2[:, 0], samples2[:, 1], '.', color = 'blue')\n pylab.plot(samples3[:, 0], samples3[:, 1],'.', color = 'blue')\n pylab.plot(means[0][0], means[0][1],'*',markersize =10,color = 'red')\n pylab.plot(means[1][0], means[1][1],'*',markersize =10,color = 'red')\n pylab.plot(means[2][0], means[2][1],'*',markersize =10,color = 'red')\n pylab.show()\n\nfrom time import time\n\nnumIters = [1, 10, 20, 100]\n\nfor i in numIters:\n clusters = KMeans.train(parsedData, k=3, maxIterations=i,\n initializationMode = \"random\")\n if i==1:\n print(\"Centroids after %d iteration:\" % i)\n else:\n print(\"Centroids after %d iterations:\" % i)\n for centroid in clusters.centers:\n print centroid\n WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)\n print(\"Within Set Sum of Squared Error = \" + str(WSSSE))\n plot_iteration(clusters.centers)\n \n",
"The WSSE decreases with the number of iterations from 1 to 20 iterations. After 20 iterations, the centroids converge and the WSSE is stable.\n\nHW 10.4:\nUsing the KMeans code (homegrown code) provided repeat the experiments in HW10.3. Comment on any differences between the results in HW10.3 and HW10.4. Explain.",
"from numpy.random import rand\n\n#Calculate which class each data point belongs to\ndef nearest_centroid(line):\n x = np.array([float(f) for f in line.split(',')])\n closest_centroid_idx = np.sum((x - centroids)**2, axis=1).argmin()\n return (closest_centroid_idx,(x,1))\n\n\ndef error_p4(line, centroids):\n point = np.array([float(f) for f in line.split(',')])\n closest_centroid_idx = np.sum((point - centroids)**2, axis=1).argmin()\n center = centroids[closest_centroid_idx]\n return sqrt(sum([x**2 for x in (point - center)]))\n\nK = 3\n\nD = sc.textFile(\"./data.csv\").cache()\n\nnumIters = [1, 10, 20, 100]\n\nfor n in numIters:\n # randomly initialize centroids\n centroids = rand(3,2)*5\n iter_num = 0\n for i in range(n): \n res = D.map(nearest_centroid).reduceByKey(lambda x,y : (x[0]+y[0],x[1]+y[1])).collect()\n res = sorted(res,key = lambda x : x[0]) #sort based on cluster ID\n centroids_new = np.array([x[1][0]/x[1][1] for x in res]) #divide by cluster size\n if np.sum(np.absolute(centroids_new-centroids))<0.01:\n break\n iter_num = iter_num + 1 \n centroids = centroids_new\n \n if n==1:\n print(\"Centroids after %d iteration:\" % n)\n else:\n print(\"Centroids after %d iterations:\" % n)\n print centroids\n \n WSSSE = D.map(lambda line: error_p4(line, centroids)).reduce(lambda x, y: x + y)\n print(\"Within Set Sum of Squared Error = \" + str(WSSSE))\n \n plot_iteration(centroids)\n \n",
"These results are very similar to those for problem 10.3 with centroids converging after about 10 iterations to a WSSE of 365.94"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
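The Spark word-count cells above follow the classic map/reduceByKey pattern. For comparison, the same logic in plain Python (a sketch with no Spark dependency) is a frequency count sorted by decreasing count:

```python
from collections import Counter

def word_counts(lines):
    """Count whitespace-delimited tokens across lines, most frequent first."""
    counts = Counter()
    for line in lines:
        # split() plays the role of flatMap; update() the role of reduceByKey
        counts.update(line.strip().split())
    return counts.most_common()

lines = ["spark makes word count easy", "word count with spark"]
result = word_counts(lines)
```

Spark's value over this sketch is that the map and reduce steps run in parallel across partitions of a large corpus, rather than in one process.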
blue-yonder/tsfresh
|
notebooks/advanced/friedrich_coefficients.ipynb
|
mit
|
[
"<h1><center> Estimating Friedrich's coefficients describing the deterministic dynamics of Langevin model</center></h1>\n\n<center>Andreas W. Kempa-Liehr (Department of Engineering Science, University of Auckland)</center>\nThis notebooks explains the friedrich_coefficient features, which has been inspired by the paper of Friedrich et al. (2000): Extracting model equations from experimental data. Physics Letters A 271, p. 217-222\nThe general idea is to assume a Langevin model for the dynamics of the time series $x(t)$ \n$$\\dot{x}(t) = h(x(t)) + \\mathcal{N}(0,R)$$\nwith $\\dot{x}(t)$ denoting the temporal derivative, $h(x(t))$ the deterministic dynamics, and $\\mathcal{N}(0,R)$ a Langevin force modelled as Gaussian white noise with standard deviation $R$.\nNow, an estimate $\\tilde{h}(x)$ of the deterministic dynamics can be computed by averaging $\\dot{x}(t)$ for a specific interval $x(t)\\in[x-\\epsilon,x+\\epsilon]$ with $|\\epsilon|\\ll 1$:\n$$\\left.\\tilde{h}(x)\\right|{x\\in[x-\\epsilon,x+\\epsilon]} \\approx \\frac{\\sum\\limits{x(t)\\in[x-\\epsilon,x+\\epsilon]} x(t+\\Delta_t)-x(t)}{\\Delta_t \\sum\\limits_{x(t)\\in[x-\\epsilon,x+\\epsilon]} 1}.$$\nHaving a set of estimations ${\\tilde{h}(x_1),\\tilde{h}(x_2),\\ldots,\\tilde{h}(x_n)}$ with $x_1<x_2<\\ldots<x_n$ at hand, Friedrich's coefficients are calculated by fitting a polynomial of order $m$ to these estimates.\nIn order to demonstrate this approach, the dynamics of a dissipative soliton before and after its drift-bifurcation is simulated (Liehr 2013: Dissipative Solitons in Reaction-Diffusion Systems. Springer, p. 164).\nBy applying the approach of Friedrich et al. for estimating the deterministic dynamics, the equilibrium velocity of the dissipative soliton is recovered.",
"from matplotlib import pylab as plt\nimport numpy as np\nimport seaborn as sbn\nimport pandas as pd\nfrom tsfresh.examples.driftbif_simulation import velocity\n\n%matplotlib inline\n\nfrom tsfresh.feature_extraction import ComprehensiveFCParameters\nfrom tsfresh.feature_extraction.feature_calculators import max_langevin_fixed_point, friedrich_coefficients\nsettings = ComprehensiveFCParameters()\ndefault_params = settings['max_langevin_fixed_point'][0]\ndefault = settings['friedrich_coefficients']\n\ndef friedrich_method(v, param):\n df = pd.DataFrame({'velocity': v[:-1,0], 'acceleration': np.diff(v[:,0])})\n df['quantiles']=pd.qcut(df.velocity.values, 30)\n groups = df.groupby('quantiles')\n result = pd.DataFrame({'a_mean': groups.acceleration.mean(),\n 'a_std': groups.acceleration.std(),\n 'v_mean': groups.velocity.mean(),\n 'v_std': groups.velocity.std()\n })\n dynamics = friedrich_coefficients(v[:,0], param)\n dynamics = [d[1] for d in dynamics]\n v0 = max_langevin_fixed_point(v[:,0], **default_params)\n \n plt.subplot(2,1,1)\n plt.plot(v[:,0])\n plt.axhline(y=v0, color='r')\n plt.xlabel('time')\n plt.ylabel('velocity')\n\n #Active Brownian motion is given if the linear term of the dynamics is positive\n if dynamics[-2]>0:\n active='Active'\n else:\n active=''\n \n plt.title('{} Brownian Motion (largest equilibrium velocity in red)'.format(active))\n plt.subplot(2,1,2)\n ax = plt.errorbar(result.v_mean,result.a_mean,\n xerr=result.v_std,fmt='o')\n x = np.linspace(-0.004, 0.004, 201)\n print(dynamics)\n plt.plot(x, np.poly1d(dynamics)(x), label='estimated dynamics')\n plt.plot(v0,0.,'ro')\n plt.axvline(x=v0, color='r')\n plt.xlabel('mean velocity')\n plt.ylabel('mean acceleration')",
"Beyond drift-bifurcation",
"ds = velocity(tau=3.8, delta_t=0.05, R=3e-4, seed=0)\nv = ds.simulate(1000000, v0=np.zeros(1))\n\nfriedrich_method(v, default)",
"Before drift-bifurcation",
"ds = velocity(tau=2./0.3-3.8, delta_t=0.05, R=3e-4, seed=0)\nv = ds.simulate(1000000, v0=np.zeros(1))\n\nfriedrich_method(v, default)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
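The bin-and-average estimate of the deterministic dynamics described in the tsfresh notebook above can be sketched without tsfresh itself. Assuming a simple linear drift, x_{t+1} = x_t - a*dt*x_t + noise, binning the state, averaging the increments per bin, and fitting a polynomial should approximately recover the drift coefficient -a; all names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate a discrete Langevin process with linear deterministic drift -a*x
a, dt, n = 1.0, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, 0.05, n - 1)
for t in range(n - 1):
    x[t + 1] = x[t] - a * dt * x[t] + noise[t]

# Friedrich-style estimate: bin the state, average the increments per bin
dx = np.diff(x)
bins = np.linspace(x.min(), x.max(), 31)
idx = np.digitize(x[:-1], bins)
centers, means = [], []
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() > 100:  # keep only well-populated bins
        centers.append(x[:-1][mask].mean())
        means.append(dx[mask].mean())

# fit a first-order polynomial to the estimated drift;
# the slope divided by dt approximates the drift coefficient -a
slope, intercept = np.polyfit(centers, means, 1)
drift = slope / dt
```

The notebook's `friedrich_coefficients` feature generalizes this idea to higher-order polynomials, and `max_langevin_fixed_point` reads the equilibrium off the fitted drift.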
informatics-isi-edu/deriva-py
|
docs/derivapy-catalog-snapshot.ipynb
|
apache-2.0
|
[
"Deriva Ermrest Catalog Snapshot Examples\nThis notebook gives examples of connecting to ERMrest versioned catalogs.",
"from deriva.core import DerivaServer, get_credential, ErmrestCatalogMutationError",
"Fill in your desired scheme, hostname and catalog number.",
"scheme = 'https'\nhostname = 'synapse-dev.isrd.isi.edu'\ncatalog_number = 1",
"Use DERIVA-Auth to get a credential or use None if your catalog allows anonymous access.",
"credential = get_credential(hostname)",
"Get a handle representing your server.",
"server = DerivaServer(scheme, hostname, credential)",
"Connect to a catalog (unversioned)\nConnect to a catalog, and list its schemas.",
"catalog = server.connect_ermrest(catalog_number)\npb = catalog.getPathBuilder()\nlist(pb.schemas)",
"Get latest snapshot\nThe current catalog handle can return a handle to the latest snapshot.",
"latest = catalog.latest_snapshot()\npb = latest.getPathBuilder()\nlist(pb.schemas)",
"Print the snaptime of this catalog snapshot.",
"print(latest.snaptime)",
"Connect to a catalog snapshot\nHere we pass the snaptime parameter explicitly in the connect_ermrest method.",
"snapshot = server.connect_ermrest('1', '2PM-DGYP-56Z4')\npb = snapshot.getPathBuilder()\nlist(pb.schemas)",
"Alternatively, we could pass a \"versioned\" catalog_id to the connect_ermrest method.",
"snapshot = server.connect_ermrest('1@2PM-DGYP-56Z4')\npb = snapshot.getPathBuilder()\nlist(pb.schemas)",
"Finally, we can poke around at schemas and tables as they existed at the specified snaptime.",
"subject = pb.schemas['Zebrafish'].tables['Subject']\nprint(subject.uri)",
"Data may be read from the snapshot. Here, we will see how many subjects existed at that point in time.",
"e = subject.entities()\nlen(e)",
"However, mutation operations on a catalog snapshot are disabled.",
"try:\n subject.insert([{'foo': 'bar'}])\nexcept ErmrestCatalogMutationError as e:\n print(e)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kadrlica/destools
|
notebook/intervals.ipynb
|
mit
|
[
"Likelihood Functions and Confidence Intervals\nby Alex Drlica-Wagner\nIntroduction\nThis notebook attempts to pragmatically address several questions about deriving uncertainty intervals from a likelihood analysis.",
"%matplotlib inline\n\nimport numpy as np\nimport pylab as plt\nimport scipy.stats as stats\nfrom scipy.stats import multivariate_normal as mvn\ntry:\n import emcee\n got_emcee = True\nexcept ImportError:\n got_emcee = False\n\ntry:\n import corner\n got_corner = True\nexcept ImportError:\n got_corner = False\n\nplt.rcParams['axes.labelsize'] = 16",
"1D Likelihood\nAs a simple and straightforward starting example, we begin with a 1D Gaussian likelihood function.",
"mean = 2.0; cov = 1.0\nrv = mvn(mean,cov)\nlnlfn = lambda x: rv.logpdf(x)\nx = np.linspace(-2,6,5000)\nlnlike = lnlfn(x)\n\nplt.plot(x,lnlike,'-k'); plt.xlabel(r'$x$'); plt.ylabel('$\\log \\mathcal{L}$');",
"For this simple likelihood function, we could analytically compute the maximum likelihood estimate and confidence intervals. However, for more complicated likelihoods an analytic solution may not be possible. As an introduction to these cases it is informative to proceed numerically.",
"# You can use any complicate optimizer that you want (i.e. scipy.optimize) \n# but for this application we just do a simple array operation\nmaxlike = np.max(lnlike)\nmle = x[np.argmax(lnlike)]\nprint \"Maximum Likelihood Estimate: %.2f\"%mle \nprint \"Maximum Likelihood Value: %.2f\"%maxlike",
"To find the 68% confidence intervals, we can calculate the delta-log-likelihood. The test statisitcs (TS) is defined as ${\\rm TS} = -2\\Delta \\log \\mathcal{L}$ and is $\\chi^2$-distributed. Therefore, the confidence intervals on a single parameter can be read off of a $\\chi^2$ table with 1 degree of freedom (dof).\n| 2-sided Interval | p-value | $\\chi^2_{1}$ | Gaussian $\\sigma$ |\n|------|------|------|------|\n| 68% | 32% | 1.000 | 1.00 |\n| 90% | 10% | 2.706 | 1.64 |\n| 95% | 5% | 3.841 | 1.96 |\n| 99% | 1% | 6.635 | 2.05 |",
"def interval(x, lnlike, delta=1.0):\n maxlike = np.max(lnlike)\n ts = -2 * (lnlike - maxlike)\n lower = x[np.argmax(ts < delta)]\n upper = x[len(ts) - np.argmax((ts < delta)[::-1]) - 1]\n return lower, upper\n\nintervals = [(68,1.0),\n (90,2.706),\n (95,3.841)]\n\nplt.plot(x,lnlike,'-k'); plt.xlabel(r'$x$'); plt.ylabel('$\\log \\mathcal{L}$');\nkwargs = dict(ls='--',color='k')\nplt.axhline(maxlike - intervals[0][1]/2.,**kwargs)\nprint \"Confidence Intervals:\"\nfor cl,delta in intervals:\n lower,upper = interval(x,lnlike,delta)\n print \" %i%% CL: x = %.2f [%+.2f,%+.2f]\"%(cl,mle,lower-mle,upper-mle)\n plt.axvline(lower,**kwargs); plt.axvline(upper,**kwargs); \n",
"These numbers might look familiar. They are the number of standard deviations that you need to go out in the standard normal distribution to contain the requested fraction of the distribution (i.e., 68%, 90%, 95%).",
"for cl, d in intervals:\n sigma = stats.norm.isf((100.-cl)/2./100.)\n print \" %i%% = %.2f sigma\"%(cl,sigma)",
"2D Likelihood\nNow we extend the example above to a 2D likelihood function. We define the likelihood with the same multivariat_normal function, but now add a second dimension and a covariance between the two dimensions. These parameters are adjustable if would like to play around with them.",
"mean = [2.0,1.0]\ncov = [[1,1],[1,2]]\nrv = stats.multivariate_normal(mean,cov)\nlnlfn = lambda x: rv.logpdf(x)\n\nprint \"Mean:\",rv.mean.tolist()\nprint \"Covariance\",rv.cov.tolist()\n\nxx, yy = np.mgrid[-4:6:.01, -4:6:.01]\nvalues = np.dstack((xx, yy))\nlnlike = lnlfn(values)\n\nfig2 = plt.figure(figsize=(8,6))\nax2 = fig2.add_subplot(111)\nim = ax2.contourf(values[:,:,0], values[:,:,1], lnlike ,aspect='auto'); plt.colorbar(im,label='$\\log \\mathcal{L}$')\nplt.xlabel('$x$'); plt.ylabel('$y$');\nplt.show()\n\n# You can use any complicate optimizer that you want (i.e. scipy.optimize) \n# but for this application we just do a simple array operation\nmaxlike = np.max(lnlike)\nmaxidx = np.unravel_index(np.argmax(lnlike),lnlike.shape)\nmle_x, mle_y = mle = values[maxidx]\n\nprint \"Maximum Likelihood Estimate:\",mle \nprint \"Maximum Likelihood Value:\",maxlike",
"The case now becomes a bit more complicated. If you want to set a confidence interval on a single parameter, you cannot simply projected the likelihood onto the dimension of interest. Doing so would ignore the correlation between the two parameters.",
"lnlike -= maxlike\nx = xx[:,maxidx[1]]\ndelta = 2.706\n\n# This is the loglike projected at y = mle[1] = 0.25\nplt.plot(x, lnlike[:,maxidx[1]],'-r'); \nlower,upper = max_lower,max_upper = interval(x,lnlike[:,maxidx[1]],delta)\nplt.axvline(lower,ls='--',c='r'); plt.axvline(upper,ls='--',c='r')\ny_max = yy[:,maxidx[1]]\n\n# This is the profile likelihood where we maximize over the y-dimension\nplt.plot(x, lnlike.max(axis=1),'-k')\nlower,upper = profile_lower,profile_upper = interval(x,lnlike.max(axis=1),delta)\nplt.axvline(lower,ls='--',c='k'); plt.axvline(upper,ls='--',c='k')\nplt.xlabel('$x$'); plt.ylabel('$\\log \\mathcal{L}$')\ny_profile = yy[lnlike.argmax(axis=0),lnlike.argmax(axis=1)]\n\nprint \"Projected Likelihood (red):\\t %.1f [%+.2f,%+.2f]\"%(mle[0],max_lower-mle[0],max_upper-mle[0])\nprint \"Profile Likelihood (black):\\t %.1f [%+.2f,%+.2f]\"%(mle[0],profile_lower-mle[0],profile_upper-mle[0])",
"In the plot above we are showing two different 1D projections of the 2D likelihood function. The red curve shows the projected likelihood scanning in values of $x$ and always assuming the value of $y$ that maximized the likelihood. On the other hand, the black curve shows the 1D likelihood derived by scanning in values of $x$ and at each value of $x$ maximizing the value of the likelihood with respect to the $y$-parameter. In other words, the red curve is ignoring the correlation between the two parameters while the black curve is accounting for it. As you can see from the values printed above the plot, the intervals derived from the red curve understimate the analytically derived values, while the intervals on the black curve properly reproduce the analytic estimate.\nJust to verify the result quoted above, we derive intervals on $x$ at several different confidence levels. We start with the projected likelihood with $y$ fixed at $y_{\\rm max}$.",
"for cl, d in intervals: \n lower,upper = interval(x,lnlike[:,maxidx[1]],d)\n print \" %s CL: x = %.2f [%+.2f,%+.2f]\"%(cl,mle[0],lower-mle[0],upper-mle[0])",
"Below are the confidence intervals in $x$ derived from the profile likelihood technique. As you can see, these values match the analytically derived values.",
"for cl, d in intervals: \n lower,upper = interval(x,lnlike.max(axis=1),d)\n print \" %s CL: x = %.2f [%+.2f,%+.2f]\"%(cl,mle[0],lower-mle[0],upper-mle[0])",
"By plotting the likelihood contours, it is easy to see why the profile likelihood technique performs correctly while naively slicing through the likelihood plane does not. The profile likelihood is essentially tracing the ridgeline of the 2D likelihood function, thus intersecting the countour of delta-log-likelihood at it's most distant point. This can be seen from the black lines in the 2D likelihood plot below.",
"fig2 = plt.figure(figsize=(8,6))\nax2 = fig2.add_subplot(111)\nim = ax2.contourf(values[:,:,0], values[:,:,1], lnlike ,aspect='auto'); plt.colorbar(im,label='$\\log \\mathcal{L}$')\nim = ax2.contour(values[:,:,0], values[:,:,1], lnlike , levels=[-delta/2], colors=['k'], aspect='auto', zorder=10,lw=2);\n\nplt.axvline(mle[0],ls='--',c='k'); plt.axhline(mle[1],ls='--',c='k');\nplt.axvline(max_lower,ls='--',c='r'); plt.axvline(max_upper,ls='--',c='r')\nplt.axvline(profile_lower,ls='--',c='k'); plt.axvline(profile_upper,ls='--',c='k')\n\nplt.plot(x,y_max,'-r'); plt.plot(x,y_profile,'-k')\nplt.xlabel('$x$'); plt.ylabel('$y$');\nplt.show()",
"MCMC Posterior Sampling\nOne way to explore the posterior distribution is through MCMC sampling. This gives an alternative method for deriving confidence intervals. Now, rather than maximizing the likelihood as a function of the other parameter, we marginalize (integrate) over that parameter. This is more computationally intensive, but is more robust in the case of complex likelihood functions.",
"# Remember, the posterior probability is the likelihood times the prior\nlnprior = lambda x: 0\ndef lnprob(x):\n return lnlfn(x) + lnprior(x)\n\nif got_emcee:\n nwalkers=100\n ndim, nwalkers = len(mle), 100\n pos0 = [np.random.rand(ndim) for i in range(nwalkers)]\n sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, threads=2)\n # This takes a while...\n sampler.run_mcmc(pos0, 5000)\n\n\nsamples = sampler.chain[:, 100:, :].reshape((-1, ndim))\nx_samples,y_samples = samples.T\n\nfor cl in [68,90,95]:\n x_lower,x_mle,x_upper = np.percentile(x_samples,q=[(100-cl)/2.,50,100-(100-cl)/2.])\n print \" %i%% CL:\"%cl, \"x = %.2f [%+.2f,%+.2f]\"%(x_mle,x_lower-x_mle,x_upper-x_mle)\n",
"These results aren't perfect since they are suspect to random variations in the sampling, but they are pretty close. Plotting the distribution of samples, we see something very similar to the plots we generated for the likelihood alone (which is good since out prior was flat).",
"if got_corner:\n fig = corner.corner(samples, labels=[\"$x$\",\"$y$\"],truths=mle,quantiles=[0.05, 0.5, 0.95],range=[[-4,6],[-4,6]])\n "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
shankari/folium
|
examples/CRS.ipynb
|
mit
|
[
"import os\nimport folium\n\nprint(folium.__version__)",
"Illustration of CRS effect\nLeaflet is able to handle several CRS (coordinate reference systems). It means that depending on the data you have, you may need to use the one or the other.\nDon't worry ; in practice, almost everyone on the web uses EPSG3857 (the default value for folium and Leaflet). But it may be interesting to know the possible values.\nLet's create a GeoJSON map, and change it's CRS.",
"import json\n\nus_states = os.path.join('data', 'us-states.json')\n\ngeo_json_data = json.load(open(us_states))",
"EPSG3857 ; the standard\nProvided that our tiles are computed with this projection, this map has the expected behavior.",
"kw = dict(tiles=None, location=[43, -100], zoom_start=3)\n\nm = folium.Map(crs='EPSG3857', **kw)\n\nfolium.GeoJson(geo_json_data).add_to(m)\n\nm.save(os.path.join('results', 'CRS_0.html'))\n\nm",
"EPSG4326\nThis projection is a common CRS among GIS enthusiasts according to Leaflet's documentation. And we see it's quite different.",
"m = folium.Map(crs='EPSG4326', **kw)\n\nfolium.GeoJson(geo_json_data).add_to(m)\n\nm.save(os.path.join('results', 'CRS_1.html'))\n\nm",
"EPSG3395\nThe elliptical projection is almost equal to EPSG3857 ; though different.",
"m = folium.Map(crs='EPSG3395', **kw)\n\nfolium.GeoJson(geo_json_data).add_to(m)\n\nm.save(os.path.join('results', 'CRS_2.html'))\n\nm",
"Simple\nAt last, Leaflet also give the possibility to use no projection at all. With this, you get flat charts.\nIt can be useful if you want to use folium to draw non-geographical data.",
"m = folium.Map(crs='Simple', **kw)\n\nfolium.GeoJson(geo_json_data).add_to(m)\n\nm.save(os.path.join('results', 'CRS_3.html'))\n\nm"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Heroes-Academy/OOP_Spring_2016
|
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
|
mit
|
[
"Good Morning, Polymorphism\nThe term polymorphism, in the OOP lingo, refers to the ability of an object to adapt the code to the type of the data it is processing.\nPolymorphism has two major applications in an OOP language. The first is that an object may provide different implementations of one of its methods depending on the type of the input parameters. The second is that code written for a given type of data may be used on data with a derived type, i.e. methods understand the class hierarchy of a type.\nIn Python polymorphism is one of the key concepts, and we can say that it is a built-in feature. Let us deal with it step by step.\nFirst of all, you know that in Python the type of a variable is not explicitly declared. Beware that this does not mean that Python variables are untyped. On the contrary, everything in Python has a type, it just happens that the type is implicitly assigned. If you remember the last paragraph of the previous post, I stated that in Python variables are just pointers (using a C-like nomenclature), in other words they just tell the language where in memory a variable has been stored. What is stored at that address is not a business of the variable.",
"a = 5\nprint(a)\n\nprint(type(a))\n\nprint(hex(id(a)))\n\na = 'five'\nprint(a)\n\nprint(type(a))\n\nprint(hex(id(a)))",
"This little example shows a lot about the Python typing system. The variable a is not statically declared, after all it can contain only one type of data: a memory address. When we assign the number 5 to it, Python stores in a the address of the number 5 (0x83fe540 in my case, but your result will be different). The type() built-in function is smart enough to understand that we are not asking about the type of a (which is always a reference), but about the type of the content. When you store another value in a, the string 'five', Python shamelessly replaces the previous content of the variable with the new address.\nSo, thanks to the reference system, Python type system is both strong and dynamic. The exact definition of those two concepts is not universal, so if you are interested be ready to dive into a broad matter. However, in Python, the meaning of those two words is the following:\n\ntype system is strong because everything has a well-defined type that you can check with the type() built-in function\ntype system is dynamic since the type of a variable is not explicitly declared, but changes with the content\n\nOnward! We just scratched the surface of the whole thing.\nTo explore the subject a little more, try to define the simplest function in Python (apart from an empty function)",
"def echo(a):\n return a",
"The function works as expected, just echoes the given parameter",
"print(echo(5))\n\nprint(echo('five'))",
"Pretty straightforward, isn't it? Well, if you come from a statically compiled language such as C or C++ you should be at least puzzled. What is a? I mean: what type of data does it contain? Moreover, how can Python know what it is returning if there is no type specification?\nAgain, if you recall the references stuff everything becomes clear: that function accepts a reference and returns a reference. In other words we just defined a sort of universal function, that does the same thing regardless of the input.\nThis is exactly the problem that polymorphism wants to solve. We want to describe an action regardless of the type of objects, and this is what we do when we talk among humans. When you describe how to move an object by pushing it, you may explain it using a box, but you expect the person you are addressing to be able to repeat the action even if you need to move a pen, or a book, or a bottle.\nThere are two main strategies you can apply to get code that performs the same operation regardless of the input types.\nThe first approach is to cover all cases, and this is a typical approach of procedural languages. If you need to sum two numbers that can be integers, float or complex, you just need to write three sum() functions, one bound to the integer type, the second bound to the float type and the third bound to the complex type, and to have some language feature that takes charge of choosing the correct implementation depending on the input type. This logic can be implemented by a compiler (if the language is statically typed) or by a runtime environment (if the language is dynamically typed) and is the approach chosen by C++. The disadvantage of this solution is that it requires the programmer to forecast all the possible situations: what if I need to sum an integer with a float? What if I need to sum two lists? 
(Please note that C++ is not so poorly designed, and the operator overloading technique allows to manage such cases, but the base polymorphism strategy of that language is the one exposed here).\nThe second strategy, the one implemented by Python, is simply to require the input objects to solve the problem for you. In other words you ask the data itself to perform the operation, reversing the problem. Instead of writing a bunch of functions that sum all the possible types in every possible combination you just write one function that requires the input data to sum, trusting that they know how to do it. Does it sound complex? It is not.\nLet's look at the Python implementation of the + operator. When we write c = a + b, Python actually executes c = a.__add__(b). As you can see the sum operation is delegated to the first input variable. So if we write",
"def sum(a, b):\n return a + b",
"there is no need to specify the type of the two input variables. The object a (the object contained in the variable a) shall be able to sum with the object b. This is a very beautiful and simple implementation of the polymorphism concept. Python functions are polymorphic simply because they accept everything and trust the input data to be able to perform some actions.\nLet us consider another simple example before moving on. The built-in len() function returns the length of the input object. For example",
"l = [1, 2, 3]\nprint(len(l))\n\ns = \"Just a sentence\"\nprint(len(s))",
"As you can see it is perfectly polymorphic: you can feed both a list or a string to it and it just computes its length. Does it work with any type? let's check",
"d = {'a': 1, 'b': 2}\nprint(len(d))\n\ni = 5\ntry:\n print(len(i))\nexcept TypeError as e:\n print(e)",
"Ouch! Seems that the len() function is smart enough to deal with dictionaries, but not with integers. Well, after all, the length of an integer is not defined.\nIndeed this is exactly the point of Python polymorphism: the integer type does not define a length operation. While you blame the len() function, the int type is at fault. The len() function just calls the __len__() method of the input object, as you can see from this code",
"print(l.__len__())\n\nprint(s.__len__())\n\nprint(d.__len__())\n\ntry:\n print(i.__len__())\nexcept AttributeError as e:\n print(e)",
"Very straightforward: the 'int' object does not define any __len__() method.\nSo, to sum up what we discovered until here, I would say that Python polymorphism is based on delegation. In the following sections we will talk about the EAFP Python principle, and you will see that the delegation principle is somehow ubiquitous in this language.\nType Hard\nAnother real-life concept that polymorphism wants to bring into a programming language is the ability to walk the class hierarchy, that is to run code on specialized types. This is a complex sentence to say something we are used to do every day, and an example will clarify the matter.\nYou know how to open a door, it is something you learned in your early years. Under an OOP point of view you are an object (sorry, no humiliation intended) which is capable of interacting with a wood rectangle rotating on hinges. When you can open a door, however, you can also open a window, which, after all, is a specialized type of wood-rectangle-with-hinges, hopefully with some glass in it too. You are also able to open the car door, which is also a specialized type (this one is a mix between a standard door and a window). This shows that, once you know how to interact with the most generic type (basic door) you can also interact with specialized types (window, car door) as soon as they act like the ancestor type (e.g. as soon as they rotate on hinges).\nThis directly translates into OOP languages: polymorphism requires that code written for a given type may also be run on derived types. For example, a list (a generic list object, not a Python one) that can contain \"numbers\" shall be able to accept integers because they are numbers. The list could specify an ordering operation which requires the numbers to be able to compare each other. 
So, as soon as integers specify a way to compare each other they can be inserted into the list and ordered.\nStatically compiled languages shall provide specific language features to implement this part of the polymorphism concept. In C++, for example, the language needs to introduce the concept of pointer compatibility between parent and child classes.\nIn Python there is no need to provide special language features to implement subtype polymorphism. As we already discovered Python functions accept any variable without checking the type and rely on the variable itself to provide the correct methods. But you already know that a subtype must provide the methods of the parent type, either redefining them or through implicit delegation, so as you can see Python implements subtype polymorphism from the very beginning.\nI think this is one of the most important things to understand when working with this language. Python is not really interested in the actual type of the variables you are working with. It is interested in how those variables act, that is it just wants the variable to provide the right methods. So, if you come from statically typed languages, you need to make a special effort to think about acting like instead of being. This is what we called \"duck typing\".\nTime to do an example. Let us define a Room class",
"class Room:\n def __init__(self, door):\n self.door = door\n \n def open(self):\n self.door.open()\n\n def close(self):\n self.door.close()\n\n def is_open(self):\n return self.door.is_open()",
"A very simple class, as you can see, just enough to exemplify polymorphism. The Room class accepts a door variable, and the type of this variable is not specified. Duck typing in action: the actual type of door is not declared, there is no \"acceptance test\" built in the language. Indeed, the incoming variable shall export the following methods that are used in the Room class: open(), close(), is_open(). So we can build the following classes",
"class Door:\n def __init__(self):\n self.status = \"closed\"\n\n def open(self):\n self.status = \"open\"\n\n def close(self):\n self.status = \"closed\"\n\n def is_open(self):\n return self.status == \"open\"\n\n\nclass BooleanDoor:\n def __init__(self):\n self.status = True\n\n def open(self):\n self.status = True\n\n def close(self):\n self.status = False\n\n def is_open(self):\n return self.status",
"Both represent a door that can be open or closed, and they implement the concept in two different ways: the first class relies on strings, while the second leverages booleans. Despite being two different types, both act the same way, so both can be used to build a Room object.",
"door = Door()\nbool_door = BooleanDoor()\nroom = Room(door)\nbool_room = Room(bool_door)\n\nroom.open()\nprint(room.is_open())\n\nroom.close()\nprint(room.is_open())\n\nbool_room.open()\nprint(bool_room.is_open())\n\nbool_room.close()\nprint(bool_room.is_open())",
"File Like Us\nFile-like objects are a concrete and very useful example of polymorphism in Python. A file-like object is a class (or the instance of a class) that acts like a file, i.e. it provides those methods a file object exposes.\nSay for example that you code a class that parses an XML tree, and that you expect the XML code to be contained in a file. So your class accepts a file in its __init__() method, and reads the content from it\n``` python\nclass XMLReader:\n def init(xmlfile):\n xmlfile.open()\n self.content = xmlfile.read()\n xmlfile.close()\ndef other(self):\n pass\n\n```\nThe class works well until your application shall be modified to receive XML content from a network stream. To use the class without modifying it you shall write the stream in a temporary file and load this latter, but this sounds a little overkill. So you plan to change the class to accept a string, but this way you shall change every single code that uses the class to read a file, since now you shall open, read and close the file on your own, outside the class.\nPolymorphism offers a better way. Why not storing the incoming stream inside an object that acts like a file, even if it is not an actual one? If you check the io module you will find that such an object has been already invented and provided in the standard Python library.\nOther very useful file-like classes are those contained in the gzip, bz2, and zipfile modules (just to name some of the most used), which provide objects that allow you to manage compressed files just like plain files, hiding the decompression/compression machinery.\nUnforgiveness\nEAFP is a Python acronym that stands for easier to ask for forgiveness than permission. 
This coding style is highly pushed in the Python community because it completely relies on the duck typing concept, thus fitting well with the language philosophy.\nThe concept behind EAFP is fairly easy: instead of checking if an object has a given attribute or method before actually accessing or using it, just trust the object to provide what you need and manage the error case. This can be probably better understood by looking at some code. According to EAFP, instead of writing\npython\nif hasattr(someobj, 'open'):\n [...]\nelse:\n [...]\nyou shall write\npython\ntry:\n someobj.open()\n [...]\nexcept AttributeError:\n [...]\nAs you can see, the second snippet directly uses the method and deals with the possible AttributeError exception (by the way: managing exceptions is one of the top Black Magic Topics in Python, more on it in a future post. A very quick preview: I think we may learn something from Erlang - check this).\nWhy is this coding style pushed so much in the Python community? I think the main reason is that through EAFP you think polymorphically: you are not interested in knowing if the object has the open attribute, you are interested in knowing if the object can satisfy your request, that is to perform the open() method call.\nMovie Trivia\nSection titles come from the following movies: Good Morning, Vietnam (1987), Die Hard (1988), Spies Like Us (1985), Unforgiven (1992).\nSources\nYou will find a lot of documentation in this Reddit post. Most of the information contained in this series come from those sources.\nFeedback\nFeel free to use the blog Google+ page to comment the post. The GitHub issues page is the best place to submit corrections."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
stallmanifold/cs229-machine-learning-stanford-fall-2016
|
src/homework1/homework1_5b.ipynb
|
apache-2.0
|
[
"CS229 Machine Learning Exercise Homework 1 Problem 5b",
"import numpy as np\nimport numpy.linalg as linalg\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as clrs",
"Part i\nThe linear regression function below implements linear regression using the normal equations. We could also use some form of gradient descent to do this.",
"def linear_regression(X, y):\n return linalg.inv(X.T.dot(X)).dot(X.T).dot(y)",
"Here we just load some data and get it into a form we can use.",
"# Load the data\ndata = np.loadtxt('quasar_train.csv', delimiter=',')\n\nwavelengths = data[0]\nfluxes = data[1]\nones = np.ones(fluxes.size)\n\ndf_ones = pd.DataFrame(ones, columns=['xint'])\ndf_wavelengths = pd.DataFrame(wavelengths, columns=['wavelength'])\ndf_fluxes = pd.DataFrame(fluxes, columns=['flux'])\n\ndf = pd.concat([df_ones, df_wavelengths, df_fluxes], axis=1)\n\nX = pd.concat([df['xint'], df['wavelength']], axis=1)\ny = df['flux']\nx = X['wavelength']",
"Performing linear regression on the first training example",
"theta = linear_regression(X, y)",
"yields the following parameters:",
"print('theta = {}'.format(theta))",
"Now we wish to display the results for part i. Evaluate the model",
"p = np.poly1d([theta[1], theta[0]])\nz = np.linspace(x[0], x[x.shape[0]-1])",
"at a set of design points. The data set and the results of linear regression come in the following figure.",
"fig = plt.figure(1, figsize=(12,10))\nplt.xlabel('Wavelength (Angstroms)')\nplt.ylabel('Flux (Watts/m^2')\nplt.xticks(np.linspace(x[0], x[x.shape[0]-1], 10))\nplt.yticks(np.linspace(-1, 9, 11))\nscatter = plt.scatter(x, y, marker='+', color='purple', label='quasar data')\nreg = plt.plot(z, p(z), color='blue', label='regression line')\nplt.legend()",
"The following plot displays the results.",
"plt.show()",
"Part ii\nFor the next part, we perform locally weighted linear regression on the data set with a Gaussian weighting function. We use the parameters that follow.",
"import homework1_5b as hm1b\nimport importlib as im\n\nXtrain = X.as_matrix()\nytrain = y.as_matrix()\ntau = 5",
"Training the model yields the following results. Here we place the results into the same plot at the data in part i. The figure shows that the weighted linear regression algorithm best fits the data, especially in the region around wavelength ~1225 Angstroms.",
"W = hm1b.weightM(tau)(Xtrain)\nm = hm1b.LWLRModel(W, Xtrain, ytrain)\n\nz = np.linspace(x[0], x[x.shape[0]-1], 200)\nfig = plt.figure(1, figsize=(12,10))\nplt.xlabel('Wavelength (Angstroms)')\nplt.ylabel('Flux (Watts/m^2')\nplt.xticks(np.arange(x[0], x[x.shape[0]-1]+50, step=50))\nplt.yticks(np.arange(-1, 9, step=0.5))\nplot1 = plt.scatter(x, y, marker='+', color='black', label='quasar data')\nplot2 = plt.plot(z, p(z), color='blue', label='regression line')\nplot3 = plt.plot(z, m(z), color='red', label='tau = 5')\nplt.legend()\n\nplt.show()",
"Part III\nHere we perform the same regression for more values of tau and plot the results.",
"taus = [1,5,10,100,1000]\nmodels = {}\n\nfor tau in taus:\n W = hm1b.weightM(tau)(Xtrain)\n models[tau] = hm1b.LWLRModel(W, Xtrain, ytrain)\n\nz = np.linspace(x[0], x[x.shape[0]-1], 200)\nfig = plt.figure(1, figsize=(12,10))\nplt.xlabel('Wavelength (Angstroms)')\nplt.ylabel('Flux (Watts/m^2')\nplt.xticks(np.arange(x[0], x[x.shape[0]-1]+50, step=50))\nplt.yticks(np.arange(-2, 9, step=0.5))\nplot1 = plt.scatter(x, y, marker='+', color='k', label='quasar data')\nplot4 = plt.plot(z, models[1](z), color='red', label='tau = 1') \nplot4 = plt.plot(z, models[5](z), color='blue', label='tau = 5') \nplot5 = plt.plot(z, models[10](z), color='green', label='tau = 10') \nplot6 = plt.plot(z, models[100](z), color='magenta', label='tau = 100') \nplot7 = plt.plot(z, models[1000](z), color='cyan', label='tau = 1000') \nplt.legend()\n\nplt.show()",
"As tau increases, the curve flattens out and approaches unweighted linear regression, so the bias increases. As tau decreases, the fit follows individual data points more closely and the variance increases, eventually overfitting the data. This is the bias-variance tradeoff in action."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/mohc/cmip6/models/ukesm1-0-ll/atmos.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: MOHC\nSource ID: UKESM1-0-LL\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:15\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-ll', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified, describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Fluorinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISSCP top height estimation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISSCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propagation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propagation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rcrehuet/Python_for_Scientists_2017
|
notebooks/extras/Numpy arrays. Data manipulation.ipynb
|
gpl-3.0
|
[
"Numpy arrays. Data manipulation\nIn this notebook we are going to work with some numerical data that we need to re-format.\nProfasi is a Monte Carlo code for protein simulation. It can run Parallel Tempering simulations where each replica runs on a processor and exchanges temperatures with the other replicas. It is done this way because it is more efficient to exchange temperatures than to exchange molecular coordinates between processors. The problem is that in the output files the temperatures are mixed. Usually one needs the data for all the sampled molecules at the same temperature.\nThe above explanation is just to say that we need to reorder the data from different files and generate re-ordered files. All our input files are called rt and each resides in a directory called ni where i corresponds to the processor. If the simulation was run with 16 processors i will go from 0 to 15.\nThe rt file is a text file that contains just columns of numbers, separated by spaces. Here is what it looks like:",
"!head ../../data/profasi/n0/rt",
"The first column indicates the iteration or cycle, the second the temperature. The remaining columns are energy components and other observables. The temperature is an integer. In a different file one can find the conversion to kelvin.\nWe start with some setup...",
"import numpy as np\nimport glob",
"Use glob.glob to get a list of all the input rt files.",
"files = glob.glob('../../data/profasi/n*/rt')",
"We now read all the rt files into memory. If they were too big, we should think of a more efficient way to process them (maybe using memmap or pytables). In our case they are small enough.\nAs these are numeric files, we use loadtxt to automatically generate an array. As different files will generate different arrays, we collect them in a list that we finally transform into an array.",
"all_enes = []\nfor filein in files:\n print(\"Reading..... \", filein)\n all_enes.append(np.loadtxt(filein))\nall_enes = np.asarray(all_enes)",
"This is the shape of the resulting array:",
"all_enes.shape",
"Now we need to reshape it so that it contains 2 dimensions, with all the rows and the 14 columns:",
"all_enes=all_enes.reshape((-1,all_enes.shape[2]))",
"Alternatively, we could have concatenated all the rows into the first loaded array. Here is a way to do that:",
"all_enes = None\nfor filein in files:\n print(\"Reading..... \", filein)\n if all_enes is not None:\n all_enes= np.r_[all_enes, np.loadtxt(filein)]\n else:\n all_enes= np.loadtxt(filein)\n",
"Now we need to get the temperatures. We know that there are as many temperatures as nodes, so this would work:\ntemperatures = np.arange(len(files))\n\nHowever, imagine that for some purpose we did not process all the ni directories, but only a fraction of them. We can still get all the temperatures from the rt files. They correspond to the 2nd column. A simple way to get the temperatures is with a set.",
"temperatures = set(all_enes[:,1])",
"We can also use np.unique to get the unique values of an array or part of it. A little bit more efficient... (check it with %timeit).",
"temperatures = np.unique(all_enes[:,1])\n%timeit set(all_enes[:,1])\n%timeit np.unique(all_enes[:,1])",
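As an aside (a minimal sketch on hypothetical toy data, not part of the notebook's rt files): besides speed, np.unique has the practical advantage of returning a sorted ndarray, whereas a Python set has no defined order. That matters here, because the temperatures are later cast to int and used as array indices.

```python
import numpy as np

# Toy data: the second column plays the role of the temperature index.
data = np.array([[0., 3.], [1., 1.], [2., 3.], [3., 0.]])

temps_unique = np.unique(data[:, 1])  # sorted ndarray of unique values
temps_set = set(data[:, 1])           # same values, but with no defined order

print(temps_unique)
```

Both collections hold the same values; np.unique simply adds the sorted-array guarantee on top.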
"Now we need to extract the energies from the array based on the temperature value, and create separate sub-arrays. Array elements can be selected with Boolean arrays. This is called fancy indexing.\nWe start by defining an empty array and then fill it in:",
"ene_temp = np.zeros_like(all_enes)\nene_temp = ene_temp.reshape([len(temperatures), -1, all_enes.shape[1]])\n\nfor ti in temperatures.astype(int):\n ene_temp[ti] = all_enes[all_enes[:,1]==ti, :]\n",
"The last step is to keep only those energy values that are beyond the equilibration point. So we only want to keep data from a certain point onwards. Let's plot the energy vs. iteration to see what it looks like. We'll plot temperature 5 as this is the lowest temperature (Profasi orders from high to low).",
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0)\n\nplt.plot(ene_temp[5,:,0], ene_temp[5,:,2])",
"Ugly, isn't it? The array is not ordered by iteration. The order can be seen here:",
"plt.plot(ene_temp[5,:,0],'x')",
"We have the structures at the correct temperature but still ordered from the 6 replicas that were running. Let's order them with respect to the first column. We cannot use sort here, because we want to use the order of the first column to order all the row elements. We can get that order with argsort and then apply it to the array.\nThe problem is that the sizes of the sorting order do not agree with the sizes of ene_temp. To solve that we need to do a trick which I personally find very cumbersome. The reason is we need to broadcast correctly the dimensions of order into ene_temp. It's simpler to understand if you see that for the first temperatures we want:\nene_temp[0, order[0]]\nene_temp[1, order[1]]\n\nand so on. It would seem that ene_temp[:, order[:]] should work, but this performs the broadcasting along the wrong axis. Instead, we need to transpose the first axis, because what we are actually doing is:\nene_temp[[0], order[0]]\nene_temp[[1], order[1]]\n\nAnd this can be done by creating the vector [[0], [1], [2]... which is done with: np.arange(ene_temp.shape[0])[:, np.newaxis].",
"order = ene_temp[:, :, 0].argsort()\nene_temp = ene_temp[np.arange(ene_temp.shape[0])[:, np.newaxis], order]\n\nplt.subplot(2,1,1)\nplt.plot(ene_temp[5,:,0],'x')\nplt.subplot(2,1,2)\nplt.plot(ene_temp[5,:,0], ene_temp[5,:,2])",
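The same indexing trick can be seen on a tiny, self-contained example (hypothetical toy data, not the Profasi arrays): each "temperature" slice is reordered by its own argsort, with an np.newaxis column vector doing the broadcasting described above.

```python
import numpy as np

# Toy stack: 2 "temperatures", 3 rows each; column 0 is the iteration index.
a = np.array([[[2., 9.], [0., 7.], [1., 8.]],
              [[1., 4.], [2., 6.], [0., 5.]]])

order = a[:, :, 0].argsort()                 # per-temperature row order, shape (2, 3)
rows = np.arange(a.shape[0])[:, np.newaxis]  # column vector [[0], [1]]
a_sorted = a[rows, order]                    # broadcasts to select whole rows

print(a_sorted[:, :, 0])  # each slice is now sorted by iteration: 0, 1, 2
```

The column vector pairs index 0 with order[0] and index 1 with order[1], exactly the ene_temp[[0], order[0]] pattern from the text.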
"Finally we can select and save the submatrix from iteration 2000 onwards:",
"np.save('energies_temperatures', ene_temp[ene_temp[:,:,0]>2000, :], )",
"Advanced Topic: optimizing with numba\nIn the previous section, if there were $N$ temperatures we cycled through the all_enes array $N$ times, which is not very efficient. We could potentially make it faster by running this in a single step. We create an empty array and fill it in with the correct values.\nWe first time our initial approach:",
"%%timeit\nfor ti in temperatures.astype(int):\n ene_temp[ti] = all_enes[all_enes[:,1]==ti, :]",
"Now we loop through all the rows and put each row into its correct first-axis position. We need to keep an array of the filled positions.",
"ene_temp2 = np.zeros_like(ene_temp)\nfilled = np.zeros(len(temperatures), np.int)\n\nfor row in all_enes:\n ti = int(row[1])\n ene_temp2[ti, filled[ti]] = row\n filled[ti] +=1",
"We check we are still getting the same result, and we time it:",
"np.all(ene_temp2==ene_temp)\n\n%%timeit\nfilled = np.zeros(len(temperatures), np.int)\nfor row in all_enes:\n ti = int(row[1])\n ene_temp2[ti, filled[ti]] = row\n filled[ti] +=1",
"Good (ironically)! Two orders of magnitude slower than the first approach... The reason is that in the previous approach we were using numpy's fast looping abilities, whereas now the loops are implemented in pure python and therefore are much slower.\nThis is the typical case where numba can increase the performance of such loops.",
"import numba",
"We first turn our loop into a function. We avoid creating arrays inside that function, as array creation cannot be optimized with numba. We test our approach and check that the timings are the same.",
"def get_temperatures(array_in, array_out, filled):\n for r in range(array_in.shape[0]):\n ti = int(array_in[r,1])\n for j in range(array_in.shape[1]):\n array_out[ti, filled[ti], j] = array_in[r,j]\n filled[ti] +=1\n return array_out\n\n%%timeit \nnum_temp = len(temperatures)\nm = all_enes.shape[0]\nn = all_enes.shape[1]\nm = m // num_temp\nene_temp = np.zeros((num_temp, m,n ))\nfilled = np.zeros(num_temp, np.int)\nget_temperatures(all_enes, ene_temp, filled)",
"Now we can pass this function to numba. The nopython option tells numba not to create object code, which is as slow as python code. That is why we created the arrays outside the function. We also check the timings.",
"numba_get_temperatures = numba.jit(get_temperatures,nopython=True)\n\n%%timeit \nnum_temp = len(temperatures)\nm = all_enes.shape[0]\nn = all_enes.shape[1]\nm = m // num_temp\nene_temp3 = np.zeros((num_temp, m,n ))\nfilled = np.zeros(num_temp, np.int)\nnumba_get_temperatures(all_enes, ene_temp3, filled)",
"Wow! Three orders of magnitude faster than the python version and one order faster than our original numpy code (with only 6 temperatures!).\nBut having to declare all arrays outside is ugly. Is there a workaround? Yes! Numba is clever enough to separate the loops from the array creation, and optimize the loops. This is called loop-lifting or loop-jitting. We need to remove the nopython option as part of the code will be object mode, but we see that it is as efficient as before.\nHere we also use a decorator instead of a function call. The results are exactly the same, it just gives a shorter syntax.",
"@numba.jit \ndef numba2_get_temperatures(array_in, num_temp):\n m = all_enes.shape[0]\n n = all_enes.shape[1]\n m = m // num_temp\n array_out = np.zeros((num_temp, m,n ))\n filled = np.zeros(num_temp, np.int)\n for r in range(array_in.shape[0]):\n ti = int(array_in[r,1])\n for j in range(array_in.shape[1]):\n array_out[ti, filled[ti], j] = array_in[r,j]\n filled[ti] +=1\n return array_out\n\n\n%%timeit \nnum_temp = len(temperatures)\nene_temp4 = numba2_get_temperatures(all_enes, num_temp)\n\nnp.all(numba2_get_temperatures(all_enes, num_temp)==ene_temp)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/recommendation_systems/solutions/featurization.ipynb
|
apache-2.0
|
[
"Using side features: feature preprocessing\nLearning Objectives\n\nTurning categorical features into embeddings.\nNormalizing continuous features.\nProcessing text features.\nBuild a User and Movie model.\n\nIntroduction\nOne of the great advantages of using a deep learning framework to build recommender models is the freedom to build rich, flexible feature representations.\nThe first step in doing so is preparing the features, as raw features will usually not be immediately usable in a model.\nFor example:\n\nUser and item ids may be strings (titles, usernames) or large, noncontiguous integers (database IDs).\nItem descriptions could be raw text.\nInteraction timestamps could be raw Unix timestamps.\n\nThese need to be appropriately transformed in order to be useful in building models:\n\nUser and item ids have to be translated into embedding vectors: high-dimensional numerical representations that are adjusted during training to help the model predict its objective better.\nRaw text needs to be tokenized (split into smaller parts such as individual words) and translated into embeddings.\nNumerical features need to be normalized so that their values lie in a small interval around 0.\n\nFortunately, by using TensorFlow we can make such preprocessing part of our model rather than a separate preprocessing step. This is not only convenient, but also ensures that our pre-processing is exactly the same during training and during serving. This makes it safe and easy to deploy models that include even very sophisticated pre-processing.\nIn this notebook, we are going to focus on recommenders and the preprocessing we need to do on the MovieLens dataset. 
If you're interested in a larger tutorial without a recommender system focus, have a look at the full Keras preprocessing guide.\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.\nThe MovieLens dataset\nLet's first have a look at what features we can use from the MovieLens dataset:",
"!pip install -q --upgrade tensorflow-datasets",
"Please re-run the above cell if you get any incompatibility warnings or errors.",
"import pprint\n\nimport tensorflow_datasets as tfds\n\nratings = tfds.load(\"movielens/100k-ratings\", split=\"train\")\n\nfor x in ratings.take(1).as_numpy_iterator():\n pprint.pprint(x)",
"There are a couple of key features here:\n\nMovie title is useful as a movie identifier.\nUser id is useful as a user identifier.\nTimestamps will allow us to model the effect of time.\n\nThe first two are categorical features; timestamps are a continuous feature.\nTurning categorical features into embeddings\nA categorical feature is a feature that does not express a continuous quantity, but rather takes on one of a set of fixed values.\nMost deep learning models express these features by turning them into high-dimensional vectors. During model training, the value of that vector is adjusted to help the model predict its objective better.\nFor example, suppose that our goal is to predict which user is going to watch which movie. To do that, we represent each user and each movie by an embedding vector. Initially, these embeddings will take on random values - but during training, we will adjust them so that embeddings of users and the movies they watch end up closer together.\nTaking raw categorical features and turning them into embeddings is normally a two-step process:\n\nFirstly, we need to translate the raw values into a range of contiguous integers, normally by building a mapping (called a \"vocabulary\") that maps raw values (\"Star Wars\") to integers (say, 15).\nSecondly, we need to take these integers and turn them into embeddings.\n\nDefining the vocabulary\nThe first step is to define a vocabulary. We can do this easily using Keras preprocessing layers.",
"import numpy as np\nimport tensorflow as tf\n\nmovie_title_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()",
"The layer itself does not have a vocabulary yet, but we can build it using our data.",
"movie_title_lookup.adapt(ratings.map(lambda x: x[\"movie_title\"]))\n\nprint(f\"Vocabulary: {movie_title_lookup.get_vocabulary()[:3]}\")",
"Once we have this we can use the layer to translate raw tokens to embedding ids:",
"movie_title_lookup([\"Star Wars (1977)\", \"One Flew Over the Cuckoo's Nest (1975)\"])",
"Note that the layer's vocabulary includes one (or more!) unknown (or \"out of vocabulary\", OOV) tokens. This is really handy: it means that the layer can handle categorical values that are not in the vocabulary. In practical terms, this means that the model can continue to learn about and make recommendations even using features that have not been seen during vocabulary construction.\nUsing feature hashing\nIn fact, the StringLookup layer allows us to configure multiple OOV indices. If we do that, any raw value that is not in the vocabulary will be deterministically hashed to one of the OOV indices. The more such indices we have, the less likely it is that two different raw feature values will hash to the same OOV index. Consequently, if we have enough such indices the model should be able to train about as well as a model with an explicit vocabulary without the disadvantage of having to maintain the token list.\nWe can take this to its logical extreme and rely entirely on feature hashing, with no vocabulary at all. This is implemented in the tf.keras.layers.experimental.preprocessing.Hashing layer.",
"# We set up a large number of bins to reduce the chance of hash collisions.\nnum_hashing_bins = 200_000\n\nmovie_title_hashing = tf.keras.layers.experimental.preprocessing.Hashing(\n num_bins=num_hashing_bins\n)",
"We can do the lookup as before without the need to build vocabularies:",
"movie_title_hashing([\"Star Wars (1977)\", \"One Flew Over the Cuckoo's Nest (1975)\"])",
"Defining the embeddings\nNow that we have integer ids, we can use the Embedding layer to turn those into embeddings.\nAn embedding layer has two dimensions: the first dimension tells us how many distinct categories we can embed; the second tells us how large the vector representing each of them can be.\nWhen creating the embedding layer for movie titles, we are going to set the first value to the size of our title vocabulary (or the number of hashing bins). The second is up to us: the larger it is, the higher the capacity of the model, but the slower it is to fit and serve.",
"# Turns positive integers (indexes) into dense vectors of fixed size.\n# TODO\nmovie_title_embedding = tf.keras.layers.Embedding(\n # Let's use the explicit vocabulary lookup.\n input_dim=movie_title_lookup.vocab_size(),\n output_dim=32\n)",
"We can put the two together into a single layer which takes raw text in and yields embeddings.",
"movie_title_model = tf.keras.Sequential([movie_title_lookup, movie_title_embedding])",
"Just like that, we can directly get the embeddings for our movie titles:",
"movie_title_model([\"Star Wars (1977)\"])",
"We can do the same with user embeddings:",
"user_id_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()\nuser_id_lookup.adapt(ratings.map(lambda x: x[\"user_id\"]))\n\nuser_id_embedding = tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32)\n\nuser_id_model = tf.keras.Sequential([user_id_lookup, user_id_embedding])",
"Normalizing continuous features\nContinuous features also need normalization. For example, the timestamp feature is far too large to be used directly in a deep model:",
"for x in ratings.take(3).as_numpy_iterator():\n print(f\"Timestamp: {x['timestamp']}.\")",
"We need to process it before we can use it. While there are many ways in which we can do this, discretization and standardization are two common ones.\nStandardization\nStandardization rescales features to normalize their range by subtracting the feature's mean and dividing by its standard deviation. It is a common preprocessing transformation.\nThis can be easily accomplished using the tf.keras.layers.experimental.preprocessing.Normalization layer:",
"# Feature-wise normalization of the data.\n# TODO\ntimestamp_normalization = tf.keras.layers.experimental.preprocessing.Normalization()\ntimestamp_normalization.adapt(ratings.map(lambda x: x[\"timestamp\"]).batch(1024))\n\nfor x in ratings.take(3).as_numpy_iterator():\n print(f\"Normalized timestamp: {timestamp_normalization(x['timestamp'])}.\")",
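For intuition, the same transformation can be written out with NumPy alone. This is a sketch on made-up timestamp-like values, not the `Normalization` layer itself:

```python
import numpy as np

# Hypothetical raw epoch timestamps: far too large to feed a deep model directly.
x = np.array([8.74e8, 8.80e8, 8.93e8])

# Standardization: subtract the mean, divide by the standard deviation.
z = (x - x.mean()) / x.std()
```

After the transform the feature has zero mean and unit variance, which is exactly what the adapted `Normalization` layer produces.
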
"Discretization\nAnother common transformation is to turn a continuous feature into a number of categorical features. This makes good sense if we have reasons to suspect that a feature's effect is non-continuous.\nTo do this, we first need to establish the boundaries of the buckets we will use for discretization. The easiest way is to identify the minimum and maximum value of the feature, and divide the resulting interval equally:",
"max_timestamp = ratings.map(lambda x: x[\"timestamp\"]).reduce(\n tf.cast(0, tf.int64), tf.maximum).numpy().max()\nmin_timestamp = ratings.map(lambda x: x[\"timestamp\"]).reduce(\n np.int64(1e9), tf.minimum).numpy().min()\n\ntimestamp_buckets = np.linspace(\n min_timestamp, max_timestamp, num=1000)\n\nprint(f\"Buckets: {timestamp_buckets[:3]}\")",
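Under the hood, bucketization just maps each raw value to the index of the interval it falls in. A minimal NumPy sketch with made-up boundaries (illustrative only — the `Discretization` layer does the equivalent lookup at call time):

```python
import numpy as np

# Ten equal-width buckets over [0, 100) -- hypothetical boundaries.
boundaries = np.linspace(0, 100, num=11)

# np.digitize returns, for each value, the index of the bucket it lands in.
values = np.array([3.0, 55.0, 99.0])
bucket_ids = np.digitize(values, boundaries)
```

Each bucket id can then be fed to an `Embedding` layer just like a categorical token.
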
"Given the bucket boundaries we can transform timestamps into embeddings:",
"timestamp_embedding_model = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),\n tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32)\n])\n\nfor timestamp in ratings.take(1).map(lambda x: x[\"timestamp\"]).batch(1).as_numpy_iterator():\n print(f\"Timestamp embedding: {timestamp_embedding_model(timestamp)}.\") ",
"Processing text features\nWe may also want to add text features to our model. Usually, things like product descriptions are free form text, and we can hope that our model can learn to use the information they contain to make better recommendations, especially in a cold-start or long tail scenario.\nWhile the MovieLens dataset does not give us rich textual features, we can still use movie titles. This may help us capture the fact that movies with very similar titles are likely to belong to the same series.\nThe first transformation we need to apply to text is tokenization (splitting into constituent words or word-pieces), followed by vocabulary learning, followed by an embedding.\nThe Keras tf.keras.layers.experimental.preprocessing.TextVectorization layer can do the first two steps for us:",
"# Text vectorization layer.\n# TODO\ntitle_text = tf.keras.layers.experimental.preprocessing.TextVectorization()\ntitle_text.adapt(ratings.map(lambda x: x[\"movie_title\"]))",
"Let's try it out:",
"for row in ratings.batch(1).map(lambda x: x[\"movie_title\"]).take(1):\n print(title_text(row))",
"Each title is translated into a sequence of tokens, one for each piece we've tokenized.\nWe can check the learned vocabulary to verify that the layer is using the correct tokenization:",
"title_text.get_vocabulary()[40:45]",
"This looks correct: the layer is tokenizing titles into individual words.\nTo finish the processing, we now need to embed the text. Because each title contains multiple words, we will get multiple embeddings for each title. For use in a downstream model these are usually compressed into a single embedding. Models like RNNs or Transformers are useful here, but averaging all the words' embeddings together is a good starting point.\nPutting it all together\nWith these components in place, we can build a model that does all the preprocessing together.\nUser model\nThe full user model may look like the following:",
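The averaging step described above can be sketched in NumPy (toy token ids and a random embedding table, purely illustrative). Padding positions are masked out so they don't drag the mean down, mirroring what `Embedding(mask_zero=True)` followed by `GlobalAveragePooling1D` does:

```python
import numpy as np

rng = np.random.default_rng(0)

token_ids = np.array([4, 7, 2, 0, 0])      # padded sequence; 0 = padding id
embedding_table = rng.random((10, 3))      # 10 tokens, 3-dim embeddings

mask = token_ids != 0                      # True only for real tokens
word_vectors = embedding_table[token_ids]  # shape (5, 3)

# One vector per title: average the embeddings of the real tokens only.
title_vector = word_vectors[mask].mean(axis=0)
```
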
"class UserModel(tf.keras.Model):\n \n def __init__(self):\n super().__init__()\n\n self.user_embedding = tf.keras.Sequential([\n user_id_lookup,\n tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32),\n ])\n self.timestamp_embedding = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),\n tf.keras.layers.Embedding(len(timestamp_buckets) + 2, 32)\n ])\n self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()\n\n def call(self, inputs):\n\n # Take the input dictionary, pass it through each input layer,\n # and concatenate the result.\n return tf.concat([\n self.user_embedding(inputs[\"user_id\"]),\n self.timestamp_embedding(inputs[\"timestamp\"]),\n self.normalized_timestamp(inputs[\"timestamp\"])\n ], axis=1)",
"Let's try it out:",
"# TODO\nuser_model = UserModel()\n\nuser_model.normalized_timestamp.adapt(\n ratings.map(lambda x: x[\"timestamp\"]).batch(128))\n\nfor row in ratings.batch(1).take(1):\n print(f\"Computed representations: {user_model(row)[0, :3]}\")",
"Movie model\nWe can do the same for the movie model:",
"class MovieModel(tf.keras.Model):\n \n def __init__(self):\n super().__init__()\n\n max_tokens = 10_000\n\n self.title_embedding = tf.keras.Sequential([\n movie_title_lookup,\n tf.keras.layers.Embedding(movie_title_lookup.vocab_size(), 32)\n ])\n self.title_text_embedding = tf.keras.Sequential([\n tf.keras.layers.experimental.preprocessing.TextVectorization(max_tokens=max_tokens),\n tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),\n # We average the embedding of individual words to get one embedding vector\n # per title.\n tf.keras.layers.GlobalAveragePooling1D(),\n ])\n\n def call(self, inputs):\n return tf.concat([\n self.title_embedding(inputs[\"movie_title\"]),\n self.title_text_embedding(inputs[\"movie_title\"]),\n ], axis=1)",
"Let's try it out:",
"# TODO\nmovie_model = MovieModel()\n\nmovie_model.title_text_embedding.layers[0].adapt(\n ratings.map(lambda x: x[\"movie_title\"]))\n\nfor row in ratings.batch(1).take(1):\n print(f\"Computed representations: {movie_model(row)[0, :3]}\")",
"Next steps\nWith the two models above we've taken the first steps to representing rich features in a recommender model: to take this further and explore how these can be used to build an effective deep recommender model, take a look at our Deep Recommenders tutorial."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
infotranecon/IEtools
|
IEtools Demo.ipynb
|
mit
|
[
"This is a demo notebook of some of the features of IEtools.py.\nIEtools includes tools to read FRED economic data\nhttps://fred.stlouisfed.org/\nin either csv or xls formats. IEtools also includes tools for\nfitting information equilibrium parameters and constructing\ndynamic equilibrium models (see Dynamic Equilibrium Examples.ipynb).\nThe basic information equilibrium condition between two variables\nA and B is given by\np = dA/dB = k A/B\nwith information transfer (IT) index k. In 'general equilibrium'\n(where neither A nor B is changing faster than the other), we have the\nsolution\nA = a B^k\nor\nlog(A) = k log(B) + c\nIEtools has a function to solve for these parameters. In the\ninformation equilibrium condition, p = dA/dB is the abstract price.\nIn 'general equilibrium', we have\np = k a B^(k-1)\nThe continuously compounded growth rates of these variables are\nalso related. If the growth rate of A is g_a, of B is g_b,\nand of p is g_p, then:\ng_a = k g_b\ng_p = (k-1) g_b\ng_p = g_a - g_b\nThis notebook shows some of these results for GDP and labor supply,\nshowing Okun's law along the way.\nIEtools was tested using Python 3.6 as part of the Anaconda 4.4.0\npackage. All dependencies are included in Anaconda 4.4.0.",
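The growth-rate relations above are easy to check numerically on synthetic data. This sketch uses arbitrary values of k, a, and g_b and does not depend on IEtools:

```python
import numpy as np

# A = a * B**k in 'general equilibrium', with B growing at a constant rate.
k, a = 1.5, 2.0
t = np.linspace(0.0, 10.0, 1001)
B = np.exp(0.03 * t)             # g_b = 3% continuously compounded
A = a * B ** k                   # implies g_a = k * g_b
p = k * a * B ** (k - 1)         # abstract price, implies g_p = (k - 1) * g_b

# Continuously compounded growth rates: d(log X)/dt.
dt = t[1] - t[0]
g_b = np.gradient(np.log(B), dt)
g_a = np.gradient(np.log(A), dt)
g_p = np.gradient(np.log(p), dt)
```

All three relations (g_a = k g_b, g_p = (k-1) g_b, g_p = g_a - g_b) hold term by term on the synthetic series.
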
"import numpy as np\nimport IEtools\nimport pylab as pl\n%pylab inline",
"Read in the files",
"filename1='C:/econdata/GDP.xls'\nfilename2='C:/econdata/PAYEMS.xls'\nfilename3='C:/econdata/CPIAUCSL.xls'\n\ngdp = IEtools.FREDxlsRead(filename1)\nlab = IEtools.FREDxlsRead(filename2)\ncpi = IEtools.FREDxlsRead(filename3)",
"Here's a plot of nominal GDP",
"pl.plot(gdp['interp'].x,gdp['interp'](gdp['interp'].x))\npl.ylabel(gdp['name']+' [G$]')\npl.yscale('log')\npl.show()",
"And here is nominal GDP growth",
"pl.plot(gdp['growth'].x,gdp['growth'](gdp['growth'].x))\npl.ylabel(gdp['name']+' growth [%]')\npl.show()",
"Fit information equilibrium parameters\nHere we take the information equilibrium model with A = nominal GDP, B = labor employed (PAYEMS), and p = CPI (all items). First we solve for the IT index and show the model. Note: this is a simple model with limited accuracy.",
"result = IEtools.fitGeneralInfoEq(gdp['data'],lab['data'], guess=[1.0,0.0])\nprint(result)\nprint('IT index = ',np.round(result.x[0],decimals=2))\ntime=gdp['interp'].x\npl.plot(time,np.exp(result.x[0]*np.log(lab['interp'](time))+result.x[1]),label='model')\npl.plot(time,gdp['interp'](time),label='data')\npl.yscale('log')\npl.ylabel(gdp['name']+' [G$]')\npl.legend()\npl.show()",
"And we can show the relationship between the growth rates (i.e. compute the inflation rate equal to the growth rate of the CPI)",
"time=gdp['data'][:,0]\n\nder1=gdp['growth'](time)-lab['growth'](time)\nder2=cpi['growth'](time)\npl.plot(time,der1,label='model')\npl.plot(time,der2,label='data')\npl.legend()\npl.show()",
"Additionally, rearranging the terms and looking at the growth rate, we can show a form of Okun's law. Since g_p = g_a - g_b, we can say g_b = g_a - g_p. The right hand side of the last equation when A is nominal GDP and p is the CPI is the CPI-deflated real GDP growth. Okun's law is an inverse relationship between the change in unemployment and RGDP growth, but in our case we will look at the direct relationship of RGDP growth and change in employment (PAYEMS).",
"time=gdp['data'][:,0]\n\nder1=gdp['growth'](time)-cpi['growth'](time)\nder2=lab['growth'](time)\npl.plot(time,der1,label='model')\npl.plot(time,der2,label='data')\npl.legend()\npl.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
paris-saclay-cds/python-workshop
|
Day_2_Software_engineering_best_practices/solutions/02_docstring.ipynb
|
bsd-3-clause
|
[
"import pandas\nimport numpy\nfrom matplotlib import pyplot\n%matplotlib inline",
"IO: Reading and preprocess the data\nWe can define a function which will read the data and process them.",
"def read_spectra(path_csv):\n \"\"\"Read and parse data in pandas DataFrames.\n \n Parameters\n ----------\n path_csv : str\n Path to the CSV file to read.\n \n Returns\n -------\n s : pandas DataFrame, shape (n_spectra, n_freq_point)\n DataFrame containing all Raman spectra.\n \n c : pandas Series, shape (n_spectra,)\n Series containing the concentration of the molecule.\n \n m : pandas Series, shape (n_spectra,)\n Series containing the type of chemotherapeutic agent.\n \n \"\"\"\n s = pandas.read_csv(path_csv)\n c = s['concentration']\n m = s['molecule']\n s = s['spectra']\n x = []\n for spec in s:\n x.append(numpy.fromstring(spec[1:-1], sep=','))\n s = pandas.DataFrame(x)\n \n return s, c, m\n\n# read the frequency and get a pandas serie\nf = pandas.read_csv('data/freq.csv')['freqs']\n\n# read all data for training\nfilenames = ['data/spectra_{}.csv'.format(i)\n for i in range(4)]\n\nstot = []\nc = []\nm = []\nfor filename in filenames:\n s_tmp, c_tmp, m_tmp = read_spectra(filename)\n stot.append(s_tmp)\n c.append(c_tmp)\n m.append(m_tmp)\n\nstot = pandas.concat(stot)\nc = pandas.concat(c)\nm = pandas.concat(m)",
"Plot helper functions\nWe can create two functions: (i) to plot all spectra and (ii) plot the mean spectra with the std intervals.\nWe will make a \"private\" function which will be used by both plot types.",
"def _apply_axis_layout(ax, title):\n \"\"\"Apply despine style and add labels to axis.\"\"\"\n ax.set_xlabel('Frequency')\n ax.set_ylabel('Concentration')\n ax.set_title(title)\n ax.spines['top'].set_visible(False)\n ax.spines['right'].set_visible(False)\n ax.get_xaxis().tick_bottom()\n ax.get_yaxis().tick_left()\n ax.spines['left'].set_position(('outward', 10))\n ax.spines['bottom'].set_position(('outward', 10))\n\ndef plot_spectra(f, s, title):\n \"\"\"Plot a bunch of Raman spectra.\n \n Parameters\n ----------\n f : pandas Series, shape (n_freq_points,)\n Frequencies for which the Raman spectra were acquired.\n \n s : pandas DataFrame, shape (n_spectra, n_freq_points)\n DataFrame containing all Raman spectra.\n \n title : str\n Title added to the plot.\n \n Returns\n -------\n None\n \n \"\"\"\n fig, ax = pyplot.subplots()\n ax.plot(f, s.T)\n _apply_axis_layout(ax, title)\n \ndef plot_spectra_by_type(f, s, classes, title):\n \"\"\"Plot mean spectrum with its variance for a given class.\n \n Parameters\n ----------\n f : pandas Series, shape (n_freq_points,)\n Frequencies for which the Raman spectra were acquired.\n \n s : pandas DataFrame, shape (n_spectra, n_freq_points)\n DataFrame containing all Raman spectra.\n \n classes : array-like, shape (n_classes,)\n Array containing the different spectra classes which will be plotted.\n \n title : str\n Title added to the plot.\n \n Returns\n -------\n None\n \n \"\"\"\n fig, ax = pyplot.subplots()\n for c_type in numpy.unique(classes):\n i = numpy.nonzero(classes == c_type)[0]\n ax.plot(f, numpy.mean(s.iloc[i], axis=0), label=c_type)\n ax.fill_between(f, numpy.mean(s.iloc[i], axis=0) + numpy.std(s.iloc[i], axis=0), numpy.mean(s.iloc[i], axis=0) - numpy.std(s.iloc[i], axis=0), alpha=0.2)\n _apply_axis_layout(ax, title)\n ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))\n\nplot_spectra(f, stot, 'All training spectra')\n\nplot_spectra_by_type(f, stot, m, 'Mean spectra in function of the molecules')\n\nplot_spectra_by_type(f, stot, c, 'Mean spectra in function of the concentrations')",
"Reusability for new data:",
"s4, c4, m4 = read_spectra('data/spectra_4.csv')\n\nplot_spectra(f, stot, 'All training spectra')\nplot_spectra_by_type(f, s4, m4, 'Mean spectra in function of the molecules')\nplot_spectra_by_type(f, s4, c4, 'Mean spectra in function of the concentrations')",
"Training and testing a machine learning model for classification",
"def plot_cm(cm, classes, title):\n \"\"\"Plot a confusion matrix.\n \n Parameters\n ----------\n cm : ndarray, shape (n_classes, n_classes)\n Confusion matrix.\n \n classes : array-like, shape (n_classes,)\n Array containing the different spectra classes used in the\n classification problem.\n \n title : str\n Title added to the plot.\n \n Returns\n -------\n None\n \n \"\"\"\n import itertools\n fig, ax = pyplot.subplots()\n pyplot.imshow(cm, interpolation='nearest', cmap='bwr')\n pyplot.title(title)\n pyplot.colorbar()\n tick_marks = numpy.arange(len(numpy.unique(classes)))\n pyplot.xticks(tick_marks, numpy.unique(classes), rotation=45)\n pyplot.yticks(tick_marks, numpy.unique(classes))\n\n fmt = 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n pyplot.text(j, i, format(cm[i, j], fmt),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n pyplot.tight_layout()\n pyplot.ylabel('True label')\n pyplot.xlabel('Predicted label')\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import PCA\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.svm import LinearSVC\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.metrics import confusion_matrix\n\nfor clf in [RandomForestClassifier(random_state=0),\n LinearSVC(random_state=0)]:\n p = make_pipeline(StandardScaler(), PCA(n_components=100, random_state=0), clf)\n p.fit(stot, m)\n pred = p.predict(s4)\n plot_cm(confusion_matrix(m4, pred), p.classes_, 'Confusion matrix using {}'.format(clf.__class__.__name__))\n print('Accuracy score: {0:.2f}'.format(p.score(s4, m4)))",
"Training and testing a machine learning model for regression",
"def plot_regression(y_true, y_pred, title):\n \"\"\"Plot actual vs. predicted scatter plot.\n \n Parameters\n ----------\n y_true : array-like, shape (n_samples,)\n Ground truth (correct) target values.\n\n y_pred : array-like, shape (n_samples,)\n Estimated targets as returned by a regressor.\n\n title : str\n Title added to the plot.\n \n Returns\n -------\n None\n \n \"\"\"\n from sklearn.metrics import r2_score, median_absolute_error\n fig, ax = pyplot.subplots()\n ax.scatter(y_true, y_pred)\n ax.plot([0, 25000], [0, 25000], '--k')\n ax.set_ylabel('Target predicted')\n ax.set_xlabel('True Target')\n ax.set_title(title)\n ax.text(1000, 20000, r'$R^2$=%.2f, MAE=%.2f' % (\n r2_score(y_true, y_pred), median_absolute_error(y_true, y_pred)))\n ax.set_xlim([0, 25000])\n ax.set_ylim([0, 25000])\n\nfrom sklearn.decomposition import PCA\nfrom sklearn.linear_model import RidgeCV\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.pipeline import make_pipeline\n\nfor reg in [RidgeCV(), RandomForestRegressor(random_state=0)]:\n p = make_pipeline(PCA(n_components=100), reg)\n p.fit(stot, c)\n pred = p.predict(s4)\n plot_regression(c4, pred, 'Regression using {}'.format(reg.__class__.__name__))\n\ndef fit_params(data):\n \"\"\"Compute statistics to robustly scale data.\n \n Compute the median and the variance, i.e. the difference\n between the 75th and 25th percentiles.\n These statistics are used later to scale data.\n \n Parameters\n ----------\n data : pandas DataFrame, shape (n_spectra, n_freq_point)\n DataFrame containing all Raman spectra.\n \n Returns\n -------\n median : ndarray, shape (n_freq_point,)\n Median for each wavelength.\n \n variance : ndarray, shape (n_freq_point,)\n Variance (difference between the 75th and 25th\n percentiles) for each wavelength.\n \n \"\"\"\n median = numpy.median(data, axis=0)\n p_25 = numpy.percentile(data, 25, axis=0)\n p_75 = numpy.percentile(data, 75, axis=0)\n return median, (p_75 - p_25)\n\ndef transform(data, median, var_25_75):\n \"\"\"Scale data using robust estimators.\n \n Scale the data by subtracting the median and dividing by the\n variance, i.e. the difference between the 75th and 25th percentiles.\n \n Parameters\n ----------\n data : pandas DataFrame, shape (n_spectra, n_freq_point)\n DataFrame containing all Raman spectra.\n \n median : ndarray, shape (n_freq_point,)\n Median for each wavelength.\n \n var_25_75 : ndarray, shape (n_freq_point,)\n Variance (difference between the 75th and 25th\n percentiles) for each wavelength.\n \n Returns\n -------\n data_scaled : pandas DataFrame, shape (n_spectra, n_freq_point)\n DataFrame containing all scaled Raman spectra.\n \n \"\"\"\n return (data - median) / var_25_75\n\nmedian, var_25_75 = fit_params(stot)\nstot_scaled = transform(stot, median, var_25_75)\ns4_scaled = transform(s4, median, var_25_75)\n\nfrom sklearn.decomposition import PCA\nfrom sklearn.linear_model import RidgeCV\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.pipeline import make_pipeline\n\nfor reg in [RidgeCV(), RandomForestRegressor(random_state=0)]:\n p = make_pipeline(PCA(n_components=100), reg)\n p.fit(stot_scaled, c)\n pred = p.predict(s4_scaled)\n plot_regression(c4, pred, 'Regression using {}'.format(reg.__class__.__name__))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sytays/openanalysis
|
doc/OpenAnalysis/05 - Data Structures.ipynb
|
gpl-3.0
|
[
"Data Structures\nData structures are a concrete implementation of the specification provided by one or more particular abstract data types (ADT), which specify the operations that can be performed on a data structure and the computational complexity of those operations.\nDifferent kinds of data structures are suited for different kinds of applications, and some are highly specialized to specific tasks. For example, relational databases commonly use B-tree indexes for data retrieval, while compiler implementations usually use hash tables to look up identifiers.\nUsually, efficient data structures are key to designing efficient algorithms.\nStandard import statement",
"from openanalysis.data_structures import DataStructureBase, DataStructureVisualization\nimport gi.repository.Gtk as gtk # for displaying GUI dialogs",
"DataStructureBase is the base class for implementing data structures\nDataStructureVisualization is the class that visualizes data structures in GUI\nDataStructureBase class\nAny data structure, which is to be implemented, has to be derived from this class. Now we shall see data members and member functions of this class:\nData Members\n\nname - Name of the DS\nfile_path - Path to store output of DS operations\n\nMember Functions\n\n__init__(self, name, file_path) - Initializes DS with a name and a file_path to store the output\ninsert(self, item) - Inserts item into the DS\ndelete(self, item) - Deletes item from the DS, <br/>            if item is not present in the DS, throws a ValueError \nfind(self, item) - Finds the item in the DS\n<br/>          returns True if found, else returns False<br/>          similar to __contains__(self, item)\nget_root(self) - Returns the root (for graph and tree DS)\nget_graph(self, rt) - Gets the dict representation of the relation between parent and children (for graph and tree DS)\ndraw(self, nth=None) - Draws the output to visualize the operations performed on the DS<br/>             nth is used to pass an item to visualize a find operation\n\nDataStructureVisualization class\nThis class is used for visualizing data structures in a GUI (using GTK+ 3). Now we shall see data members and member functions of this class:\nData Members\n\nds - Any DS, which is an instance of DataStructureBase\n\nMember Functions\n\n__init__(self, ds) - Initializes ds with an instance of DS that is to be visualized\nrun(self) - Opens a GUI window to visualize the DS operations\n\nAn example ..... Binary Search Tree\nNow we shall implement the class BinarySearchTree",
"class BinarySearchTree(DataStructureBase): # Derived from DataStructureBase\n \n class Node: # Class for creating a node\n def __init__(self, data):\n self.left = None\n self.right = None\n self.data = data\n\n def __str__(self):\n return str(self.data)\n\n def __init__(self):\n DataStructureBase.__init__(self, \"Binary Search Tree\", \"t.png\") # Initializing with name and path\n self.root = None\n self.count = 0\n\n def get_root(self): # Returns root node of the tree\n return self.root\n\n def insert(self, item): # Inserts item into the tree\n newNode = BinarySearchTree.Node(item)\n insNode = self.root\n parent = None\n while insNode is not None:\n parent = insNode\n if insNode.data > newNode.data:\n insNode = insNode.left\n else:\n insNode = insNode.right\n if parent is None:\n self.root = newNode\n else:\n if parent.data > newNode.data:\n parent.left = newNode\n else:\n parent.right = newNode\n self.count += 1\n\n def find(self, item): # Finds if item is present in tree or not\n node = self.root\n while node is not None:\n if item < node.data:\n node = node.left\n elif item > node.data:\n node = node.right\n else:\n return True\n return False\n \n def min_value_node(self): # Returns the minimum value node\n current = self.root\n while current.left is not None:\n current = current.left\n return current\n\n def delete(self, item): # Deletes item from tree if present\n # else shows Value Error\n if item not in self:\n dialog = gtk.MessageDialog(None, 0, gtk.MessageType.ERROR,\n gtk.ButtonsType.CANCEL, \"Value not found ERROR\")\n dialog.format_secondary_text(\n \"Element not found in the %s\" % self.name)\n dialog.run()\n dialog.destroy()\n else:\n self.count -= 1\n if self.root.data == item and (self.root.left is None or self.root.right is None):\n if self.root.left is None and self.root.right is None:\n self.root = None\n elif self.root.data == item and self.root.left is None:\n self.root = self.root.right\n elif self.root.data == item and self.root.right is None:\n self.root = self.root.left\n return self.root\n if item < self.root.data:\n temp = self.root\n self.root = self.root.left\n temp.left = self.delete(item)\n self.root = temp\n elif item > self.root.data:\n temp = self.root\n self.root = self.root.right\n temp.right = self.delete(item)\n self.root = temp\n else:\n if self.root.left is None:\n return self.root.right\n elif self.root.right is None:\n return self.root.left\n temp = self.root\n self.root = self.root.right\n min_node = self.min_value_node()\n temp.data = min_node.data\n temp.right = self.delete(min_node.data)\n self.root = temp\n return self.root\n\n def get_graph(self, rt): # Populates self.graph with elements depending\n # upon the parent-children relation\n if rt is None:\n return\n self.graph[rt.data] = {}\n if rt.left is not None:\n self.graph[rt.data][rt.left.data] = {'child_status': 'left'}\n self.get_graph(rt.left)\n if rt.right is not None:\n self.graph[rt.data][rt.right.data] = {'child_status': 'right'}\n self.get_graph(rt.right)",
"Now, this program can be executed as follows:",
"DataStructureVisualization(BinarySearchTree).run()\n\nimport io\nimport base64\nfrom IPython.display import HTML\n\nvideo = io.open('../res/bst.mp4', 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))",
"Example File\nYou can see more examples at Github"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
henchc/Rediscovering-Text-as-Data
|
08-Classification/02-Underwood-Sellers.ipynb
|
mit
|
[
"%%capture\n!rm -rf data/\n!unzip data.zip -d data\nfrom datascience import *\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n%matplotlib inline\nplt.style.use('ggplot')",
"This notebook is designed to reproduce several findings from Ted Underwood and Jordan Sellers's article \"How Quickly Do Literary Standards Change?\" (draft (2015), forthcoming in <i>Modern Language Quarterly</i>). See especially Fig 1 and reported classifier accuracy (p 8).\nUnderwood and Sellers have made their corpus of poems and their code available here: https://github.com/tedunderwood/paceofchange\n\nUnderwood and Sellers",
"metadata_tb = Table.read_table('data/poemeta.csv', keep_default_na=False)\nmetadata_tb.show(5)",
"We see above a recept column that records each text's reception status. We want to look at reviewed and random, just as Underwood and Sellers did:",
"reception_mask = (metadata_tb['recept']=='reviewed') + (metadata_tb['recept']=='random')\nclf_tb = metadata_tb.where(reception_mask)\nclf_tb.show(5)",
"Great, now we've successfully subsetted our data!\nNow we're going to import Term Frequencies by Text. This script is specifically tailored to the format of \ntextual data made available from Hathi Trust. This consists of a series of spreadsheets, each containing book-level term frequencies.\nEach spreadsheet will become a row in our Document-Term Matrix.",
"# Create list that will contain a series of dictionaries\nfreqdict_list = []\n\n# Iterate through texts in our spreadsheet\nfor _id in clf_tb['docid']:\n\n # Each text will have its own dictionary\n # Keys are terms and values are frequencies\n termfreq_dict = {}\n \n # Open the given text's spreadsheet\n with open('data/poems/' + _id + '.poe.tsv', encoding='utf-8') as f:\n filelines = f.readlines()\n \n # Each line in the spreadsheet contains a unique term and its frequency\n for line in filelines:\n termfreq = line.split('\\t')\n \n # 'If' conditions throw out junk lines in the spreadsheet\n if len(termfreq) != 2:\n continue\n term, freq = termfreq[0], int(termfreq[1])\n if len(term)>0 and term[0].isalpha():\n \n # Create new entry in text's dictionary for the term\n termfreq_dict[term] = freq\n \n freqdict_list.append(termfreq_dict)",
"We can use sklearn's DictVectorizer to turn this into a DTM:",
"from sklearn.feature_extraction import DictVectorizer\n\ndv = DictVectorizer()\ndtm = dv.fit_transform(freqdict_list)\nterm_list = dv.feature_names_\n\ndtm",
"Feature Selection, Training, Prediction\nWe can put this DTM into a pandas dataframe, very similar to what you know from Table:",
"dtm_df = pd.DataFrame(dtm.toarray(), columns = term_list)\ndtm_df.head()",
"Let's add in the docid as the index instead of just a counter for the row:",
"dtm_df.set_index(clf_tb['docid'], inplace=True)\ndtm_df.head()",
"Inputs and Outputs\nUnderwood and Sellers create a unique model for each author in the corpus. They set aside a given author from the training set and then use the model to predict whether she was likely to be reviewed or not. Create a list of authors and an \"empty\" array in which to record probabilities.",
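The hold-out scheme itself is simple; here is a pure-Python sketch on toy `(doc_id, author)` records (the ids and author names are purely illustrative, not the real corpus):

```python
# Toy (doc_id, author) records -- hypothetical data.
records = [
    ('doc1', 'Browning'), ('doc2', 'Browning'),
    ('doc3', 'Tennyson'), ('doc4', 'Arnold'),
]

def leave_author_out(records, author):
    # All of the target author's texts go to the test fold;
    # everything else is available for training.
    train = [doc for doc, a in records if a != author]
    test = [doc for doc, a in records if a == author]
    return train, test

train_ids, test_ids = leave_author_out(records, 'Browning')
```

Looping this over every author gives one model (and one set of predictions) per author, which is what the cells below implement against the real metadata.
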
"authors = list(set(clf_tb['author']))\nprobabilities = np.zeros([len(clf_tb['docid'])])",
"Now we'll set up the regression model with sklearn.\nUnderwood and Sellers use a regularization constant ('C') to prevent overfitting, since this is a major concern when observing thousands of variables.",
"from sklearn.linear_model import LogisticRegression\nclf = LogisticRegression(C = 0.00007)",
"We'll then write a function set_author_aside that sifts out the target author's works from the dataset. Recall that Underwood and Sellers trained a model for each author when they were taken out.",
"def set_author_aside(author, tb, df):\n '''\n Set aside each author's texts from training set\n '''\n train_ids = tb.where(tb['author']!=author).column('docid')\n test_ids = tb.where(tb['author']==author).column('docid')\n \n train_df_ = df.loc[train_ids]\n test_df_ = df.loc[test_ids]\n \n train_targets_ = tb.where(tb['author']!=author)['recept']=='reviewed'\n \n return train_df_, test_df_, train_targets_",
"We also need to get out the most common words by their document frequency for each model. The function below, top_vocab_by_docfreq will do this each time we loop and create a new model:",
"def top_vocab_by_docfreq(df, num_words):\n '''\n Retrieve the most common words (by document frequency) for a given model\n '''\n docfreq_df = df > 0\n wordcolumn_sums = docfreq_df.sum()\n words_by_freq = wordcolumn_sums.sort_values(ascending=False)\n top_words = words_by_freq[:num_words]\n top_words_list = top_words.index.tolist()\n \n return top_words_list",
"We then need to normalize the frequencies:",
"def normalize_model(train_df_, test_df_, vocabulary):\n '''\n Normalize the model's term frequencies and put them into standard units\n '''\n # Select columns for only the most common words\n train_df_ = train_df_[vocabulary]\n test_df_ = test_df_[vocabulary]\n \n # Normalize each value by the sum of all values in its row\n train_df_ = train_df_.apply(lambda x: x/sum(x), axis=1)\n test_df_ = test_df_.apply(lambda x: x/sum(x), axis=1)\n \n # Get mean and stdev for each column\n train_mean = np.mean(train_df_)\n train_std = np.std(train_df_)\n\n # Transform each value to standard units for its column\n train_df_ = ( train_df_ - train_mean ) / train_std\n test_df_ = ( test_df_ - train_mean ) / train_std\n \n return train_df_, test_df_",
"Training and Prediction\nThe cell below will build a model for just 1 author, let's see how long it takes:",
"import time\nstart = time.time()\n\nfor author in authors[:1]:\n \n # Set aside each author's texts from training set\n train_df, test_df, train_targets = set_author_aside(author, clf_tb, dtm_df)\n\n # Retrieve the most common words (by document frequency) for a given model\n vocab_list = top_vocab_by_docfreq(train_df, 3200)\n \n # Normalize the model's term frequencies and put them into standard units\n train_df, test_df = normalize_model(train_df, test_df, vocab_list)\n \n # Learn the Logistic Regression over our model\n clf.fit(train_df, train_targets)\n \n # Some authors have more than one text in the corpus, so we retrieve all\n for _id in test_df.index.tolist():\n \n # Make prediction whether text was reviewed\n text = test_df.loc[_id]\n probability = clf.predict_proba([text])[0][1]\n\n # Record predictions in same order as the metadata spreadsheet\n _index = list(clf_tb.column('docid')).index(_id)\n probabilities[_index] = probability\n \n \nend = time.time()\nprint(end - start)",
"So how long is this going to take us?",
"len(authors) * (end-start) / 60",
"We don't have 100 minutes!\nEfficiency\nA lot of that time is figuring out the vocabulary:",
"len(term_list)",
"Let's read in a preprocessed vocabulary file. This contains only words that will be used in classification. This list was created by simply iterating through each model and observing the words that appeared in it.",
"import pickle\nwith open('data/preprocessed_vocab.pickle', 'rb') as f:\n pp_vocab = pickle.load(f)\n\nlen(pp_vocab)",
"Now let's select only columns for words in our pre-processed vocabulary. This will make our computation more efficient later:",
"dtm_df = dtm_df[pp_vocab]",
"We'll use the unique IDs from our metadata to keep track of each text:",
"dtm_df.set_index(clf_tb['docid'], inplace=True)\ndtm_df.head()",
"Document Frequency",
"# Create new DataFrame that simply lists whether a term appears in\n# each document, so that we don't have to repeat this process evey iteration\n\nterm_in_doc_df = dtm_df>0\n\nterm_in_doc_df\n\n# Re-write the model-building function\n\ndef set_author_aside(author, tb, dtm_df_, dfreq_df_):\n train_ids = tb.where(tb['author']!=author).column('docid')\n test_ids = tb.where(tb['author']==author).column('docid')\n \n train_df_ = dtm_df_.loc[train_ids]\n dfreq_df_ = dfreq_df_.loc[train_ids] # Include only term_in_doc values for texts in training set\n test_df_ = dtm_df_.loc[test_ids]\n \n train_targets_ = tb.where(tb['author']!=author)['recept']=='reviewed'\n \n return train_df_, test_df_, train_targets_, dfreq_df_\n\n\n# Re-write our vocabulary selection function\n\ndef top_vocab_by_docfreq(df, num_words):\n # Removed the test of whether a term is in a given document (i.e. df>0)\n wordcolumn_sums = sum(df)\n words_by_freq = wordcolumn_sums.sort_values(ascending=False)\n top_words = words_by_freq[:num_words]\n top_words_list = top_words.index.tolist()\n \n return top_words_list",
"Parallel Processing",
"# Parallel Processing means running our script on multiple cores simultaneously\n# This can be used in situations where we might otherwise use a 'FOR' loop\n# (when it doesn't matter what order we go through the list of values!)\n\nclf = LogisticRegression(C = 0.00007)\n\ndef master_function(author):\n # Note: Our only input is the name of the author.\n # Remember that we had iterated over the list of authors previously.\n train_df, test_df, train_targets, dfreq_df = set_author_aside(author, clf_tb, dtm_df, term_in_doc_df)\n vocab_list = top_vocab_by_docfreq(dfreq_df, 3200)\n train_df, test_df = normalize_model(train_df, test_df, vocab_list)\n clf.fit(train_df, train_targets)\n \n # Create a list of each text's probability of review AND its index in the metadata table\n index_probability_tuples = []\n for _id in test_df.index.tolist():\n text = test_df.loc[_id]\n probability = clf.predict_proba([text])[0][1]\n _index = list(clf_tb.column('docid')).index(_id)\n index_probability_tuples.append( (_index, probability) )\n return index_probability_tuples\n\n# Multiprocessing enables Python to parallelize\n\nimport multiprocessing\n\n# Return number of cores\nmultiprocessing.cpu_count()\n\n# By default, the Pool contains one worker for each core\n\n# Since we are working on a shared server, we'll set the number\n# of workers to 4.\n\npool = multiprocessing.Pool(4, maxtasksperchild=1)\n\n# Efficiently applies the master_function() to our list of authors\n# Returns a list where each entry is an item returned by the function\n\n# Timing the process again\nstart = time.time()\n\noutput = pool.map(master_function, authors)\n\nend = time.time()\nprint(end-start)\n\noutput[:10]\n\n# In this case, each element in output is itself a list ('index_probability_tuples'),\n# the length of which is the number of texts by a given author. 
We'll flatten it for\n# ease of use.\n\nflat_output = [tup for lst in output for tup in lst]\nflat_output[:10]\n\n# Use the indices returned with the output to arrange probabilities properly\n\nprobabilities = np.zeros([len(clf_tb['docid'])])\nfor tup in flat_output:\n probabilities[tup[0]] = tup[1]\n\nclf_tb['P(reviewed)'] = probabilities\n\nclf_tb.select(['docid', 'firstpub','author', 'title', 'recept', 'P(reviewed)'])",
"Evaluation",
"# Visualize the probability each text was reviewed\n\ncolors = ['r' if recept=='reviewed' else 'b' for recept in clf_tb['recept']]\n\nclf_tb.scatter('firstpub', 'P(reviewed)', c=colors, fit_line=True)\n\n# Does the Logistic Regression Model think its likely each book was reviewed?\npredictions = probabilities>0.5\npredictions\n\nfrom sklearn.metrics import accuracy_score\n\n# Creates array where '1' indicates a reviewed book and '0' indicates not\ntargets = clf_tb['recept']=='reviewed'\n\nprint(accuracy_score(predictions, targets))\n\n# Note: Often we prefer to evaluate accuracy based on the F1-score, which\n# weighs the number of times we correctly predicted reviewed texts against\n# the number of times we incorrectly predicted them as 'random'.\n\nfrom sklearn.metrics import f1_score\n\nprint(f1_score(predictions, targets))\n\n## EX. Change the regularization parameter ('C') in our Logistic Regression function.\n## How does this change the classifier's accuracy?\n\n## EX. Reduce the size of the vocabulary used for classification. How does accuracy change?\n\n## Q. Are there cases when we might not want to set the classification threshold\n## to 50% likelihood? How certain are we that 51% is different from a 49% probability?",
"Classification",
"# Train model using full set of 'reviewed' and 'random' texts\n\n# Use this to predict the probability that other prestigious texts\n# (i.e. ones that we haven't trained on) might have been reviewed\n# ...if they had been published! The new texts include, for example,\n# Emily Dickinson and Gerard Manley Hopkins.\n\n# Re-run script from scratch\n\n%pylab inline\nmatplotlib.style.use('ggplot')\n\nfrom datascience import *\nimport pandas as pd\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.feature_extraction import DictVectorizer\n\ncorpus_path = 'poems/'\n\n# Read metadata from spreadsheet\nmetadata_tb = Table.read_table('poemeta.csv', keep_default_na=False)\n\n# We'll copy just our new texts into a separate table as well, for later\ncanon_tb = metadata_tb.where('recept','addcanon')\n\n# Read Term Frequencies from files\n\nfreqdict_list = []\n\n# Iterate through texts in our spreadsheet\nfor _id in metadata_tb['docid']:\n\n # Each text will have its own dictionary\n # Keys are terms and values are frequencies\n termfreq_dict = {}\n \n # Open the given text's spreadsheet\n with open(corpus_path+_id+'.poe.tsv', encoding='utf-8') as file_in:\n filelines = file_in.readlines()\n \n # Each line in the spreadsheet contains a unique term and its frequency\n for line in filelines:\n termfreq = line.split('\\t')\n \n # 'If' conditions throw out junk lines in the spreadsheet\n if len(termfreq) > 2 or len(termfreq) > 2:\n continue\n term, freq = termfreq[0], int(termfreq[1])\n if len(term)>0 and term[0].isalpha():\n \n # Create new entry in text's dictionary for the term\n termfreq_dict[term] = freq\n \n freqdict_list.append(termfreq_dict)\n\n# Create the Document-Term-Matrix\n\ndv = DictVectorizer()\ndtm = dv.fit_transform(freqdict_list)\nterm_list = dv.feature_names_\n\n# Place the DTM into a Pandas DataFrame for further manipulation\n\ndtm_df = pd.DataFrame(dtm.toarray(), columns = term_list)\ndtm_df.set_index(metadata_tb['docid'], inplace=True)\n\n# 
These are Feature Selection functions like the ones we originally defined,\n# not their efficiency minded counterparts, since we only train once\n\n\n# Set aside each canonic texts from training set\n\ndef set_canon_aside(tb, df):\n train_ids = tb.where(tb['recept']!='addcanon').column('docid')\n classify_ids = tb.where(tb['recept']=='addcanon').column('docid')\n \n train_df_ = df.loc[train_ids]\n classify_df_ = df.loc[classify_ids]\n \n train_targets_ = tb.where(tb['recept']!='addcanon')['recept']=='reviewed'\n \n return train_df_, classify_df_, train_targets_\n\n\n# Retrieve the most common words (by document frequency) for a given model\n\ndef top_vocab_by_docfreq(df, num_words):\n docfreq_df = df > 0\n wordcolumn_sums = sum(docfreq_df)\n words_by_freq = wordcolumn_sums.sort_values(ascending=False)\n top_words = words_by_freq[:num_words]\n top_words_list = top_words.index.tolist()\n \n return top_words_list\n\n\n# Normalize the model's term frequencies and put them into standard units\n\ndef normalize_model(train_df_, classify_df_, vocabulary):\n # Select columns for only the most common words\n train_df_ = train_df_[vocabulary]\n classify_df_ = classify_df_[vocabulary]\n \n # Normalize each value by the sum of all values in its row\n train_df_ = train_df_.apply(lambda x: x/sum(x), axis=1)\n classify_df_ = classify_df_.apply(lambda x: x/sum(x), axis=1)\n \n # Get mean and stdev for each column\n train_mean = np.mean(train_df_)\n train_std = np.std(train_df_)\n\n # Transform each value to standard units for its column\n train_df_ = ( train_df_ - train_mean ) / train_std\n classify_df_ = ( classify_df_ - train_mean ) / train_std\n \n return train_df_, classify_df_\n\n# Train our Logistic Regression Model\n\nclf = LogisticRegression(C = 0.00007)\n\nmodel_df, classify_df, model_targets = set_canon_aside(metadata_tb, dtm_df)\nvocab_list = top_vocab_by_docfreq(model_df, 3200)\nmodel_df, classify_df = normalize_model(model_df, classify_df, 
vocab_list)\nclf.fit(model_df, model_targets)\n\n# Predict whether our new prestigious texts might have been reviewed\n\nprobabilities = numpy.zeros([len(canon_tb.column('docid'))])\nfor _id in classify_df.index.tolist():\n text = classify_df.loc[_id]\n probability = clf.predict_proba([text])[0][1]\n \n _index = list(canon_tb.column('docid')).index(_id)\n probabilities[_index] = probability\n\n# Add this probability as a new column to our table of canonic texts\n\ncanon_tb['P(reviewed)'] = probabilities\n\n# Visualize\n\ncanon_tb.scatter('firstpub','P(reviewed)', fit_line=True)\n\n## Q. Two of the prestigious texts are assigned less than 50% probability\n## that they were reviewed. How do we make sense of that?"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jaabberwocky/jaabberwocky.github.io
|
Python/Machine Learning/Kaggle/AdvancedHousingPrediction/KaggleAdvancedHousingPrediction.ipynb
|
mit
|
[
"# load libraries\n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline\n\nimport os\nprint(os.listdir(\"../input\"))\n\n# load the datasets in\ntrain = pd.read_csv('../input/train.csv')\ntest = pd.read_csv('../input/test.csv')\nsample_sub = pd.read_csv('../input/sample_submission.csv')\n\n# look at train dataset\ntrain.head()\n\nprint(train.shape)\nprint(train.info())",
"We have 1460 observations, and 81 columns. There quite a few strings too, which we would have to encode later on. \nLet's have a look at the number of missing values.",
"tmp = train.isnull().sum()\n\n# get top 10 results\ntmp.sort_values(ascending=False).head(10).plot(kind='bar', figsize=(8,8))",
"One way to handle this is to drop the first 4, given that almost all observations are missing.",
"drop_cols = ['PoolQC','MiscFeature','Alley','Fence']\n\n# write custom transformer to drop these 4 cols for use in Pipeline later\nfrom sklearn.base import BaseEstimator, TransformerMixin\n\ndef DropColumnsTransform(BaseEstimator, TransformerMixin):\n def __init__(self, attribs_drop):\n self.attribs_drop = attribs_drop\n def fit(self, X):\n return self\n def transform(self, X):\n return X.drop(self.attribs_drop, axis=1).values\n\n# look at categorical data\ntrain_cat = train.select_dtypes(include=['object'])\ntrain_cat.shape\n# use this to impute missing values as \"?\"\ntrain_cat = train_cat.fillna(\"?\")\n\nprint(\"43/%d or %.2f%% of columns are categorical\" % (train.shape[1], 43/train.shape[1]*100))\n\nfrom sklearn.preprocessing import LabelBinarizer, Imputer\n\nLabelBinarizer = LabelBinarizer()\n# loop to apply LB to each column individually, then combine them back together\nlist_cols = []\nfor col in list(train_cat.columns):\n x = train_cat[col].values\n x_trans = LabelBinarizer.fit_transform(x)\n list_cols.append(x_trans)\ntrain_cat_transformed = np.concatenate(list_cols,axis=1)\ntrain_cat_transformed\n\n# numerical data now\n\nImp = Imputer(strategy=\"median\")\ntrain_num = train.select_dtypes(include=['number'])\ntrain_num.shape\n\n# look at correlation\ncor = train_num.corr()\nf = plt.figure(figsize=(15,15))\nsns.heatmap(cor, cmap='plasma')",
"Many features (e.g. LotArea, GarageCars) are indeed correlated highly with SalePrice.",
"tmp = cor['SalePrice'].sort_values(ascending=False)\ntmp[1:11].plot(kind='bar', figsize=(8,8))\n\n# we will have to remove SalePrice before imputing\ntrain_num_wsp = train_num.drop('SalePrice',axis=1)\ntrain_num_tr = Imp.fit_transform(train_num_wsp)\ntrain_num_tr\n\nX = np.concatenate([train_num_tr, train_cat_transformed],axis=1)\ny = train_num['SalePrice'].values\nprint(\"Shape of X:\", X.shape)\nprint(\"Shape of y:\", y.shape)",
"Fit Models\n\nLinear Regression\nRandomForest",
"from sklearn.model_selection import train_test_split\n# split into 10% for validation at end\nX_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.1)\n\n# Linear Regression\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.metrics import mean_squared_error as mse\n\nlinreg = LinearRegression()\nscores = cross_val_score(linreg, X_train, y_train, scoring=\"neg_mean_squared_error\", cv=10, verbose=1)\n\ndef printscorespretty(scores):\n sc = np.sqrt(-scores)\n print(\"Scores:\", sc)\n print(\"Mean:\", np.mean(sc))\n print(\"SD:\", np.sqrt(np.var(sc)))\n\nprintscorespretty(scores)\n\n#Decision Tree Regressor\nfrom sklearn.tree import DecisionTreeRegressor\n\ndtr = DecisionTreeRegressor()\nscores = cross_val_score(dtr, X_train, y_train, scoring=\"neg_mean_squared_error\", cv=10, verbose=1)\nprintscorespretty(scores)",
"DecisionTree Regressor is performing much better than Linear Regression here, perhaps capturing some non-linearity in data.",
"from sklearn.ensemble import RandomForestRegressor\n\nrf = RandomForestRegressor()\nscores = cross_val_score(rf, X_train, y_train, scoring=\"neg_mean_squared_error\", cv=10, verbose=1)\nprintscorespretty(scores)",
"Best performance thus far is from RF.",
"# XGBoost\nfrom xgboost import XGBRegressor\n\nXGB = XGBRegressor()\nscores = cross_val_score(XGB, X_train, y_train, scoring=\"neg_mean_squared_error\", cv=10, verbose=1)\nprintscorespretty(scores)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/eng-edu
|
ml/cc/prework/fr/hello_world.ipynb
|
apache-2.0
|
[
"Copyright 2017 Google LLC.",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"# Travail préalable : Hello World\nObjectif de formation : Exécuter un programme TensorFlow dans le navigateur.\nVoici un programme TensorFlow \"Hello World\" :",
"from __future__ import print_function\n\nimport tensorflow as tf\n\nc = tf.constant('Hello, world!')\n\nwith tf.Session() as sess:\n\n print(sess.run(c))",
"## Pour exécuter ce programme\n\n\nCliquez n'importe où dans le bloc de code (sur le mot import, par exemple).\n\n\nCliquez sur la flèche vers la droite dans l'angle supérieur gauche du bloc de code, ou appuyez sur ⌘/Ctrl+Entrée.\nPatientez quelques secondes avant que le programme s'exécute. Si tout se passe bien, la phrase Hello, world! s'affiche juste en dessous du bloc de code.\n\n\nCe programme contient un seul bloc de code. La plupart des programmes utilisés dans les exercices incluent plusieurs blocs, que vous devrez exécuter un à un et dans l'ordre, en partant du haut vers le bas. \nEn général, le fait de ne pas respecter l'ordre d'exécution des blocs de code génère des erreurs.\n## Raccourcis clavier utiles\n\n⌘/Ctrl+M,B : crée une cellule de code vide sous la cellule sélectionnée.\n⌘/Ctrl+M,I : interrompt l'exécution de la cellule active.\n⌘/Ctrl+M,H : affiche la liste complète des raccourcis clavier.\nPour en savoir plus sur une méthode de l'API TensorFlow, positionnez le curseur de la souris après sa parenthèse ouvrante, puis appuyez sur la touche Tab :"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jiumem/tuthpc
|
multiprocessing.ipynb
|
bsd-3-clause
|
[
"Multiprocessing and multithreading\nParallelism in python",
"%%file multihello.py\n'''hello from another process\n'''\nfrom multiprocessing import Process\n\ndef f(name):\n print 'hello', name\n\nif __name__ == '__main__':\n p = Process(target=f, args=('world',))\n p.start()\n p.join()\n \n\n# EOF\n\n!python2.7 multihello.py",
"On Windows: multiprocessing spawns with subprocess.Popen",
"if __name__ == '__main__':\n from multiprocessing import freeze_support\n freeze_support()\n \n # Then, do multiprocessing stuff...",
"Data parallelism versus task parallelism\n\n\nMultithreading versus multiple threads\n\n\nThe global interpreter lock\n\n\nProcesses versus threads\n\n\nShared memory and shared objects\n\nShared objects: Value and Array",
"%%file sharedobj.py\n'''demonstrate shared objects in multiprocessing\n'''\nfrom multiprocessing import Process, Value, Array\n\ndef f(n, a):\n n.value = 3.1415927\n for i in range(len(a)):\n a[i] = -a[i]\n\nif __name__ == '__main__':\n num = Value('d', 0.0)\n arr = Array('i', range(10))\n\n p = Process(target=f, args=(num, arr))\n p.start()\n p.join()\n\n print num.value\n print arr[:]\n \n\n# EOF\n\n!python2.7 sharedobj.py",
"Manager and proxies",
"%%file sharedproxy.py\n'''demonstrate sharing objects by proxy through a manager\n'''\nfrom multiprocessing import Process, Manager\n\ndef f(d, l):\n d[1] = '1'\n d['2'] = 2\n d[0.25] = None\n l.reverse()\n\nif __name__ == '__main__':\n manager = Manager()\n\n d = manager.dict()\n l = manager.list(range(10))\n\n p = Process(target=f, args=(d, l))\n p.start()\n p.join()\n\n print d\n print l\n\n\n# EOF\n\n!python2.7 sharedproxy.py",
"See: https://docs.python.org/2/library/multiprocessing.html\n\nWorking in C with ctypes and numpy",
"%%file numpyshared.py\n'''demonstrating shared objects using numpy and ctypes\n'''\nimport multiprocessing as mp\nfrom multiprocessing import sharedctypes\nfrom numpy import ctypeslib\n\ndef fill_arr(arr_view, i):\n arr_view.fill(i)\n\nif __name__ == '__main__':\n ra = sharedctypes.RawArray('i', 4)\n arr = ctypeslib.as_array(ra)\n arr.shape = (2, 2)\n p1 = mp.Process(target=fill_arr, args=(arr[:1, :], 1))\n p2 = mp.Process(target=fill_arr, args=(arr[1:, :], 2))\n p1.start(); p2.start()\n p1.join(); p2.join()\n print arr\n\n!python2.7 numpyshared.py",
"Issues: threading and locks\n\nLow-level task parallelism: point to point communication\n\nProcess",
"%%file mprocess.py\n'''demonstrate the process claas\n'''\nimport multiprocessing as mp\nfrom time import sleep\nfrom random import random\n\ndef worker(num):\n sleep(2.0 * random())\n name = mp.current_process().name\n print \"worker {},name:{}\".format(num, name)\n\nif __name__ == '__main__':\n master = mp.current_process().name\n print \"Master name: {}\".format(master)\n for i in range(2):\n p = mp.Process(target=worker, args=(i,))\n p.start()\n\n # Close all child processes spawn\n [p.join() for p in mp.active_children()]\n\n!python2.7 mprocess.py",
"Queue and Pipe",
"%%file queuepipe.py\n'''demonstrate queues and pipes\n'''\nimport multiprocessing as mp\nimport pickle\n\ndef qworker(q):\n v = q.get() # blocking!\n print \"queue worker got '{}' from parent\".format(v)\n \ndef pworker(p):\n import pickle # needed for encapsulation\n msg = 'hello hello hello'\n print \"pipe worker sending {!r} to parent\".format(msg)\n p.send(msg)\n v = p.recv()\n print \"pipe worker got {!r} from parent\".format(v)\n print \"unpickled to {}\".format(pickle.loads(v))\n\n \nif __name__ == '__main__':\n q = mp.Queue()\n p = mp.Process(target=qworker, args=(q,))\n p.start() # blocks at q.get()\n v = 'python rocks!'\n print \"putting '{}' on queue\".format(v)\n q.put(v)\n p.join()\n print ''\n\n # The two ends of the pipe: the parent and the child connections\n p_conn, c_conn = mp.Pipe()\n p = mp.Process(target=pworker, args=(c_conn,))\n p.start()\n msg = pickle.dumps([1,2,3],-1)\n print \"got {!r} from child\".format(p_conn.recv())\n print \"sending {!r} to child\".format(msg)\n p_conn.send(msg)\n import datetime\n print \"\\nfinished: {}\".format(datetime.date.today())\n p.join()\n\n!python2.7 queuepipe.py",
"Synchronization with Lock and Event",
"%%file multi_sync.py\n'''demonstrating locks\n'''\nimport multiprocessing as mp\n\ndef print_lock(lk, i):\n name = mp.current_process().name\n lk.acquire()\n for j in range(5):\n print i, \"from process\", name\n lk.release()\n\n\nif __name__ == '__main__':\n lk = mp.Lock()\n ps = [mp.Process(target=print_lock, args=(lk,i)) for i in range(5)]\n [p.start() for p in ps]\n [p.join() for p in ps]\n\n!python2.7 multi_sync.py\n\n'''events\n'''\nimport multiprocessing as mp\n\ndef wait_on_event(e):\n name = mp.current_process().name\n e.wait()\n print name, \"finished waiting\" \n\n\nif __name__ == '__main__':\n e = mp.Event()\n ps = [mp.Process(target=wait_on_event, args=(e,)) for i in range(10)]\n [p.start() for p in ps]\n print \"e.is_set()\", e.is_set()\n #raw_input(\"press any key to set event\")\n e.set()\n [p.join() for p in ps]",
"High-level task parallelism: collective communication\n\n\nThe task Pool\n\n\npipes (apply) and map",
"import multiprocessing as mp\n\ndef random_mean(x):\n import numpy as np\n return round(np.mean(np.random.randint(-x,x+1,10000)), 3)\n\n\nif __name__ == '__main__':\n # create a pool with cpu_count() procsesses\n p = mp.Pool()\n results = p.map(random_mean, range(1,10))\n print results\n print p.apply(random_mean, [100])\n p.close()\n p.join()",
"Variants: blocking, iterative, unordered, and asynchronous",
"import multiprocessing as mp\n\ndef random_mean_count(x):\n import numpy as np\n return x + round(np.mean(np.random.randint(-x,x+1,10000)), 3)\n\n\nif __name__ == '__main__':\n # create a pool with cpu_count() procsesses\n p = mp.Pool()\n results = p.imap_unordered(random_mean_count, range(1,10))\n print \"[\",\n for i in results:\n print i,\n if abs(i) <= 1.0:\n print \"...] QUIT\"\n break\n list(results)\n p.close()\n p.join()\n\nimport multiprocessing as mp\n\ndef random_mean_count(x):\n import numpy as np\n return x + round(np.mean(np.random.randint(-x,x+1,10000)), 3)\n\n\nif __name__ == '__main__':\n # create a pool with cpu_count() procsesses\n p = mp.Pool()\n results = p.map_async(random_mean_count, range(1,10))\n print \"Waiting .\",\n i = 0\n while not results.ready():\n if not i%4000:\n print \".\",\n i += 1\n print results.get()\n print \"\\n\", p.apply_async(random_mean_count, [100]).get()\n p.close()\n p.join()",
"Issues: random number generators",
"import numpy as np\n\ndef walk(x, n=100, box=.5, delta=.2):\n \"perform a random walk\"\n w = np.cumsum(x + np.random.uniform(-delta,delta,n))\n w = np.where(abs(w) > box)[0]\n return w[0] if len(w) else n\n\nN = 10\n\n# run N trials, all starting from x=0\npwalk = np.vectorize(walk)\nprint pwalk(np.zeros(N))\n\n# run again, using list comprehension instead of ufunc\nprint [walk(0) for i in range(N)]\n\n# run again, using multiprocessing's map\nimport multiprocessing as mp\np = mp.Pool()\nprint p.map(walk, [0]*N)\n\n%%file state.py\n\"\"\"some good state utilities\n\"\"\"\n\ndef check_pickle(x, dill=False):\n \"checks the pickle across a subprocess\"\n import pickle\n import subprocess\n if dill:\n import dill as pickle\n pik = \"dill\"\n else:\n pik = \"pickle\"\n fail = True\n try:\n _x = pickle.dumps(x)\n fail = False\n finally:\n if fail:\n print \"DUMP FAILED\"\n msg = \"python -c import {0}; print {0}.loads({1})\".format(pik,repr(_x))\n print \"SUCCESS\" if not subprocess.call(msg.split(None,2)) else \"LOAD FAILED\"\n\n \ndef random_seed(s=None):\n \"sets the seed for calls to 'random()'\"\n import random\n random.seed(s)\n try:\n from numpy import random\n random.seed(s)\n except:\n pass\n return\n\n\ndef random_state(module='random', new=False, seed='!'):\n \"\"\"return a (optionally manually seeded) random generator\n\nFor a given module, return an object that has random number generation (RNG)\nmethods available. 
If new=False, use the global copy of the RNG object.\nIf seed='!', do not reseed the RNG (using seed=None 'removes' any seeding).\nIf seed='*', use a seed that depends on the process id (PID); this is useful\nfor building RNGs that are different across multiple threads or processes.\n \"\"\"\n import random\n if module == 'random':\n rng = random\n elif not isinstance(module, type(random)):\n # convenience for passing in 'numpy'\n if module == 'numpy': module = 'numpy.random'\n try:\n import importlib\n rng = importlib.import_module(module)\n except ImportError:\n rng = __import__(module, fromlist=module.split('.')[-1:])\n elif module.__name__ == 'numpy': # convenience for passing in numpy\n from numpy import random as rng\n else: rng = module\n\n _rng = getattr(rng, 'RandomState', None) or \\\n getattr(rng, 'Random') # throw error if no rng found\n if new:\n rng = _rng()\n\n if seed == '!': # special case: don't reset the seed\n return rng\n if seed == '*': # special case: random seeding for multiprocessing\n try:\n try:\n import multiprocessing as mp\n except ImportError:\n import processing as mp\n try:\n seed = mp.current_process().pid\n except AttributeError:\n seed = mp.currentProcess().getPid()\n except:\n seed = 0\n import time\n seed += int(time.time()*1e6)\n\n # set the random seed (or 'reset' with None)\n rng.seed(seed)\n return rng\n\n\n# EOF",
"Issues: serialization\n\nBetter serialization: multiprocess",
"import multiprocess\n\nprint multiprocess.Pool().map(lambda x:x**2, range(10))",
"EXERCISE: << Either the mystic multi-solve or one of the pathos tests or with rng >>\n\nCode-based versus object-based serialization: pp(ft)",
"%%file runppft.py\n'''demonstrate ppft\n'''\n\nimport ppft\n\ndef squared(x):\n return x*x\n\nserver = ppft.Server() # can take 'localhost:8000' or remote:port\nresult = server.submit(squared, (5,))\nresult.wait()\nprint result.finished\nprint result()\n\n!python2.7 runppft.py",
"Programming efficiency: pathos\n\n\nMulti-argument map functions\n\n\nUnified API for threading, multiprocessing, and serial and parallel python (pp)",
"%%file allpool.py\n'''demonstrate pool API\n'''\nimport pathos\n\ndef sum_squared(x,y):\n return (x+y)**2\n\nx = range(5) \ny = range(0,10,2)\n\nif __name__ == '__main__':\n sp = pathos.pools.SerialPool() \n pp = pathos.pools.ParallelPool()\n mp = pathos.pools.ProcessPool()\n tp = pathos.pools.ThreadPool()\n\n for pool in [sp,pp,mp,tp]:\n print pool.map(sum_squared, x, y)\n pool.close()\n pool.join()\n\n!python2.7 allpool.py",
"Strives for natural programming constructs in parallel code",
"from itertools import izip\n\n\nPRIMES = [\n 112272535095293,\n 112582705942171,\n 112272535095293,\n 115280095190773,\n 115797848077099,\n 1099726899285419]\n\ndef is_prime(n):\n if n % 2 == 0:\n return False\n\n import math\n sqrt_n = int(math.floor(math.sqrt(n)))\n for i in range(3, sqrt_n + 1, 2):\n if n % i == 0:\n return False\n return True\n\ndef sleep_add1(x):\n from time import sleep\n if x < 4: sleep(x/10.0)\n return x+1\n\ndef sleep_add2(x):\n from time import sleep\n if x < 4: sleep(x/10.0)\n return x+2\n\ndef test_with_multipool(Pool):\n inputs = range(10)\n with Pool() as pool1:\n res1 = pool1.amap(sleep_add1, inputs)\n with Pool() as pool2:\n res2 = pool2.amap(sleep_add2, inputs)\n\n with Pool() as pool3:\n for number, prime in izip(PRIMES, pool3.imap(is_prime, PRIMES)):\n assert prime if number != PRIMES[-1] else not prime\n\n assert res1.get() == [i+1 for i in inputs]\n assert res2.get() == [i+2 for i in inputs]\n print \"OK\"\n\n\nif __name__ == '__main__':\n from pathos.pools import ProcessPool\n test_with_multipool(ProcessPool)",
"Programming models and hierarchical computing",
"import pathos\nfrom math import sin, cos\n\nif __name__ == '__main__':\n mp = pathos.pools.ProcessPool()\n tp = pathos.pools.ThreadPool()\n \n print mp.amap(tp.map, [sin, cos], [range(3),range(3)]).get()\n mp.close(); tp.close()\n mp.join(); tp.join()",
"Pool caching\n\n\nNot covered: IPython.parallel and scoop\n\n\nEXERCISE: Let's take another swing at Monte Carlo betting. You'll want to focus on roll.py, trials.py and optimize.py. Can you speed things up with careful placement of a Pool? Are there small modifications to the code that would allow hierarchical parallelism? Can we speed up the calculation, or does parallel computing lose to spin-up overhead? Where are we now hitting the wall?\nSee: 'solution'\nRemote execution\n\n\nEasy: the pp.Server\n\n\nEven easier: Pool().server in pathos\n\n\nNot covered: rpyc, pyro, and zmq\n\n\nRelated: secure authentication with ssh\n\npathos.secure: connection and tunnel",
"import pathos\nimport sys\n\nrhost = 'localhost'\nrport = 23\n\nif __name__ == '__main__':\n tunnel = pathos.secure.Tunnel()\n lport = tunnel.connect(rhost, rport)\n print 'SSH Tunnel to:', rhost\n print 'Remote port:', rport\n print 'Local port:', lport\n print 'Press <Enter> to disconnect'\n sys.stdin.readline()\n tunnel.disconnect()",
"Not oovered: paramiko\n\nWhat about large-scale cluster computing?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dnc1994/MachineLearning-UW
|
ml-classification/module-8-boosting-assignment-2-solution.ipynb
|
mit
|
[
"Boosting a decision stump\nThe goal of this notebook is to implement your own boosting module.\nBrace yourselves! This is going to be a fun and challenging assignment.\n\nUse SFrames to do some feature engineering.\nModify the decision trees to incorporate weights.\nImplement Adaboost ensembling.\nUse your implementation of Adaboost to train a boosted decision stump ensemble.\nEvaluate the effect of boosting (adding more decision stumps) on performance of the model.\nExplore the robustness of Adaboost to overfitting.\n\nLet's get started!\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by\npip install graphlab-create --upgrade\nSee this page for detailed instructions on upgrading.",
"import graphlab\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Getting the data ready\nWe will be using the same LendingClub dataset as in the previous assignment.",
"loans = graphlab.SFrame('lending-club-data.gl/')",
"Extracting the target and the feature columns\nWe will now repeat some of the feature processing steps that we saw in the previous assignment:\nFirst, we re-assign the target to have +1 as a safe (good) loan, and -1 as a risky (bad) loan.\nNext, we select four categorical features: \n1. grade of the loan \n2. the length of the loan term\n3. the home ownership status: own, mortgage, rent\n4. number of years of employment.",
"features = ['grade', # grade of the loan\n 'term', # the term of the loan\n 'home_ownership', # home ownership status: own, mortgage or rent\n 'emp_length', # number of years of employment\n ]\nloans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)\nloans.remove_column('bad_loans')\ntarget = 'safe_loans'\nloans = loans[features + [target]]",
"Subsample dataset to make sure classes are balanced\nJust as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.",
"safe_loans_raw = loans[loans[target] == 1]\nrisky_loans_raw = loans[loans[target] == -1]\n\n# Undersample the safe loans.\npercentage = len(risky_loans_raw)/float(len(safe_loans_raw))\nrisky_loans = risky_loans_raw\nsafe_loans = safe_loans_raw.sample(percentage, seed=1)\nloans_data = risky_loans_raw.append(safe_loans)\n\nprint \"Percentage of safe loans :\", len(safe_loans) / float(len(loans_data))\nprint \"Percentage of risky loans :\", len(risky_loans) / float(len(loans_data))\nprint \"Total number of loans in our new dataset :\", len(loans_data)",
"Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.\nTransform categorical data into binary features\nIn this assignment, we will work with binary decision trees. Since all of our features are currently categorical features, we want to turn them into binary features using 1-hot encoding. \nWe can do so with the following code block (see the first assignments for more details):",
"loans_data = risky_loans.append(safe_loans)\nfor feature in features:\n loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1}) \n loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)\n \n # Change None's to 0's\n for column in loans_data_unpacked.column_names():\n loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)\n\n loans_data.remove_column(feature)\n loans_data.add_columns(loans_data_unpacked)",
"Let's see what the feature columns look like now:",
"features = loans_data.column_names()\nfeatures.remove('safe_loans') # Remove the response variable\nfeatures",
"Train-test split\nWe split the data into training and test sets with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.",
"train_data, test_data = loans_data.random_split(0.8, seed=1)",
"Weighted decision trees\nLet's modify our decision tree code from Module 5 to support weighting of individual data points.\nWeighted error definition\nConsider a model with $N$ data points with:\n* Predictions $\\hat{y}_1 ... \\hat{y}_n$ \n* Target $y_1 ... y_n$ \n* Data point weights $\\alpha_1 ... \\alpha_n$.\nThen the weighted error is defined by:\n$$\n\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}}) = \\frac{\\sum_{i=1}^{n} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]}{\\sum_{i=1}^{n} \\alpha_i}\n$$\nwhere $1[y_i \\neq \\hat{y_i}]$ is an indicator function that is set to $1$ if $y_i \\neq \\hat{y_i}$.\nWrite a function to compute weight of mistakes\nWrite a function that calculates the weight of mistakes for making the \"weighted-majority\" predictions for a dataset. The function accepts two inputs:\n* labels_in_node: Targets $y_1 ... y_n$ \n* data_weights: Data point weights $\\alpha_1 ... \\alpha_n$\nWe are interested in computing the (total) weight of mistakes, i.e.\n$$\n\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}}) = \\sum_{i=1}^{n} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}].\n$$\nThis quantity is analogous to the number of mistakes, except that each mistake now carries different weight. 
It is related to the weighted error in the following way:\n$$\n\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}}) = \\frac{\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})}{\\sum_{i=1}^{n} \\alpha_i}\n$$\nThe function intermediate_node_weighted_mistakes should first compute two weights: \n * $\\mathrm{WM}_{-1}$: weight of mistakes when all predictions are $\\hat{y}_i = -1$, i.e. $\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{-1})$\n * $\\mathrm{WM}_{+1}$: weight of mistakes when all predictions are $\\hat{y}_i = +1$, i.e. $\\mbox{WM}(\\mathbf{\\alpha}, \\mathbf{+1})$\nwhere $\\mathbf{-1}$ and $\\mathbf{+1}$ are vectors where all values are -1 and +1 respectively.\nAfter computing $\\mathrm{WM}_{-1}$ and $\\mathrm{WM}_{+1}$, the function intermediate_node_weighted_mistakes should return the lower of the two weights of mistakes, along with the class associated with that weight. We have provided a skeleton for you with YOUR CODE HERE to be filled in several places.",
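As a sanity check on the definitions above, both candidate weights of mistakes can be computed with plain NumPy (illustrative only; the assignment itself uses SArrays, and the labels/weights here match the checkpoint example below):

```python
import numpy as np

labels = np.array([-1, -1, 1, 1, 1])
weights = np.array([1.0, 2.0, 0.5, 1.0, 1.0])

# Predicting all -1 gets every +1 label wrong; predicting all +1 gets every -1 wrong.
wm_all_negative = weights[labels == +1].sum()   # 0.5 + 1.0 + 1.0 = 2.5
wm_all_positive = weights[labels == -1].sum()   # 1.0 + 2.0 = 3.0

# Return the cheaper of the two, with ties going to +1.
best = (wm_all_positive, +1) if wm_all_positive <= wm_all_negative else (wm_all_negative, -1)
print(best)  # (2.5, -1): predicting all -1 costs less
```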
"def intermediate_node_weighted_mistakes(labels_in_node, data_weights):\n # Sum the weights of all entries with label +1\n total_weight_positive = sum(data_weights[labels_in_node == +1])\n \n # Weight of mistakes for predicting all -1's is equal to the sum above\n ### YOUR CODE HERE\n weighted_mistakes_all_negative = total_weight_positive\n \n # Sum the weights of all entries with label -1\n ### YOUR CODE HERE\n total_weight_negative = sum(data_weights[labels_in_node == -1])\n \n # Weight of mistakes for predicting all +1's is equal to the sum above\n ### YOUR CODE HERE\n weighted_mistakes_all_positive = total_weight_negative\n \n # Return the tuple (weight, class_label) representing the lower of the two weights\n # class_label should be an integer of value +1 or -1.\n # If the two weights are identical, return (weighted_mistakes_all_positive,+1)\n ### YOUR CODE HERE\n if weighted_mistakes_all_positive <= weighted_mistakes_all_negative:\n return weighted_mistakes_all_positive, +1\n else:\n return weighted_mistakes_all_negative, -1",
"Checkpoint: Test your intermediate_node_weighted_mistakes function, run the following cell:",
"example_labels = graphlab.SArray([-1, -1, 1, 1, 1])\nexample_data_weights = graphlab.SArray([1., 2., .5, 1., 1.])\nif intermediate_node_weighted_mistakes(example_labels, example_data_weights) == (2.5, -1):\n print 'Test passed!'\nelse:\n print 'Test failed... try again!'",
"Recall that the classification error is defined as follows:\n$$\n\\mbox{classification error} = \\frac{\\mbox{# mistakes}}{\\mbox{# all data points}}\n$$\nQuiz Question: If we set the weights $\\mathbf{\\alpha} = 1$ for all data points, how is the weight of mistakes $\\mbox{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$ related to the classification error?\nFunction to pick best feature to split on\nWe continue modifying our decision tree code from the earlier assignment to incorporate weighting of individual data points. The next step is to pick the best feature to split on.\nThe best_splitting_feature function is similar to the one from the earlier assignment with two minor modifications:\n 1. The function best_splitting_feature should now accept an extra parameter data_weights to take account of weights of data points.\n 2. Instead of computing the number of mistakes in the left and right side of the split, we compute the weight of mistakes for both sides, add up the two weights, and divide it by the total weight of the data.\nComplete the following function. Comments starting with DIFFERENT HERE mark the sections where the weighted version differs from the original implementation.",
"# If the data is identical in each feature, this function should return None\n\ndef best_splitting_feature(data, features, target, data_weights):\n print data_weights\n \n # These variables will keep track of the best feature and the corresponding error\n best_feature = None\n best_error = float('+inf') \n num_points = float(len(data))\n\n # Loop through each feature to consider splitting on that feature\n for feature in features:\n \n # The left split will have all data points where the feature value is 0\n # The right split will have all data points where the feature value is 1\n left_split = data[data[feature] == 0]\n right_split = data[data[feature] == 1]\n \n # Apply the same filtering to data_weights to create left_data_weights, right_data_weights\n ## YOUR CODE HERE\n left_data_weights = data_weights[data[feature] == 0]\n right_data_weights = data_weights[data[feature] == 1]\n \n # DIFFERENT HERE\n # Calculate the weight of mistakes for left and right sides\n ## YOUR CODE HERE\n left_weighted_mistakes, left_class = intermediate_node_weighted_mistakes(left_split[target], left_data_weights)\n right_weighted_mistakes, right_class = intermediate_node_weighted_mistakes(right_split[target], right_data_weights)\n \n # DIFFERENT HERE\n # Compute weighted error by computing\n # ( [weight of mistakes (left)] + [weight of mistakes (right)] ) / [total weight of all data points]\n ## YOUR CODE HERE\n error = (left_weighted_mistakes + right_weighted_mistakes) / (sum(left_data_weights) + sum(right_data_weights))\n \n # If this is the best error we have found so far, store the feature and the error\n if error < best_error:\n best_feature = feature\n best_error = error\n \n # Return the best feature we found\n return best_feature",
"Checkpoint: Now, we have another checkpoint to make sure you are on the right track.",
"example_data_weights = graphlab.SArray(len(train_data)* [1.5])\nif best_splitting_feature(train_data, features, target, example_data_weights) == 'term. 36 months':\n print 'Test passed!'\nelse:\n print 'Test failed... try again!'",
"Note. If you get an exception in the line of \"the logical filter has different size than the array\", try upgradting your GraphLab Create installation to 1.8.3 or newer.\nVery Optional. Relationship between weighted error and weight of mistakes\nBy definition, the weighted error is the weight of mistakes divided by the weight of all data points, so\n$$\n\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}}) = \\frac{\\sum_{i=1}^{n} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]}{\\sum_{i=1}^{n} \\alpha_i} = \\frac{\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})}{\\sum_{i=1}^{n} \\alpha_i}.\n$$\nIn the code above, we obtain $\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$ from the two weights of mistakes from both sides, $\\mathrm{WM}(\\mathbf{\\alpha}{\\mathrm{left}}, \\mathbf{\\hat{y}}{\\mathrm{left}})$ and $\\mathrm{WM}(\\mathbf{\\alpha}{\\mathrm{right}}, \\mathbf{\\hat{y}}{\\mathrm{right}})$. First, notice that the overall weight of mistakes $\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$ can be broken into two weights of mistakes over either side of the split:\n$$\n\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})\n= \\sum_{i=1}^{n} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]\n= \\sum_{\\mathrm{left}} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]\n + \\sum_{\\mathrm{right}} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]\\\n= \\mathrm{WM}(\\mathbf{\\alpha}{\\mathrm{left}}, \\mathbf{\\hat{y}}{\\mathrm{left}}) + \\mathrm{WM}(\\mathbf{\\alpha}{\\mathrm{right}}, \\mathbf{\\hat{y}}{\\mathrm{right}})\n$$\nWe then divide through by the total weight of all data points to obtain $\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$:\n$$\n\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})\n= \\frac{\\mathrm{WM}(\\mathbf{\\alpha}{\\mathrm{left}}, \\mathbf{\\hat{y}}{\\mathrm{left}}) + \\mathrm{WM}(\\mathbf{\\alpha}{\\mathrm{right}}, \\mathbf{\\hat{y}}{\\mathrm{right}})}{\\sum_{i=1}^{n} \\alpha_i}\n$$\nBuilding the tree\nWith the above functions implemented correctly, we are 
now ready to build our decision tree. Recall from the previous assignments that each node in the decision tree is represented as a dictionary which contains the following keys:\n{ \n 'is_leaf' : True/False.\n 'prediction' : Prediction at the leaf node.\n 'left' : (dictionary corresponding to the left tree).\n 'right' : (dictionary corresponding to the right tree).\n 'features_remaining' : List of features that are possible splits.\n}\n\nLet us start with a function that creates a leaf node given a set of target values:",
"def create_leaf(target_values, data_weights):\n \n # Create a leaf node\n leaf = {'splitting_feature' : None,\n 'is_leaf': True}\n \n # Computed weight of mistakes.\n weighted_error, best_class = intermediate_node_weighted_mistakes(target_values, data_weights)\n # Store the predicted class (1 or -1) in leaf['prediction']\n leaf['prediction'] = best_class ## YOUR CODE HERE\n \n return leaf ",
"We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions:\n1. All data points in a node are from the same class.\n2. No more features to split on.\n3. Stop growing the tree when the tree depth reaches max_depth.",
"def weighted_decision_tree_create(data, features, target, data_weights, current_depth = 1, max_depth = 10):\n remaining_features = features[:] # Make a copy of the features.\n target_values = data[target]\n print \"--------------------------------------------------------------------\"\n print \"Subtree, depth = %s (%s data points).\" % (current_depth, len(target_values))\n \n # Stopping condition 1. Error is 0.\n if intermediate_node_weighted_mistakes(target_values, data_weights)[0] <= 1e-15:\n print \"Stopping condition 1 reached.\" \n return create_leaf(target_values, data_weights)\n \n # Stopping condition 2. No more features.\n if remaining_features == []:\n print \"Stopping condition 2 reached.\" \n return create_leaf(target_values, data_weights) \n \n # Additional stopping condition (limit tree depth)\n if current_depth > max_depth:\n print \"Reached maximum depth. Stopping for now.\"\n return create_leaf(target_values, data_weights)\n \n # If all the datapoints are the same, splitting_feature will be None. Create a leaf\n splitting_feature = best_splitting_feature(data, features, target, data_weights)\n remaining_features.remove(splitting_feature)\n \n left_split = data[data[splitting_feature] == 0]\n right_split = data[data[splitting_feature] == 1]\n \n left_data_weights = data_weights[data[splitting_feature] == 0]\n right_data_weights = data_weights[data[splitting_feature] == 1]\n \n print \"Split on feature %s. 
(%s, %s)\" % (\\\n splitting_feature, len(left_split), len(right_split))\n \n # Create a leaf node if the split is \"perfect\"\n if len(left_split) == len(data):\n print \"Creating leaf node.\"\n return create_leaf(left_split[target], data_weights)\n if len(right_split) == len(data):\n print \"Creating leaf node.\"\n return create_leaf(right_split[target], data_weights)\n \n # Repeat (recurse) on left and right subtrees\n left_tree = weighted_decision_tree_create(\n left_split, remaining_features, target, left_data_weights, current_depth + 1, max_depth)\n right_tree = weighted_decision_tree_create(\n right_split, remaining_features, target, right_data_weights, current_depth + 1, max_depth)\n \n return {'is_leaf' : False, \n 'prediction' : None,\n 'splitting_feature': splitting_feature,\n 'left' : left_tree, \n 'right' : right_tree}",
"Here is a recursive function to count the nodes in your tree:",
"def count_nodes(tree):\n if tree['is_leaf']:\n return 1\n return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])",
"Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.",
"example_data_weights = graphlab.SArray([1.0 for i in range(len(train_data))])\nsmall_data_decision_tree = weighted_decision_tree_create(train_data, features, target,\n example_data_weights, max_depth=2)\nif count_nodes(small_data_decision_tree) == 7:\n print 'Test passed!'\nelse:\n print 'Test failed... try again!'\n print 'Number of nodes found:', count_nodes(small_data_decision_tree)\n print 'Number of nodes that should be there: 7' ",
"Let us take a quick look at what the trained tree is like. You should get something that looks like the following\n{'is_leaf': False,\n 'left': {'is_leaf': False,\n 'left': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},\n 'prediction': None,\n 'right': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},\n 'splitting_feature': 'grade.A'\n },\n 'prediction': None,\n 'right': {'is_leaf': False,\n 'left': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},\n 'prediction': None,\n 'right': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},\n 'splitting_feature': 'grade.D'\n },\n 'splitting_feature': 'term. 36 months'\n}",
"small_data_decision_tree",
"Making predictions with a weighted decision tree\nWe give you a function that classifies one data point. It can also return the probability if you want to play around with that as well.",
"def classify(tree, x, annotate = False): \n # If the node is a leaf node.\n if tree['is_leaf']:\n if annotate: \n print \"At leaf, predicting %s\" % tree['prediction']\n return tree['prediction'] \n else:\n # Split on feature.\n split_feature_value = x[tree['splitting_feature']]\n if annotate: \n print \"Split on %s = %s\" % (tree['splitting_feature'], split_feature_value)\n if split_feature_value == 0:\n return classify(tree['left'], x, annotate)\n else:\n return classify(tree['right'], x, annotate)",
"Evaluating the tree\nNow, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.\nAgain, recall that the classification error is defined as follows:\n$$\n\\mbox{classification error} = \\frac{\\mbox{# mistakes}}{\\mbox{# all data points}}\n$$\nThe function called evaluate_classification_error takes in as input:\n1. tree (as described above)\n2. data (an SFrame)\nThe function does not change because of adding data point weights.",
"def evaluate_classification_error(tree, data):\n # Apply the classify(tree, x) to each row in your data\n prediction = data.apply(lambda x: classify(tree, x))\n \n # Once you've made the predictions, calculate the classification error\n return (prediction != data[target]).sum() / float(len(data))\n\nevaluate_classification_error(small_data_decision_tree, test_data)",
"Example: Training a weighted decision tree\nTo build intuition on how weighted data points affect the tree being built, consider the following:\nSuppose we only care about making good predictions for the first 10 and last 10 items in train_data, we assign weights:\n* 1 to the last 10 items \n* 1 to the first 10 items \n* and 0 to the rest. \nLet us fit a weighted decision tree with max_depth = 2.",
"# Assign weights\nexample_data_weights = graphlab.SArray([1.] * 10 + [0.]*(len(train_data) - 20) + [1.] * 10)\n\n# Train a weighted decision tree model.\nsmall_data_decision_tree_subset_20 = weighted_decision_tree_create(train_data, features, target,\n example_data_weights, max_depth=2)",
"Now, we will compute the classification error on the subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).",
"subset_20 = train_data.head(10).append(train_data.tail(10))\nevaluate_classification_error(small_data_decision_tree_subset_20, subset_20)",
"Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data:",
"evaluate_classification_error(small_data_decision_tree_subset_20, train_data)",
"The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.\nSo, what does this mean?\n* The points with higher weights are the ones that are more important during the training process of the weighted decision tree.\n* The points with zero weights are basically ignored during training.\nQuiz Question: Will you get the same model as small_data_decision_tree_subset_20 if you trained a decision tree with only the 20 data points with non-zero weights from the set of points in subset_20?\nImplementing your own Adaboost (on decision stumps)\nNow that we have a weighted decision tree working, it takes only a bit of work to implement Adaboost. For the sake of simplicity, let us stick with decision tree stumps by training trees with max_depth=1.\nRecall from the lecture the procedure for Adaboost:\n1. Start with unweighted data with $\\alpha_j = 1$\n2. For t = 1,...T:\n * Learn $f_t(x)$ with data weights $\\alpha_j$\n * Compute coefficient $\\hat{w}t$:\n $$\\hat{w}_t = \\frac{1}{2}\\ln{\\left(\\frac{1- \\mbox{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})}{\\mbox{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})}\\right)}$$\n * Re-compute weights $\\alpha_j$:\n $$\\alpha_j \\gets \\begin{cases}\n \\alpha_j \\exp{(-\\hat{w}_t)} & \\text{ if }f_t(x_j) = y_j\\\n \\alpha_j \\exp{(\\hat{w}_t)} & \\text{ if }f_t(x_j) \\neq y_j\n \\end{cases}$$\n * Normalize weights $\\alpha_j$:\n $$\\alpha_j \\gets \\frac{\\alpha_j}{\\sum{i=1}^{N}{\\alpha_i}} $$\nComplete the skeleton for the following code to implement adaboost_with_tree_stumps. Fill in the places with YOUR CODE HERE.",
"from math import log\nfrom math import exp\n\ndef adaboost_with_tree_stumps(data, features, target, num_tree_stumps):\n # start with unweighted data\n alpha = graphlab.SArray([1.]*len(data))\n weights = []\n tree_stumps = []\n target_values = data[target]\n \n for t in xrange(num_tree_stumps):\n print '====================================================='\n print 'Adaboost Iteration %d' % t\n print '=====================================================' \n # Learn a weighted decision tree stump. Use max_depth=1\n tree_stump = weighted_decision_tree_create(data, features, target, data_weights=alpha, max_depth=1)\n tree_stumps.append(tree_stump)\n \n # Make predictions\n predictions = data.apply(lambda x: classify(tree_stump, x))\n \n # Produce a Boolean array indicating whether\n # each data point was correctly classified\n is_correct = predictions == target_values\n is_wrong = predictions != target_values\n \n # Compute weighted error\n # YOUR CODE HERE\n weighted_error = sum(alpha * is_wrong) / sum(alpha)\n \n # Compute model coefficient using weighted error\n # YOUR CODE HERE\n weight = .5 * log((1 - weighted_error) / weighted_error)\n weights.append(weight)\n \n # Adjust weights on data point\n adjustment = is_correct.apply(lambda is_correct : exp(-weight) if is_correct else exp(weight))\n \n # Scale alpha by multiplying by adjustment \n # Then normalize data points weights\n ## YOUR CODE HERE \n alpha *= adjustment\n alpha /= sum(alpha)\n \n return weights, tree_stumps",
"Checking your Adaboost code\nTrain an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters:\n* train_data\n* features\n* target\n* num_tree_stumps = 2",
"stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, target, num_tree_stumps=2)\n\ndef print_stump(tree):\n split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'\n if split_name is None:\n print \"(leaf, label: %s)\" % tree['prediction']\n return None\n split_feature, split_value = split_name.split('.')\n print ' root'\n print ' |---------------|----------------|'\n print ' | |'\n print ' | |'\n print ' | |'\n print ' [{0} == 0]{1}[{0} == 1] '.format(split_name, ' '*(27-len(split_name)))\n print ' | |'\n print ' | |'\n print ' | |'\n print ' (%s) (%s)' \\\n % (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),\n ('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))",
"Here is what the first stump looks like:",
"print_stump(tree_stumps[0])",
"Here is what the next stump looks like:",
"print_stump(tree_stumps[1])\n\nprint stump_weights",
"If your Adaboost is correctly implemented, the following things should be true:\n\ntree_stumps[0] should split on term. 36 months with the prediction -1 on the left and +1 on the right.\ntree_stumps[1] should split on grade.A with the prediction -1 on the left and +1 on the right.\nWeights should be approximately [0.158, 0.177] \n\nReminders\n- Stump weights ($\\mathbf{\\hat{w}}$) and data point weights ($\\mathbf{\\alpha}$) are two different concepts.\n- Stump weights ($\\mathbf{\\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.\n- Data point weights ($\\mathbf{\\alpha}$) tell you how important each data point is while training a decision stump.\nTraining a boosted ensemble of 10 stumps\nLet us train an ensemble of 10 decision tree stumps with Adaboost. We run the adaboost_with_tree_stumps function with the following parameters:\n* train_data\n* features\n* target\n* num_tree_stumps = 10",
"stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, \n target, num_tree_stumps=10)",
"Making predictions\nRecall from the lecture that in order to make predictions, we use the following formula:\n$$\n\\hat{y} = sign\\left(\\sum_{t=1}^T \\hat{w}_t f_t(x)\\right)\n$$\nWe need to do the following things:\n- Compute the predictions $f_t(x)$ using the $t$-th decision tree\n- Compute $\\hat{w}_t f_t(x)$ by multiplying the stump_weights with the predictions $f_t(x)$ from the decision trees\n- Sum the weighted predictions over each stump in the ensemble.\nComplete the following skeleton for making predictions:",
"def predict_adaboost(stump_weights, tree_stumps, data):\n scores = graphlab.SArray([0.]*len(data))\n \n for i, tree_stump in enumerate(tree_stumps):\n predictions = data.apply(lambda x: classify(tree_stump, x))\n \n # Accumulate predictions on scaores array\n # YOUR CODE HERE\n scores += stump_weights[i] * predictions\n \n return scores.apply(lambda score : +1 if score > 0 else -1)\n\npredictions = predict_adaboost(stump_weights, tree_stumps, test_data)\naccuracy = graphlab.evaluation.accuracy(test_data[target], predictions)\nprint 'Accuracy of 10-component ensemble = %s' % accuracy ",
"Now, let us take a quick look what the stump_weights look like at the end of each iteration of the 10-stump ensemble:",
"stump_weights",
"Quiz Question: Are the weights monotonically decreasing, monotonically increasing, or neither?\nReminder: Stump weights ($\\mathbf{\\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.\nPerformance plots\nIn this section, we will try to reproduce some of the performance plots dicussed in the lecture.\nHow does accuracy change with adding stumps to the ensemble?\nWe will now train an ensemble with:\n* train_data\n* features\n* target\n* num_tree_stumps = 30\nOnce we are done with this, we will then do the following:\n* Compute the classification error at the end of each iteration.\n* Plot a curve of classification error vs iteration.\nFirst, lets train the model.",
"# this may take a while... \nstump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, \n features, target, num_tree_stumps=30)",
"Computing training error at the end of each iteration\nNow, we will compute the classification error on the train_data and see how it is reduced as trees are added.",
"error_all = []\nfor n in xrange(1, 31):\n predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], train_data)\n error = 1.0 - graphlab.evaluation.accuracy(train_data[target], predictions)\n error_all.append(error)\n print \"Iteration %s, training error = %s\" % (n, error_all[n-1])",
"Visualizing training error vs number of iterations\nWe have provided you with a simple code snippet that plots classification error with the number of iterations.",
"plt.rcParams['figure.figsize'] = 7, 5\nplt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')\nplt.title('Performance of Adaboost ensemble')\nplt.xlabel('# of iterations')\nplt.ylabel('Classification error')\nplt.legend(loc='best', prop={'size':15})\n\nplt.rcParams.update({'font.size': 16})",
"Quiz Question: Which of the following best describes a general trend in accuracy as we add more and more components? Answer based on the 30 components learned so far.\n\nTraining error goes down monotonically, i.e. the training error reduces with each iteration but never increases.\nTraining error goes down in general, with some ups and downs in the middle.\nTraining error goes up in general, with some ups and downs in the middle.\nTraining error goes down in the beginning, achieves the best error, and then goes up sharply.\nNone of the above\n\nEvaluation on the test data\nPerforming well on the training data is cheating, so lets make sure it works on the test_data as well. Here, we will compute the classification error on the test_data at the end of each iteration.",
"test_error_all = []\nfor n in xrange(1, 31):\n predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], test_data)\n error = 1.0 - graphlab.evaluation.accuracy(test_data[target], predictions)\n test_error_all.append(error)\n print \"Iteration %s, test error = %s\" % (n, test_error_all[n-1])",
"Visualize both the training and test errors\nNow, let us plot the training & test error with the number of iterations.",
"plt.rcParams['figure.figsize'] = 7, 5\nplt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')\nplt.plot(range(1,31), test_error_all, '-', linewidth=4.0, label='Test error')\n\nplt.title('Performance of Adaboost ensemble')\nplt.xlabel('# of iterations')\nplt.ylabel('Classification error')\nplt.rcParams.update({'font.size': 16})\nplt.legend(loc='best', prop={'size':15})\nplt.tight_layout()",
"Quiz Question: From this plot (with 30 trees), is there massive overfitting as the # of iterations increases?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
broundy/udacity
|
nanodegrees/deep_learning_foundations/unit_2/lesson_24_intro_to_recurrent_neural_networks/.ipynb_checkpoints/Anna KaRNNa-checkpoint.ipynb
|
unlicense
|
[
"Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf",
"First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.",
"with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)",
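The encode/decode dictionaries above are easy to sanity-check with a round trip. A minimal sketch whose variable names mirror the cell above; the sample string is invented, standing in for the file contents:

```python
import numpy as np

text = "anna karenina"  # stand-in for the loaded file contents
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

# Decoding the integer array recovers the original text exactly.
decoded = ''.join(int_to_vocab[i] for i in chars)
print(decoded == text)  # True
```

Because int_to_vocab is built from the same enumeration as vocab_to_int, the two mappings are exact inverses, so no information is lost in the encoding.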
"Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.",
"text[:100]",
"And we can see the characters encoded as integers.",
"chars[:100]",
"Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.",
"np.max(chars)+1",
"Making training and validation batches\nNow I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.",
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Number of sequences in each batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the first split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y = x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y",
"Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.",
"train_x, train_y, val_x, val_y = split_data(chars, 10, 50)\n\ntrain_x.shape",
"Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:",
"train_x[:,:50]",
"I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.",
"def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]",
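The sliding-window behaviour of get_batch can be checked on toy arrays outside the notebook. A minimal sketch, using the same generator as the cell above; the toy data here is invented purely for illustration:

```python
import numpy as np

def get_batch(arrs, num_steps):
    # Each yielded batch is a window of num_steps columns from every array.
    batch_size, slice_size = arrs[0].shape
    n_batches = int(slice_size / num_steps)
    for b in range(n_batches):
        yield [x[:, b * num_steps: (b + 1) * num_steps] for x in arrs]

# Toy data: batch_size=2 rows, 6 columns, windows of 3 steps each.
x = np.arange(12).reshape(2, 6)
y = x + 1  # targets shifted by one, as in split_data
batches = list(get_batch([x, y], num_steps=3))
print(len(batches))   # 2 windows
print(batches[0][0])  # first window of x: columns 0..2 of each row
```

Each successive batch picks up exactly where the previous window ended, which is what lets the LSTM state carry over between batches during training.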
"Building the model\nBelow is a function where I build the graph for the network.",
"def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n # When we're using this network for sampling later, we'll be passing in\n # one character at a time, so providing an option for that\n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n \n # Keep probability placeholder for drop out layers\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # One-hot encoding the input and target characters\n x_one_hot = tf.one_hot(inputs, num_classes)\n y_one_hot = tf.one_hot(targets, num_classes)\n\n ### Build the RNN layers\n # Use a basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n ### Run the data through the RNN layers\n # This makes a list where each element is one step in the sequence\n rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]\n \n # Run each sequence step through the RNN and collect the outputs\n outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)\n final_state = state\n \n # Reshape output so it's a bunch of rows, one output row for each step for each batch\n seq_output = tf.concat(outputs, axis=1)\n output = tf.reshape(seq_output, [-1, lstm_size])\n \n # Now connect the RNN outputs to a softmax layer\n with tf.variable_scope('softmax'):\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))\n softmax_b = 
tf.Variable(tf.zeros(num_classes))\n \n # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch\n # of rows of logit outputs, one for each step and batch\n logits = tf.matmul(output, softmax_w) + softmax_b\n \n # Use softmax to get the probabilities for predicted characters\n preds = tf.nn.softmax(logits, name='predictions')\n \n # Reshape the targets to match the logits\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)\n cost = tf.reduce_mean(loss)\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n # Export the nodes\n # NOTE: I'm using a namedtuple here because I think they are cool\n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph",
"Hyperparameters\nHere I'm defining the hyperparameters for the network. \n\nbatch_size - Number of sequences running through the network in one pass.\nnum_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.\nlstm_size - The number of units in the hidden layers.\nnum_layers - Number of hidden LSTM layers to use\nlearning_rate - Learning rate for training\nkeep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.\n\nHere's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.\n\nTips and Tricks\nMonitoring Validation Loss vs. Training Loss\nIf you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n\nIf your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\nIf your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n\nApproximate number of parameters\nThe two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. 
The two important quantities to keep track of here are:\n\nThe number of parameters in your model. This is printed when you start training.\nThe size of your dataset. 1MB file is approximately 1 million characters.\n\nThese two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n\nI have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.\nI have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n\nBest models strategy\nThe winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\nIt is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\nBy the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.",
"batch_size = 100\nnum_steps = 100 \nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001\nkeep_prob = 0.5",
"Training\nTime for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.\nHere I'm saving checkpoints with the format\ni{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt",
"epochs = 20\n# Save every N iterations\nsave_every_n = 200\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/______.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: keep_prob,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n saver.save(sess, \"checkpoints/i{}_l{}_v{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))",
"Saved checkpoints\nRead up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables",
"tf.train.get_checkpoint_state('checkpoints')",
"Sampling\nNow that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.",
"def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)",
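The top-N filtering in pick_top_n can be exercised on its own: zero out all but the top_n probabilities, renormalise, and draw. A minimal sketch with an invented six-class distribution (the function body mirrors the cell above, with a defensive copy added so the input array is not mutated):

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    # Keep only the top_n most likely classes, renormalise, then sample.
    p = np.squeeze(preds).copy()
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    return np.random.choice(vocab_size, 1, p=p)[0]

# Invented distribution: classes 2 and 3 are the two most likely,
# so with top_n=2 only those two should ever be drawn.
preds = np.array([[0.05, 0.1, 0.4, 0.3, 0.1, 0.05]])
draws = {pick_top_n(preds, 6, top_n=2) for _ in range(200)}
print(draws)  # some subset of {2, 3}
```

This is why sampled text looks less noisy than drawing from the full softmax: low-probability characters are excluded entirely rather than merely being unlikely.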
"Here, pass in the path to a checkpoint and sample from the network.",
"checkpoint = \"checkpoints/____.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
altimesh/hybridizer-basic-samples
|
Jupyter/Labs/02_VectorAdd/HYB_CUDA_CSHARP.ipynb
|
mit
|
[
"<div align=\"center\"><h1>Vector Add on GPU</h1></div>\n\n\nVector Add\nIn the world of computing, the addition of two vectors is the standard \"Hello World\". \n\nGiven two sets of scalar data, such as the image above, we want to compute the sum, element by element. \nWe start by implementing the algorithm in plain C#. \nEdit the file 01-vector-add.cs and implement this algorithm in plain C# until it displays OK\nIf you get stuck, you can refer to the solution.",
"!hybridizer-cuda ./01-vector-add/01-vector-add.cs -o ./01-vector-add/vectoradd.exe -run",
"Introduce Parallelism\nAs we can see in the solution, a plain scalar iterative approach only uses one thread, while modern CPUs typically have 4 cores and 8 threads. \nFortunately, .Net and C# provide an intuitive construct to leverage parallelism: Parallel.For. \nModify 01-vector-add.cs to distribute the work among multiple threads. \nIf you get stuck, you can refer to the solution.",
"!hybridizer-cuda ./01-vector-add/01-vector-add.cs -o ./01-vector-add/parallel-vectoradd.exe -run",
"Run Code on the GPU\nUsing Hybridizer to run the above code on a GPU is quite straightforward. We need to\n- Decorate methods we want to run on the GPU\nThis is done by adding the [EntryPoint] attribute on methods of interest. \n- \"Wrap\" the current object into a dynamic object able to dispatch code on the GPU\nThis is done by the following boilerplate code:\ncsharp\ndynamic wrapped = HybRunner.Cuda().Wrap(new Program());\nwrapped.mymethod(...)\nThe wrapped object has the same method signatures (static or instance) as the current object, but dispatches calls to the GPU.\nModify the 02-vector-add.cs so the Add method runs on a GPU. \nIf you get stuck, you can refer to the solution.",
"!hybridizer-cuda ./02-gpu-vector-add/02-gpu-vector-add.cs -o ./02-gpu-vector-add/gpu-vectoradd.exe -run"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
javoweb/deep-learning
|
intro-to-rnns/RNN Albert Camus.ipynb
|
mit
|
[
"Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf",
"First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.",
"with open('chalo.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)",
"Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.",
"text[:100]",
"And we can see the characters encoded as integers.",
"chars[:100]",
"Making training and validation batches\nNow I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.",
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Number of sequences in each batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the first split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y = x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y",
"Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.",
"train_x, train_y, val_x, val_y = split_data(chars, 10, 50)\n\ntrain_x.shape",
"Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:",
"train_x[:,:50]",
"I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.",
"def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]",
"Building the model\nBelow is a function where I build the graph for the network.",
"def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n # When we're using this network for sampling later, we'll be passing in\n # one character at a time, so providing an option for that\n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n \n # Keep probability placeholder for drop out layers\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # One-hot encoding the input and target characters\n x_one_hot = tf.one_hot(inputs, num_classes)\n y_one_hot = tf.one_hot(targets, num_classes)\n\n ### Build the RNN layers\n # Use a basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n ### Run the data through the RNN layers\n # This makes a list where each element is one step in the sequence\n rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]\n \n # Run each sequence step through the RNN and collect the outputs\n outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)\n final_state = state\n \n # Reshape output so it's a bunch of rows, one output row for each step for each batch\n seq_output = tf.concat(outputs, axis=1)\n output = tf.reshape(seq_output, [-1, lstm_size])\n \n # Now connect the RNN outputs to a softmax layer\n with tf.variable_scope('softmax'):\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))\n softmax_b = 
tf.Variable(tf.zeros(num_classes))\n \n # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch\n # of rows of logit outputs, one for each step and batch\n logits = tf.matmul(output, softmax_w) + softmax_b\n \n # Use softmax to get the probabilities for predicted characters\n preds = tf.nn.softmax(logits, name='predictions')\n \n # Reshape the targets to match the logits\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)\n cost = tf.reduce_mean(loss)\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n # Export the nodes\n # NOTE: I'm using a namedtuple here because I think they are cool\n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph",
"Hyperparameters\nHere I'm defining the hyperparameters for the network. \n\nbatch_size - Number of sequences running through the network in one pass.\nnum_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.\nlstm_size - The number of units in the hidden layers.\nnum_layers - Number of hidden LSTM layers to use\nlearning_rate - Learning rate for training\nkeep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.\n\nHere's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.\n\nTips and Tricks\nMonitoring Validation Loss vs. Training Loss\nIf you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n\nIf your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\nIf your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n\nApproximate number of parameters\nThe two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. 
The two important quantities to keep track of here are:\n\nThe number of parameters in your model. This is printed when you start training.\nThe size of your dataset. 1MB file is approximately 1 million characters.\n\nThese two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n\nI have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.\nI have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n\nBest models strategy\nThe winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\nIt is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\nBy the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.",
"batch_size = 100\nnum_steps = 100 \nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001\nkeep_prob = 0.3",
"Training\nTime for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.\nHere I'm saving checkpoints with the format\ni{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt",
"epochs = 300\n# Save every N iterations\nsave_every_n = 100\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/______.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: keep_prob,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n saver.save(sess, \"checkpoints/i{}_l{}_v{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))",
"Saved checkpoints\nRead up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables",
"tf.train.get_checkpoint_state('checkpoints')",
"Sampling\nNow that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.",
"def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)",
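The top-N filtering inside pick_top_n is easy to sanity-check in isolation: zero out everything but the N largest probabilities, then renormalize so the survivors sum to 1. A minimal sketch with a made-up distribution:

```python
import numpy as np

def pick_top_n_probs(preds, top_n=5):
    """Return the renormalized distribution that pick_top_n samples from:
    all but the top_n probabilities are zeroed, and the rest are rescaled
    so they sum to 1."""
    p = np.squeeze(preds).astype(float).copy()
    p[np.argsort(p)[:-top_n]] = 0  # zero everything outside the top N
    return p / np.sum(p)

# Toy distribution over a 6-character vocabulary
preds = np.array([0.05, 0.30, 0.10, 0.25, 0.20, 0.10])
p = pick_top_n_probs(preds, top_n=3)

print(np.flatnonzero(p))  # indices of the 3 most likely characters
print(p.sum())            # sums to 1 (up to float rounding)
```

Restricting sampling this way trades a little diversity for fewer obviously wrong characters; top_n=1 would reduce it to greedy decoding.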
"Here, pass in the path to a checkpoint and sample from the network.",
"checkpoint = \"checkpoints/i3000_l512_v2.497.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Cuando en\")\nprint(samp)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mrustl/flopy
|
examples/Notebooks/flopy3_sfrpackage_example.ipynb
|
bsd-3-clause
|
[
"SFR package example\nDemonstrates functionality of Flopy SFR module using the example documented by Prudic and others (2004): \nProblem description:\n\nGrid dimensions: 1 Layer, 15 Rows, 10 Columns \nStress periods: 1 steady \nFlow package: LPF \nStress packages: SFR, GHB, EVT, RCH \nSolver: SIP \n\n<img src=\"./img/Prudic2004_fig6.png\" width=\"400\" height=\"500\"/>",
"import sys\nimport platform\nimport os\nimport numpy as np\nimport glob\nimport shutil\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport flopy\nimport flopy.utils.binaryfile as bf\n\n#Set name of MODFLOW exe\n# assumes executable is in users path statement\nexe_name = 'mf2005'\nif platform.system() == 'Windows':\n exe_name = 'mf2005.exe'\n\n% matplotlib inline\n\nmpl.rcParams['figure.figsize'] = (11, 8.5)",
"copy over the example files to the working directory",
"path = 'data'\ngpth = os.path.join('..', 'data', 'mf2005_test', 'test1ss.*')\nfor f in glob.glob(gpth):\n shutil.copy(f, path)",
"Load example dataset, skipping the SFR package",
"m = flopy.modflow.Modflow.load('test1ss.nam', version='mf2005', exe_name=exe_name, \n model_ws=path, load_only=['ghb', 'evt', 'rch', 'dis', 'bas6', 'oc', 'sip', 'lpf'])",
"Read pre-prepared reach and segment data into numpy recarrays using numpy.genfromtxt()\nReach data (Item 2 in the SFR input instructions) are input and stored in a numpy record array\nhttp://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html\nThis allows for reach data to be indexed by their variable names, as described in the SFR input instructions.\nFor more information on Item 2, see the Online Guide to MODFLOW:\nhttp://water.usgs.gov/nrp/gwsoftware/modflow2000/MFDOC/index.html?sfr.htm",
"rpth = os.path.join('..', 'data', 'sfr_examples', 'test1ss_reach_data.csv')\nreach_data = np.genfromtxt(rpth, delimiter=',', names=True)\nreach_data",
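The field-name indexing that genfromtxt(..., names=True) provides can be seen on a tiny in-memory CSV (the column names here are hypothetical, not the actual SFR reach columns):

```python
import io
import numpy as np

# A made-up two-column CSV standing in for the reach-data file
csv = io.StringIO("krch,rchlen\n1,100.0\n1,250.0\n1,180.0\n")

# names=True reads the header row and builds a structured array,
# so columns can be pulled out by name instead of by position
data = np.genfromtxt(csv, delimiter=',', names=True)

print(data.dtype.names)      # ('krch', 'rchlen')
print(data['rchlen'].sum())  # 530.0
```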
"Segment Data structure\nSegment data are input and stored in a dictionary of record arrays keyed by stress period, which allows the segment data to vary over time.",
"spth = os.path.join('..', 'data', 'sfr_examples', 'test1ss_segment_data.csv')\nss_segment_data = np.genfromtxt(spth, delimiter=',', names=True)\nsegment_data = {0: ss_segment_data}\nsegment_data[0][0:1]['width1']",
"define dataset 6e (channel flow data) for segment 1\ndataset 6e is stored in a nested dictionary keyed by stress period and segment,\nwith a list of the following lists defined for each segment with icalc == 4\nFLOWTAB(1) FLOWTAB(2) ... FLOWTAB(NSTRPTS)\nDPTHTAB(1) DPTHTAB(2) ... DPTHTAB(NSTRPTS)\nWDTHTAB(1) WDTHTAB(2) ... WDTHTAB(NSTRPTS)",
"channel_flow_data = {0: {1: [[0.5, 1.0, 2.0, 4.0, 7.0, 10.0, 20.0, 30.0, 50.0, 75.0, 100.0],\n [0.25, 0.4, 0.55, 0.7, 0.8, 0.9, 1.1, 1.25, 1.4, 1.7, 2.6],\n [3.0, 3.5, 4.2, 5.3, 7.0, 8.5, 12.0, 14.0, 17.0, 20.0, 22.0]]}}",
"define dataset 6d (channel geometry data) for segments 7 and 8\ndataset 6d is stored in a nested dictionary keyed by stress period and segment,\nwith a list of the following lists defined for each segment with icalc == 2\nXCPT(1) XCPT(2) ... XCPT(8)\nZCPT(1) ZCPT(2) ... ZCPT(8)",
"channel_geometry_data = {0: {7: [[0.0, 10.0, 80.0, 100.0, 150.0, 170.0, 240.0, 250.0],\n [20.0, 13.0, 10.0, 2.0, 0.0, 10.0, 13.0, 20.0]],\n 8: [[0.0, 10.0, 80.0, 100.0, 150.0, 170.0, 240.0, 250.0],\n [25.0, 17.0, 13.0, 4.0, 0.0, 10.0, 16.0, 20.0]]}}",
"Define SFR package variables",
"nstrm = len(reach_data) # number of reaches\nnss = len(segment_data[0]) # number of segments\nnsfrpar = 0 # number of parameters (not supported)\nnparseg = 0\nconst = 1.486 # constant for manning's equation, units of cfs\ndleak = 0.0001 # closure tolerance for stream stage computation\nistcb1 = 53 # flag for writing SFR output to cell-by-cell budget (on unit 53)\nistcb2 = 81 # flag for writing SFR output to text file\ndataset_5 = {0: [nss, 0, 0]} # dataset 5 (see online guide)",
"Instantiate SFR package\nInput arguments generally follow the variable names defined in the Online Guide to MODFLOW",
"sfr = flopy.modflow.ModflowSfr2(m, nstrm=nstrm, nss=nss, const=const, dleak=dleak, istcb1=istcb1, istcb2=istcb2, \n reach_data=reach_data,\n segment_data=segment_data,\n channel_geometry_data=channel_geometry_data,\n channel_flow_data=channel_flow_data,\n dataset_5=dataset_5)\n\nsfr.reach_data[0:1]",
"Plot the SFR segments\nany column in the reach_data array can be plotted using the key argument",
"sfr.plot(key='iseg');",
"Check the SFR dataset for errors",
"chk = sfr.check()\n\nm.external_fnames = [os.path.split(f)[1] for f in m.external_fnames]\nm.external_fnames\n\nm.write_input()\n\nm.run_model()",
"Look at results",
"sfr_outfile = os.path.join('..', 'data', 'sfr_examples', 'test1ss.flw')\nnames = [\"layer\", \"row\", \"column\", \"segment\", \"reach\", \"Qin\", \n \"Qaquifer\", \"Qout\", \"Qovr\", \"Qprecip\", \"Qet\", \"stage\", \"depth\", \"width\", \"Cond\", \"gradient\"]",
"Read results into numpy array using genfromtxt",
"sfrresults = np.genfromtxt(sfr_outfile, skip_header=8, names=names, dtype=None)\nsfrresults[0:1]",
"Read results into pandas dataframe\n\nrequires the pandas library",
"import pandas as pd\ndf = pd.read_csv(sfr_outfile, delim_whitespace=True, skiprows=8, names=names, header=None)\ndf",
"Plot streamflow and stream/aquifer interactions for a segment",
"inds = df.segment == 3\nax = df.loc[inds, ['Qin', 'Qaquifer', 'Qout']].plot(x=df.reach[inds])\nax.set_ylabel('Flow, in cubic feet per second')\nax.set_xlabel('SFR reach')",
"Look at stage, model top, and streambed top",
"streambed_top = m.sfr.segment_data[0][m.sfr.segment_data[0].nseg == 3][['elevup', 'elevdn']][0]\nstreambed_top\n\ndf['model_top'] = m.dis.top.array[df.row.values - 1, df.column.values - 1]\nfig, ax = plt.subplots()\nplt.plot([1, 6], list(streambed_top), label='streambed top')\nax = df.loc[inds, ['stage', 'model_top']].plot(ax=ax, x=df.reach[inds])\nax.set_ylabel('Elevation, in feet')\nplt.legend()",
"Get SFR leakage results from cell budget file",
"bpth = os.path.join('data', 'test1ss.cbc')\ncbbobj = bf.CellBudgetFile(bpth)\ncbbobj.list_records()\n\nsfrleak = cbbobj.get_data(text=' STREAM LEAKAGE')[0]\nsfrleak[sfrleak == 0] = np.nan # remove zero values",
"Plot leakage in plan view",
"im = plt.imshow(sfrleak[0], interpolation='none', cmap='coolwarm', vmin = -3, vmax=3)\ncb = plt.colorbar(im, label='SFR Leakage, in cubic feet per second');",
"Plot total streamflow",
"sfrQ = sfrleak[0].copy()\nsfrQ[sfrQ == 0] = np.nan\nsfrQ[df.row.values-1, df.column.values-1] = df[['Qin', 'Qout']].mean(axis=1).values\nim = plt.imshow(sfrQ, interpolation='none')\nplt.colorbar(im, label='Streamflow, in cubic feet per second');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NEONScience/NEON-Data-Skills
|
tutorials/Python/Hyperspectral/indices/NEON_AOP_Hyperspectral_Functions_Tiles_py/NEON_AOP_Hyperspectral_Functions_Tiles_py.ipynb
|
agpl-3.0
|
[
"syncID: e046a83d83f2042d8b40dea1b20fd6779\ntitle: \"Band Stacking, RGB & False Color Images, and Interactive Widgets in Python - Tiled Data\"\ndescription: \"Learn to efficiently work with tiled NEON AOP spectral data using functions.\"\ndateCreated: 2017-06-19 \nauthors: Bridget Hass\ncontributors: Donal O'Leary\nestimatedTime: 0.5 hours\npackagesLibraries: numpy, matplotlib, h5py, os, osr, copy\ntopics: hyperspectral-remote-sensing, HDF5, remote-sensing\nlanguagesTool: python\ndataProduct: NEON.DP3.30006, NEON.DP3.30008\ncode1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/indices/NEON_AOP_Hyperspectral_Functions_Tiles_py/NEON_AOP_Hyperspectral_Functions_Tiles_py.ipynb\ntutorialSeries: intro-hsi-tiles-py-series\nurlTitle: neon-hsi-aop-functions-tiles-python\n\nIn this tutorial, we learn how to efficiently read in hdf5 data using h5py, \napply the no-data value and scale factor, and plot a single band of a \nreflectance data tile using functions built for NEON AOP data. We will \nintroduce the function aop_h5refl2array, plot different combinations of \nbands, and demonstrate how to create IPython widgets for more interactive \ndata visualization. \n<div id=\"ds-ojectives\" markdown=\"1\">\n\n### Objectives\nAfter completing this tutorial, you will be able to:\n\n* Upload a Python module\n* Efficiently work with NEON hyperspectral data using functions, including: \n + Read in tiled NEON AOP reflectance hdf5 data and associated metadata\n + Stack and plot 3-band combinations (e.g. 
RGB, Color Infrared, False Color Images)\n* Use IPython widgets to explore RGB band combinations interactively \n* Understand how to write and use functions and loops to automate repeated processes\n\n### Install Python Packages\n\n* **numpy**\n* **pandas**\n* **gdal** \n* **matplotlib** \n* **h5py**\n\n\n### Download Data\n\nTo complete this tutorial, you will use data available from the NEON 2017 Data\nInstitute.\n\nThis tutorial uses the following files:\n\n<ul>\n <li> <a href=\"https://www.neonscience.org/sites/default/files/neon_aop_spectral_python_functions_tiled_data.zip\">neon_aop_spectral_python_functions_tiled_data.zip (10 KB)</a> <- Click to Download</li>\n <li><a href=\"https://ndownloader.figshare.com/files/25752665\" target=\"_blank\">NEON_D02_SERC_DP3_368000_4306000_reflectance.h5 (618 MB)</a> <- Click to Download</li>\n</ul>\n\n<a href=\"https://ndownloader.figshare.com/files/25752665\" class=\"link--button link--arrow\">\nDownload Dataset</a>\n\nThe LiDAR and imagery data used to create this raster teaching data subset \nwere collected over the \n<a href=\"http://www.neonscience.org/\" target=\"_blank\"> National Ecological Observatory Network's</a> \n<a href=\"http://www.neonscience.org/science-design/field-sites/\" target=\"_blank\" >field sites</a>\nand processed at NEON headquarters.\nThe entire dataset can be accessed on the \n<a href=\"http://data.neonscience.org\" target=\"_blank\"> NEON data portal</a>.\n\n</div>\n\nWe can combine any three bands from the NEON reflectance data to make an RGB \nimage that will depict different information about the Earth's surface. \nA natural color image, made with bands from the red, green, and blue \nwavelengths, looks close to what we would see with the naked eye. We can also \nchoose band combinations from other wavelengths, and map them to the red, green, \nand blue colors to highlight different features. 
A false color image is \nmade with one or more bands from a non-visible portion of the electromagnetic \nspectrum that are mapped to red, green, and blue colors. These images can \ndisplay other information about the landscape that is not easily seen with a \nnatural color image. \nThe NASA Goddard Media Studio video \"Peeling Back Landsat's Layers of Data\" \ngives a good quick overview of natural and false color band combinations. Note \nthat Landsat collects information from 11 wavelength bands, while NEON AOP \nhyperspectral data collects information from 426 bands!\nPeeling Back Landsat's Layers of Data Video\n<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/YP0et8l_bvY\" frameborder=\"0\" allowfullscreen></iframe>\n\nFurther Reading\n\nCheck out the NASA Earth Observatory article \n<a href=\"https://earthobservatory.nasa.gov/Features/FalseColor/\" target=\"_blank\">How to Interpret a False-Color Satellite Image</a>. \nRead the supporting article for the video above, \n<a href=\"https://svs.gsfc.nasa.gov//vis/a010000/a011400/a011491/index.html\" target=\"_blank\"> Landsat 8 Onion Skin</a>. \n\nLoad Function Module\nBefore we get started, let's set up our plot and warning preferences:",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport h5py, os, osr, copy\n%matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')",
"The first function we will use is aop_h5refl2array. This function is loaded into the cell below; we encourage you to look through the code to understand what it is doing -- most of these steps should look familiar to you from the first lesson. This function can be thought of as a wrapper that automates the steps required to read AOP hdf5 reflectance tiles into a Python format. This function also cleans the data: it sets any no-data values within the reflectance tile to nan (not a number) and applies the reflectance scale factor, so the final array that is returned represents unitless scaled reflectance, with values ranging between 0 and 1 (0-100%).",
"def aop_h5refl2array(refl_filename):\n \"\"\"aop_h5refl2array reads in a NEON AOP reflectance hdf5 file and returns \n 1. reflectance array (with the no data value and reflectance scale factor applied)\n 2. dictionary of metadata including spatial information, and wavelengths of the bands\n --------\n Parameters\n refl_filename -- full or relative path and name of reflectance hdf5 file\n --------\n Returns \n --------\n reflArray:\n array of reflectance values\n metadata:\n dictionary containing the following metadata:\n bad_band_window1 (tuple)\n bad_band_window2 (tuple)\n bands: # of bands (float)\n data ignore value: value corresponding to no data (float)\n epsg: coordinate system code (float)\n map info: coordinate system, datum & ellipsoid, pixel dimensions, and origin coordinates (string)\n reflectance scale factor: factor by which reflectance is scaled (float)\n wavelength: wavelength values (float)\n wavelength unit: 'm' (string)\n --------\n NOTE: This function applies to the NEON hdf5 format implemented in 2016, and should be used for\n data acquired 2016 and after. Data in earlier NEON hdf5 format (collected prior to 2016) is \n expected to be re-processed after the 2018 flight season. 
\n --------\n Example Execution:\n --------\n sercRefl, sercRefl_metadata = h5refl2array('NEON_D02_SERC_DP3_368000_4306000_reflectance.h5') \"\"\"\n \n import h5py\n \n #Read in reflectance hdf5 file \n hdf5_file = h5py.File(refl_filename,'r')\n\n #Get the site name\n file_attrs_string = str(list(hdf5_file.items()))\n file_attrs_string_split = file_attrs_string.split(\"'\")\n sitename = file_attrs_string_split[1]\n \n #Extract the reflectance & wavelength datasets\n refl = hdf5_file[sitename]['Reflectance']\n reflData = refl['Reflectance_Data']\n reflRaw = refl['Reflectance_Data'].value\n \n #Create dictionary containing relevant metadata information\n metadata = {}\n metadata['map info'] = refl['Metadata']['Coordinate_System']['Map_Info'].value\n metadata['wavelength'] = refl['Metadata']['Spectral_Data']['Wavelength'].value\n\n #Extract no data value & scale factor\n metadata['data ignore value'] = float(reflData.attrs['Data_Ignore_Value'])\n metadata['reflectance scale factor'] = float(reflData.attrs['Scale_Factor'])\n #metadata['interleave'] = reflData.attrs['Interleave']\n \n #Apply no data value\n reflClean = reflRaw.astype(float)\n arr_size = reflClean.shape\n if metadata['data ignore value'] in reflRaw:\n print('% No Data: ',np.round(np.count_nonzero(reflClean==metadata['data ignore value'])*100/(arr_size[0]*arr_size[1]*arr_size[2]),1))\n nodata_ind = np.where(reflClean==metadata['data ignore value'])\n reflClean[nodata_ind]=np.nan \n \n #Apply scale factor\n reflArray = reflClean/metadata['reflectance scale factor']\n \n #Extract spatial extent from attributes\n metadata['spatial extent'] = reflData.attrs['Spatial_Extent_meters']\n \n #Extract bad band windows\n metadata['bad band window1'] = (refl.attrs['Band_Window_1_Nanometers'])\n metadata['bad band window2'] = (refl.attrs['Band_Window_2_Nanometers'])\n \n #Extract projection information\n #metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value\n metadata['epsg'] = 
int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)\n \n #Extract map information: spatial extent & resolution (pixel size)\n mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value\n \n hdf5_file.close \n \n return reflArray, metadata",
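The two cleaning steps buried inside aop_h5refl2array — replace the data-ignore value with nan, then divide by the scale factor — can be sketched on a tiny synthetic array (the ignore value and scale factor below are made up; the real ones come from the hdf5 attributes):

```python
import numpy as np

data_ignore_value = -9999.0  # hypothetical no-data flag
scale_factor = 10000.0       # hypothetical reflectance scale factor

# Tiny stand-in for a raw reflectance tile (reflectance x 10000)
reflRaw = np.array([[500.0, 2500.0],
                    [-9999.0, 7500.0]])

reflClean = reflRaw.astype(float)
reflClean[reflClean == data_ignore_value] = np.nan  # apply no-data value
reflArray = reflClean / scale_factor                # unitless 0-1 reflectance

print(reflArray)  # valid pixels become 0.05, 0.25, 0.75; flagged pixel is nan
```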
"If you forget what this function does, or don't want to scroll up to read the docstrings, remember you can use help or ? to display the associated docstrings.",
"help(aop_h5refl2array)\naop_h5refl2array?",
"Now that we have an idea of how this function works, let's try it out. First, define the path where the reflectance data is stored and use os.path.join to create the full path to the data file. Note that if you want to run this notebook later on a different reflectance tile, you just have to change this variable.",
"# Note you will need to update this filepath for your local machine\nserc_h5_tile = ('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5') ",
"Now that we've specified our reflectance tile, we can call aop_h5refl2array to read in the reflectance tile as a Python array called sercRefl, and the associated metadata into a dictionary sercMetadata.",
"sercRefl,sercMetadata = aop_h5refl2array(serc_h5_tile)",
"We can use the shape attribute to see the dimensions of the array we read in. NEON tiles are (1000 x 1000 x # of bands); the number of bands may vary depending on the hyperspectral sensor used, but should be in the vicinity of 426.",
"sercRefl.shape",
"plot_aop_refl: plot a single band\nNext we'll use the function plot_aop_refl to plot a single band of reflectance data. Read the Parameters section of the docstring to understand the required inputs & data type for each of these; only the band and spatial extent are required inputs. The rest are optional inputs that, if specified, allow you to set the range of color values, specify the axis, add a title, colorbar, colorbar title, and change the colormap (the default is to plot in greyscale).",
"def plot_aop_refl(band_array,refl_extent,colorlimit=(0,1),ax=plt.gca(),title='',cbar ='on',cmap_title='',colormap='Greys'):\n \n '''plot_refl_data reads in and plots a single band or 3 stacked bands of a reflectance array\n --------\n Parameters\n --------\n band_array: array of reflectance values, created from aop_h5refl2array\n refl_extent: extent of reflectance data to be plotted (xMin, xMax, yMin, yMax) \n use metadata['spatial extent'] from aop_h5refl2array function\n colorlimit: optional, range of values to plot (min,max). \n - helpful to look at the histogram of reflectance values before plotting to determine colorlimit.\n ax: optional, default = current axis\n title: optional; plot title (string)\n cmap_title: optional; colorbar title \n colormap: optional (string, see https://matplotlib.org/examples/color/colormaps_reference.html) for list of colormaps\n --------\n Returns \n --------\n plots flightline array of single band of reflectance data\n --------\n\n Examples:\n --------\n plot_aop_refl(sercb56,\n sercMetadata['spatial extent'],\n colorlimit=(0,0.3),\n title='SERC Band 56 Reflectance',\n cmap_title='Reflectance',\n colormap='Greys_r') '''\n \n import matplotlib.pyplot as plt\n \n plot = plt.imshow(band_array,extent=refl_extent,clim=colorlimit); \n if cbar == 'on':\n cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap); \n cbar.set_label(cmap_title,rotation=90,labelpad=20)\n plt.title(title); ax = plt.gca(); \n ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation for ticklabels\n rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees",
"Now that we have loaded this function, let's extract a single band from the SERC reflectance array and plot it:",
"sercb56 = sercRefl[:,:,55]\n\nplot_aop_refl(sercb56,\n sercMetadata['spatial extent'],\n colorlimit=(0,0.3),\n title='SERC Band 56 Reflectance',\n cmap_title='Reflectance',\n colormap='Greys_r') ",
"RGB Plots - Band Stacking\nIt is often useful to look at several bands together. We can extract and stack three reflectance bands in the red, green, and blue (RGB) portions of the spectrum to produce a color image that looks like what we see with our eyes; this is your typical camera image. In the next part of this tutorial, we will learn to stack multiple bands and make a geotiff raster from the compilation of these bands. We can see that different combinations of bands allow for different visualizations of the remotely-sensed objects and also convey useful information about the chemical makeup of the Earth's surface. \nWe will select bands that fall within the visible range of the electromagnetic \nspectrum (400-700 nm) and at specific points that correspond to what we see \nas red, green, and blue.\n<figure>\n <a href=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/spectrum_RGBcombined.png\">\n <img src=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/spectrum_RGBcombined.png\"></a>\n <figcaption> NEON Imaging Spectrometer bands and their respective nanometers. Source: National Ecological Observatory Network (NEON) \n </figcaption>\n</figure>\n\nFor this exercise, we'll first use the neon_aop_module function stack_rgb to extract the bands we want to stack. This function uses slicing to extract the nth band from the reflectance array, and then uses the numpy function stack to create a new 3D array (1000 x 1000 x 3) consisting of only the three bands we want.",
"def stack_rgb(reflArray,bands):\n \n import numpy as np\n \n red = reflArray[:,:,bands[0]-1]\n green = reflArray[:,:,bands[1]-1]\n blue = reflArray[:,:,bands[2]-1]\n \n stackedRGB = np.stack((red,green,blue),axis=2)\n \n return stackedRGB",
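np.stack with axis=2 is what turns the three (rows x cols) band slices into a single (rows x cols x 3) image; a quick shape check on a toy array (random data, 1-based band numbers as in the tutorial):

```python
import numpy as np

def stack_rgb(reflArray, bands):
    # bands are 1-based, matching the convention used in the tutorial
    red = reflArray[:, :, bands[0] - 1]
    green = reflArray[:, :, bands[1] - 1]
    blue = reflArray[:, :, bands[2] - 1]
    return np.stack((red, green, blue), axis=2)

refl = np.random.rand(4, 5, 10)  # toy cube: 4 x 5 pixels, 10 bands
rgb = stack_rgb(refl, (8, 4, 2))

print(rgb.shape)  # (4, 5, 3)
```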
"First, we will look at red, green, and blue bands, whose indices are defined below. To confirm that these band indices correspond to wavelengths in the expected portion of the spectrum, we can print out the wavelength values stored in metadata['wavelength']:",
"rgb_bands = (58,34,19)\n\nprint('Band 58 Center Wavelength = %.2f' %(sercMetadata['wavelength'][57]),'nm')\nprint('Band 34 Center Wavelength = %.2f' %(sercMetadata['wavelength'][33]),'nm')\nprint('Band 19 Center Wavelength = %.2f' %(sercMetadata['wavelength'][18]),'nm')",
"Below we use stack_rgb to create an RGB array. Check that the dimensions of this array are as expected.\nData Tip: Checking the shape of arrays with .shape is a good habit to get into when creating your own workflows, and can be a handy tool for troubleshooting.",
"SERCrgb = stack_rgb(sercRefl,rgb_bands)\nSERCrgb.shape",
"plot_aop_refl: plot an RGB band combination\nNext, we can use the function plot_aop_refl, even though we have more than one band. This function only works for a single- or 3-band array, so ensure the array you use has the proper dimensions before using it. You do not need to specify the colorlimits, as matplotlib.pyplot automatically scales 3-band arrays to 8-bit color (256).",
"plot_aop_refl(SERCrgb,\n sercMetadata['spatial extent'],\n title='SERC RGB Image',\n cbar='off') ",
"You'll notice that this image is very dark; it is possible to make out some of the features (roads, buildings), but it is not ideal. Since colorlimits don't apply to 3-band images, we have to use some other image processing tools to enhance the visibility of this image. \nImage Processing -- Contrast Stretch & Histogram Equalization\nWe can also try out some image processing routines to better visualize the reflectance data using the scikit-image package. \nHistogram equalization is a method in image processing of contrast adjustment using the image's histogram. Stretching the histogram can improve the contrast of a displayed image by eliminating very high or low reflectance values that skew the display of the image. \n<figure>\n <a href=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png\">\n <img src=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png\"></a>\n <figcaption> Histogram equalization is a method in image processing of contrast adjustment \nusing the image's histogram. Stretching the histogram can improve the contrast \nof a displayed image, as we will show how to do below.\n Source: <a href=\"https://en.wikipedia.org/wiki/Talk%3AHistogram_equalization#/media/File:Histogrammspreizung.png\"> Wikipedia - Public Domain </a>\n </figcaption>\n</figure>\n\nThe following tutorial section is adapted from scikit-image's tutorial\n<a href=\"http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py\" target=\"_blank\"> Histogram Equalization</a>.\nLet's see what the image looks like with a 5% linear contrast stretch, using the scikit-image exposure module.",
"from skimage import exposure\n\ndef plot_aop_rgb(rgbArray,ext,ls_pct=5,plot_title=''):\n \n from skimage import exposure\n \n pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (ls_pct,100-ls_pct))\n img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow,pHigh))\n plt.imshow(img_rescale,extent=ext)\n plt.title(plot_title + '\\n Linear ' + str(ls_pct) + '% Contrast Stretch'); \n ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #\n rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree\n\nplot_aop_rgb(SERCrgb,\n sercMetadata['spatial extent'],\n plot_title = 'SERC RGB')",
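The linear contrast stretch boils down to clipping at the low/high percentiles and rescaling to [0, 1]; the same operation can be written in plain NumPy without scikit-image (a sketch that ignores nan handling):

```python
import numpy as np

def linear_stretch(arr, ls_pct=5):
    """Clip arr at the ls_pct and (100 - ls_pct) percentiles,
    then rescale the clipped values to the 0-1 range."""
    p_low, p_high = np.percentile(arr, (ls_pct, 100 - ls_pct))
    clipped = np.clip(arr, p_low, p_high)
    return (clipped - p_low) / (p_high - p_low)

img = np.linspace(0, 1, 100).reshape(10, 10) ** 3  # toy skewed "image"
stretched = linear_stretch(img, ls_pct=5)

print(stretched.min(), stretched.max())  # 0.0 1.0
```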
"False Color Image - Color Infrared (CIR)\nWe can also create an image from bands outside of the visible spectrum. An image containing one or more bands outside of the visible range is called a false-color image. Here we'll use the green and blue bands as before, but we replace the red band with a near-infrared (NIR) band. \nFor more information about non-visible wavelengths, false color images, and some frequently used false-color band combinations, refer to <a href=\"https://earthobservatory.nasa.gov/Features/FalseColor/\" target=\"_blank\">NASA's Earth Observatory page</a>.",
"CIRbands = (90,34,19)\nprint('Band 90 Center Wavelength = %.2f' %(sercMetadata['wavelength'][89]),'nm')\nprint('Band 34 Center Wavelength = %.2f' %(sercMetadata['wavelength'][33]),'nm')\nprint('Band 19 Center Wavelength = %.2f' %(sercMetadata['wavelength'][18]),'nm')\n\nSERCcir = stack_rgb(sercRefl,CIRbands)\nplot_aop_rgb(SERCcir,\n sercMetadata['spatial extent'],\n ls_pct=2,\n plot_title='SERC CIR')",
"Demo: Exploring Band Combinations Interactively\nNow that we have made a couple different band combinations, we can demo a Python widget to explore different combinations of bands in the visible and non-visible portions of the spectrum.",
"from ipywidgets import *\n\narray = copy.copy(sercRefl)\nmetadata = copy.copy(sercMetadata)\n\ndef RGBplot_widget(R,G,B):\n \n #Pre-allocate array size\n rgbArray = np.zeros((array.shape[0],array.shape[1],3), 'uint8')\n \n Rband = array[:,:,R-1].astype(float)\n #Rband_clean = clean_band(Rband,Refl_md)\n \n Gband = array[:,:,G-1].astype(float)\n #Gband_clean = clean_band(Gband,Refl_md)\n \n Bband = array[:,:,B-1].astype(float)\n #Bband_clean = clean_band(Bband,Refl_md)\n \n rgbArray[..., 0] = Rband*256\n rgbArray[..., 1] = Gband*256\n rgbArray[..., 2] = Bband*256\n \n # Apply Adaptive Histogram Equalization to Improve Contrast:\n \n img_nonan = np.ma.masked_invalid(rgbArray) #first mask the image \n img_adapteq = exposure.equalize_adapthist(img_nonan, clip_limit=0.10)\n \n plot = plt.imshow(img_adapteq,extent=metadata['spatial extent']); \n plt.title('Bands: \\nR:' + str(R) + ' (' + str(round(metadata['wavelength'][R-1])) +'nm)'\n + '\\n G:' + str(G) + ' (' + str(round(metadata['wavelength'][G-1])) + 'nm)'\n + '\\n B:' + str(B) + ' (' + str(round(metadata['wavelength'][B-1])) + 'nm)'); \n ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') \n rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) \n \ninteract(RGBplot_widget, R=(1,426,1), G=(1,426,1), B=(1,426,1))",
"Demo: Interactive Linear Stretch & Equalization\nHere is another widget to play around with, demonstrating how to interactively visualize linear contrast stretches with a variable percent.",
"rgbArray = copy.copy(SERCrgb)\n\ndef linearStretch(percent):\n pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (percent,100-percent))\n img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow,pHigh))\n plt.imshow(img_rescale,extent=sercMetadata['spatial extent'])\n plt.title('SERC RGB \\n Linear ' + str(percent) + '% Contrast Stretch'); \n ax = plt.gca()\n ax.ticklabel_format(useOffset=False, style='plain') \n rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) \n\ninteract(linearStretch,percent=(0,20,1))",
"References\nKekesi, Alex et al. \n<a href=\"https://svs.gsfc.nasa.gov/vis/a010000/a011400/a011491/\" target=\"_blank\"> \"NASA | Peeling Back Landsat's Layers of Data\". </a>\nhttps://svs.gsfc.nasa.gov/vis/a010000/a011400/a011491/. Published on Feb 24, 2014.\nRiebeek, Holli. \n<a href=\"https://earthobservatory.nasa.gov/Features/FalseColor/\" target=\"_blank\"> \"Why is that Forest Red and that Cloud Blue? How to Interpret a False-Color Satellite Image\" </a> \nhttps://earthobservatory.nasa.gov/Features/FalseColor/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
agile-geoscience/welly
|
tutorial/06_Welly_and_LAS.ipynb
|
apache-2.0
|
[
"Welly and LAS files\nSome preliminaries...",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nimport welly\nwelly.__version__\n\nimport lasio\nlasio.__version__",
"Load a well from LAS\nUse the from_las() method to load a well by passing a filename as a str. \nThis is really just a wrapper for lasio but instantiates a Header, Curves, etc.",
"from welly import Well\n\nw = Well.from_las('data/P-129_out.LAS')",
"Save LAS file\nWe can write out to LAS with a simple command, passing the file name you want:",
"w.to_las('data/out.las')",
"Let's just check we get the same thing out of that file as we put in:",
"w.plot()\n\nz = Well.from_las('data/out.las')\nz.plot()\n\nz.data['CALI'].plot()\n",
"We don't get the striplog back (right hand side), but everything else looks good.\nHeader\nMaybe should be called 'meta' as it's not really a header...",
"w.header\n\nw.header.name\n\nw.uwi",
"What?? OK, we need to load this file more carefully...\nCoping with messy LAS\nSome file headers are a disgrace:\n# LAS format log file from PETREL\n# Project units are specified as depth units\n#==================================================================\n~Version information\nVERS. 2.0:\nWRAP. YES:\n#==================================================================\n~WELL INFORMATION\n#MNEM.UNIT DATA DESCRIPTION\n#---- ------ -------------- -----------------------------\nSTRT .M 1.0668 :START DEPTH \nSTOP .M 1939.13760 :STOP DEPTH \nSTEP .M 0.15240 :STEP \nNULL . -999.25 :NULL VALUE\nCOMP . Elmworth Energy Corporation :COMPANY\nWELL . Kennetcook #2 :WELL\nFLD . Windsor Block :FIELD\nLOC . Lat = 45* 12' 34.237\" N :LOCATION\nPROV . Nova Scotia :PROVINCE\n UWI. Long = 63* 45'24.460 W :UNIQUE WELL ID\nLIC . P-129 :LICENSE NUMBER\nCTRY . CA :COUNTRY (WWW code)\n DATE. 10-Oct-2007 :LOG DATE {DD-MMM-YYYY}\nSRVC . Schlumberger :SERVICE COMPANY\nLATI .DEG :LATITUDE\nLONG .DEG :LONGITUDE\nGDAT . :GeoDetic Datum\nSECT . 45.20 Deg N :Section\nRANG . PD 176 :Range\nTOWN . 63.75 Deg W :Township",
"import welly\nimport re\n\ndef transform_ll(text):\n def callback(match):\n d = match.group(1).strip()\n m = match.group(2).strip()\n s = match.group(3).strip()\n c = match.group(4).strip()\n if c.lower() in ('w', 's') and d[0] != '-':\n d = '-' + d\n return ' '.join([d, m, s])\n pattern = re.compile(r\"\"\".+?([-0-9]+?).? ?([0-9]+?).? ?([\\.0-9]+?).? +?([NESW])\"\"\", re.I)\n text = pattern.sub(callback, text)\n return welly.utils.dms2dd([float(i) for i in text.split()])\n\nprint(transform_ll(\"\"\"Lat = 45* 12' 34.237\" N\"\"\"))\nprint(transform_ll(\"\"\"Long = 63* 45'24.460 W\"\"\"))\n\nremap = {\n 'LATI': 'LOC', # Use LOC for the parameter LATI.\n 'LONG': 'UWI', # Use UWI for the parameter LONG.\n 'SECT': None, # Use nothing for the parameter SECT.\n 'RANG': None, # Use nothing for the parameter RANG.\n 'TOWN': None, # Use nothing for the parameter TOWN.\n}\n\nfuncs = {\n 'LATI': transform_ll, # Pass LATI through this function before loading.\n 'LONG': transform_ll, # Pass LONG through it too.\n 'UWI': lambda x: \"No name, oh no!\"\n}\n\nw = Well.from_las('data/P-129_out.LAS', remap=remap, funcs=funcs)\n\nw.location\n\nw.location.crs # Should be empty.\n\nw.uwi",
"© 2022 Agile Scientific, CC BY"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lguarneros/fimda
|
dinamica-2puentes.ipynb
|
gpl-3.0
|
[
"FIMDA\nScript que realiza el análisis de dinámica para una trayectoria con 2 puentes di sulfuro.\nEl presente script genera los archivos resultantes del análisis de una trayectoria haciendo uso:\n\nGromacs 5 y Gromacs 4\nXmgrace\nVMD\nChimera\ncatdcd\ntrjconv.\n\nSe deberá contar con la trayectoria que contenga el rmsd más bajo para realizar el análisis sobre ella.\n\nLibrerías a utilizar",
"%matplotlib inline\nimport numpy as np\nimport pylab as pl\nimport matplotlib.patches as mpatches\nimport matplotlib.ticker as ticker\nimport os\nimport shutil\nfrom IPython.display import Image\nfrom matplotlib.ticker import FormatStrFormatter",
"Ruta de la trayectoria\n\nEscribir después de la diagonal, la ruta de la trayectoria seleccionada con el rmsd más bajo.",
"ruta=os.getcwd()\nc=input('Nombre de la trayectoria para realizar el análisis... Ejemplo: run001....')\nif os.path.isdir(c):\n indir = '/'+c\n print (indir)\n ruta_old_traj=ruta+indir\n print (ruta)\n print (ruta_old_traj)\nelse:\n print ('La carpetac'+c+' no existe...')\n\n\n#\nruta_scripts=ruta+'/scripts_fimda'\nprint (ruta_scripts)\nif os.path.exists(ruta_scripts): \n print ('Ruta identificada para búsqueda de scripst adicionales ===>',ruta_scripts)\nelse:\n print ('La carpeta de scripst adicionales no existe, copiar en '+ruta_scripts+' ..!!!')",
"Convirtiendo la trayectoria DCD -> XTC\n\nLos siguientes comandos convierten la trayectoria DCD contenida en la carpeta seleccionada a formato de XTC\nCrear la nueva ruta para enviar las trayectorias convertidas",
"#Verificando que exista la nueva carpeta para la conversión de trayectorias\n#nuevaruta = ruta+'/'+indir+'_XTC'\nnuevaruta = ruta+indir+'_Dinamica'\nprint ( nuevaruta )\nif not os.path.exists(nuevaruta): \n os.makedirs(nuevaruta)\n print ('Se ha creado la ruta ===>',nuevaruta)\nelse:\n print (\"La ruta \"+nuevaruta+\" existe..!!!\")\n ",
"Realizando la conversión de la trayectoria",
"\nprint ('Obtenemos los archivos a convertir')\n#Buscamos el archivo DCD, PDB y PSF para realizar las operaciones\nfor filename in os.listdir(ruta_old_traj):\n if filename.endswith('.dcd'):\n dcd_file=filename\n if filename.endswith('.psf'):\n psf_file=filename\n if filename.endswith('.pdb'):\n pdb_file=filename\n\nprint ('pdb file =>', pdb_file)\nprint ('psf file =>', psf_file)\nprint ('dcd file =>', dcd_file)\n\n\nprint ( 'Nos vemos a ....', ruta_old_traj )\nos.chdir( ruta_old_traj )\nprint ('\\nEjecutando CATDCD para convertir la trayectoria....')\noutput_catdcd=!catdcd -otype trr -o output.trr $dcd_file \nprint (output_catdcd.n)\n\nprint ('\\nEjecutando TRJCONV para convertir la trayectoria....')\noutput_trjconv=!trjconv -f output.trr -o output.xtc -timestep 20\n#print (output_trjconv.n)\n\nprint ('\\nBorrando archivos temporales de conversión...')\noutput_rm=!rm output.trr\n\nprint ('\\nMoviendo los archivos de salida al directorio '+nuevaruta)\nsource_file=ruta_old_traj+'/output.xtc'\ndest_file=nuevaruta+'/output.xtc'\nshutil.move(source_file,dest_file)\n\nprint ('\\Copiando el archivo ionized.pdb a '+nuevaruta)\nsource_file=ruta_old_traj+'/ionized.pdb'\ndest_file=nuevaruta+'/ionized.pdb'\nshutil.copy(source_file,dest_file)\n\nprint ('\\nCopiando el archivo ionized.psf a '+nuevaruta)\nsource_file=ruta_old_traj+'/ionized.psf'\ndest_file=nuevaruta+'/ionized.psf'\nshutil.copy(source_file,dest_file)\n\n\nprint('\\nTrayectoria convertida, regresando a '+ruta)\nos.chdir( ruta )",
"Cargando la nueva trayectoria en VMD para su revisión",
"print ('Visualizando la nueva trayectoria')\nfile_psf=nuevaruta+'/'+psf_file\ntraj = nuevaruta+'/output.xtc'\n!vmd $file_psf $traj",
"Calculando el RMSD con Gromacs 5\n\nEl siguiente script obtiene el RMSD de la trayectoria haciendo uso de Gromacs 5\nCreando la carpeta de RMSD",
"### Creando el directorio para el análisis del RMSD\n#Verificando que exista la nueva carpeta para la conversión de trayectorias\n#nuevaruta = ruta+'/'+indir+'_XTC'\nruta_rmsd = nuevaruta+'/rmsd'\nprint ( ruta_rmsd )\nif not os.path.exists(ruta_rmsd): \n os.makedirs(ruta_rmsd)\n print ('Se ha creado la ruta ===>',ruta_rmsd)\nelse:\n print (\"La ruta \"+ruta_rmsd+\" existe..!!!\")\n \nprint ( 'Nos vamos a ....', ruta_rmsd )\nos.chdir( ruta_rmsd )",
"Calculando el RMSD con la opción 3 'C-Alpha'\nSelect group for least squares fit\nGroup 3 ( C-alpha)\nSelect a group: 3\nSelected 3: 'C-alpha'\nSelect group for RMSD calculation\nGroup 3 ( C-alpha)\nSelect a group: 3\nSelected 3: 'C-alpha'",
"\nprint ('Ejecutando el análisis de rmsd...')\n!echo 3 3 | g_rms -f ../output.xtc -s ../ionized.pdb -a avgrp.xvg",
"Creando archivo rmsd.dat para su visualización en XMGRACE\nSe genera el archivo de salida rmsd.dat, éste se deberá visualizar con Xmgrace para guardarlo en formato PNG.",
"#Inicializando vector\nrmsd=[]\n\ntry:\n archivo = open( 'rmsd.xvg' )\nexcept IOError:\n print ('No se pudo abrir el archivo o no existe·..')\n\ni=0\nfor linea in archivo.readlines():\n fila = linea.strip()\n sl = fila.split()\n cadena=sl[0]\n if (not '#' in cadena) and (not '@' in cadena):\n num=float(sl[0])\n #num2=float(sl[1])\n num=num/1000\n rmsd.append(repr(num)+'\\t'+sl[1]+'\\n')\n i=i+1\n\n\n#Escribiendo el archivo RMSD\nf = open('rmsd.dat', 'w')\n#f.write('@ title \"RMSD\" \\n')\nf.write('@ xaxis label \" Time (ns)\" \\n')\nf.write('@ xaxis label char size 1.480000\\n')\nf.write('@ xaxis bar linewidth 3.0\\n')\nf.write('@ xaxis ticklabel char size 1.480000\\n')\nf.write('@ yaxis label \" RMSD (nm)\" \\n')\nf.write('@ yaxis label char size 1.480000\\n')\nf.write('@ yaxis bar linewidth 3.0\\n')\nf.write('@ yaxis ticklabel char size 1.480000\\n')\nf.write('@ s0 line linewidth 1.5\\n')\n\nf.write('@TYPE xy \\n')\n#f.write('@ subtitle \"C-alpha after lsq fit to C-alpha\" \\n')\nf.write(\"\".join(rmsd))\nf.close()\n\n\n#Cargando el archivo para visualizar en xmgrace\n!xmgrace rmsd.dat\n\n\n#Cargando la imagen generada en xmgrace\nImage(filename='rmsd.png')",
"Creando el archivo rmsd_residue.dat para visualizar con xmgrace\nSe crea el archivo rmsd_residue.dat formateado para su visualización en Xmgrace, en donde se deberá guardar como imagen PNG.",
"#Inicializando vector\nrmsd_residue=[]\n\ntry:\n archivo_rmsd = open( 'aver.xvg' )\nexcept IOError:\n print ('No se pudo abrir el archivo o no existe·..')\n\ni=1\nfor linea in archivo_rmsd.readlines():\n fila = linea.strip()\n sl = fila.split()\n cadena=sl[0]\n if (not '#' in cadena) and (not '@' in cadena):\n num=int(sl[0])\n print ('Residuo =>',num+1)\n rmsd_residue.append(repr(num+1)+'\\t'+sl[1]+'\\n')\n i=i+1\n\n#Escribiendo el archivo RMSD_RESIDUE\nf = open('rmsd_residue.dat', 'w')\n#f.write('@ title \"C-alpha\" \\n')\nf.write('@ xaxis label \"Residue\" \\n')\nf.write('@ xaxis label char size 1.480000\\n')\nf.write('@ xaxis bar linewidth 3.0\\n')\nf.write('@ xaxis ticklabel char size 1.480000\\n')\nf.write('@ yaxis label \" RMSD (nm)\" \\n')\nf.write('@ yaxis label char size 1.480000\\n')\nf.write('@ yaxis bar linewidth 3.0\\n')\nf.write('@ yaxis ticklabel char size 1.480000\\n')\nf.write('@ s0 line linewidth 2.5\\n')\nf.write('@ s0 symbol 1\\n')\nf.write('@ s0 symbol size 1.000000\\n')\nf.write('@ s0 symbol color 1\\n')\nf.write('@ s0 symbol pattern 1\\n')\nf.write('@ s0 symbol fill color 2\\n')\nf.write('@ s0 symbol fill pattern 1\\n')\nf.write('@ s0 symbol linewidth 1.0\\n')\nf.write('@TYPE xy \\n')\nf.write(\"\".join(rmsd_residue))\nf.close()\n\n\n \n\n\n!xmgrace rmsd_residue.dat\n\n#Cargando la imagen generada en xmgrace\nImage(filename='rmsd_residue.png')",
"Creando archivo rmsd.dat para su visualización en Matplotlib\nSe genera el gráfico de salida para matplotlib",
"\n\ndata_rmsd=np.loadtxt('rmsd.xvg',comments=['#', '@'])\n\n#Engrosar marco \nfig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\nax = fig.add_subplot(111)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n\n#Formateando los valores de los ejes\nax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\n\n\npl.plot(data_rmsd[:,0]/1000, data_rmsd[:,1], linewidth = 2, markeredgewidth=3, color='black')\npl.xlabel(\"Time (ns)\", fontsize = 40)\npl.ylabel('RMSD (nm)', fontsize = 40)\n#pl.suptitle('RMSD', fontsize=50)\n#pl.title('C-alpha after lsq fit to C-alpha', fontsize=30)\npl.xticks(fontsize=30) \npl.yticks(fontsize=30) \n\n\n",
"Creando archivo rmsd_residue.dat para su visualización en Matplotlib\nSe genera el gráfico de salida para matplotlib",
"data_rmsd_res=np.loadtxt('aver.xvg',comments=['#', '@'])\n\n \n#Engrosar marco \nfig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\nax = fig.add_subplot(111)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n\n#Formateando los valores de los ejes\nax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\n\npl.plot(data_rmsd_res[:,0]+1, data_rmsd_res[:,1], '-o', color='black', markersize=25,\n markerfacecolor='red',markeredgecolor='black',markeredgewidth=3, linewidth = 4, )\npl.xlabel(\"Residue\", fontsize = 40)\npl.ylabel('RMSD (nm)', fontsize = 40)\n#pl.title('C-alpha', fontsize=40)\npl.xticks(fontsize=30) \npl.yticks(fontsize=30) \npl.xlim(0, len(data_rmsd_res[:,1]))",
"RMSF\n\nSe crea una carpeta RMSF para guardar los archivos generados.",
"### Creando el directorio para el análisis del RMSF\n#Verificando que exista la nueva carpeta para la conversión de trayectorias\nruta_rmsf = nuevaruta+'/rmsf'\nprint ( ruta_rmsf )\nif not os.path.exists(ruta_rmsf): \n os.makedirs(ruta_rmsf)\n print ('Se ha creado la ruta ===>',ruta_rmsf)\nelse:\n print (\"La ruta \"+ruta_rmsf+\" existe..!!!\")\n\nprint ( 'Nos vamos a ....', ruta_rmsf )\nos.chdir( ruta_rmsf )",
"Calculando el RMSF con la opción 3 'C-Alpha'",
"\nprint ('Ejecutando el análisis de rmsf...')\n!echo 3 | g_rmsf -f ../output.xtc -s ../ionized.pdb -oq bfac.pdb -o rmsf.xvg -res",
"Creando archivo rmsf.dat para su visualización en XMGRACE\nSe genera el archivo de salida rmsf.dat, éste se deberá visualizar con Xmgrace para guardarlo en formato PNG.",
"#Inicializando vector\nrmsf=[]\nrmsf_x=[]\nrmsf_y=[]\ntry:\n file_rmsf = open( 'rmsf.xvg' )\nexcept IOError:\n print ('No se pudo abrir el archivo o no existe·..')\n\ni=0\nfor linea in file_rmsf.readlines():\n fila = linea.strip()\n sl = fila.split()\n cadena=sl[0]\n if (not '#' in cadena) and (not '@' in cadena):\n print ('Residue =>',cadena)\n rmsf.append(sl[0]+'\\t'+sl[1]+'\\n')\n rmsf_x.append(int(sl[0]))\n rmsf_y.append(float(sl[1]))\n i=i+1\n\nfile_rmsf.close()\n#Escribiendo el archivo RMSD\nf = open('rmsf.dat', 'w')\n#f.write('@ title \"RMSF fluctuation\" \\n')\nf.write('@ xaxis label \" Residue\" \\n')\nf.write('@ xaxis label char size 1.480000\\n')\nf.write('@ xaxis bar linewidth 3.0\\n')\nf.write('@ xaxis ticklabel char size 1.480000\\n')\nf.write('@ yaxis label \"RMSF (nm)\" \\n')\nf.write('@ yaxis label char size 1.480000\\n')\nf.write('@ yaxis bar linewidth 3.0\\n')\nf.write('@ yaxis ticklabel char size 1.480000\\n')\nf.write('@ s0 line linewidth 2.5\\n')\nf.write('@ s0 symbol 1\\n')\nf.write('@ s0 symbol size 1.000000\\n')\nf.write('@ s0 symbol color 1\\n')\nf.write('@ s0 symbol pattern 1\\n')\nf.write('@ s0 symbol fill color 2\\n')\nf.write('@ s0 symbol fill pattern 1\\n')\nf.write('@ s0 symbol linewidth 1.0\\n')\n\nf.write('@TYPE xy \\n')\nf.write(\"\".join(rmsf))\nf.close()\n\n\n!xmgrace rmsf.dat\n\n#Cargando la imagen generada en xmgrace\nImage(filename='rmsf.png')",
"Creando archivo rmsf.dat para su visualización en Matplotlib\nSe genera el gráfico de salida para matplotlib",
"data_rmsf=np.loadtxt('rmsf.xvg',comments=['#', '@'])\n\n \n#Engrosar marco \nfig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\nax = fig.add_subplot(111)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n\n#Formateando los valores de los ejes\nax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\n\npl.plot(data_rmsf[:,0], data_rmsf[:,1], '-o', color='black', markersize=25,\n markerfacecolor='red',markeredgecolor='black',markeredgewidth=3, linewidth = 4, )\npl.xlabel(\"Residue\", fontsize = 40)\npl.ylabel('RMSF (nm)', fontsize = 40)\n#pl.title('RMSF Fluctuation', fontsize=40)\npl.xticks(fontsize=30) \npl.yticks(fontsize=30) \npl.xlim(0, len(data_rmsf[:,1]))\n",
"B-factors\n\nGenerando archivo para visualizarlo con XMGRACE",
"#Inicializando vector\nbfactors=[]\ntry:\n file_bfactor = open( 'bfac.pdb' )\nexcept IOError:\n print ('No se pudo abrir el archivo o no existe·..')\n\ni=0\nfor linea in file_bfactor.readlines():\n fila = linea.strip()\n sl = fila.split()\n if (sl[0]=='ATOM'):\n #print (sl[0])\n idresidue=fila[23:26]\n bfactor=fila[60:66]\n print (idresidue + '\\t'+bfactor)\n bfactors.append(idresidue+'\\t'+bfactor+'\\n')\n #i=i+1\n\n\n#Escribiendo el archivo BFACTOR.dat\nf = open('bfactor.dat', 'w')\n#f.write('@ title \"B-factors\" \\n')\nfoo = 'baz \"\\\\\"'\nf.write('@ xaxis label \" Residue\" \\n')\nf.write('@ xaxis label char size 1.480000\\n')\nf.write('@ xaxis bar linewidth 3.0\\n')\nf.write('@ xaxis ticklabel char size 1.480000\\n')\nf.write('@ yaxis label \"B-factors (' +\"\\\\\"+'cE'+\"\\\\\"+'C)\"\\n')\nf.write('@ yaxis label char size 1.480000\\n')\nf.write('@ yaxis bar linewidth 3.0\\n')\nf.write('@ yaxis ticklabel char size 1.480000\\n')\nf.write('@ s0 line linewidth 2.5\\n')\nf.write('@ s0 symbol 1\\n')\nf.write('@ s0 symbol size 1.000000\\n')\nf.write('@ s0 symbol color 1\\n')\nf.write('@ s0 symbol pattern 1\\n')\nf.write('@ s0 symbol fill color 2\\n')\nf.write('@ s0 symbol fill pattern 1\\n')\nf.write('@ s0 symbol linewidth 1.0\\n')\n\nf.write('@TYPE xy \\n')\nf.write(\"\".join(bfactors))\nf.close()\n\n\n!xmgrace bfactor.dat\n\n#Cargando la imagen generada en xmgrace\nImage(filename='bfactor.png')",
"Generando archivo para visualizar con Matplotlib",
"#Inicializando vector\nbfactors=[]\ntry:\n file_bfactor = open( 'bfac.pdb' )\nexcept IOError:\n print ('No se pudo abrir el archivo o no existe·..')\n\ni=0\nprint ('Residuo' + '\\t'+'bfactor')\nfor linea in file_bfactor.readlines():\n fila = linea.strip()\n sl = fila.split()\n if (sl[0]=='ATOM'):\n #print (sl[0])\n idresidue=fila[23:26]\n bfactor=fila[60:66]\n print (idresidue + '\\t'+bfactor)\n bfactors.append(idresidue+'\\t'+bfactor+'\\n')\n #i=i+1\n\n#Escribiendo el archivo BFACTOR.dat\nf = open('bfactor.dat', 'w')\n\nf.write(\"\".join(bfactors))\nf.close()\n\ndata_bfactor=np.loadtxt('bfactor.dat',comments=['#', '@'])\n#Engrosar marco \nfig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\nax = fig.add_subplot(111)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n\n#Formateando los valores de los ejes\n#ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\n\npl.plot(data_bfactor[:,0], data_bfactor[:,1], '-o', color='black', markersize=25,\n markerfacecolor='red',markeredgecolor='black',markeredgewidth=3, linewidth = 4, )\n\npl.xlabel('Residue', fontsize = 40)\npl.ylabel('B-factors ('+ r'$\\AA$'+')' , fontsize = 40)\n#pl.title('B-Factors', fontsize=40)\npl.xticks(fontsize=30) \npl.yticks(fontsize=30) \npl.xlim(0, len(data_bfactor[:,1]))\n",
"Secondary Structure\n\nSe crea la carpeta para cálculo de la estructura",
"### Creando el directorio para el análisis del RMSF\n#Verificando que exista la nueva carpeta para la conversión de trayectorias\nruta_ss = nuevaruta+'/estructura'\nprint ( ruta_ss )\nif not os.path.exists(ruta_ss): \n os.makedirs(ruta_ss)\n print ('Se ha creado la ruta ===>',ruta_ss)\nelse:\n print (\"La ruta \"+ruta_ss+\" existe..!!!\")\n \nprint ( 'Nos vamos a ....', ruta_ss )\nos.chdir( ruta_ss )",
"Calculando la estructura secundaria\nSe necesita contar con el programa dssp en la ruta /usr/local/bin, el cual se enlaza con Gromacs 5",
"\nprint ('Ejecutando el análisis de esctructura secundaria...') \n!echo 5 | do_dssp -f ../output.xtc -s ../ionized.pdb -o sec_est.xpm -tu ns\n\n\nprint ('\\n Convirtiendo el archivo a ps...')\n!xpm2ps -f sec_est.xpm -by 6 -bx .1 -o est_sec.eps\n\n\n\nprint('\\nConvirtiendo a png...')\n!convert -density 600 est_sec.eps -resize 1024x1024 est_sec.png\n\nprint ('Cargando el archivo...')\nImage(filename='est_sec.png', width=1024)",
"R-GYRATE\n\nSe crea una carpeta rgiro para guardar los archivos generados.",
"### Creando el directorio para el análisis del r-gyro\n#Verificando que exista la nueva carpeta para la conversión de trayectorias\nruta_rgyro = nuevaruta+'/rgyro'\nprint ( ruta_rgyro )\nif not os.path.exists(ruta_rgyro): \n os.makedirs(ruta_rgyro)\n print ('Se ha creado la ruta ===>',ruta_rgyro)\nelse:\n print (\"La ruta \"+ruta_rgyro+\" existe..!!!\")\n\nprint ( 'Nos vamos a ....', ruta_rgyro)\nos.chdir( ruta_rgyro )",
"Calculando el r-gyro con la opción (3) - C-alpha\nSe calcula para los carbonos alfa.",
"\nprint ('Ejecutando el análisis de rgyro...')\n!echo 3 | g_gyrate -f ../output.xtc -s ../ionized.pdb -o gyrate.xvg\n",
"Generando el archivo rgyro.dat para su análisis con XMGRACE",
"#Inicializando vector\nrgyro=[]\ntry:\n file_rmsf = open( 'gyrate.xvg' )\nexcept IOError:\n print ('No se pudo abrir el archivo o no existe·..')\n\ni=0\nfor linea in file_rmsf.readlines():\n fila = linea.strip()\n sl = fila.split()\n cadena=sl[0]\n if (not '#' in cadena) and (not '@' in cadena):\n num=float(sl[0])\n #num2=float(sl[1])\n num=num/1000\n rgyro.append(repr(num)+'\\t'+sl[1]+'\\n')\n i=i+1\n\n\n#Escribiendo el archivo RGYRO.DAT\nf = open('rgyro.dat', 'w')\n#f.write('@ title \"Radius of gyration\" \\n')\nf.write('@ xaxis label \" Time (ns)\" \\n')\nf.write('@ xaxis label char size 1.480000\\n')\nf.write('@ xaxis bar linewidth 3.0\\n')\nf.write('@ xaxis ticklabel char size 1.480000\\n')\nf.write('@ yaxis label \"Rg (nm)\" \\n')\nf.write('@ yaxis label char size 1.480000\\n')\nf.write('@ yaxis bar linewidth 3.0\\n')\nf.write('@ yaxis ticklabel char size 1.480000\\n')\nf.write('@ s0 line linewidth 2.5\\n')\n\n\nf.write('@TYPE xy \\n')\nf.write(\"\".join(rgyro))\nf.close()\n\n\n!xmgrace rgyro.dat\n\n#Cargando la imagen generada en xmgrace\nImage(filename='rgyro.png')",
"Ploteando el archivo gyrate.xvg con matplotlib",
"data_rgyro=np.loadtxt('gyrate.xvg',comments=['#', '@'])\n\n \n#Engrosar marco \nfig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\nax = fig.add_subplot(111)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n\n#Formateando los valores de los ejes\nax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\n\npl.plot(data_rgyro[:,0]/1000, data_rgyro[:,1], linewidth = 2, color='black')\npl.xlabel(\"Time (ns)\", fontsize = 40)\npl.ylabel('Rg (nm)', fontsize = 40)\n#pl.suptitle('Radius of gyration', fontsize=50)\npl.xticks(fontsize=30) \npl.yticks(fontsize=30) ",
"RMSD Helix Alfa\n\nPara realizar este análisis se debe cargar el pdb original de la proteina que se encuentra en la carpeta 01_BUILD.\nCargarlo con VMD y dirigirse al Menú EXTENSIONS -> ANALYSIS -> SEQUENCE VIEWER, en la cual se tomará el rango de átomos del campo Struct (H), el cual se proporcionará de la forma \"resid X1 to X2\" donde X1 es primer átomo de la helix y X2 el último átomo de la helix.",
"### Creando el directorio para el análisis del RMSF\n#Verificando que exista la nueva carpeta para la conversión de trayectorias\nruta_helix = nuevaruta+'/rmsd_helix'\nprint ( ruta_helix )\nif not os.path.exists(ruta_helix): \n os.makedirs(ruta_helix)\n print ('Se ha creado la ruta ===>',ruta_helix)\nelse:\n print (\"La ruta \"+ruta_helix+\" existe..!!!\")\n\nprint ( 'Nos vamos a ....', ruta_helix)\nos.chdir( ruta_helix )",
"Entrada de datos\nPara la entrada se deberá dar con la opción \"resid X to X\".",
"num=input('Número de hélices con las que cuenta la proteína:')\nprint (num)\n\nif (int(num)==1):\n indices_ha1=input('Proporciona el rango de índices de la Hélice 1:')\n print (indices_ha1)\n r_helix_1=1\n r_helix_2=0\n r_helix_3=0\n r_helix_4=0\nif (int(num)==2):\n indices_ha1=input('Proporciona el rango de índices de la Hélice 1:')\n print (indices_ha1)\n indices_ha2=input('Proporciona el rango de índices de la Hélice 2:')\n print (indices_ha2)\n r_helix_1=1\n r_helix_2=1\n r_helix_3=0\n r_helix_4=0\nif (int(num)==3):\n indices_ha1=input('Proporciona el rango de índices de la Hélice 1:')\n print (indices_ha1)\n indices_ha2=input('Proporciona el rango de índices de la Hélice 2:')\n print (indices_ha2)\n indices_ha3=input('Proporciona el rango de índices de la Hélice 3:')\n print (indices_ha3)\n r_helix_1=1\n r_helix_2=1\n r_helix_3=1\n r_helix_4=0\nif (int(num)==4):\n indices_ha1=input('Proporciona el rango de índices de la Hélice 1:')\n print (indices_ha1)\n indices_ha2=input('Proporciona el rango de índices de la Hélice 2:')\n print (indices_ha2)\n indices_ha3=input('Proporciona el rango de índices de la Hélice 3:')\n print (indices_ha3)\n indices_ha4=input('Proporciona el rango de índices de la Hélice 4:')\n print (indices_ha4)\n r_helix_1=1\n r_helix_2=1\n r_helix_3=1\n r_helix_4=1\n\n#Script para vmd de la Hélice Alfa 2\npsf=ruta_old_traj+'/'+psf_file\ndcd=ruta_old_traj+'/'+dcd_file\n\nif (r_helix_1==1):\n f = open('ha1.tcl', 'w')\n print(f)\n f.write('set psfFile '+ psf+' \\n')\n f.write('set dcdFile '+ dcd+' \\n')\n f.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f.write('set outfile ' +'[open ' +'rmsd_ha1.dat'+' w]\\n')\n f.write('set nf [molinfo top get numframes]\\n')\n f.write('\\n#RMSD calculation loop\\n')\n f.write('set f1 [atomselect top \"'+indices_ha1+' \" frame 0]\\n')\n f.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f.write(' set sel [atomselect top \"'+indices_ha1+' \" frame $i]\\n')\n f.write(' $sel move [measure fit $sel 
$f1]\\n')\n f.write(' set time [expr {$i +1}]\\n')\n f.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f.write(' puts $outfile \"$time $time\"\\n')\n f.write('}\\n')\n f.write('close $outfile')\n f.close()\n\nif (r_helix_2==1):\n f = open('ha2.tcl', 'w')\n print(f)\n f.write('set psfFile '+ psf+' \\n')\n f.write('set dcdFile '+ dcd+' \\n')\n f.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f.write('set outfile ' +'[open ' +'rmsd_ha2.dat'+' w]\\n')\n f.write('set nf [molinfo top get numframes]\\n')\n f.write('\\n#RMSD calculation loop\\n')\n f.write('set f1 [atomselect top \"'+indices_ha2+' \" frame 0]\\n')\n f.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f.write(' set sel [atomselect top \"'+indices_ha2+' \" frame $i]\\n')\n f.write(' $sel move [measure fit $sel $f1]\\n')\n f.write(' set time [expr {$i +1}]\\n')\n f.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f.write(' puts $outfile \"$time $time\"\\n')\n f.write('}\\n')\n f.write('close $outfile')\n f.close()\n\nif (r_helix_3==1):\n f = open('ha3.tcl', 'w')\n print(f)\n f.write('set psfFile '+ psf+' \\n')\n f.write('set dcdFile '+ dcd+' \\n')\n f.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f.write('set outfile ' +'[open ' +'rmsd_ha3.dat'+' w]\\n')\n f.write('set nf [molinfo top get numframes]\\n')\n f.write('\\n#RMSD calculation loop\\n')\n f.write('set f1 [atomselect top \"'+indices_ha3+' \" frame 0]\\n')\n f.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f.write(' set sel [atomselect top \"'+indices_ha3+' \" frame $i]\\n')\n f.write(' $sel move [measure fit $sel $f1]\\n')\n f.write(' set time [expr {$i +1}]\\n')\n f.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f.write(' puts $outfile \"$time $time\"\\n')\n f.write('}\\n')\n f.write('close $outfile')\n f.close()\n\nif (r_helix_4==1):\n f = open('ha4.tcl', 'w')\n print(f)\n f.write('set psfFile '+ psf+' \\n')\n f.write('set dcdFile '+ dcd+' \\n')\n f.write('\\nmol load psf 
$psfFile dcd $dcdFile\\n')\n f.write('set outfile ' +'[open ' +'rmsd_ha4.dat'+' w]\\n')\n f.write('set nf [molinfo top get numframes]\\n')\n f.write('\\n#RMSD calculation loop\\n')\n f.write('set f1 [atomselect top \"'+indices_ha4+' \" frame 0]\\n')\n f.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f.write(' set sel [atomselect top \"'+indices_ha4+' \" frame $i]\\n')\n f.write(' $sel move [measure fit $sel $f1]\\n')\n f.write(' set time [expr {$i +1}]\\n')\n f.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f.write(' puts $outfile \"$time $time\"\\n')\n f.write('}\\n')\n f.write('close $outfile')\n f.close()\n\n\n\nif (r_helix_1==1):\n #Calculando con VMD hélice 1\n !vmd -dispdev text < ha1.tcl\nif (r_helix_2==1):\n #Calculando con VMD hélice 2\n !vmd -dispdev text < ha2.tcl\nif (r_helix_3==1):\n #Calculando con VMD hélice 3\n !vmd -dispdev text < ha3.tcl\nif (r_helix_4==1):\n #Calculando con VMD hélice 4\n !vmd -dispdev text < ha4.tcl\n\nif (int(num)==1):\n #Graficando\n data_ha1=np.loadtxt('rmsd_ha1.dat',comments=['#', '@'])\n #Engrosar marco \n fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\n ax = fig.add_subplot(111)\n for axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n #Formateando los valores de los ejes\n ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))\n #pl.plot(data_ha1[:,0], data_ha1[:,1], linewidth = 3)\n pl.plot(data_ha1[:,1]*0.02, data_ha1[:,0]/10, linewidth = 3, color='black')\n pl.xlabel(\"Time (ns)\", fontsize = 40)\n pl.ylabel('RMSD (nm)', fontsize = 40)\n #pl.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n #pl.title('RMSD Helix Alfa', fontsize=50)\n pl.xticks(fontsize=30) \n pl.yticks(fontsize=30) \n \n\n\nif (int(num)==2):\n #Graficando\n data_ha1=np.loadtxt('rmsd_ha1.dat',comments=['#', '@'])\n data_ha2=np.loadtxt('rmsd_ha2.dat',comments=['#', '@'])\n #Engrosar marco \n fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\n ax = fig.add_subplot(111)\n for 
axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n #Formateando los valores de los ejes\n ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))\n #pl.plot(data_ha1[:,0], data_ha1[:,1], linewidth = 3)\n pl.plot(data_ha1[:,1]*0.02, data_ha1[:,0]/10, linewidth = 3, color='black')\n pl.plot(data_ha2[:,1]*0.02, data_ha2[:,0]/10, linewidth = 3, color='red')\n pl.xlabel(\"Time (ns)\", fontsize = 40)\n pl.ylabel('RMSD (nm)', fontsize = 40)\n #pl.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n #pl.title('RMSD Helix Alfa', fontsize=50)\n pl.xticks(fontsize=30) \n pl.yticks(fontsize=30) \n\nif (int(num)==3):\n #Graficando\n data_ha1=np.loadtxt('rmsd_ha1.dat',comments=['#', '@'])\n data_ha2=np.loadtxt('rmsd_ha2.dat',comments=['#', '@'])\n data_ha3=np.loadtxt('rmsd_ha3.dat',comments=['#', '@'])\n #Engrosar marco \n fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\n ax = fig.add_subplot(111)\n for axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n #Formateando los valores de los ejes\n ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))\n #pl.plot(data_ha1[:,0], data_ha1[:,1], linewidth = 3)\n pl.plot(data_ha1[:,1]*0.02, data_ha1[:,0]/10, linewidth = 3, color='black')\n pl.plot(data_ha2[:,1]*0.02, data_ha2[:,0]/10, linewidth = 3, color='red')\n pl.plot(data_ha3[:,1]*0.02, data_ha3[:,0]/10, linewidth = 3, color='green')\n pl.xlabel(\"Time (ns)\", fontsize = 40)\n pl.ylabel('RMSD (nm)', fontsize = 40)\n #pl.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n #pl.title('RMSD Helix Alfa', fontsize=50)\n pl.xticks(fontsize=30) \n pl.yticks(fontsize=30)\n\nif (int(num)==4):\n #Graficando\n data_ha1=np.loadtxt('rmsd_ha1.dat',comments=['#', '@'])\n data_ha2=np.loadtxt('rmsd_ha2.dat',comments=['#', '@'])\n data_ha3=np.loadtxt('rmsd_ha3.dat',comments=['#', '@'])\n data_ha4=np.loadtxt('rmsd_ha4.dat',comments=['#', '@'])\n #Engrosar marco \n fig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\n ax = 
fig.add_subplot(111)\n for axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n #Formateando los valores de los ejes\n ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))\n #pl.plot(data_ha1[:,0], data_ha1[:,1], linewidth = 3)\n pl.plot(data_ha1[:,1]*0.02, data_ha1[:,0]/10, linewidth = 3, color='black')\n pl.plot(data_ha2[:,1]*0.02, data_ha2[:,0]/10, linewidth = 3, color='red')\n pl.plot(data_ha3[:,1]*0.02, data_ha3[:,0]/10, linewidth = 3, color='green')\n pl.plot(data_ha4[:,1]*0.02, data_ha4[:,0]/10, linewidth = 3, color='blue')\n pl.xlabel(\"Time (ns)\", fontsize = 40)\n pl.ylabel('RMSD (A)', fontsize = 40)\n #pl.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n #pl.title('RMSD Helix Alfa', fontsize=50)\n pl.xticks(fontsize=30) \n pl.yticks(fontsize=30)",
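The four nearly identical Tcl-writing blocks above could be generated from one template; a hedged sketch of a helper (`make_rmsd_tcl` is our own name) that builds the same VMD/Tcl script for any helix selection:

```python
def make_rmsd_tcl(psf, dcd, selection, outname):
    """Build a VMD Tcl script that computes per-frame RMSD of `selection`
    against frame 0, mirroring the hand-written ha*.tcl files above."""
    lines = [
        f'set psfFile {psf}',
        f'set dcdFile {dcd}',
        'mol load psf $psfFile dcd $dcdFile',
        f'set outfile [open {outname} w]',
        'set nf [molinfo top get numframes]',
        f'set f1 [atomselect top "{selection}" frame 0]',
        'for {set i 0} {$i < $nf} {incr i 1} {',
        f'    set sel [atomselect top "{selection}" frame $i]',
        '    $sel move [measure fit $sel $f1]',
        '    set time [expr {$i + 1}]',
        '    puts -nonewline $outfile "[measure rmsd $sel $f1] "',
        '    puts $outfile "$time"',
        '}',
        'close $outfile',
    ]
    return '\n'.join(lines)

script = make_rmsd_tcl('ionized.psf', 'run.dcd', 'resid 5 to 18', 'rmsd_ha1.dat')
print(script.splitlines()[0])   # set psfFile ionized.psf
```

A loop over the helix selections can then write `ha1.tcl` … `ha4.tcl` without duplicating the template.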
"SASA\n\nCreando la estructura de carpeta para el cálculo",
"### Creating the directory for the SASA analysis\n### NOTE: computed with gromacs4, since it gives correct results compared with gromacs5\n\nruta_sasa = nuevaruta+'/sasa'\nprint ( ruta_sasa )\nif not os.path.exists(ruta_sasa): \n    os.makedirs(ruta_sasa)\n    print ('Created path ===>',ruta_sasa)\nelse:\n    print (\"The path \"+ruta_sasa+\" already exists!\")\n\nprint ( 'Moving to ....', ruta_sasa )\nos.chdir( ruta_sasa )",
"Running the SASA analysis with Gromacs4",
"\nprint ('Running the SASA analysis with Gromacs 4 using option 1 (protein)...')\n!echo 1 1 | /opt/gromacs4/bin/g_sas -f ../output.xtc -s ../ionized.pdb -o solven-accessible-surface.xvg -oa atomic-sas.xvg -or residue-sas.xvg",
"Creating the sasa-residuo.dat file for output with XMGRACE",
"#Initializing the list\nsasa_residuo=[]\n\ntry:\n    residue_sas = open( 'residue-sas.xvg' )\nexcept IOError:\n    print ('Could not open the file, or it does not exist...')\n\ni=0\nfor linea in residue_sas.readlines():\n    fila = linea.strip()\n    sl = fila.split()\n    cadena=sl[0]\n    if (not '#' in cadena) and (not '@' in cadena):\n        print ('Residue =>',cadena)\n        sasa_residuo.append(sl[0]+'\\t'+sl[1]+'\\n')\n        i=i+1\n\n\n#Writing the per-residue SASA file\nf = open('sasa-residuo.dat', 'w')\n#f.write('@ title \"Area per residue over the trajectory\" \\n')\nf.write('@ xaxis label \" Residue \" \\n')\nf.write('@ xaxis label char size 1.480000\\n')\nf.write('@ xaxis bar linewidth 3.0\\n')\nf.write('@ xaxis ticklabel char size 1.480000\\n')\nf.write('@ yaxis label \"Area (nm' +\"\\\\\"+'S2'+\"\\\\N\"+')\"\\n')\nf.write('@ yaxis label char size 1.480000\\n')\nf.write('@ yaxis bar linewidth 3.0\\n')\nf.write('@ yaxis ticklabel char size 1.480000\\n')\nf.write('@ s0 line linewidth 2.5\\n')\nf.write('@ s0 symbol 1\\n')\nf.write('@ s0 symbol size 1.000000\\n')\nf.write('@ s0 symbol color 1\\n')\nf.write('@ s0 symbol pattern 1\\n')\nf.write('@ s0 symbol fill color 2\\n')\nf.write('@ s0 symbol fill pattern 1\\n')\nf.write('@ s0 symbol linewidth 1.0\\n')\nf.write('@TYPE xy \\n')\nf.write(\"\".join(sasa_residuo))\nf.close()\n\n\n!xmgrace sasa-residuo.dat\n\n#Loading the image generated by xmgrace\nImage(filename='sasa-residuo.png')",
"Loading the residue-sas.xvg file for visualization with Matplotlib\nThe output plot is generated with matplotlib",
"data_sasa_residue=np.loadtxt('residue-sas.xvg',comments=['#', '@'])\n\n \n#Thicken the frame \nfig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\nax = fig.add_subplot(111)\nfor axis in ['top','bottom','left','right']:\n    ax.spines[axis].set_linewidth(4)\n#Formatting the axis tick values\nax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))\n\npl.plot(data_sasa_residue[:,0], data_sasa_residue[:,1], '-o', color='black', markersize=25,\n        markerfacecolor='red',markeredgecolor='black',markeredgewidth=3, linewidth = 4, )\npl.xlabel(\"Residue\", fontsize = 30)\n#pl.ylabel('Area (nm2)', fontsize = 30)\npl.ylabel('Area ( nm'+ r'$\\ ^2$'+')' , fontsize = 40)\n#pl.title('Area per residue over the trajectory', fontsize=40)\npl.xticks(fontsize=30) \npl.yticks(fontsize=30) \npl.xlim(0, len(data_sasa_residue[:,1]))",
"Creating the sasa.dat file for output with XMGRACE",
"#Initializing the list\nsasa=[]\n\ntry:\n    sasafile = open( 'solven-accessible-surface.xvg' )\nexcept IOError:\n    print ('Could not open the file, or it does not exist...')\n\ni=0\nfor linea in sasafile.readlines():\n    fila = linea.strip()\n    sl = fila.split()\n    cadena=sl[0]\n    if (not '#' in cadena) and (not '@' in cadena):\n        #print (cadena)\n        num=float(sl[0])\n        num=num/1000\n        sasa.append(repr(num)+'\\t'+sl[1]+'\\t'+sl[2]+'\\t'+sl[3]+'\\n')\n        i=i+1\n\ncel2=float(sl[2])\nprint(cel2)\n#Writing the SASA file\nf = open('sasa.dat', 'w')\n#f.write('@ title \"Solven Accessible Surface\" \\n')\nf.write('@ xaxis label \" Time (ns) \" \\n')\nf.write('@ xaxis label char size 1.480000\\n')\nf.write('@ xaxis bar linewidth 3.0\\n')\nf.write('@ xaxis ticklabel char size 1.480000\\n')\nf.write('@ yaxis label \"Area (nm' +\"\\\\\"+'S2'+\"\\\\N\"+')\"\\n')\nf.write('@ yaxis label char size 1.480000\\n')\nf.write('@ yaxis bar linewidth 3.0\\n')\nf.write('@ yaxis ticklabel char size 1.480000\\n')\n#f.write('@ s0 legend \"Hydrophobic\"\\n')\n#if (cel2>0):\n    #f.write('@ s1 legend \"Hydrophilic\"\\n')\n\nf.write('@TYPE xy \\n')\nf.write(\"\".join(sasa))\nf.close()\n\n\n!xmgrace sasa.dat\n\n#Loading the image generated by xmgrace\nImage(filename='sasa.png')",
"Loading the solven-accessible-surface.xvg file to plot with Matplotlib",
"data_sasa=np.loadtxt('solven-accessible-surface.xvg',comments=['#', '@'])\n\n \n#Thicken the frame \nfig=pl.figure(figsize=(20, 12), dpi=100, linewidth=3.0)\nax = fig.add_subplot(111)\nfor axis in ['top','bottom','left','right']:\n    ax.spines[axis].set_linewidth(4)\n#Formatting the axis tick values\n#ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))\n\npl.xlabel(\"Time (ns)\", fontsize = 40)\npl.ylabel('Area ( nm'+ r'$\\ ^2$'+')' , fontsize = 40)\n#pl.title('Solvent Accessible Surface', fontsize=50)\npl.xticks(fontsize=30) \npl.yticks(fontsize=30) \ndato=data_sasa[:,2]\ndato2=dato[0]\nif (dato2>0):\n    pl.plot(data_sasa[:,0]/1000, data_sasa[:,1], linewidth = 2, color='black')\n    pl.plot(data_sasa[:,0]/1000, data_sasa[:,2], linewidth = 2, color='red')\n\nelse:\n    pl.plot(data_sasa[:,0]/1000, data_sasa[:,1], linewidth = 2, color='black')\n",
"RMSD MATRIX",
"### Creating the directory for the RMSD matrix analysis\n\nruta_m_rmsd = nuevaruta+'/matriz'\nprint ( ruta_m_rmsd )\nif not os.path.exists(ruta_m_rmsd): \n    os.makedirs(ruta_m_rmsd)\n    print ('Created path ===>',ruta_m_rmsd)\nelse:\n    print (\"The path \"+ruta_m_rmsd+\" already exists!\")\n\nprint ( 'Moving to ....', ruta_m_rmsd )\nos.chdir( ruta_m_rmsd )\n\nprint ('\\nCopying the rmsd_matrix.tcl file to '+ruta_m_rmsd)\nsource_file=ruta_scripts+'/rmsd_matriz/rmsd_matrix.tcl'\ndest_file=ruta_m_rmsd+'/rmsd_matrix.tcl'\nshutil.copy(source_file,dest_file)\n\n\nfile_dcd=ruta_old_traj+'/'+dcd_file\nfile_psf=ruta_old_traj+'/'+psf_file\nprint (file_dcd)\nprint ('\\nRunning CATDCD to extract 100 frames from the original trajectory....')\noutput_catdcd=!catdcd -o 100.dcd -stride 50 $file_dcd\nprint (output_catdcd.n)",
"Load the rmsd_matrix script with VMD on the new trajectory\nStart VMD, go to the Extensions -> Tk Console menu, then copy and run the following sequence of commands:\ntcl\nsource rmsd_matrix.tcl\nrmsd_matrix -mol top -seltext \"name CA\" -frames all -o salida.dat\nexit",
"#Starting VMD to load the rmsd_matrix.tcl script\n!vmd 100.dcd $file_psf\n\nruta_matriz=os.getcwd()\nif os.path.isfile('salida.dat'):\n    print ('The file salida.dat exists')\nelse:\n    print ('The file salida.dat does not exist... rerun from RMSD MATRIX...')",
"Plotting the output file",
"#Creating the plot\ndata_matriz=np.loadtxt('salida.dat',comments=['#', '@'])\nprint(data_matriz.shape)\npl.figure(figsize=(20, 12), dpi=100)\n\n\nimgplot = pl.imshow(data_matriz, origin='lower', cmap=pl.cm.Greens, interpolation='nearest')\n#imgplot = pl.imshow(data_matriz, origin='lower', cmap=pl.cm.coolwarm, interpolation='nearest')\npl.xlabel(\"Time (ns)\", fontsize = 30)\npl.ylabel('Time (ns)', fontsize = 30)\n#pl.suptitle('RMSD', fontsize=50)\n#pl.title('C-Alpha RMSD matrix', fontsize=40)\npl.xticks(fontsize=20) \npl.yticks(fontsize=20) \npl.xlim(0, 100)\npl.ylim(0, 100)\npl.colorbar()",
"Minimum distance matrix",
"### Creating the directory for the minimum distance matrix analysis\n#Checking that the new folder exists\nruta_matriz_dm = nuevaruta+'/matriz_dm'\nprint ( ruta_matriz_dm )\nif not os.path.exists(ruta_matriz_dm): \n    os.makedirs(ruta_matriz_dm)\n    print ('Created path ===>',ruta_matriz_dm)\nelse:\n    print (\"The path \"+ruta_matriz_dm+\" already exists!\")\n \nprint ( 'Moving to ....', ruta_matriz_dm )\nos.chdir( ruta_matriz_dm )",
"Computing the minimum distance matrix\nSelect the backbone (option 4)",
"!echo 4 | g_mdmat -f ../output.xtc -s ../ionized.pdb -mean average -frames frames -dt 10000",
"Generating the files for visualization",
"!xpm2ps -f frames.xpm -o frames.eps\n!xpm2ps -f average.xpm -o average.eps\nprint('\\nConverting to png...')\n!convert -density 600 frames.eps -resize 1024x1024 frames.png\n!convert -density 600 average.eps -resize 1024x1024 average.png\n\n\n\nprint ('Loading the average file...')\n\nImage(filename='average.png', width=800)",
"Free Energy\n\nThe free-energy calculation requires the minimum and maximum values of the RMSD and of the radius of gyration, together with the temperature at which the simulation was run. These values are the inputs for the calculation script.",
"### Creating the directory for the free-energy analysis\n\nruta_f_energy = nuevaruta+'/free_energy'\nprint ( ruta_f_energy )\nif not os.path.exists(ruta_f_energy): \n    os.makedirs(ruta_f_energy)\n    print ('Created path ===>',ruta_f_energy)\nelse:\n    print (\"The path \"+ruta_f_energy+\" already exists!\")\n\nprint ( 'Moving to ....', ruta_f_energy )\nos.chdir( ruta_f_energy )\n\n#Asking for the temperature\nt=input('Temperature (K) at which the simulation was run:')\ntemperatura=int(t)\nprint ('Temperature=>',temperatura)",
"Computing the RMSD and the radius of gyration to obtain the minimum and maximum of each.",
"print ('Running the RMSD analysis...')\n!echo 3 3 | g_rms -f ../output.xtc -s ../ionized.pdb -a avgrp.xvg\nprint ('Running the radius of gyration analysis...')\n!echo 3 | g_gyrate -f ../output.xtc -s ../ionized.pdb -o gyrate.xvg",
"Copying the generateFES.py script into the analysis folder for use in the calculation",
"print ('\\nCopying the generateFES.py file to '+ruta_f_energy)\nsource_file=ruta_scripts+'/free_energy/generateFES.py'\ndest_file=ruta_f_energy+'/generateFES.py'\nshutil.copy(source_file,dest_file)\n#Setting execute permissions\n!chmod +x generateFES.py",
"Performing the free-energy calculations",
"\n\n#Loading the RMSD values\ndata_rmsd=np.loadtxt('rmsd.xvg',comments=['#', '@'])\n\n#Loading the R-GYRO values\ndata_rgyro=np.loadtxt('gyrate.xvg',comments=['#', '@'])\n\n#Getting the minimum and maximum values of the rmsd\nmin_rmsd=np.amin(data_rmsd[:,1])\nmax_rmsd=np.amax(data_rmsd[:,1])\nprint ('Minimum RMSD=>',min_rmsd)\nprint ('Maximum RMSD=>',max_rmsd)\n\n#Getting the minimum and maximum values of the r-gyro\nmin_rgyro=np.amin(data_rgyro[:,1])\nmax_rgyro=np.amax(data_rgyro[:,1])\nprint ('Minimum RGYRO=>',min_rgyro)\nprint ('Maximum RGYRO=>',max_rgyro)\n\n#Creating the input files for the script\nnp.savetxt('rmsd.dat',data_rmsd[:,1], fmt='%1.7f')\nnp.savetxt('rgyro.dat',data_rgyro[:,1], fmt='%1.7f')\n!paste rgyro.dat rmsd.dat > fes.dat\n\n#Running the FES script\n!python generateFES.py fes.dat $min_rgyro $max_rgyro $min_rmsd $max_rmsd 200 200 $temperatura FEES.dat\n\n#Loading the generated file to plot with matplotlib\ndata_fes=np.loadtxt('FEES.dat',comments=['#', '@'])\n",
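The generateFES.py script itself is not shown in this notebook. As a reference, a minimal sketch of what such a free-energy-surface script typically computes — binning the two order parameters (here Rg and RMSD) and converting the normalized probability into a free energy via $F = -k_B T \ln(P/P_{\max})$ — could look like the following (the function name, binning, and units are assumptions, not the actual contents of generateFES.py):

```python
import numpy as np

def free_energy_surface(x, y, bins=200, temperature=300.0):
    """Sketch of a 2-D free-energy surface: bin two order parameters
    (e.g. Rg and RMSD) and apply F = -kB*T*ln(P/Pmax)."""
    kB = 0.0083144621  # Boltzmann constant in kJ/(mol K), GROMACS units
    hist, xedges, yedges = np.histogram2d(x, y, bins=bins, density=True)
    prob = hist / hist.max()            # normalize so the most populated bin has P = 1
    with np.errstate(divide='ignore'):  # empty bins give log(0) -> F = +inf
        fes = -kB * temperature * np.log(prob)
    return fes, xedges, yedges

# Toy usage with synthetic order parameters in place of rgyro.dat / rmsd.dat
rng = np.random.default_rng(0)
fes, xe, ye = free_energy_surface(rng.normal(1.0, 0.1, 5000),
                                  rng.normal(0.3, 0.05, 5000),
                                  bins=50, temperature=300.0)
print(fes.shape)  # the global minimum of the surface is zero by construction
```

By construction the most populated bin sits at $F = 0$ and all other finite bins are positive, which matches the usual convention of plotting the landscape relative to its deepest basin.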
"Plotting with gnuplot",
"# This loads the magics for gnuplot\n%load_ext gnuplot_kernel\n#Configuring the output for gnuplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"free_energy.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"Rg (nm)\"\nset ylabel \"RMSD (nm)\"\n##Uncomment the following line if the color scale starts at a value of 1, then run again\n#set cbrange[8:10]\nsplot \"FEES.dat\" with pm3d\n\n",
"PCA",
"### Creating the directory for the PCA analysis\n\nruta_pca = nuevaruta+'/pca'\nprint ( ruta_pca )\nif not os.path.exists(ruta_pca): \n    os.makedirs(ruta_pca)\n    print ('Created path ===>',ruta_pca)\nelse:\n    print (\"The path \"+ruta_pca+\" already exists!\")\n\nprint ( 'Moving to ....', ruta_pca )\nos.chdir( ruta_pca )\n\n#Computing the covariance matrix\n!echo 1 1 | g_covar -s ../ionized.pdb -f ../output.xtc -o eigenvalues.xvg -v eigenvectors.trr -xpma covar.xpm",
"Once the covariance matrix has been computed, its eigenvalues and eigenvectors serve as input for the PCA.\nThe following command projects the motion onto the first and second eigenvectors.",
"!echo 1 1 | g_anaeig -s ../ionized.pdb -f ../output.xtc -v eigenvectors.trr -eig eigenvalues.xvg -first 1 -last 2 -2d 2dproj_1_2.xvg\n\n#pcaX, pcaY=np.loadtxt('2dproj_1_2.xvg',comments=['#', '@'], unpack=True)\ndata_pca=np.loadtxt('2dproj_1_2.xvg',comments=['#', '@'])\n\n#Getting the minimum and maximum values of the pca\nmin_pcaX=np.amin(data_pca[:,0])\nmax_pcaX=np.amax(data_pca[:,0])\nprint ('Minimum PCA_X=>',min_pcaX)\nprint ('Maximum PCA_X=>',max_pcaX)\nmin_pcaY=np.amin(data_pca[:,1])\nmax_pcaY=np.amax(data_pca[:,1])\nprint ('Minimum PCA_Y=>',min_pcaY)\nprint ('Maximum PCA_Y=>',max_pcaY)\n\n\n#Creating the input files for the script\nnp.savetxt('PCA.dat',data_pca, fmt='%1.5f')\n\n\n#Copying the generateFES script from the Free_energy folder\nprint ('\\nCopying the generateFES.py file to '+ruta_pca+ ' from '+ ruta_f_energy)\nsource_file=ruta_f_energy+'/generateFES.py'\ndest_file=ruta_pca+'/generateFES.py'\nshutil.copy(source_file,dest_file)\n\n#Running the FES script\n!python generateFES.py PCA.dat $min_pcaX $max_pcaX $min_pcaY $max_pcaY 200 200 $temperatura FEES_PCA.dat\n\n",
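The eigenvalues written by g_covar measure the variance of the motion along each principal component, so they also tell how representative the 2-D projection onto PC1 and PC2 is. A minimal sketch of that check, using hypothetical eigenvalues in place of the second column of eigenvalues.xvg:

```python
import numpy as np

# Hypothetical covariance-matrix eigenvalues in descending order; with real
# data one would load the second column of eigenvalues.xvg instead.
eigenvalues = np.array([4.0, 2.0, 1.0, 0.5, 0.3, 0.2])

explained = eigenvalues / eigenvalues.sum()  # variance fraction per component
cumulative = np.cumsum(explained)
print(cumulative[1])  # fraction of the total motion captured by PC1 + PC2
```

If the first two components capture only a small fraction of the total fluctuation, the 2dproj_1_2.xvg projection (and the free-energy surface built from it) describes correspondingly little of the dynamics.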
"Plotting the file with gnuplot",
"#Reload the gnuplot kernel to clear its buffer\n%reload_ext gnuplot_kernel\n#Configuring the output for gnuplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"pca.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"projection on eigenvector 1 (nm)\"\nset ylabel \"projection on eigenvector 2 (nm)\"\nset title \" \"\n##Uncomment the following line if the color scale starts at a value of 1, then run again\n#set cbrange[8:10]\nsplot \"FEES_PCA.dat\" with pm3d\n\n",
"Disulfide bridge analysis\n\nThis applies to 2 bridges; the HTMD software is used for it.",
"from htmd import *",
"Creating the path\nPath for the data analysis.",
"### Creating the directory for the bridge RMSD analysis\n\nruta_rmsd_diedros = nuevaruta+'/rmsd_diedros'\nprint ( ruta_rmsd_diedros )\nif not os.path.exists(ruta_rmsd_diedros): \n    os.makedirs(ruta_rmsd_diedros)\n    print ('Created path ===>',ruta_rmsd_diedros)\nelse:\n    print (\"The path \"+ruta_rmsd_diedros+\" already exists!\")\n\nprint ( 'Moving to ....', ruta_rmsd_diedros)\nos.chdir( ruta_rmsd_diedros )",
"Loading the disulfide bridges\nFor this analysis, check the psf_charmm.tcl file in the 01_BUILD folder, which contains bridge definitions such as:\n patch DISU A:4 A:22\n\n patch DISU A:8 A:18\n\nThe bridge number is determined by the order in which the bridges are defined in that file; for the example above:\n DB1 4-22\n\n DB2 8-18\n\nThe input is the left-hand and right-hand residue index of each bridge, from which the complete structure of each one is built, using the index values for the corresponding analysis.",
"# Loading the molecule\nmol = Molecule('../ionized.pdb')\n\n# Asking for the input data\npx1l=input('Left residue index of DB1:')\npx1r=input('Right residue index of DB1:')\npx2l=input('Left residue index of DB2:')\npx2r=input('Right residue index of DB2:')\nrevisa1=1\nrevisa2=1",
"Obtaining the bridge indices",
"\nif (revisa1>0):\n #Obteniendo lado izquierdo del DB1\n x1l_name=mol.get('name','resname CYS and noh and resid '+px1l)\n x1l_index=mol.get('index','resname CYS and noh and resid '+px1l)\n x1l_resid=mol.get('resid','resname CYS and noh and resid '+px1l)\n #Obteniendo lado derecho del DB1\n x1r_name=mol.get('name','resname CYS and noh and resid '+px1r)\n x1r_index=mol.get('index','resname CYS and noh and resid '+px1r)\n x1r_resid=mol.get('resid','resname CYS and noh and resid '+px1r)\n\nif (revisa2>0):\n #Obteniendo el lado izquierdo del DB2\n x2l_name=mol.get('name','resname CYS and noh and resid '+px2l)\n x2l_index=mol.get('index','resname CYS and noh and resid '+px2l)\n x2l_resid=mol.get('resid','resname CYS and noh and resid '+px2l)\n #Obteniendo el lado derecho del DB2\n x2r_name=mol.get('name','resname CYS and noh and resid '+px2r)\n x2r_index=mol.get('index','resname CYS and noh and resid '+px2r)\n x2r_resid=mol.get('resid','resname CYS and noh and resid '+px2r)\n\n \n#Obteniendo la lista de índices de los puentes\nprint ('Generando la lista de los índices para enviarlos')\ndb1x1l=[]\ndb1x2l=[]\ndb1x3m=[]\ndb1x2r=[]\ndb1x1r=[]\n\ndb1l_name_l=[]\ndb1l_index_l=[]\ndb1r_name_l=[]\ndb1r_index_l=[]\n\ndb2l_name_l=[]\ndb2l_index_l=[]\ndb2r_name_l=[]\ndb2r_index_l=[]\n\ndb3l_name_l=[]\ndb3l_index_l=[]\ndb3r_name_l=[]\ndb3r_index_l=[]\n\nif (revisa1>0):\n #Obteniendo los índices del DB1\n for i in range(len(x1l_name)):\n if (x1l_name[i]=='N' or x1l_name[i]=='CA' or x1l_name[i]=='CB' or x1l_name[i]=='SG'):\n db1l_name_l.append(str(x1l_name[i]))\n db1l_index_l.append(str(x1l_index[i]))\n for i in range(len(x1r_name)):\n if (x1r_name[i]=='N' or x1r_name[i]=='CA' or x1r_name[i]=='CB' or x1r_name[i]=='SG'):\n db1r_name_l.append(str(x1r_name[i]))\n db1r_index_l.append(str(x1r_index[i])) \n print ('DB1 X1L =>',db1l_name_l)\n print (db1l_index_l)\n print ('DB1 X1R =>',db1r_name_l)\n print (db1r_index_l)\n\nif (revisa2>0):\n #Obteniendo los índices del DB2\n for i in 
range(len(x2l_name)):\n if (x2l_name[i]=='N' or x2l_name[i]=='CA' or x2l_name[i]=='CB' or x2l_name[i]=='SG'):\n db2l_name_l.append(str(x2l_name[i]))\n db2l_index_l.append(str(x2l_index[i]))\n for i in range(len(x2r_name)):\n if (x2r_name[i]=='N' or x2r_name[i]=='CA' or x2r_name[i]=='CB' or x2r_name[i]=='SG'):\n db2r_name_l.append(str(x2r_name[i]))\n db2r_index_l.append(str(x2r_index[i])) \n print ('DB2 X1L =>',db2l_name_l)\n print (db2l_index_l)\n print ('DB2 X1R =>',db2r_name_l)\n print (db2r_index_l)\n\n",
"Ordering the bridges in the form ['N', 'CA', 'CB', 'SG', 'SG', 'CB', 'CA', 'N']",
"#Generando el DB1 completo ordenado\nfilas=8\ncol=2\nDB1_i=[]\nDB1_N=[]\nDB2_i=[]\nDB2_N=[]\nDB3_i=[]\nDB3_N=[]\nfor i in range(0,filas):\n DB1_N.append([' '])\n DB1_i.append(['0'])\n DB2_N.append([' '])\n DB2_i.append(['0'])\n DB3_N.append([' '])\n DB3_i.append(['0'])\n\nif (revisa1>0):\n #Cargando índices para el puente 1\n for i in range(len(db1l_name_l)):\n if db1l_name_l[i]=='N':\n DB1_N[0] = db1l_name_l[i]\n DB1_i[0]='index '+db1l_index_l[i]\n if db1l_name_l[i]=='CA':\n DB1_N[1] = db1l_name_l[i]\n DB1_i[1]='index '+db1l_index_l[i]\n if db1l_name_l[i]=='CB':\n DB1_N[2] = db1l_name_l[i]\n DB1_i[2]='index '+db1l_index_l[i]\n if db1l_name_l[i]=='SG':\n DB1_N[3] = db1l_name_l[i]\n DB1_i[3]='index '+db1l_index_l[i]\n \n for i in range(len(db1r_name_l)):\n if db1r_name_l[i]=='SG':\n DB1_N[4] = db1r_name_l[i]\n DB1_i[4]='index '+db1r_index_l[i]\n if db1r_name_l[i]=='CB':\n DB1_N[5] = db1r_name_l[i]\n DB1_i[5]='index '+db1r_index_l[i]\n if db1r_name_l[i]=='CA':\n DB1_N[6] = db1r_name_l[i]\n DB1_i[6]='index '+db1r_index_l[i]\n if db1r_name_l[i]=='N':\n DB1_N[7] = db1r_name_l[i]\n DB1_i[7]='index '+db1r_index_l[i]\n \n print ('Puente DB1 = resid '+px1l+':'+px1r)\n print ('Names DB1=>',DB1_i)\n print ('Index DB1=>',DB1_N)\n print ('\\n')\n\nif (revisa2>0):\n #Cargando índices para el puente 2\n for i in range(len(db2l_name_l)):\n if db2l_name_l[i]=='N':\n DB2_N[0] = db2l_name_l[i]\n DB2_i[0]='index '+db2l_index_l[i]\n if db2l_name_l[i]=='CA':\n DB2_N[1] = db2l_name_l[i]\n DB2_i[1]='index '+db2l_index_l[i]\n if db2l_name_l[i]=='CB':\n DB2_N[2] = db2l_name_l[i]\n DB2_i[2]='index '+db2l_index_l[i]\n if db2l_name_l[i]=='SG':\n DB2_N[3] = db2l_name_l[i]\n DB2_i[3]='index '+db2l_index_l[i]\n \n for i in range(len(db2r_name_l)):\n if db2r_name_l[i]=='SG':\n DB2_N[4] = db2r_name_l[i]\n DB2_i[4]='index '+db2r_index_l[i]\n if db2r_name_l[i]=='CB':\n DB2_N[5] = db2r_name_l[i]\n DB2_i[5]='index '+db2r_index_l[i]\n if db2r_name_l[i]=='CA':\n DB2_N[6] = db2r_name_l[i]\n 
DB2_i[6]='index '+db2r_index_l[i]\n if db2r_name_l[i]=='N':\n DB2_N[7] = db2r_name_l[i]\n DB2_i[7]='index '+db2r_index_l[i]\n \n print ('Puente DB2 = resid '+px2l+':'+px2r)\n print ('Names DB2=>',DB2_i)\n print ('Index DB2=>',DB2_N)\n print ('\\n')",
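The ordering step above, which builds the ['N', 'CA', 'CB', 'SG', 'SG', 'CB', 'CA', 'N'] sequence with repeated if-blocks, can also be expressed compactly with dictionary lookups. A minimal sketch, with hypothetical atom names and indices standing in for the arrays obtained from mol.get:

```python
# Hypothetical stand-ins for the name/index arrays of the two cysteines
# of one disulfide bridge (in practice these come from mol.get above).
left_names,  left_idx  = ['N', 'CA', 'CB', 'SG'], [10, 11, 12, 13]
right_names, right_idx = ['N', 'CA', 'CB', 'SG'], [50, 51, 52, 53]

left  = dict(zip(left_names, left_idx))    # map atom name -> atom index
right = dict(zip(right_names, right_idx))
order = ['N', 'CA', 'CB', 'SG']

# Left cysteine N->SG, then right cysteine SG->N, as VMD selection strings
bridge = (['index %d' % left[n] for n in order] +
          ['index %d' % right[n] for n in reversed(order)])
print(bridge)
```

This produces the same 'index NNN' selection strings that the tcl-generation step below consumes, without the per-atom if-chains.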
"Creating the tcl files for the bridge RMSD calculation\nThe output files are created in tcl format.",
"if (revisa1>0):\n \n #Creando script para DB1_x1l\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f1 = open('DB1_x1l.tcl', 'w')\n print(f1)\n f1.write('set psfFile '+ psf+' \\n')\n f1.write('set dcdFile '+ dcd+' \\n')\n f1.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f1.write('set outfile ' +'[open ' +'db1_x1l.dat'+' w]\\n')\n f1.write('set nf [molinfo top get numframes]\\n')\n f1.write('\\n#RMSD calculation loop\\n')\n f1.write('set f1 [atomselect top \"'+DB1_i[0]+' or '+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' \" frame 0]\\n')\n f1.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f1.write(' set sel [atomselect top \"'+DB1_i[0]+' or '+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' \" frame $i]\\n')\n f1.write(' $sel move [measure fit $sel $f1]\\n')\n f1.write(' set time [expr {$i +1}]\\n')\n f1.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f1.write(' puts $outfile \" $time\"\\n')\n f1.write('}\\n')\n f1.write('close $outfile')\n f1.close()\n \n #Creando script para DB1_x2l\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f2 = open('DB1_x2l.tcl', 'w')\n print(f2)\n f2.write('set psfFile '+ psf+' \\n')\n f2.write('set dcdFile '+ dcd+' \\n')\n f2.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f2.write('set outfile ' +'[open ' +'db1_x2l.dat'+' w]\\n')\n f2.write('set nf [molinfo top get numframes]\\n')\n f2.write('\\n#RMSD calculation loop\\n')\n f2.write('set f1 [atomselect top \"'+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' \" frame 0]\\n')\n f2.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f2.write(' set sel [atomselect top \"'+DB1_i[1]+' or '+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' \" frame $i]\\n')\n f2.write(' $sel move [measure fit $sel $f1]\\n')\n f2.write(' set time [expr {$i +1}]\\n')\n f2.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f2.write(' puts $outfile \" $time\"\\n')\n f2.write('}\\n')\n f2.write('close 
$outfile')\n f2.close()\n \n #Creando script para DB1_x3m\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f3 = open('DB1_x3m.tcl', 'w')\n print(f3)\n f3.write('set psfFile '+ psf+' \\n')\n f3.write('set dcdFile '+ dcd+' \\n')\n f3.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f3.write('set outfile ' +'[open ' +'db1_x3m.dat'+' w]\\n')\n f3.write('set nf [molinfo top get numframes]\\n')\n f3.write('\\n#RMSD calculation loop\\n')\n f3.write('set f1 [atomselect top \"'+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' \" frame 0]\\n')\n f3.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f3.write(' set sel [atomselect top \"'+DB1_i[2]+' or '+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' \" frame $i]\\n')\n f3.write(' $sel move [measure fit $sel $f1]\\n')\n f3.write(' set time [expr {$i +1}]\\n')\n f3.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f3.write(' puts $outfile \" $time\"\\n')\n f3.write('}\\n')\n f3.write('close $outfile')\n f3.close()\n \n #Creando script para DB1_x2r\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f4 = open('DB1_x2r.tcl', 'w')\n print(f4)\n f4.write('set psfFile '+ psf+' \\n')\n f4.write('set dcdFile '+ dcd+' \\n')\n f4.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f4.write('set outfile ' +'[open ' +'db1_x2r.dat'+' w]\\n')\n f4.write('set nf [molinfo top get numframes]\\n')\n f4.write('\\n#RMSD calculation loop\\n')\n f4.write('set f1 [atomselect top \"'+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' \" frame 0]\\n')\n f4.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f4.write(' set sel [atomselect top \"'+DB1_i[3]+' or '+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' \" frame $i]\\n')\n f4.write(' $sel move [measure fit $sel $f1]\\n')\n f4.write(' set time [expr {$i +1}]\\n')\n f4.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f4.write(' puts $outfile \" $time\"\\n')\n f4.write('}\\n')\n 
f4.write('close $outfile')\n f4.close()\n \n #Creando script para DB1_x1r\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f5 = open('DB1_x1r.tcl', 'w')\n print(f5)\n f5.write('set psfFile '+ psf+' \\n')\n f5.write('set dcdFile '+ dcd+' \\n')\n f5.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f5.write('set outfile ' +'[open ' +'db1_x1r.dat'+' w]\\n')\n f5.write('set nf [molinfo top get numframes]\\n')\n f5.write('\\n#RMSD calculation loop\\n')\n f5.write('set f1 [atomselect top \"'+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' or '+DB1_i[7]+' \" frame 0]\\n')\n f5.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f5.write(' set sel [atomselect top \"'+DB1_i[4]+' or '+DB1_i[5]+' or '+DB1_i[6]+' or '+DB1_i[7]+' \" frame $i]\\n')\n f5.write(' $sel move [measure fit $sel $f1]\\n')\n f5.write(' set time [expr {$i +1}]\\n')\n f5.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f5.write(' puts $outfile \" $time\"\\n')\n f5.write('}\\n')\n f5.write('close $outfile')\n f5.close()\n\nif (revisa2>0):\n ##########################################################################################\n ## Creando los archivos para DB2\n #######################################################################################\n #Creando script para DB2_x1l\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f6 = open('DB2_x1l.tcl', 'w')\n print(f6)\n f6.write('set psfFile '+ psf+' \\n')\n f6.write('set dcdFile '+ dcd+' \\n')\n f6.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f6.write('set outfile ' +'[open ' +'db2_x1l.dat'+' w]\\n')\n f6.write('set nf [molinfo top get numframes]\\n')\n f6.write('\\n#RMSD calculation loop\\n')\n f6.write('set f1 [atomselect top \"'+DB2_i[0]+' or '+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' \" frame 0]\\n')\n f6.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f6.write(' set sel [atomselect top \"'+DB2_i[0]+' or '+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' \" 
frame $i]\\n')\n f6.write(' $sel move [measure fit $sel $f1]\\n')\n f6.write(' set time [expr {$i +1}]\\n')\n f6.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f6.write(' puts $outfile \" $time\"\\n')\n f6.write('}\\n')\n f6.write('close $outfile')\n f6.close()\n \n #Creando script para DB1_x2l\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f7 = open('DB2_x2l.tcl', 'w')\n print(f7)\n f7.write('set psfFile '+ psf+' \\n')\n f7.write('set dcdFile '+ dcd+' \\n')\n f7.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f7.write('set outfile ' +'[open ' +'db2_x2l.dat'+' w]\\n')\n f7.write('set nf [molinfo top get numframes]\\n')\n f7.write('\\n#RMSD calculation loop\\n')\n f7.write('set f1 [atomselect top \"'+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' \" frame 0]\\n')\n f7.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f7.write(' set sel [atomselect top \"'+DB2_i[1]+' or '+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' \" frame $i]\\n')\n f7.write(' $sel move [measure fit $sel $f1]\\n')\n f7.write(' set time [expr {$i +1}]\\n')\n f7.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f7.write(' puts $outfile \" $time\"\\n')\n f7.write('}\\n')\n f7.write('close $outfile')\n f7.close()\n \n #Creando script para DB1_x3m\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f8 = open('DB2_x3m.tcl', 'w')\n print(f8)\n f8.write('set psfFile '+ psf+' \\n')\n f8.write('set dcdFile '+ dcd+' \\n')\n f8.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f8.write('set outfile ' +'[open ' +'db2_x3m.dat'+' w]\\n')\n f8.write('set nf [molinfo top get numframes]\\n')\n f8.write('\\n#RMSD calculation loop\\n')\n f8.write('set f1 [atomselect top \"'+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' \" frame 0]\\n')\n f8.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f8.write(' set sel [atomselect top \"'+DB2_i[2]+' or '+DB2_i[3]+' or '+DB2_i[4]+' or 
'+DB2_i[5]+' \" frame $i]\\n')\n f8.write(' $sel move [measure fit $sel $f1]\\n')\n f8.write(' set time [expr {$i +1}]\\n')\n f8.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f8.write(' puts $outfile \" $time\"\\n')\n f8.write('}\\n')\n f8.write('close $outfile')\n f8.close()\n\n #Creando script para DB1_x2r\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f9 = open('DB2_x2r.tcl', 'w')\n print(f9)\n f9.write('set psfFile '+ psf+' \\n')\n f9.write('set dcdFile '+ dcd+' \\n')\n f9.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f9.write('set outfile ' +'[open ' +'db2_x2r.dat'+' w]\\n')\n f9.write('set nf [molinfo top get numframes]\\n')\n f9.write('\\n#RMSD calculation loop\\n')\n f9.write('set f1 [atomselect top \"'+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' \" frame 0]\\n')\n f9.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f9.write(' set sel [atomselect top \"'+DB2_i[3]+' or '+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' \" frame $i]\\n')\n f9.write(' $sel move [measure fit $sel $f1]\\n')\n f9.write(' set time [expr {$i +1}]\\n')\n f9.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f9.write(' puts $outfile \" $time\"\\n')\n f9.write('}\\n')\n f9.write('close $outfile')\n f9.close()\n \n #Creando script para DB1_x1r\n psf=ruta_old_traj+'/'+psf_file\n dcd=ruta_old_traj+'/'+dcd_file\n print(psf)\n f10 = open('DB2_x1r.tcl', 'w')\n print(f10)\n f10.write('set psfFile '+ psf+' \\n')\n f10.write('set dcdFile '+ dcd+' \\n')\n f10.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n f10.write('set outfile ' +'[open ' +'db2_x1r.dat'+' w]\\n')\n f10.write('set nf [molinfo top get numframes]\\n')\n f10.write('\\n#RMSD calculation loop\\n')\n f10.write('set f1 [atomselect top \"'+DB2_i[4]+' or '+DB2_i[5]+' or '+DB2_i[6]+' or '+DB2_i[7]+' \" frame 0]\\n')\n f10.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n f10.write(' set sel [atomselect top \"'+DB2_i[4]+' or '+DB2_i[5]+' or 
'+DB2_i[6]+' or '+DB2_i[7]+' \" frame $i]\\n')\n f10.write(' $sel move [measure fit $sel $f1]\\n')\n f10.write(' set time [expr {$i +1}]\\n')\n f10.write(' puts -nonewline $outfile \"[measure rmsd $sel $f1]\"\\n')\n f10.write(' puts $outfile \" $time\"\\n')\n f10.write('}\\n')\n f10.write('close $outfile')\n f10.close()",
"Running the RMSD tcl files with VMD",
"if (revisa1>0):\n    #Computing the DB1 X1L rmsd with VMD\n    !vmd -dispdev text < DB1_x1l.tcl\n    #Computing DB1 X2L with VMD\n    !vmd -dispdev text < DB1_x2l.tcl\n    #Computing DB1 X3M with VMD\n    !vmd -dispdev text < DB1_x3m.tcl\n    #Computing DB1 X2R with VMD\n    !vmd -dispdev text < DB1_x2r.tcl\n    #Computing DB1 X1R with VMD\n    !vmd -dispdev text < DB1_x1r.tcl\n\nif (revisa2>0):\n    #Computing the DB2 X1L rmsd with VMD\n    !vmd -dispdev text < DB2_x1l.tcl\n    #Computing DB2 X2L with VMD\n    !vmd -dispdev text < DB2_x2l.tcl\n    #Computing DB2 X3M with VMD\n    !vmd -dispdev text < DB2_x3m.tcl\n    #Computing DB2 X2R with VMD\n    !vmd -dispdev text < DB2_x2r.tcl\n    #Computing DB2 X1R with VMD\n    !vmd -dispdev text < DB2_x1r.tcl",
"Generating the RMSD plots with matplotlib",
"\nescale_y=[]\nfig = pl.figure(figsize=(25,8))\nfig.subplots_adjust(hspace=.4, wspace=0.3)\n#Formateando los valores de los ejes\n\n\n#Engrosando marcos\nax = fig.add_subplot(2,5,1)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax = fig.add_subplot(2,5,2)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax = fig.add_subplot(2,5,3)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax = fig.add_subplot(2,5,4)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax = fig.add_subplot(2,5,5)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax = fig.add_subplot(2,5,6)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax = fig.add_subplot(2,5,7)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax = fig.add_subplot(2,5,8)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax = fig.add_subplot(2,5,9)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax = fig.add_subplot(2,5,10)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\nax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\n\nif (revisa1>0):\n #Datos de DB1\n data_db1_x1l=np.loadtxt('db1_x1l.dat',comments=['#', '@'])\n data_db1_x2l=np.loadtxt('db1_x2l.dat',comments=['#', '@'])\n data_db1_x3m=np.loadtxt('db1_x3m.dat',comments=['#', '@'])\n data_db1_x2r=np.loadtxt('db1_x2r.dat',comments=['#', '@'])\n data_db1_x1r=np.loadtxt('db1_x1r.dat',comments=['#', '@'])\n sub1 = fig.add_subplot(251) # instead of plt.subplot(2, 2, 1)\n \n #sub1.set_title('DB1_X1L') \n sub1.set_xlabel('Time (ns)')\n sub1.set_ylabel('RMSD (nm)')\n sub1.plot(data_db1_x1l[:,1]*0.02, data_db1_x1l[:,0]/10, color='black', linewidth = 1, label='DB1_X1L')\n x1,x2,y1,y2=sub1.axis()\n escale_y.append(y2)\n sub2 = fig.add_subplot(252)\n 
#sub2.set_title('DB1_X2L')\n sub2.set_xlabel('Time (ns)')\n sub2.set_ylabel('RMSD (nm)')\n sub2.plot(data_db1_x2l[:,1]*0.02, data_db1_x2l[:,0]/10, color='black', linewidth = 1, label='DB1_X2L')\n x1,x2,y1,y2=sub2.axis()\n escale_y.append(y2)\n sub3 = fig.add_subplot(253)\n #sub3.set_title('DB1_X3M')\n sub3.set_xlabel('Time (ns)')\n sub3.set_ylabel('RMSD (nm)')\n sub3.plot(data_db1_x3m[:,1]*0.02, data_db1_x3m[:,0]/10, color='black', linewidth = 1, label='DB1_X3M')\n x1,x2,y1,y2=sub3.axis()\n escale_y.append(y2)\n sub4 = fig.add_subplot(254)\n #sub4.set_title('DB1_X2R')\n sub4.set_xlabel('Time (ns)')\n sub4.set_ylabel('RMSD (nm)')\n sub4.plot(data_db1_x2r[:,1]*0.02, data_db1_x2r[:,0]/10, color='black', linewidth = 1, label='DB1_X2R')\n x1,x2,y1,y2=sub4.axis()\n escale_y.append(y2)\n sub5 = fig.add_subplot(255)\n #sub5.set_title('DB1_X1R')\n sub5.set_xlabel('Time (ns)')\n sub5.set_ylabel('RMSD (nm)')\n sub5.plot(data_db1_x1r[:,1]*0.02, data_db1_x1r[:,0]/10, color='black', linewidth = 1, label='DB1_X1R')\n x1,x2,y1,y2=sub5.axis()\n escale_y.append(y2)\n\nif (revisa2>0):\n #DAtos de DB2\n data_db2_x1l=np.loadtxt('db2_x1l.dat',comments=['#', '@'])\n data_db2_x2l=np.loadtxt('db2_x2l.dat',comments=['#', '@'])\n data_db2_x3m=np.loadtxt('db2_x3m.dat',comments=['#', '@'])\n data_db2_x2r=np.loadtxt('db2_x2r.dat',comments=['#', '@'])\n data_db2_x1r=np.loadtxt('db2_x1r.dat',comments=['#', '@'])\n #Ploteando DB2\n sub6 = fig.add_subplot(256)\n #sub6.set_title('DB2_X1L')\n sub6.set_xlabel('Time (ns)')\n sub6.set_ylabel('RMSD (nm)')\n sub6.plot(data_db2_x1l[:,1]*0.02, data_db2_x1l[:,0]/10, color='red', linewidth = 1, label='DB2_X1L')\n x1,x2,y1,y2=sub6.axis()\n escale_y.append(y2)\n sub7 = fig.add_subplot(257)\n #sub7.set_title('DB2_X2L')\n sub7.set_xlabel('Time (ns)')\n sub7.set_ylabel('RMSD (nm)')\n sub7.plot(data_db2_x2l[:,1]*0.02, data_db2_x2l[:,0]/10, color='red', linewidth = 1, label='DB2_X2L')\n x1,x2,y1,y2=sub7.axis()\n escale_y.append(y2)\n sub8 = fig.add_subplot(258)\n 
#sub8.set_title('DB2_X3M')\n sub8.set_xlabel('Time (ns)')\n sub8.set_ylabel('RMSD (nm)')\n sub8.plot(data_db2_x3m[:,1]*0.02, data_db2_x3m[:,0]/10, color='red', linewidth = 1, label='DB2_X3M')\n x1,x2,y1,y2=sub8.axis()\n escale_y.append(y2)\n sub9 = fig.add_subplot(259)\n #sub9.set_title('DB2_X2R')\n sub9.set_xlabel('Time (ns)')\n sub9.set_ylabel('RMSD (nm)')\n sub9.plot(data_db2_x2r[:,1]*0.02, data_db2_x2r[:,0]/10, color='red', linewidth = 1, label='DB2_X2R')\n x1,x2,y1,y2=sub9.axis()\n escale_y.append(y2)\n sub10 = fig.add_subplot(2,5,10)\n #sub10.set_title('DB2_X1R')\n sub10.set_xlabel('Time (ns)')\n sub10.set_ylabel('RMSD (nm)')\n sub10.plot(data_db2_x1r[:,1]*0.02, data_db2_x1r[:,0]/10, color='red', linewidth = 1, label='DB2_X1R')\n x1,x2,y1,y2=sub10.axis()\n escale_y.append(y2)\n \n#escale_y\nescale_y.sort(reverse=True)\nescale_y\n##Cambiando los ejes de las y\nsub1.axis((x1,x2,y1,escale_y[0]))\nsub2.axis((x1,x2,y1,escale_y[0]))\nsub3.axis((x1,x2,y1,escale_y[0]))\nsub4.axis((x1,x2,y1,escale_y[0]))\nsub5.axis((x1,x2,y1,escale_y[0]))\nsub6.axis((x1,x2,y1,escale_y[0]))\nsub7.axis((x1,x2,y1,escale_y[0]))\nsub8.axis((x1,x2,y1,escale_y[0]))\nsub9.axis((x1,x2,y1,escale_y[0]))\nsub10.axis((x1,x2,y1,escale_y[0]))\n\n",
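Each panel above repeats the same unit conversions inline: the VMD `.dat` files hold `rmsd frame` pairs (Angstroms and frame indices), rescaled to nm and ns at plot time. A small loader makes those conversions explicit; the 0.02 ns/frame factor is taken from the plotting code above, and `load_rmsd_trace` is an illustrative name, not part of the notebook:

```python
import numpy as np

def load_rmsd_trace(path, dt_ns=0.02):
    """Load a VMD RMSD .dat file with 'rmsd frame' columns and return
    (time_ns, rmsd_nm): frame index -> ns, Angstrom -> nm."""
    data = np.loadtxt(path, comments=['#', '@'])
    time_ns = data[:, 1] * dt_ns   # frame counter to nanoseconds
    rmsd_nm = data[:, 0] / 10.0    # Angstrom to nm
    return time_ns, rmsd_nm
```

With this helper each subplot reduces to `sub.plot(*load_rmsd_trace('db1_x1l.dat'), ...)`.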
"INTRAMOLECULAR DIHEDRAL FREE ENERGY\n\nThe dihedral angles are measured to compute the intramolecular free energy",
"### Creating the directory for the bridge dihedral-angle analysis\n\nruta_diedros = nuevaruta+'/diedros_intra'\nprint(ruta_diedros)\nif not os.path.exists(ruta_diedros):\n    os.makedirs(ruta_diedros)\n    print('Created path ===>', ruta_diedros)\nelse:\n    print('Path ' + ruta_diedros + ' already exists!')\n\nprint('Moving to ....', ruta_diedros)\nos.chdir(ruta_diedros)",
"Creating the Tcl scripts for the dihedral-angle calculation",
"psf=ruta_old_traj+'/'+psf_file\ndcd=ruta_old_traj+'/'+dcd_file\nif (revisa1>0):\n \n #Creando script para DB1_x1l\n d1 = open('dihed_DB1_x1l.tcl', 'w')\n print(d1)\n d1.write('set psfFile '+ psf+' \\n')\n d1.write('set dcdFile '+ dcd+' \\n')\n d1.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d1.write('set outfile ' +'[open ' +'dihed_db1_x1l.dat'+' w]\\n')\n d1.write('set nf [molinfo top get numframes]\\n')\n d1.write(' \\n')\n d1.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB1_i[0]+'\"] get index]\\n')\n d1.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB1_i[1]+'\"] get index]\\n')\n d1.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB1_i[2]+'\"] get index]\\n')\n d1.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB1_i[3]+'\"] get index]\\n')\n d1.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d1.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n d1.write(' set x [measure dihed $dihed frame $i]\\n')\n d1.write(' set time [expr {$i +1}]\\n')\n d1.write(' puts $outfile \"$time $x\"\\n')\n d1.write('}\\n')\n d1.close()\n \n #Creando script para DB1_x2l\n d2 = open('dihed_DB1_x2l.tcl', 'w')\n print(d2)\n d2.write('set psfFile '+ psf+' \\n')\n d2.write('set dcdFile '+ dcd+' \\n')\n d2.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d2.write('set outfile ' +'[open ' +'dihed_db1_x2l.dat'+' w]\\n')\n d2.write('set nf [molinfo top get numframes]\\n')\n d2.write(' \\n')\n d2.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB1_i[1]+'\"] get index]\\n')\n d2.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB1_i[2]+'\"] get index]\\n')\n d2.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB1_i[3]+'\"] get index]\\n')\n d2.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB1_i[4]+'\"] get index]\\n')\n d2.write('set dihed [list [lindex $selatoms1] [lindex 
$selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d2.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n d2.write(' set x [measure dihed $dihed frame $i]\\n')\n d2.write(' set time [expr {$i +1}]\\n')\n d2.write(' puts $outfile \"$time $x\"\\n')\n d2.write('}\\n')\n d2.close()\n \n #Creando script para DB1_x3m\n d3 = open('dihed_DB1_x3m.tcl', 'w')\n print(d3)\n d3.write('set psfFile '+ psf+' \\n')\n d3.write('set dcdFile '+ dcd+' \\n')\n d3.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d3.write('set outfile ' +'[open ' +'dihed_db1_x3m.dat'+' w]\\n')\n d3.write('set nf [molinfo top get numframes]\\n')\n d3.write(' \\n')\n d3.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB1_i[2]+'\"] get index]\\n')\n d3.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB1_i[3]+'\"] get index]\\n')\n d3.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB1_i[4]+'\"] get index]\\n')\n d3.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB1_i[5]+'\"] get index]\\n')\n d3.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d3.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n d3.write(' set x [measure dihed $dihed frame $i]\\n')\n d3.write(' set time [expr {$i +1}]\\n')\n d3.write(' puts $outfile \"$time $x\"\\n')\n d3.write('}\\n')\n d3.close()\n \n #Creando script para DB1_x2r\n d4 = open('dihed_DB1_x2r.tcl', 'w')\n print(d4)\n d4.write('set psfFile '+ psf+' \\n')\n d4.write('set dcdFile '+ dcd+' \\n')\n d4.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d4.write('set outfile ' +'[open ' +'dihed_db1_x2r.dat'+' w]\\n')\n d4.write('set nf [molinfo top get numframes]\\n')\n d4.write(' \\n')\n d4.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB1_i[3]+'\"] get index]\\n')\n d4.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB1_i[4]+'\"] get index]\\n')\n d4.write('set selatoms3 [[atomselect top 
\"protein and chain A and '+DB1_i[5]+'\"] get index]\\n')\n d4.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB1_i[6]+'\"] get index]\\n')\n d4.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d4.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n d4.write(' set x [measure dihed $dihed frame $i]\\n')\n d4.write(' set time [expr {$i +1}]\\n')\n d4.write(' puts $outfile \"$time $x\"\\n')\n d4.write('}\\n')\n d4.close()\n \n #Creando script para DB1_x1r\n d5 = open('dihed_DB1_x1r.tcl', 'w')\n print(d5)\n d5.write('set psfFile '+ psf+' \\n')\n d5.write('set dcdFile '+ dcd+' \\n')\n d5.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d5.write('set outfile ' +'[open ' +'dihed_db1_x1r.dat'+' w]\\n')\n d5.write('set nf [molinfo top get numframes]\\n')\n d5.write(' \\n')\n d5.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB1_i[4]+'\"] get index]\\n')\n d5.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB1_i[5]+'\"] get index]\\n')\n d5.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB1_i[6]+'\"] get index]\\n')\n d5.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB1_i[7]+'\"] get index]\\n')\n d5.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d5.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n d5.write(' set x [measure dihed $dihed frame $i]\\n')\n d5.write(' set time [expr {$i +1}]\\n')\n d5.write(' puts $outfile \"$time $x\"\\n')\n d5.write('}\\n')\n d5.close()\n \nif (revisa2>0):\n #####################################################################\n ########## Puente 2\n ##########################################3\n #Creando script para DB2_x1l\n d6 = open('dihed_DB2_x1l.tcl', 'w')\n print(d6)\n d6.write('set psfFile '+ psf+' \\n')\n d6.write('set dcdFile '+ dcd+' \\n')\n d6.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d6.write('set 
outfile ' +'[open ' +'dihed_db2_x1l.dat'+' w]\\n')\n d6.write('set nf [molinfo top get numframes]\\n')\n d6.write(' \\n')\n d6.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB2_i[0]+'\"] get index]\\n')\n d6.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB2_i[1]+'\"] get index]\\n')\n d6.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB2_i[2]+'\"] get index]\\n')\n d6.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB2_i[3]+'\"] get index]\\n')\n d6.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d6.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n d6.write(' set x [measure dihed $dihed frame $i]\\n')\n d6.write(' set time [expr {$i +1}]\\n')\n d6.write(' puts $outfile \"$time $x\"\\n')\n d6.write('}\\n')\n d6.close()\n \n #Creando script para DB2_x2l\n d7 = open('dihed_DB2_x2l.tcl', 'w')\n print(d7)\n d7.write('set psfFile '+ psf+' \\n')\n d7.write('set dcdFile '+ dcd+' \\n')\n d7.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d7.write('set outfile ' +'[open ' +'dihed_db2_x2l.dat'+' w]\\n')\n d7.write('set nf [molinfo top get numframes]\\n')\n d7.write(' \\n')\n d7.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB2_i[1]+'\"] get index]\\n')\n d7.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB2_i[2]+'\"] get index]\\n')\n d7.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB2_i[3]+'\"] get index]\\n')\n d7.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB2_i[4]+'\"] get index]\\n')\n d7.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d7.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n d7.write(' set x [measure dihed $dihed frame $i]\\n')\n d7.write(' set time [expr {$i +1}]\\n')\n d7.write(' puts $outfile \"$time $x\"\\n')\n d7.write('}\\n')\n d7.close()\n \n #Creando script 
para DB2_x3m\n d8 = open('dihed_DB2_x3m.tcl', 'w')\n print(d8)\n d8.write('set psfFile '+ psf+' \\n')\n d8.write('set dcdFile '+ dcd+' \\n')\n d8.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d8.write('set outfile ' +'[open ' +'dihed_db2_x3m.dat'+' w]\\n')\n d8.write('set nf [molinfo top get numframes]\\n')\n d8.write(' \\n')\n d8.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB2_i[2]+'\"] get index]\\n')\n d8.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB2_i[3]+'\"] get index]\\n')\n d8.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB2_i[4]+'\"] get index]\\n')\n d8.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB2_i[5]+'\"] get index]\\n')\n d8.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d8.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n d8.write(' set x [measure dihed $dihed frame $i]\\n')\n d8.write(' set time [expr {$i +1}]\\n')\n d8.write(' puts $outfile \"$time $x\"\\n')\n d8.write('}\\n')\n d8.close()\n \n #Creando script para DB2_x2r\n d9 = open('dihed_DB2_x2r.tcl', 'w')\n print(d9)\n d9.write('set psfFile '+ psf+' \\n')\n d9.write('set dcdFile '+ dcd+' \\n')\n d9.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d9.write('set outfile ' +'[open ' +'dihed_db2_x2r.dat'+' w]\\n')\n d9.write('set nf [molinfo top get numframes]\\n')\n d9.write(' \\n')\n d9.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB2_i[3]+'\"] get index]\\n')\n d9.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB2_i[4]+'\"] get index]\\n')\n d9.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB2_i[5]+'\"] get index]\\n')\n d9.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB2_i[6]+'\"] get index]\\n')\n d9.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d9.write('for {set i 0} {$i < $nf} {incr 
i 1} {\\n')\n d9.write(' set x [measure dihed $dihed frame $i]\\n')\n d9.write(' set time [expr {$i +1}]\\n')\n d9.write(' puts $outfile \"$time $x\"\\n')\n d9.write('}\\n')\n d9.close()\n \n #Creando script para DB2_x1r\n d10 = open('dihed_DB2_x1r.tcl', 'w')\n print(d10)\n d10.write('set psfFile '+ psf+' \\n')\n d10.write('set dcdFile '+ dcd+' \\n')\n d10.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n d10.write('set outfile ' +'[open ' +'dihed_db2_x1r.dat'+' w]\\n')\n d10.write('set nf [molinfo top get numframes]\\n')\n d10.write(' \\n')\n d10.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB2_i[4]+'\"] get index]\\n')\n d10.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB2_i[5]+'\"] get index]\\n')\n d10.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB2_i[6]+'\"] get index]\\n')\n d10.write('set selatoms4 [[atomselect top \"protein and chain A and '+DB2_i[7]+'\"] get index]\\n')\n d10.write('set dihed [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] [lindex $selatoms4] ]\\n')\n d10.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n d10.write(' set x [measure dihed $dihed frame $i]\\n')\n d10.write(' set time [expr {$i +1}]\\n')\n d10.write(' puts $outfile \"$time $x\"\\n')\n d10.write('}\\n')\n d10.close()",
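As with the RMSD scripts, the ten dihedral blocks (d1 through d10) differ only in which four consecutive `DB1_i`/`DB2_i` selections they use. A hypothetical helper covering all of them (the name `write_dihed_tcl` is not in the notebook):

```python
def write_dihed_tcl(tcl_path, psf, dcd, atom_sels, out_dat):
    """Write a VMD Tcl script that measures, per frame, the dihedral
    defined by four single-atom selections (illustrative helper)."""
    assert len(atom_sels) == 4, 'a dihedral needs exactly four atoms'
    lines = ['set psfFile ' + psf,
             'set dcdFile ' + dcd,
             'mol load psf $psfFile dcd $dcdFile',
             'set outfile [open ' + out_dat + ' w]',
             'set nf [molinfo top get numframes]']
    for n, sel in enumerate(atom_sels, start=1):
        lines.append('set selatoms%d [[atomselect top '
                     '"protein and chain A and %s"] get index]' % (n, sel))
    lines += ['set dihed [list [lindex $selatoms1] [lindex $selatoms2] '
              '[lindex $selatoms3] [lindex $selatoms4] ]',
              'for {set i 0} {$i < $nf} {incr i 1} {',
              '    set x [measure dihed $dihed frame $i]',
              '    set time [expr {$i + 1}]',
              '    puts $outfile "$time $x"',
              '}',
              'close $outfile']
    with open(tcl_path, 'w') as fh:
        fh.write('\n'.join(lines) + '\n')
```

For example, `write_dihed_tcl('dihed_DB1_x1l.tcl', psf, dcd, DB1_i[0:4], 'dihed_db1_x1l.dat')` reproduces the `d1` block.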
"Running the generated dihedral-angle Tcl scripts with VMD",
"if revisa1 > 0:\n    # Computing the DB1 X1L dihedral with VMD\n    !vmd -dispdev text < dihed_DB1_x1l.tcl\n    # Computing the DB1 X2L dihedral with VMD\n    !vmd -dispdev text < dihed_DB1_x2l.tcl\n    # Computing the DB1 X3M dihedral with VMD\n    !vmd -dispdev text < dihed_DB1_x3m.tcl\n    # Computing the DB1 X2R dihedral with VMD\n    !vmd -dispdev text < dihed_DB1_x2r.tcl\n    # Computing the DB1 X1R dihedral with VMD\n    !vmd -dispdev text < dihed_DB1_x1r.tcl\n\nif revisa2 > 0:\n    # Computing the DB2 X1L dihedral with VMD\n    !vmd -dispdev text < dihed_DB2_x1l.tcl\n    # Computing the DB2 X2L dihedral with VMD\n    !vmd -dispdev text < dihed_DB2_x2l.tcl\n    # Computing the DB2 X3M dihedral with VMD\n    !vmd -dispdev text < dihed_DB2_x3m.tcl\n    # Computing the DB2 X2R dihedral with VMD\n    !vmd -dispdev text < dihed_DB2_x2r.tcl\n    # Computing the DB2 X1R dihedral with VMD\n    !vmd -dispdev text < dihed_DB2_x1r.tcl\n\nprint('\\nCopying generateFES.py to ' + ruta_diedros)\nsource_file = ruta_f_energy + '/generateFES.py'\ndest_file = ruta_diedros + '/generateFES.py'\nshutil.copy(source_file, dest_file)\n# Making the script executable\n!chmod +x generateFES.py",
"Computing the intramolecular free energy for Bridge 1",
"if (revisa1>0):\n #Cargando valores del DB1_X1L\n data_db1_x1l=np.loadtxt('dihed_db1_x1l.dat',comments=['#', '@'])\n #Cargando valores del DB1_X1R\n data_db1_x1r=np.loadtxt('dihed_db1_x1r.dat',comments=['#', '@'])\n \n #Obteniendo los valores máximo y mínimo del DB1_X1L\n min_x1l=np.amin(data_db1_x1l[:,1])\n max_x1l=np.amax(data_db1_x1l[:,1])\n print ('Minimo DB1_X1L=>',min_x1l)\n print ('Máximo DB1_X1L=>',max_x1l)\n #Obteniendo los valores máximo y mínimo del DB1_X1R\n min_x1r=np.amin(data_db1_x1r[:,1])\n max_x1r=np.amax(data_db1_x1r[:,1])\n print ('Minimo DB1_X1R=>',min_x1r)\n print ('Máximo DB1_X1R=>',max_x1r)\n \n #Creando los archivos de entrada para el script\n np.savetxt('db1_x1l.dat',data_db1_x1l[:,1], fmt='%1.14f')\n np.savetxt('db1_x1r.dat',data_db1_x1r[:,1], fmt='%1.14f')\n !paste db1_x1l.dat db1_x1r.dat > DB1_x1_lr.dat\n \n #Ejecutando el script de FES\n !python generateFES.py DB1_x1_lr.dat $min_x1l $max_x1l $min_x1r $max_x1r 200 200 $temperatura XL1_XR1.dat\n \n ###################################################################\n #Cargando valores del DB1_X2l\n data_db1_x2l=np.loadtxt('dihed_db1_x2l.dat',comments=['#', '@'])\n #Cargando valores del DB1_X1R\n data_db1_x2r=np.loadtxt('dihed_db1_x2r.dat',comments=['#', '@'])\n \n #Obteniendo los valores máximo y mínimo del DB1_X1L\n min_x2l=np.amin(data_db1_x2l[:,1])\n max_x2l=np.amax(data_db1_x2l[:,1])\n print ('Minimo DB1_X2L=>',min_x2l)\n print ('Máximo DB1_X2L=>',max_x2l)\n #Obteniendo los valores máximo y mínimo del DB1_X1R\n min_x2r=np.amin(data_db1_x2r[:,1])\n max_x2r=np.amax(data_db1_x2r[:,1])\n print ('Minimo DB1_X2R=>',min_x2r)\n print ('Máximo DB1_X2R=>',max_x2r)\n \n #Creando los archivos de entrada para el script\n np.savetxt('db1_x2l.dat',data_db1_x2l[:,1], fmt='%1.14f')\n np.savetxt('db1_x2r.dat',data_db1_x2r[:,1], fmt='%1.14f')\n !paste db1_x2l.dat db1_x2r.dat > DB1_x2_lr.dat\n \n #Ejecutando el script de FES\n !python generateFES.py DB1_x2_lr.dat $min_x2l $max_x2l $min_x2r $max_x2r 200 
200 $temperatura XL2_XR2.dat\n \n ######################################################################################\n #Generando los archivos para X3M\n data_db1_x3m=np.loadtxt('dihed_db1_x3m.dat',comments=['#', '@'])\n \n #Obteniendo los valores máximo y mínimo del DB1_X1L\n min_x3m=np.amin(data_db1_x3m[:,1])\n max_x3m=np.amax(data_db1_x3m[:,1])\n \n print ('Minimo DB1_X3M=>',min_x3m)\n print ('Máximo DB1_X3M=>',max_x3m)\n print ('Minimo DB1_X1L=>',min_x1l)\n print ('Máximo DB1_X1L=>',max_x1l)\n print ('Minimo DB1_X2L=>',min_x2l)\n print ('Máximo DB1_X2L=>',max_x2l)\n print ('Minimo DB1_X1R=>',min_x1r)\n print ('Máximo DB1_X1R=>',max_x1r)\n print ('Minimo DB1_X2R=>',min_x2r)\n print ('Máximo DB1_X2R=>',max_x2r)\n \n #Creando los archivos de entrada para el script\n np.savetxt('db1_x3m.dat',data_db1_x3m[:,1], fmt='%1.14f')\n !paste db1_x3m.dat db1_x1l.dat > DB1_x3m_x1l.dat\n !paste db1_x3m.dat db1_x2l.dat > DB1_x3m_x2l.dat\n !paste db1_x3m.dat db1_x1r.dat > DB1_x3m_x1r.dat\n !paste db1_x3m.dat db1_x2r.dat > DB1_x3m_x2r.dat\n \n #Ejecutando el script de FES\n !python generateFES.py DB1_x3m_x1l.dat $min_x3m $max_x3m $min_x1l $max_x1l 200 200 $temperatura XM3_XL1.dat\n !python generateFES.py DB1_x3m_x2l.dat $min_x3m $max_x3m $min_x2l $max_x2l 200 200 $temperatura XM3_XL2.dat\n !python generateFES.py DB1_x3m_x1r.dat $min_x3m $max_x3m $min_x1r $max_x1r 200 200 $temperatura XM3_XR1.dat\n !python generateFES.py DB1_x3m_x2r.dat $min_x3m $max_x3m $min_x2r $max_x2r 200 200 $temperatura XM3_XR2.dat",
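`generateFES.py` itself is not shown in this section; from its argument list (`data min1 max1 min2 max2 nbins1 nbins2 T outfile`) it presumably histograms the paired dihedrals and converts the bin probabilities into a free energy via $F = -k_B T \ln P$. A minimal numpy sketch of that estimator, under that assumption (`fes_2d` and `KB_KCAL` are illustrative names):

```python
import numpy as np

KB_KCAL = 0.0019872041  # Boltzmann constant in kcal/(mol K)

def fes_2d(x, y, bins=200, ranges=None, temperature=300.0):
    """Estimate F(x, y) = -kB*T*ln P(x, y) from paired samples.
    Sketch of what a generateFES.py-style script typically computes."""
    hist, xedges, yedges = np.histogram2d(x, y, bins=bins, range=ranges)
    prob = hist / hist.sum()
    with np.errstate(divide='ignore'):
        fes = -KB_KCAL * temperature * np.log(prob)  # empty bins -> +inf
    fes -= fes[np.isfinite(fes)].min()  # shift the global minimum to zero
    return fes, xedges, yedges
```

With 200x200 bins and the min/max values printed above, this reproduces the shape of the `XL1_XR1.dat`-style grids fed to gnuplot below.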
"Plotting Bridge 1 with gnuplot",
"# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db1_xl1_vs_xr1.png\"\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^L_1}\"\nset ylabel \"{/=30 X@^R_1}\"\nset title \"Free Energy Surface Intramolecular DB1\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"XL1_XR1.dat\" with pm3d\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db1_xl2_vs_xr2.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^L_2}\"\nset ylabel \"{/=30 X@^R_2}\"\nset title \"Free Energy Surface Intramolecular DB1\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"XL2_XR2.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db1_xm3_vs_xl1.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 
6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^M_3}\"\nset ylabel \"{/=30 X@^L_1}\"\nset title \"Free Energy Surface Intramolecular DB1\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"XM3_XL1.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db1_xm3_vs_xl2.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^M_3}\"\nset ylabel \"{/=30 X@^L_2}\"\nset title \"Free Energy Surface Intramolecular DB1\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"XM3_XL2.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db1_xm3_vs_xr2.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^M_3}\"\nset ylabel \"{/=30 X@^R_2}\"\nset title \"Free Energy Surface Intramolecular DB1\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"XM3_XR2.dat\" with pm3d\n\n\n\n# This loads 
the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db1_xm3_vs_xr1.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^M_3}\"\nset ylabel \"{/=30 X@^R_1}\"\nset title \"Free Energy Surface Intramolecular DB1\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"XM3_XR1.dat\" with pm3d\n\n",
"Computing the intramolecular free energy for Bridge 2",
"if (revisa2>0):\n #Cargando valores del DB2_X1L\n data_db2_x1l=np.loadtxt('dihed_db2_x1l.dat',comments=['#', '@'])\n #Cargando valores del DB1_X1R\n data_db2_x1r=np.loadtxt('dihed_db2_x1r.dat',comments=['#', '@'])\n \n #Obteniendo los valores máximo y mínimo del DB2_X1L\n min_db2_x1l=np.amin(data_db2_x1l[:,1])\n max_db2_x1l=np.amax(data_db2_x1l[:,1])\n print ('Minimo DB2_X1L=>',min_db2_x1l)\n print ('Máximo DB2_X1L=>',max_db2_x1l)\n \n #Obteniendo los valores máximo y mínimo del DB2_X1R\n min_db2_x1r=np.amin(data_db2_x1r[:,1])\n max_db2_x1r=np.amax(data_db2_x1r[:,1])\n print ('Minimo DB2_X1R=>',min_db2_x1r)\n print ('Máximo DB2_X1R=>',max_db2_x1r)\n \n #Creando los archivos de entrada para el script\n np.savetxt('db2_x1l.dat',data_db2_x1l[:,1], fmt='%1.14f')\n np.savetxt('db2_x1r.dat',data_db2_x1r[:,1], fmt='%1.14f')\n !paste db2_x1l.dat db2_x1r.dat > DB2_x1_lr.dat\n \n #Ejecutando el script de FES\n !python generateFES.py DB2_x1_lr.dat $min_db2_x1l $max_db2_x1l $min_db2_x1r $max_db2_x1r 200 200 $temperatura DB2_XL1_XR1.dat\n \n ###################################################################\n #Cargando valores del DB2_X2l\n data_db2_x2l=np.loadtxt('dihed_db2_x2l.dat',comments=['#', '@'])\n #Cargando valores del DB2_X1R\n data_db2_x2r=np.loadtxt('dihed_db2_x2r.dat',comments=['#', '@'])\n \n #Obteniendo los valores máximo y mínimo del DB2_X1L\n min_db2_x2l=np.amin(data_db2_x2l[:,1])\n max_db2_x2l=np.amax(data_db2_x2l[:,1])\n print ('Minimo DB2_X2L=>',min_db2_x2l)\n print ('Máximo DB2_X2L=>',max_db2_x2l)\n \n #Obteniendo los valores máximo y mínimo del DB2_X1R\n min_db2_x2r=np.amin(data_db2_x2r[:,1])\n max_db2_x2r=np.amax(data_db2_x2r[:,1])\n print ('Minimo DB2_X2R=>',min_db2_x2r)\n print ('Máximo DB2_X2R=>',max_db2_x2r)\n \n #Creando los archivos de entrada para el script\n np.savetxt('db2_x2l.dat',data_db2_x2l[:,1], fmt='%1.14f')\n np.savetxt('db2_x2r.dat',data_db2_x2r[:,1], fmt='%1.14f')\n !paste db2_x2l.dat db2_x2r.dat > DB2_x2_lr.dat\n \n #Ejecutando el 
script de FES\n !python generateFES.py DB2_x2_lr.dat $min_db2_x2l $max_db2_x2l $min_db2_x2r $max_db2_x2r 200 200 $temperatura DB2_XL2_XR2.dat\n \n ######################################################################################\n #Cargando valores del DB2_X3M\n data_db2_x3m=np.loadtxt('dihed_db2_x3m.dat',comments=['#', '@'])\n \n #Obteniendo los valores máximo y mínimo del DB2_X3M\n min_db2_x3m=np.amin(data_db2_x3m[:,1])\n max_db2_x3m=np.amax(data_db2_x3m[:,1])\n \n print ('Minimo DB2_X3M=>',min_db2_x3m)\n print ('Máximo DB2_X3M=>',max_db2_x3m)\n \n print ('Minimo DB2_X1R=>',min_db2_x1r)\n print ('Máximo DB2_X1R=>',max_db2_x1r)\n \n print ('Minimo DB2_X2R=>',min_db2_x2r)\n print ('Máximo DB2_X2R=>',max_db2_x2r)\n \n print ('Minimo DB2_X1L=>',min_db2_x1l)\n print ('Máximo DB2_X1L=>',max_db2_x1l)\n \n print ('Minimo DB2_X2L=>',min_db2_x2l)\n print ('Máximo DB2_X2L=>',max_db2_x2l)\n \n #Creando los archivos de entrada para el script\n np.savetxt('db2_x3m.dat',data_db2_x3m[:,1], fmt='%1.14f')\n !paste db2_x3m.dat db2_x1r.dat > DB2_x3m_x1r.dat\n !paste db2_x3m.dat db2_x2r.dat > DB2_x3m_x2r.dat\n !paste db2_x3m.dat db2_x1l.dat > DB2_x3m_x1l.dat\n !paste db2_x3m.dat db2_x2l.dat > DB2_x3m_x2l.dat\n \n #Ejecutando el script de FES\n !python generateFES.py DB2_x3m_x1r.dat $min_db2_x3m $max_db2_x3m $min_db2_x1r $max_db2_x1r 200 200 $temperatura DB2_XM3_XR1.dat\n !python generateFES.py DB2_x3m_x2r.dat $min_db2_x3m $max_db2_x3m $min_db2_x2r $max_db2_x2r 200 200 $temperatura DB2_XM3_XR2.dat\n !python generateFES.py DB2_x3m_x1l.dat $min_db2_x3m $max_db2_x3m $min_db2_x1l $max_db2_x1l 200 200 $temperatura DB2_XM3_XL1.dat\n !python generateFES.py DB2_x3m_x2l.dat $min_db2_x3m $max_db2_x3m $min_db2_x2l $max_db2_x2l 200 200 $temperatura DB2_XM3_XL2.dat",
"Plotting Bridge 2 with gnuplot",
"# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db2_xl1_vs_xr1.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^L_1}\"\nset ylabel \"{/=30 X@^R_1}\"\nset title \"Free Energy Surface Intramolecular DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"DB2_XL1_XR1.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db2_xl2_vs_xr2.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^L_2}\"\nset ylabel \"{/=30 X@^R_2}\"\nset title \"Free Energy Surface Intramolecular DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"DB2_XL2_XR2.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db2_xm3_vs_xl1.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 
'#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^M_3}\"\nset ylabel \"{/=30 X@^L_1}\"\nset title \"Free Energy Surface Intramolecular DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"DB2_XM3_XL1.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db2_xm3_vs_xl2.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^M_3}\"\nset ylabel \"{/=30 X@^L_2}\"\nset title \"Free Energy Surface Intramolecular DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"DB2_XM3_XL2.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db2_xm3_vs_xr2.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^M_3}\"\nset ylabel \"{/=30 X@^R_2}\"\nset title \"Free Energy Surface Intramolecular DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"DB2_XM3_XR2.dat\" with 
pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db2_xm3_vs_xr1.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 X@^M_3}\"\nset ylabel \"{/=30 X@^R_1}\"\nset title \"Free Energy Surface Intramolecular DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"DB2_XM3_XR1.dat\" with pm3d\n\n",
"Free Energy Intermolecular",
"############################################\n#### Intermolecular DB1- DB2 - X1L\n############################################\n#Creando el DB1-DB2-X1L\n!paste db1_x1l.dat db2_x1l.dat > DB1_DB2_x1l.dat\nprint('Minimo DB1-X1L=>',min_x1l)\nprint('Máximo DB1-X1L=>',max_x1l)\nprint('Minimo DB2-X1L=>',min_db2_x1l)\nprint('Máximo DB2-X1L=>',max_db2_x1l) \n#Ejecutando el script de FES\n!python generateFES.py DB1_DB2_x1l.dat $min_x1l $max_x1l $min_db2_x1l $max_db2_x1l 200 200 $temperatura DB1_DB2_X1L.dat\n\n \n#########################################\n#### Intermolecular DB1- DB2 - X2L\n############################################\n\n#Creando el DB1-DB2-X2L\n!paste db1_x2l.dat db2_x2l.dat > DB1_DB2_x2l.dat\nprint('Minimo DB1-X2L=>',min_x2l)\nprint('Máximo DB1-X2L=>',max_x2l)\nprint('Minimo DB2-X2L=>',min_db2_x2l)\nprint('Máximo DB2-X2L=>',max_db2_x2l)\n\n#Ejecutando el script de FES\n!python generateFES.py DB1_DB2_x2l.dat $min_x2l $max_x2l $min_db2_x2l $max_db2_x2l 200 200 $temperatura DB1_DB2_X2L.dat\n\n############################################\n#### Intermolecular DB1- DB2 - X3M\n############################################\n\n#Creando el DB1-DB2-X3M\n!paste db1_x3m.dat db2_x3m.dat > DB1_DB2_x3m.dat\nprint('Minimo DB1-X3M=>',min_x3m)\nprint('Máximo DB1-X3M=>',max_x3m)\nprint('Minimo DB2-X3M=>',min_db2_x3m)\nprint('Máximo DB2-X3M=>',max_db2_x3m)\n\n#Ejecutando el script de FES\n!python generateFES.py DB1_DB2_x3m.dat $min_x3m $max_x3m $min_db2_x3m $max_db2_x3m 200 200 $temperatura DB1_DB2_X3M.dat\n\n\n############################################\n#### Intermolecular DB1- DB2 - X2R\n############################################\n\n#Creando el DB1-DB2-X2R\n!paste db1_x2r.dat db2_x2r.dat > DB1_DB2_x2r.dat\nprint('Minimo DB1-X2R=>',min_x2r)\nprint('Máximo DB1-X2R=>',max_x2r)\nprint('Minimo DB2-X2R=>',min_db2_x2r)\nprint('Máximo DB2-X2R=>',max_db2_x2r)\n\n#Ejecutando el script de FES\n!python generateFES.py DB1_DB2_x2r.dat $min_x2r $max_x2r $min_db2_x2r $max_db2_x2r 200 200 
$temperatura DB1_DB2_X2R.dat\n\n\n############################################\n#### Intermolecular DB1- DB2 - X1R\n############################################\n\n#Creando el DB1-DB2-X1R\n!paste db1_x1r.dat db2_x1r.dat > DB1_DB2_x1r.dat\nprint('Minimo DB1-X1R=>',min_x1r)\nprint('Máximo DB1-X1R=>',max_x1r)\nprint('Minimo DB2-X1R=>',min_db2_x1r)\nprint('Máximo DB2-X1R=>',max_db2_x1r)\n\n#Ejecutando el script de FES\n!python generateFES.py DB1_DB2_x1r.dat $min_x1r $max_x1r $min_db2_x1r $max_db2_x1r 200 200 $temperatura DB1_DB2_X1R.dat\n",
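The repeated `generateFES.py` calls above turn two paired collective-variable columns into a 2D free-energy surface. The script itself is not included in this notebook, but the core computation can be sketched as a Boltzmann inversion of a 2D histogram (a sketch under that assumption; `free_energy_surface` is an illustrative name, and the real script may bin or normalize differently):

```python
import numpy as np

# Boltzmann constant in kcal/(mol*K), matching the kcal/mol color scales above
KB = 0.0019872041

def free_energy_surface(x, y, bins=200, temperature=310.0):
    """Estimate F(x, y) = -kT ln P(x, y) from paired CV samples.

    Returns bin centers and the surface in kcal/mol, shifted so min(F) = 0.
    Empty bins get F = inf (they never appear in the sampled data).
    """
    hist, xedges, yedges = np.histogram2d(x, y, bins=bins)
    prob = hist / hist.sum()
    with np.errstate(divide="ignore"):      # log(0) -> -inf is expected here
        fes = -KB * temperature * np.log(prob)
    fes -= fes[np.isfinite(fes)].min()      # set the global minimum to zero
    xc = 0.5 * (xedges[:-1] + xedges[1:])
    yc = 0.5 * (yedges[:-1] + yedges[1:])
    return xc, yc, fes

# Example with synthetic dihedral-like samples (not the notebook's .dat files)
rng = np.random.default_rng(0)
xs = rng.normal(-60.0, 10.0, 10000)
ys = rng.normal(60.0, 10.0, 10000)
xc, yc, fes = free_energy_surface(xs, ys, bins=50)
```

The resulting grid can be written out with `np.savetxt` in the three-column `x y F` layout that the `splot ... with pm3d` calls above expect.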
"Ploteando la Free Energy Intermolecular puentes DB1 y DB2",
"# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"DB1_DB2_X1L.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 DB1 X@^L_1}\"\nset ylabel \"{/=30 DB2 X@^L_1}\"\nset title \"Free Energy Surface Intermolecular DB1-DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"DB1_DB2_X1L.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"DB1_DB2_X2L.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 DB1 X@^L_2}\"\nset ylabel \"{/=30 DB2 X@^L_2}\"\nset title \"Free Energy Surface Intermolecular DB1-DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\n\nsplot \"DB1_DB2_X2L.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"DB1_DB2_X3M.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 
'#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 DB1 X@^M_3}\"\nset ylabel \"{/=30 DB2 X@^M_3}\"\nset title \"Free Energy Surface Intermolecular DB1-DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\n\nsplot \"DB1_DB2_X3M.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"DB1_DB2_X2R.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset xyplane 0\nset pm3d interpolate 0,0\nset xlabel \"{/=30 DB1 X@^R_2}\"\nset ylabel \"{/=30 DB2 X@^R_2}\"\nset title \"Free Energy Surface Intermolecular DB1-DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\nsplot \"DB1_DB2_X2R.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"DB1_DB2_X1R.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 DB1 X@^R_1}\"\nset ylabel \"{/=30 DB2 X@^R_1}\"\nset title \"Free Energy Surface Intermolecular DB1-DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y 
ejecutar nuevamente\nset cbrange[8:10]\n\nsplot \"DB1_DB2_X1R.dat\" with pm3d\n\n",
"Calcular los histogramas de los diedros",
"hist_escale_y=[]\nfig = pl.figure(figsize=(25,8))\nfig.subplots_adjust(hspace=.4, wspace=.3)\n#subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)\n#left = 0.125 # the left side of the subplots of the figure\n#right = 0.9 # the right side of the subplots of the figure\n#bottom = 0.1 # the bottom of the subplots of the figure\n#top = 0.9 # the top of the subplots of the figure\n#wspace = 0.2 # the amount of width reserved for blank space between subplots\n#hspace = 0.2 # the amount of height reserved for white space between subplots\n#Formateando los valores de los ejes\n\n\n#Engrosando marcos\nax = fig.add_subplot(2,5,1)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\n ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\nax = fig.add_subplot(2,5,2)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\n ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\nax = fig.add_subplot(2,5,3)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\n ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\nax = fig.add_subplot(2,5,4)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\n ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\nax = fig.add_subplot(2,5,5)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\n ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\n\n#Cargando valores del DB1\ndata_h_db1_x1l=np.loadtxt('db1_x1l.dat',comments=['#', '@'])\ndata_h_db1_x2l=np.loadtxt('db1_x2l.dat',comments=['#', '@'])\ndata_h_db1_x3m=np.loadtxt('db1_x3m.dat',comments=['#', '@'])\ndata_h_db1_x2r=np.loadtxt('db1_x2r.dat',comments=['#', '@'])\ndata_h_db1_x1r=np.loadtxt('db1_x1r.dat',comments=['#', '@'])\n\n#Cargando valores del DB2\ndata_h_db2_x1l=np.loadtxt('db2_x1l.dat',comments=['#', '@'])\ndata_h_db2_x2l=np.loadtxt('db2_x2l.dat',comments=['#', 
'@'])\ndata_h_db2_x3m=np.loadtxt('db2_x3m.dat',comments=['#', '@'])\ndata_h_db2_x2r=np.loadtxt('db2_x2r.dat',comments=['#', '@'])\ndata_h_db2_x1r=np.loadtxt('db2_x1r.dat',comments=['#', '@'])\n\n\n\nsub1 = fig.add_subplot(251) # instead of plt.subplot(2, 2, 1)\nsub1.set_xlabel('Angle (Degree) ', fontsize=10)\nsub1.set_ylabel('P(Angle)')\nn1, bins1, rectangles1 = sub1.hist(data_h_db1_x1l,100, normed=True, color='black',histtype='step', linewidth=3)\nn2, bins2, rectangles2 = sub1.hist(data_h_db2_x1l,100, normed=True, color='red',histtype='step', linewidth=3)\nx1,x2,y1,y2=sub1.axis()\nhist_escale_y.append(y2)\n\nsub2 = fig.add_subplot(252) # instead of plt.subplot(2, 2, 1)\nsub2.set_xlabel('Angle (Degree) ', fontsize=10)\nsub2.set_ylabel('P(Angle)')\nn1, bins1, rectangles1 = sub2.hist(data_h_db1_x2l,100, normed=True, color='black',histtype='step', linewidth=3)\nn2, bins2, rectangles2 = sub2.hist(data_h_db2_x2l,100, normed=True, color='red',histtype='step', linewidth=3)\nx1,x2,y1,y2=sub2.axis()\nhist_escale_y.append(y2)\n\nsub3 = fig.add_subplot(253) # instead of plt.subplot(2, 2, 1)\nsub3.set_xlabel('Angle (Degree) ', fontsize=10)\nsub3.set_ylabel('P(Angle)')\nn1, bins1, rectangles1 = sub3.hist(data_h_db1_x3m,100, normed=True, color='black',histtype='step', linewidth=3)\nn2, bins2, rectangles2 = sub3.hist(data_h_db2_x3m,100, normed=True, color='red',histtype='step', linewidth=3)\nx1,x2,y1,y2=sub3.axis()\nhist_escale_y.append(y2)\n\nsub4 = fig.add_subplot(254) # instead of plt.subplot(2, 2, 1)\nsub4.set_xlabel('Angle (Degree) ', fontsize=10)\nsub4.set_ylabel('P(Angle)')\nn1, bins1, rectangles1 = sub4.hist(data_h_db1_x2r,100, normed=True, color='black',histtype='step', linewidth=3)\nn2, bins2, rectangles2 = sub4.hist(data_h_db2_x2r,100, normed=True, color='red',histtype='step', linewidth=3)\nx1,x2,y1,y2=sub4.axis()\nhist_escale_y.append(y2)\n\nsub5 = fig.add_subplot(255) # instead of plt.subplot(2, 2, 1)\nsub5.set_xlabel('Angle (Degree) ', 
fontsize=10)\nsub5.set_ylabel('P(Angle)')\nn1, bins1, rectangles1 = sub5.hist(data_h_db1_x1r,100, normed=True, color='black',histtype='step', linewidth=3)\nn2, bins2, rectangles2 = sub5.hist(data_h_db2_x1r,100, normed=True, color='red',histtype='step', linewidth=3)\nx1,x2,y1,y2=sub5.axis()\nhist_escale_y.append(y2)\n\n#escale_y\nhist_escale_y.sort(reverse=True)\nhist_escale_y\n##Cambiando los ejes de las y\nsub1.axis((x1,x2,y1,hist_escale_y[0]))\nsub2.axis((x1,x2,y1,hist_escale_y[0]))\nsub3.axis((x1,x2,y1,hist_escale_y[0]))\nsub4.axis((x1,x2,y1,hist_escale_y[0]))\nsub5.axis((x1,x2,y1,hist_escale_y[0]))\n",
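A side note on the histogram calls above: `normed=True` was deprecated and later removed from Matplotlib's `hist`; on current versions the equivalent keyword is `density=True`. The normalization involved can be sketched with NumPy alone (synthetic data, not the notebook's `.dat` files):

```python
import numpy as np

# P(angle) normalized so the integral over all bins is 1 -- the same
# normalization matplotlib applies with hist(..., density=True), which
# replaces the removed normed=True keyword used in the cell above.
rng = np.random.default_rng(1)
angles = rng.normal(-65.0, 8.0, 5000)      # synthetic dihedral samples
density, edges = np.histogram(angles, bins=100, density=True)
integral = float(np.sum(density * np.diff(edges)))   # integrates to 1
```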
"Ángulos de Enlace de los puentes Intermolecular",
"### Creating the directory for the INTERMOLECULAR bridge bond-distance analysis\n\nruta_bonds_puentes = nuevaruta+'/bonds_puentes'\nprint ( ruta_bonds_puentes )\nif not os.path.exists(ruta_bonds_puentes): \n os.makedirs(ruta_bonds_puentes)\n print ('Created path ===>',ruta_bonds_puentes)\nelse:\n print (\"Path \"+ruta_bonds_puentes+\" already exists..!!!\")\n\nprint ( 'Moving to ....', ruta_bonds_puentes)\nos.chdir( ruta_bonds_puentes )",
"Copying the FES generation script",
"print ('\\nCopying generateFES.py to '+ruta_bonds_puentes)\nsource_file=ruta_scripts+'/free_energy/generateFES.py'\ndest_file=ruta_bonds_puentes+'/generateFES.py'\nshutil.copy(source_file,dest_file)\n# Making the script executable\n!chmod +x generateFES.py",
"Generating the Tcl scripts for the angle calculations.",
"psf=ruta_old_traj+'/'+psf_file\ndcd=ruta_old_traj+'/'+dcd_file\nprint ('Puente DB1=>',DB1_N)\nprint ('Puente DB1=>',DB1_i)\nprint ('Puente DB2=>',DB2_N)\nprint ('Puente DB2=>',DB2_i)\n\npuente=2\nif (int(puente)==2):\n \n #Creando script para Bond X1 Left\n b1 = open('bond_DB1_left.tcl', 'w')\n print(b1)\n b1.write('set psfFile '+ psf+' \\n')\n b1.write('set dcdFile '+ dcd+' \\n')\n b1.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n b1.write('set outfile ' +'[open ' +'bond_db1_left.dat'+' w]\\n')\n b1.write('set nf [molinfo top get numframes]\\n')\n b1.write(' \\n')\n b1.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB1_i[1]+'\"] get index]\\n')\n b1.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB1_i[2]+'\"] get index]\\n')\n b1.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB1_i[3]+'\"] get index]\\n')\n b1.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\\n')\n b1.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n b1.write(' set x [measure angle $angle frame $i]\\n')\n b1.write(' set time [expr {$i +1}]\\n')\n b1.write(' puts $outfile \"$time $x\"\\n')\n b1.write('}\\n')\n b1.close()\n \n #Creando script para Bond X1 Right\n b2 = open('bond_DB1_right.tcl', 'w')\n print(b2)\n b2.write('set psfFile '+ psf+' \\n')\n b2.write('set dcdFile '+ dcd+' \\n')\n b2.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n b2.write('set outfile ' +'[open ' +'bond_db1_right.dat'+' w]\\n')\n b2.write('set nf [molinfo top get numframes]\\n')\n b2.write(' \\n')\n b2.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB1_i[4]+'\"] get index]\\n')\n b2.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB1_i[5]+'\"] get index]\\n')\n b2.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB1_i[6]+'\"] get index]\\n')\n b2.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\\n')\n b2.write('for {set i 0} {$i < 
$nf} {incr i 1} {\\n')\n b2.write(' set x [measure angle $angle frame $i]\\n')\n b2.write(' set time [expr {$i +1}]\\n')\n b2.write(' puts $outfile \"$time $x\"\\n')\n b2.write('}\\n')\n b2.close()\n \n #Creando script para Bond DB2 X1 Left\n b3 = open('bond_DB2_left.tcl', 'w')\n print(b3)\n b3.write('set psfFile '+ psf+' \\n')\n b3.write('set dcdFile '+ dcd+' \\n')\n b3.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n b3.write('set outfile ' +'[open ' +'bond_db2_left.dat'+' w]\\n')\n b3.write('set nf [molinfo top get numframes]\\n')\n b3.write(' \\n')\n b3.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB2_i[1]+'\"] get index]\\n')\n b3.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB2_i[2]+'\"] get index]\\n')\n b3.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB2_i[3]+'\"] get index]\\n')\n b3.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\\n')\n b3.write('for {set i 0} {$i < $nf} {incr i 1} {\\n')\n b3.write(' set x [measure angle $angle frame $i]\\n')\n b3.write(' set time [expr {$i +1}]\\n')\n b3.write(' puts $outfile \"$time $x\"\\n')\n b3.write('}\\n')\n b3.close()\n \n #Creando script para Bond DB2 X1 Right\n b4 = open('bond_DB2_right.tcl', 'w')\n print(b4)\n b4.write('set psfFile '+ psf+' \\n')\n b4.write('set dcdFile '+ dcd+' \\n')\n b4.write('\\nmol load psf $psfFile dcd $dcdFile\\n')\n b4.write('set outfile ' +'[open ' +'bond_db2_right.dat'+' w]\\n')\n b4.write('set nf [molinfo top get numframes]\\n')\n b4.write(' \\n')\n b4.write('set selatoms1 [[atomselect top \"protein and chain A and '+DB2_i[4]+'\"] get index]\\n')\n b4.write('set selatoms2 [[atomselect top \"protein and chain A and '+DB2_i[5]+'\"] get index]\\n')\n b4.write('set selatoms3 [[atomselect top \"protein and chain A and '+DB2_i[6]+'\"] get index]\\n')\n b4.write('set angle [list [lindex $selatoms1] [lindex $selatoms2] [lindex $selatoms3] ]\\n')\n b4.write('for {set i 0} {$i < $nf} {incr i 
1} {\\n')\n b4.write(' set x [measure angle $angle frame $i]\\n')\n b4.write(' set time [expr {$i +1}]\\n')\n b4.write(' puts $outfile \"$time $x\"\\n')\n b4.write('}\\n')\n b4.close()",
"Running the generated Tcl scripts with VMD",
"# Computing bond DB1 Left with VMD\n!vmd -dispdev text < bond_DB1_left.tcl\n\n# Computing bond DB1 Right with VMD\n!vmd -dispdev text < bond_DB1_right.tcl\n\n# Computing bond DB2 Left with VMD\n!vmd -dispdev text < bond_DB2_left.tcl\n\n# Computing bond DB2 Right with VMD\n!vmd -dispdev text < bond_DB2_right.tcl",
"Computing the Free Energy of the bridge Bonds",
"# Loading the DB1 Left values\ndata_bond_db1_left=np.loadtxt('bond_db1_left.dat',comments=['#', '@'])\n# Loading the DB1 Right values\ndata_bond_db1_right=np.loadtxt('bond_db1_right.dat',comments=['#', '@'])\n\n# Getting the maximum and minimum values for DB1 Left\nmin_bond1_left=np.amin(data_bond_db1_left[:,1])\nmax_bond1_left=np.amax(data_bond_db1_left[:,1])\nprint ('Minimum DB1_Left=>',min_bond1_left)\nprint ('Maximum DB1_Left=>',max_bond1_left)\n# Getting the maximum and minimum values for DB1 Right\nmin_bond1_right=np.amin(data_bond_db1_right[:,1])\nmax_bond1_right=np.amax(data_bond_db1_right[:,1])\nprint ('Minimum DB1_Right=>',min_bond1_right)\nprint ('Maximum DB1_Right=>',max_bond1_right)\n\n# Creating the input files for the script\nnp.savetxt('bond_DB1_left.dat',data_bond_db1_left[:,1], fmt='%1.14f')\nnp.savetxt('bond_DB1_right.dat',data_bond_db1_right[:,1], fmt='%1.14f')\n!paste bond_DB1_left.dat bond_DB1_right.dat > angles_DB1.dat\n\n# Running the FES script\n!python generateFES.py angles_DB1.dat $min_bond1_left $max_bond1_left $min_bond1_right $max_bond1_right 200 200 $temperatura Angles_DB1.dat\n\n####################################################################\n\n# Loading the DB2 Left values\ndata_bond_db2_left=np.loadtxt('bond_db2_left.dat',comments=['#', '@'])\n# Loading the DB2 Right values\ndata_bond_db2_right=np.loadtxt('bond_db2_right.dat',comments=['#', '@'])\n\n# Getting the maximum and minimum values for DB2 Left\nmin_bond2_left=np.amin(data_bond_db2_left[:,1])\nmax_bond2_left=np.amax(data_bond_db2_left[:,1])\nprint ('Minimum DB2_Left=>',min_bond2_left)\nprint ('Maximum DB2_Left=>',max_bond2_left)\n# Getting the maximum and minimum values for DB2 Right\nmin_bond2_right=np.amin(data_bond_db2_right[:,1])\nmax_bond2_right=np.amax(data_bond_db2_right[:,1])\nprint ('Minimum DB2_Right=>',min_bond2_right)\nprint ('Maximum DB2_Right=>',max_bond2_right)\n\n# Creating the input files for the script\nnp.savetxt('bond_DB2_left.dat',data_bond_db2_left[:,1], fmt='%1.14f')\nnp.savetxt('bond_DB2_right.dat',data_bond_db2_right[:,1], fmt='%1.14f')\n!paste bond_DB2_left.dat bond_DB2_right.dat > angles_DB2.dat\n\n# Running the FES script\n!python generateFES.py angles_DB2.dat $min_bond2_left $max_bond2_left $min_bond2_right $max_bond2_right 200 200 $temperatura Angles_DB2.dat\n",
"Plotting the angle Free Energy with gnuplot",
"# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db1_a1_a2.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 C@^1_{/Symbol a}}-{/=30 C@^1_{/Symbol b}}-{/=30 S@^1_{/Symbol g}}\"\nset ylabel \"{/=30 C@^2_{/Symbol a}}-{/=30 C@^2_{/Symbol b}}-{/=30 S@^2_{/Symbol g}}\"\nset title \"Free Energy Surface Angles DB1\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\n\nsplot \"Angles_DB1.dat\" with pm3d\n\n\n\n# This loads the magics for gnuplot\n%reload_ext gnuplot_kernel\n#Configurando la salida para GNUplot\n%gnuplot inline pngcairo transparent enhanced font \"arial,20\" fontscale 1.0 size 1280,960; set zeroaxis;;\n\n%%gnuplot\nset output \"db2_a1_a2.png\"\n\nset palette model RGB\nset palette defined ( 0 '#000090',\\\n 1 '#000fff',\\\n 2 '#0090ff',\\\n 3 '#0fffee',\\\n 4 '#90ff70',\\\n 5 '#ffee00',\\\n 6 '#ff7000',\\\n 7 '#ee0000',\\\n 8 '#7f0000')\nset view map\nset dgrid3d\nset pm3d interpolate 0,0\nset xlabel \"{/=30 C@^1_{/Symbol a}}-{/=30 C@^1_{/Symbol b}}-{/=30 S@^1_{/Symbol g}}\"\nset ylabel \"{/=30 C@^2_{/Symbol a}}-{/=30 C@^2_{/Symbol b}}-{/=30 S@^2_{/Symbol g}}\"\nset title \"Free Energy Surface Angles DB2\"\n##Descomentar la siguiente línea de código en caso de que la escala comience con valor de 1 y ejecutar nuevamente\n#set cbrange[8:10]\n\nsplot \"Angles_DB2.dat\" with pm3d\n\n",
"Computing the bond histograms",
"bonds_escale_y=[]\n#Cargando valores del DB1\ndata_h_db1_left=np.loadtxt('bond_DB1_left.dat',comments=['#', '@'])\ndata_h_db1_right=np.loadtxt('bond_DB1_right.dat',comments=['#', '@'])\n#Cargando valores del DB2\ndata_h_db2_left=np.loadtxt('bond_DB2_left.dat',comments=['#', '@'])\ndata_h_db2_right=np.loadtxt('bond_DB2_right.dat',comments=['#', '@'])\n\n\n\n\n#Engrosar marco \nfigb=pl.figure(figsize=(12, 10), dpi=100, linewidth=3.0)\nfigb.subplots_adjust(hspace=.5)\nax = figb.add_subplot(221)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\nax = figb.add_subplot(222)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\nax = figb.add_subplot(223)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\nax = figb.add_subplot(224)\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(4)\n\n#Formateando los valores de los ejes\n\nax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\n\nbond1 = figb.add_subplot(221) # instead of plt.subplot(2, 2, 1)\n#bond1.set_title('CA1 - CB1 - SY1') \nbond1.set_xlabel('Angle (Degree)')\nbond1.set_ylabel('P (Angle)')\nn, bins, rectangles = bond1.hist(data_h_db1_left,100, normed=True, color='black',histtype='step',linewidth=3)\nx1,x2,y1,y2=bond1.axis()\nbonds_escale_y.append(y2)\n\n\n\nbond2 = figb.add_subplot(222) # instead of plt.subplot(2, 2, 1)\n#bond2.set_title('CA2 - CB2 - SY2') \nbond2.set_xlabel('Angle (Degree)')\nbond2.set_ylabel('P (Angle)')\nn, bins, rectangles = bond2.hist(data_h_db1_right,100, normed=True, color='black',histtype='step', linewidth=3)\nx1,x2,y1,y2=bond2.axis()\nbonds_escale_y.append(y2)\n\n\nbond3 = figb.add_subplot(223) # instead of plt.subplot(2, 2, 1)\n#bond3.set_title('CA1 - CB1 - SY1') \nbond3.set_xlabel('Angle (Degree)')\nbond3.set_ylabel('P (Angle)')\nn, bins, rectangles = bond3.hist(data_h_db2_left,100, normed=True, color='red',histtype='step', 
linewidth=3)\nx1,x2,y1,y2=bond3.axis()\nbonds_escale_y.append(y2)\n\n\nbond4 = figb.add_subplot(224) # instead of plt.subplot(2, 2, 1)\n#bond4.set_title('CA2 - CB2 - SY2') \nbond4.set_xlabel('Angle (Degree)')\nbond4.set_ylabel('P (Angle)')\nn, bins, rectangles = bond4.hist(data_h_db2_right,100, normed=True, color='red',histtype='step', linewidth=3)\nx1,x2,y1,y2=bond4.axis()\nbonds_escale_y.append(y2)\n\n# y-axis scale\nbonds_escale_y.sort(reverse=True)\nbonds_escale_y\n## Rescaling the y axes (bond1..bond4, not sub1..sub4 from the earlier cell)\nbond1.axis((x1,x2,y1,bonds_escale_y[0]))\nbond2.axis((x1,x2,y1,bonds_escale_y[0]))\nbond3.axis((x1,x2,y1,bonds_escale_y[0]))\nbond4.axis((x1,x2,y1,bonds_escale_y[0]))\n",
"Cluster generation\n\nCreate the new path for the cluster calculation",
"### Creating the directory for the cluster analysis\n\nruta_clusters = nuevaruta+'/clusters'\nprint ( ruta_clusters )\nif not os.path.exists(ruta_clusters): \n os.makedirs(ruta_clusters)\n print ('Created path ===>',ruta_clusters)\nelse:\n print (\"Path \"+ruta_clusters+\" already exists..!!!\")\n\nprint ( 'Moving to ....', ruta_clusters)\nos.chdir( ruta_clusters )\n ",
"Computing the clusters with option (1 = Protein)",
"# Note: in GROMACS 5 and later this tool is invoked as gmx cluster\n!echo 1 1 | g_cluster -f ../output.xtc -s ../ionized.pdb -method gromos -cl out.pdb -g out.log -cutoff 0.2",
"Loading the clusters for visualization in VMD\nThe clusters are loaded in VMD and the coordinates of each one are saved using VMD",
"!vmd out.pdb",
"colorByRMSF\n\nCreating the data output folder",
"### Creating the directory for the colorByRMSF analysis\n\nruta_colorByRMSF = nuevaruta+'/colorByRMSF'\nprint ( ruta_colorByRMSF )\nif not os.path.exists(ruta_colorByRMSF): \n os.makedirs(ruta_colorByRMSF)\n print ('Created path ===>',ruta_colorByRMSF)\nelse:\n print (\"Path \"+ruta_colorByRMSF+\" already exists..!!!\")\n\nprint ( 'Moving to ....', ruta_colorByRMSF)\nos.chdir( ruta_colorByRMSF )\n ",
"Copying the script to the data folder",
"print ('\\nCopying colorByRMSF.vmd to '+ruta_colorByRMSF)\nsource_file=ruta_scripts+'/colorByRMSF/colorByRMSF.vmd'\ndest_file=ruta_colorByRMSF+'/colorByRMSF.vmd'\nshutil.copy(source_file,dest_file)\n",
"Computing the RMSF for the protein analysis with option (1) Protein",
"\nprint ('Running the rmsf analysis...')\n# Note: in GROMACS 5 and later this tool is invoked as gmx rmsf\n!echo 1 | g_rmsf -f ../output.xtc -s ../ionized.pdb -oq bfac.pdb -o rmsf.xvg\n\n# Computing the minimum and maximum of the rmsf\n# Loading the RMSF values\ndata_rmsf_gcolor=np.loadtxt('rmsf.xvg',comments=['#', '@'])\n\n# Getting the maximum and minimum RMSF values\nmin_rmsf_gcolor=np.amin(data_rmsf_gcolor[:,1])\nmax_rmsf_gcolor=np.amax(data_rmsf_gcolor[:,1])\nprint ('Minimum_RMSF=>',min_rmsf_gcolor)\nprint ('Maximum_RMSF=>',max_rmsf_gcolor)",
"Loading the colorByRMSF.vmd script in VMD\n\nStart VMD, go to the Extensions -> Tk Console menu, then copy and run the following command sequence, substituting the Minimum_RMSF and Maximum_RMSF values computed in the previous cell:\ntcl\nsource colorByRMSF.vmd\ncolorByRMSF top rmsf.xvg Minimum_RMSF Maximum_RMSF\n\nCOLOR SCALE\nGo to the Extensions -> Visualization -> Color Scale Bar menu and change the values of the following fields:\n1. Enter the computed Minimum_RMSF value in the Minimum scale value field.\n2. Enter the computed Maximum_RMSF value in the Maximum scale value field.\n3. Select Black in the Color of labels field.\n\nCHANGING THE BACKGROUND COLOR\nGo to the Graphics -> Colors menu and make the following selections:\n1. Under Categories select Display\n2. Under Names select Background\n3. Under Colors select 8 White\n\nREMOVING THE X,Y,Z AXES\nGo to Display -> Axes -> Off to remove the X,Y,Z axes.",
"# Loading the pdb with VMD\n!vmd ../ionized.pdb",
"Plotting B-factors with Chimera",
"print ( 'Moving to ....', ruta_colorByRMSF )\nos.chdir( ruta_colorByRMSF )",
"Reformatting the bfac.pdb file to extract the B-factor column",
"# Initializing lists\nrmsf=[]\nrmsf_x=[]\nrmsf_y=[]\ntry:\n file_Bfactor = open( 'bfac.pdb' )\n new_bfactor=open('bfac_new.pdb','w')\n \nexcept IOError:\n print ('Could not open the file, or it does not exist...')\n\ni=0\nfor linea in file_Bfactor.readlines():\n fila = linea.strip()\n sl = fila.split()\n cadena=sl[0]\n if (cadena=='ATOM'):\n if (len(sl)==12):\n new_bfactor.write(linea)\n else:\n x=linea[0:60]\n tempFactor=linea[60:66]\n #print (x)\n #print(tempFactor)\n y=fila[67:]\n #print (y)\n enviar=x+' '+tempFactor+y\n new_bfactor.write(enviar+'\\n')\n #print(enviar)\n \n else:\n #print (linea)\n new_bfactor.write(linea)\n\nnew_bfactor.close()",
"Reviewing the structure of the generated file.\nCheck that the fields are fully aligned within the field layout.\nSave and exit.",
"!gedit bfac_new.pdb",
"Generating the B-factor file for all atoms. TODO: adapt to extract the maximum per residue",
"# Initializing the list\nbfactors_color=[]\ntry:\n file_bfactor_color = open( 'bfac_new.pdb' )\nexcept IOError:\n print ('Could not open the file, or it does not exist...')\n\ni=0\nfor linea in file_bfactor_color.readlines():\n fila = linea.strip()\n sl = fila.split()\n if (sl[0]=='ATOM'):\n #print (sl[0])\n idresidue=fila[23:26]\n bfactor=fila[60:66]\n #print (idresidue + '\\t'+bfactor)\n bfactors_color.append(idresidue+'\\t'+bfactor+'\\n')\n #i=i+1\n\n\n# Writing the BFACTOR.dat file\nf = open('protein_bfactor.dat', 'w')\n#f.write('@ title \"B-factors\" \\n')\nf.write('@ xaxis label \" Residue\" \\n')\nf.write('@ xaxis label char size 1.480000\\n')\nf.write('@ xaxis bar linewidth 5.0\\n')\nf.write('@ xaxis ticklabel char size 1.480000\\n')\nf.write('@ yaxis label \"B-factors (' +\"\\\\\"+'cE'+\"\\\\\"+'C)\"\\n')\nf.write('@ yaxis label char size 1.480000\\n')\nf.write('@ yaxis bar linewidth 5.0\\n')\nf.write('@ yaxis ticklabel char size 1.480000\\n')\nf.write('@ s0 line linewidth 7\\n')\nf.write('@ s0 symbol 1\\n')\nf.write('@ s0 symbol size 1.000000\\n')\nf.write('@ s0 symbol color 1\\n')\nf.write('@ s0 symbol pattern 1\\n')\nf.write('@ s0 symbol fill color 2\\n')\nf.write('@ s0 symbol fill pattern 1\\n')\nf.write('@ s0 symbol linewidth 1.0\\n')\n\nf.write('@TYPE xy \\n')\nf.write(\"\".join(bfactors_color))\nf.close()\n\n\n\n \n\n!xmgrace protein_bfactor.dat\n\n# Loading the image generated by xmgrace\nImage(filename='protein_bfactor.png')\n\n# Computing the minimum and maximum of the B-factors\n# Loading the B-factor values\ndata_bfactor_color=np.loadtxt('protein_bfactor.dat',comments=['#', '@'])\n# Getting the maximum and minimum B-factor values\nmin_bfactor_color=np.amin(data_bfactor_color[:,1])\nmax_bfactor_color=np.amax(data_bfactor_color[:,1])\n\nprint ('Minimum_B-Factor=>',min_bfactor_color)\nprint ('Maximum_B-Factor=>',max_bfactor_color)",
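The heading above flags a TODO: report the maximum B-factor per residue rather than one value per atom. A minimal sketch of that aggregation, assuming standard PDB fixed-width columns (resSeq in columns 23-26, tempFactor in 61-66, 1-indexed; `max_bfactor_per_residue` is an illustrative name, not part of the original notebook):

```python
def max_bfactor_per_residue(lines):
    """Return {residue id: highest B-factor among its atoms}.

    Uses fixed-width PDB slicing: resSeq at [22:26], tempFactor at [60:66]
    (0-indexed Python slices of the 1-indexed PDB columns).
    """
    per_res = {}
    for line in lines:
        if not line.startswith("ATOM"):
            continue
        resid = int(line[22:26])
        bfac = float(line[60:66])
        per_res[resid] = max(per_res.get(resid, bfac), bfac)
    return per_res

# Example with hand-built fixed-width ATOM records
example = [
    "ATOM      1  CA  ALA A   1       0.000   0.000   0.000  1.00 10.50",
    "ATOM      2  CB  ALA A   1       0.000   0.000   0.000  1.00 12.00",
    "ATOM      3  CA  GLY A   2       0.000   0.000   0.000  1.00  5.00",
]
highest = max_bfactor_per_residue(example)
```

The resulting dictionary can be written out in the same two-column `residue\tB-factor` layout that `protein_bfactor.dat` uses above, with one line per residue instead of one per atom.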
"Loading the PDB file with Chimera to color by B-factors",
"!chimera bfac_new.pdb",
"Instructions for generating the B-factors image\n\n SET THE DISPLAY MODE\n1. From the main menu select Presets -> Interactive 2 (all atoms).\n2. From the main menu select Actions -> Surface -> Show.\n3. Adjust the size of the main window.\n4. Adjust the size and position of the figure using CTRL + mouse wheel button.\n\n COLOR THE B-FACTORS \n\nSelect Tools -> Depiction -> Render by Attribute.\nThis opens the Render/Select by Attribute window.\n 1. In the Attribute field select bfactor.\n 2. In the histogram shown, select the white bar and change its color from white to yellow in the color field.\n 3. Press the Apply button to preview the coloring changes.\n 4. Press OK to finish.\n\n WHITE BACKGROUND\nTo apply the white background:\n1. From the main menu select Presets -> Publication_1.\n\n SAVE THE IMAGE POSITION\nOnce the colored image has been obtained, adjust the view by rotating the image so as to leave adequate space for the labels and the color bar.\nTo save the final image position:\n 1. Select Favorites -> Command Line.\n 2. In the command line type savepos p1\nIf for some reason the position is moved, restore it as follows:\n 1. Select Favorites -> Command Line.\n 2. In the command line type reset p1 \n\n TITLE AND COLOR BAR \nFrom the main menu select Tools -> Utilities -> Color Key, which opens the 2D Labels/Color Key window.\nTo display the color bar:\n\nSelect the Color Key tab.\nChange white to yellow by clicking the corresponding button.\nReplace the word min with the computed minimum B-factor value.\nReplace the word max with the computed maximum B-factor value.\nClick in the lower part of the image where the scale should be displayed. Drag the mouse to define the length and width of the scale.\n\nTo display the bar title:\n\nSelect the Labels tab.\nClick just above the color bar to place the title.\nType the bar title as B-Factors(Å).\nTo adjust the font size, set an appropriate value in the Font size field.\n\nTo display the image title:\n\nSelect the Labels tab.\nClick at the top of the image to place the title.\nType the title with the appropriate name.\nTo adjust the font size, set an appropriate value in the Font size field.\nFor a bold title, select bold in the Font style field.\n\nNotes:\n1. To move a label, stay on the Labels tab, hold the left mouse button down on the label, and drag it to the desired position.\n2. To delete a label, select it in the Labels field and untick the Show option.\n\n SAVE THE IMAGE \nFrom the main menu select File -> Save Image, which opens the Save image window; in the File name field enter the name image.png.\n\nSAVE THE CHIMERA SESSION\nFrom the main menu select File -> Save Session as, which opens the Choose Session Save File window; in the File name field enter a name with the .py extension.",
"## Loading the generated image\n\nprint ('Loading the file...')\nImage(filename='image.png') ",
"Plotting the SASA",
"### Creating the directory for the SASA analysis inside the VMD directory\nprint ('Changing to ', ruta)\nos.chdir( ruta )\noutput_find=!find /usr/local -maxdepth 2 -type d -name vmd\nprint (output_find)\nruta_vmd=output_find[0]\nprint (ruta_vmd)\nruta_vmd_sasa = ruta_vmd+'/plugins/noarch/tcl/iceVMD1.0'\nprint ( ruta_vmd_sasa )\nif not os.path.exists(ruta_vmd_sasa): \n os.makedirs(ruta_vmd_sasa)\n print ('Created the path ===>',ruta_vmd_sasa)\nelse:\n print (\"The path \"+ruta_vmd_sasa+\" already exists!\")\n\nprint ( 'Changing to ....', ruta_vmd_sasa )\nos.chdir( ruta_vmd_sasa )\n\n# Copying the generated files to the VMD plugins folder\nprint ('\\nCopying the generated files to '+ruta_vmd_sasa)\nsource_file=ruta_scripts+'/iceVMD1.0/colorplot.tcl'\ndest_file=ruta_vmd_sasa+'/colorplot.tcl'\nshutil.copy(source_file,dest_file)\nsource_file=ruta_scripts+'/iceVMD1.0/multiplot.tcl'\ndest_file=ruta_vmd_sasa+'/multiplot.tcl'\nshutil.copy(source_file,dest_file)\nsource_file=ruta_scripts+'/iceVMD1.0/pkgIndex.tcl'\ndest_file=ruta_vmd_sasa+'/pkgIndex.tcl'\nshutil.copy(source_file,dest_file)\nsource_file=ruta_scripts+'/iceVMD1.0/vmdICE.tcl'\ndest_file=ruta_vmd_sasa+'/vmdICE.tcl'\nshutil.copy(source_file,dest_file)\nprint('\\nFiles copied. Returning to... '+nuevaruta)\nos.chdir( nuevaruta )\n\n### Creating the directory for plotting the SASA\nruta_sasaColor = nuevaruta+'/sasaColor'\nprint ( ruta_sasaColor )\nif not os.path.exists(ruta_sasaColor): \n os.makedirs(ruta_sasaColor)\n print ('Created the path ===>',ruta_sasaColor)\nelse:\n print (\"The path \"+ruta_sasaColor+\" already exists!\")\n \nprint ( 'Changing to ....', ruta_sasaColor )\nos.chdir( ruta_sasaColor )\n\nprint ('\\nCopying the configuration file to '+ruta_sasaColor)\nsource_file=ruta_scripts+'/iceVMD1.0/vmdrc'\ndest_file=ruta_sasaColor+'/.vmdrc'\nshutil.copy(source_file,dest_file)",
"Coloring the SASA\n\nStart VMD.\nThe vmdICE window\nGo to the menu Extensions -> Analysis -> vmdICE; a window will appear in which the following fields must be changed:\n1. To: set the maximum number of frames in the trajectory.\n2. Selection for Calculation: enter chain A and protein.\n3. Click the SASA Single Atom button and wait for the calculation to finish.\n\nCHANGE THE BACKGROUND COLOR\nGo to the menu Graphics -> Colors and make the following selections:\n1. Under Categories select Display\n2. Under Names select Background\n3. Under Colors select 8 White\n\nCHANGE THE SPHERE RESOLUTION\nGo to the menu Graphics -> Representations and set the Sphere Resolution field to 50.\nROTATE THE IMAGE FOR A BETTER VIEW AND SAVE IT.",
"!vmd ../ionized.psf ../output.xtc",
"Restoring VMD's default configuration",
"# Removing the VMD plugin files\n\n!rm -r $ruta_vmd_sasa",
"Plotting the RGYRO",
"### Creating the directory for plotting the RGYRO\nruta_gyroColor = nuevaruta+'/color_rgyro'\nprint ( ruta_gyroColor )\nif not os.path.exists(ruta_gyroColor): \n os.makedirs(ruta_gyroColor)\n print ('Created the path ===>',ruta_gyroColor)\nelse:\n print (\"The path \"+ruta_gyroColor+\" already exists!\")\n \nprint ( 'Changing to ....', ruta_gyroColor )\nos.chdir( ruta_gyroColor )\n\nprint ('\\nCopying the colorRgyro.tcl script to '+ruta_gyroColor)\nsource_file=ruta_scripts+'/colorRgyro/colorRgyro.tcl'\ndest_file=ruta_gyroColor+'/colorRgyro.tcl'\nshutil.copy(source_file,dest_file)",
"Coloring the RGYRO\n\nStart VMD, go to the menu Extensions -> Tk Console, then copy and run the following command:\ntcl\nsource colorRgyro.tcl\n\nCHANGE THE BACKGROUND COLOR\nGo to the menu Graphics -> Colors and make the following selections:\n1. Under Categories select Display\n2. Under Names select Background\n3. Under Colors select 8 White\n\nROTATE THE IMAGE FOR A BETTER VIEW AND SAVE IT.",
"!vmd ../ionized.psf ../output.xtc"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
landmanbester/fundamentals_of_interferometry
|
6_Deconvolution/6_4_residuals_and_iqa.ipynb
|
gpl-2.0
|
[
"Outline\nGlossary\n6. Deconvolution in Imaging \nPrevious: 6.3 CLEAN Implementations \nNext: 6.5 Source Finding and Detection\n\n\n\n\nImport standard modules:",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS",
"Import section specific modules:",
"import matplotlib.image as mpimg\nfrom IPython.display import Image\nfrom astropy.io import fits\nimport aplpy\n\n#Disable astropy/aplpy logging\nimport logging\nlogger0 = logging.getLogger('astropy')\nlogger0.setLevel(logging.CRITICAL)\nlogger1 = logging.getLogger('aplpy')\nlogger1.setLevel(logging.CRITICAL)\n\nfrom IPython.display import HTML\nHTML('../style/code_toggle.html')",
"6.4 Residuals and Image Quality<a id='deconv:sec:iqa'></a>\nUsing CLEAN or other deconvolution methods produces 'nicer' images than the dirty image (except when deconvolution gets out of control). What it means for an image to be 'nicer' is not a well-defined metric; in fact, it is almost completely undefined. When we talk of the quality of an image in synthesis imaging we rarely use a quantitative metric, but instead rely on the subjective opinion of the people looking at the image. I know, this is not very scientific. The field of computer vision has been around for decades; it is a field which has developed objective metrics and techniques for image quality assessment. At some point in the future these methods will need to be incorporated into radio astronomy. This is bound to happen, as we have moved to using automated calibration, imaging, and deconvolution pipelines.\nWe have two somewhat related questions we need to answer when we are reducing visibilities into a final image:\n\nWhen should you halt the deconvolution process?\nWhat makes a good image?\n\nIn $\\S$ 6.2 ➞ we covered how we can separate out a sky model from noise using an iterative CLEAN deconvolution process. But we did not discuss at what point we halt the process. There is no well-defined point at which to stop. Typically, an ad hoc decision is made to run deconvolution for a fixed number of iterations or down to a certain flux level. These halting limits are set by adjusting the CLEAN parameters until a 'nice' image is produced. Or, if the visibilities have been flux calibrated, which is possible with some arrays, the signal is fixed to some real flux scale. Having knowledge about the array and observation, a theoretical noise floor can be computed, and CLEAN can then be run down to a known noise level. 
One could imagine a more automated way to decide when to halt CLEAN, perhaps keeping track of the iterations and deciding if there is convergence.\nAs a thought experiment, we can think about an observation with perfect calibration (we discuss calibration in Chapter 8, but for now it is sufficient to know that the examples we have been using have perfect calibration). When we run CLEAN on this observation, each iteration will transfer some flux from the residual image to the sky model (see figure below). If we run this long enough we will reach the observation noise floor. Then we will start to deconvolve the noise from the image. And if this process runs for infinitely many iterations we will eventually have a sky model which contains all the flux, both from the sources and from the noise. The residual image in this case will be empty. Now, this extreme case results in our sky model containing noise sources, which is not ideal. But if we have not deconvolved enough flux, then the sky model is incomplete and the residual image will contain PSF structure from the remaining flux. Thus, the challenge is to determine what is enough deconvolution to remove most of the true sky signal, without over-deconvolving such that noise is added to the sky model. As stated earlier, the typical way to do this at the moment is to run multiple deconvolutions, adjusting the parameters until a subjective solution is reached.\nWe can see an example of over-deconvolution below. Using the same example from the previous section ➞, if we deconvolve beyond 300 iterations (which we found to result in a well-deconvolved sky model) then noise from the residual image is added to the sky model. This can be seen as the low-flux sources around the edge of the image. Over-deconvolution can lead to <cite data-cite='1998AJ....115.1693C'>clean bias</cite> ⤴ effects.",
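The 'known noise level' mentioned above can be estimated from the radiometer equation. A rough sketch of such a point-source sensitivity estimate, assuming identical antennas, natural weighting, no losses, and purely illustrative numbers (the SEFD value below is hypothetical, not a measured KAT-7 figure):

```python
import numpy as np

def theoretical_image_noise(sefd_jy, n_ant, bandwidth_hz, t_int_s, n_pol=2):
    """Rough point-source image sensitivity (Jy) from the radiometer equation.
    sefd_jy: system equivalent flux density per antenna (Jy).
    Assumes identical antennas, natural weighting, and no system losses."""
    n_baselines = n_ant * (n_ant - 1) / 2.0
    return sefd_jy / np.sqrt(n_pol * n_baselines * bandwidth_hz * t_int_s)

# hypothetical numbers: 7 antennas, SEFD of 1000 Jy, 10 MHz bandwidth, 6 hours
sigma = theoretical_image_noise(1000., 7, 10e6, 6. * 3600.)
```

Deconvolving down to a few times this `sigma` is one principled way to choose the CLEAN flux threshold, rather than tuning it by eye.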
"def generalGauss2d(x0, y0, sigmax, sigmay, amp=1., theta=0.):\n \"\"\"Return a normalized general 2-D Gaussian function\n x0,y0: centre position\n sigmax, sigmay: standard deviation\n amp: amplitude\n theta: rotation angle (deg)\"\"\"\n #norm = amp * (1./(2.*np.pi*(sigmax*sigmay))) #normalization factor\n norm = amp\n rtheta = theta * np.pi / 180. #convert to radians\n \n #general function parameters (https://en.wikipedia.org/wiki/Gaussian_function)\n a = (np.cos(rtheta)**2.)/(2.*(sigmax**2.)) + (np.sin(rtheta)**2.)/(2.*(sigmay**2.))\n b = -1.*(np.sin(2.*rtheta))/(4.*(sigmax**2.)) + (np.sin(2.*rtheta))/(4.*(sigmay**2.))\n c = (np.sin(rtheta)**2.)/(2.*(sigmax**2.)) + (np.cos(rtheta)**2.)/(2.*(sigmay**2.))\n return lambda x,y: norm * np.exp(-1. * (a * ((x - x0)**2.) - 2.*b*(x-x0)*(y-y0) + c * ((y-y0)**2.)))\n\ndef genRstoredBeamImg(fitsImg):\n \"\"\"Generate an image of the restored PSF beam based on the FITS header and image size\"\"\"\n fh = fits.open(fitsImg)\n \n #get the restoring beam information from the FITS header\n bmin = fh[0].header['BMIN'] #restored beam minor axis (deg)\n bmaj = fh[0].header['BMAJ'] #restored beam major axis (deg)\n bpa = fh[0].header['BPA'] #restored beam angle (deg)\n dRA = fh[0].header['CDELT1'] #pixel size in RA direction (deg)\n ra0 = fh[0].header['CRPIX1'] #centre RA pixel\n dDec = fh[0].header['CDELT2'] #pixel size in Dec direction (deg)\n dec0 = fh[0].header['CRPIX2'] #centre Dec pixel\n\n #construct 2-D elliptical Gaussian function\n gFunc = generalGauss2d(0., 0., bmin/2., bmaj/2., theta=bpa)\n\n #produce a restored PSF beam image\n imgSize = 2.*(ra0-1) #assumes a square image\n xpos, ypos = np.mgrid[0:imgSize, 0:imgSize].astype(float) #make a grid of pixel indices\n xpos -= ra0 #recentre\n ypos -= dec0 #recentre\n xpos *= dRA #convert pixel number to degrees\n ypos *= dDec #convert pixel number to degrees\n return gFunc(xpos, ypos) #restored PSF beam image\n \ndef convolveBeamSky(beamImg, skyModel):\n \"\"\"Convolve a beam 
(PSF or restored) image with a sky model image, images must be the same shape\"\"\"\n sampFunc = np.fft.fft2(beamImg) #sampling function\n skyModelVis = np.fft.fft2(skyModel[0,0]) #sky model visibilities\n sampModelVis = sampFunc * skyModelVis #sampled sky model visibilities\n return np.abs(np.fft.fftshift(np.fft.ifft2(sampModelVis))) #sky model convolved with restored beam\n\nfig = plt.figure(figsize=(16, 7))\n \nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-residual.fits')\nresidualImg = fh[0].data\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-model.fits')\nskyModel = fh[0].data\n \n#generate a restored PSF beam image\nrestBeam = genRstoredBeamImg(\n '../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-residual.fits')\n \n#convolve restored beam image with skymodel\nconvImg = convolveBeamSky(restBeam, skyModel)\n \ngc1 = aplpy.FITSFigure(residualImg, figure=fig, subplot=[0.1,0.1,0.35,0.8])\ngc1.show_colorscale(vmin=-1.5, vmax=2, cmap='viridis')\ngc1.hide_axis_labels()\ngc1.hide_tick_labels()\nplt.title('Residual Image (niter=1000)')\ngc1.add_colorbar()\n \ngc2 = aplpy.FITSFigure(convImg, figure=fig, subplot=[0.5,0.1,0.35,0.8])\ngc2.show_colorscale(vmin=0., vmax=2.5, cmap='viridis')\ngc2.hide_axis_labels()\ngc2.hide_tick_labels()\nplt.title('Sky Model')\ngc2.add_colorbar()\n \nfig.canvas.draw()",
"Figure: residual image and sky model after 1000 deconvolution iterations. The residual image has been over-deconvolved, leading to noise components being added to the sky model.\nThe second question, of what makes a good image, is why we still use subjective opinion. If we consider the realistic case of imaging and deconvolving a real set of visibilities, then we have the added problem that there will always be, at some level, calibration errors. These errors, and the causes of these errors, can be identified by a trained eye, whether the cause is poor gain calibration, interference, strong source sidelobes, or any number of other issues. Errors can cause a deconvolution process to diverge, resulting in an unrealistic sky model. Humans are very good at looking at images and deciding if they make sense, but we cannot easily describe how we do our image processing, so we find it hard to implement algorithms that do the same. Looking at the dirty image and deconvolved image of the same field below, most people would say the deconvolved image is objectively 'better' than the dirty image. Yet we do not know exactly why that is the case.",
"fig = plt.figure(figsize=(16, 7))\n\ngc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-dirty.fits', \\\n figure=fig, subplot=[0.1,0.1,0.35,0.8])\ngc1.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')\ngc1.hide_axis_labels()\ngc1.hide_tick_labels()\nplt.title('Dirty Image')\ngc1.add_colorbar()\n\ngc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits', \\\n figure=fig, subplot=[0.5,0.1,0.35,0.8])\ngc2.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')\ngc2.hide_axis_labels()\ngc2.hide_tick_labels()\nplt.title('Deconvolved Image')\ngc2.add_colorbar()\n\nfig.canvas.draw()",
"Left: dirty image from a 6 hour KAT-7 observation at a declination of $-30^{\\circ}$. Right: deconvolved image.\nThe deconvolved image does not have the same noisy PSF structures around the sources that the dirty image does. We could say that these imaging artefacts are localized and related to the PSF response to bright sources. The aim of deconvolution is to remove these PSF-like structures and replace them with a simple sky model which is decoupled from the observing system. Most of the difficult work in radio interferometry is the attempt to understand and remove the instrumental effects in order to recover the sky signal. Thus, we have some context for why the deconvolved image is 'better' than the dirty image. The challenge in automatically answering what makes a good image is somehow encoding both the context and human intuition. Indeed, a challenge left to the reader.\n6.4.1 Dynamic Range and Signal-to-Noise Ratio\nDynamic range is the standard metric, which has been used for decades, to describe the quality of an interferometric image. The dynamic range (DR) is defined as the ratio of the peak flux $I_{\\textrm{peak}}$ to the standard deviation of the noise in the image $\\sigma_I$. The dynamic range can be computed for either a dirty or a deconvolved image.\n$$\\textrm{DR} = \\frac{I_{\\textrm{peak}}}{\\sigma_I}$$\nNow this definition of the dynamic range is not well defined. First, how is the peak flux defined? Typically, the peak pixel value anywhere in the image is taken to be the peak flux. But be careful: changing the resolution of the image will result in different flux values. Decreasing the resolution can result in more flux being included in a single pixel; likewise, by increasing the resolution the flux will be spread across more pixels. 
The second issue is how the noise of the image is computed; possible options are:\n\nUse the entire image\nUse the entire residual image\nRandomly sample the image\nChoose a 'relatively' empty region\n\nThis is not an exhaustive list of methods, but the typical method is option 4. After deconvolution, the image is loaded into a viewer and the standard deviation of the noise is computed from a region that is relatively free of sources. As I write this I am aware of how ridiculous that might sound. Using the same image we can see how the dynamic range varies with these different methods. The dynamic range for the deconvolved image above is:",
"#load deconvolved image\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits')\ndeconvImg = fh[0].data\n#load residual image\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits')\nresidImg = fh[0].data\n\npeakI = np.max(deconvImg)\nprint('Peak Flux: %f Jy' % peakI)\n\nprint('Dynamic Range:')\n#method 1: noise from the entire deconvolved image\nnoise = np.std(deconvImg)\nprint('\\tMethod 1:', peakI/noise)\n\n#method 2: noise from the entire residual image\nnoise = np.std(residImg)\nprint('\\tMethod 2:', peakI/noise)\n\n#method 3: noise from a random sample of pixels\nnoise = np.std(np.random.choice(deconvImg.flatten(), int(deconvImg.size*.01))) #randomly sample 1% of pixels\nprint('\\tMethod 3:', peakI/noise)\n\n#method 4, region 1\nnoise = np.std(deconvImg[0,0,0:128,0:128]) #corner of image\nprint('\\tMethod 4a:', peakI/noise)\n\n#method 4, region 2\nnoise = np.std(deconvImg[0,0,192:320,192:320]) #centre of image\nprint('\\tMethod 4b:', peakI/noise)",
"Method 1 will always result in a lower dynamic range than Method 2, as the deconvolved image includes the sources while Method 2 uses only the residuals. Method 3 will result in a dynamic range which varies depending on the number of pixels sampled and which pixels are sampled. One could imagine an unlucky sampling where every pixel chosen is part of a source, resulting in a large standard deviation. Method 4 depends on the region used to compute the noise. In the Method 4a result, a corner of the image, where there are essentially no sources, results in a high dynamic range. On the other hand, choosing the centre region to compute the noise standard deviation results in a low dynamic range. This variation between methods can lead to people playing 'the dynamic range game', where someone can pick the result that best fits what they want to say about the image. Be careful, and make sure your dynamic range metric is well defined and unbiased.\nThere is a qualitative explanation for computing the image noise and the dynamic range by human interaction. Humans are very good at image processing, so we can quickly select regions which are 'noise-like'; it is easier to just look at an image than to try to come up with a complicated algorithm to find these regions. The dynamic range has a number of issues, but it is correlated with image quality. For a fixed visibility set, improving the dynamic range of an image usually results in an improvement in the quality of the image, as determined by a human.\nA significant disadvantage to using dynamic range is that it is a global metric which reduces an image down to a single number. It provides no information about local artefacts. This is becoming an important issue in modern synthesis imaging as we push into imaging significant portions of the primary beam and need to account for direction-dependent effects. These topics are discussed in Chapter 7. 
But, as is noted in <cite data-cite='taylor1999synthesis'>Synthesis Imaging in Radio Astronomy II (Lecture 13) </cite> ⤴ a valid argument can be made for using dynamic range as a proxy (at least a partial one) for image quality. As of this writing, dynamic range is the standard method to measure image quality.\n6.4.2 The Residual Image\nWe have noted that the result of a deconvolution process is a sky model and a residual image. An example residual image is shown below.",
"fig = plt.figure(figsize=(8, 7))\n\ngc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits', \\\n figure=fig)\ngc1.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')\ngc1.hide_axis_labels()\ngc1.hide_tick_labels()\nplt.title('Residual Image')\ngc1.add_colorbar()\n\nfig.canvas.draw()",
"Figure: Residual image of a KAT-7 observation resulting from CLEAN deconvolution.\nThe image shows that most of the bright sources have been deconvolved, but some flux remains. Thus, the centre of the image, where the brightest sources are, is noisier than the edges, where the sources are weaker. This is typical, as the primary beam is most sensitive at its centre. This residual image could possibly be further deconvolved, but the remaining flux from the sources is close to the image noise and we are in danger of deconvolving into the noise.\nThe residual image provides the best insight into how well the deconvolution and calibration process was performed. The ideal residual image is completely noise-like, with no apparent structure throughout. This ideal is rarely reached, as there are often deconvolution or calibration artefacts present. Looking at the residual image you can determine if there are poorly calibrated baselines, RFI present, not enough deconvolution, the wrong deconvolution parameters, a missing w-term correction, unaccounted-for direction-dependent effects, remaining extended structure, or any number of other effects. Inspection of the residual image for these different effects requires intuition which will develop with time.\n6.4.3 Image Quality Assessment\nAssessing the quality of a synthesized image is an open problem in interferometry. By default we use subjective human assessment. But this approach is not very scientific and can result in different measures of quality for the same image. Hopefully this section will soon be expanded with better solutions to the image quality assessment problem.\nThe process of deconvolution produces a sky model, but that model may not be realistic; in CLEAN the sky model is a set of $\\delta$-functions even if a source is extended. 
We can take sky modelling one step further by using source finding techniques to determine what in the sky model is an isolated source, what is a collection of nearby components which make up an extended source, and what is noise resulting from an imperfect deconvolution. This will be discussed in the next section.\n\nNext: 6.5 Source Finding and Detection\n<div class=warn><b>Future Additions:</b></div>\n\n\nexamples of under- and over-deconvolution\nexamples of poor deconvolution of extended sources with CLEAN\nexample: deconvolve the same field (cygnus a?) with different methods and show results\nexamples: sources of imaging artefacts, need real data examples"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ledeprogram/algorithms
|
class10/donow/kate_bennion_donow_10.ipynb
|
gpl-3.0
|
[
"Create a classifier to predict the wine color from wine quality attributes using this dataset: http://archive.ics.uci.edu/ml/datasets/Wine+Quality\nThe data is in the database we've been using\n\nhost='training.c1erymiua9dx.us-east-1.rds.amazonaws.com'\ndatabase='training'\nport=5432\nuser='dot_student'\npassword='qgis'\ntable name = 'winequality'\n\nQuery for the data and create a numpy array",
"import pandas as pd\nimport numpy as np\nimport pg8000\n\nconn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', database='training', user='dot_student', password='qgis')\n\ncursor = conn.cursor()\n\ncursor.execute('SELECT * FROM winequality')\n\ndata = []\nfor item in cursor.fetchall():\n data.append(item)\n\nmyarray = np.array(data)",
"Split the data into features (x) and target (y, the last column in the table)\nRemember you can cast the results into an numpy array and then slice out what you want",
"x = myarray[:,:11].astype(float) # quality attributes as numeric features\ny = myarray[:,11] # wine color target (last column)",
"Create a decision tree with the data",
"from sklearn.tree import DecisionTreeClassifier\n\ndt = DecisionTreeClassifier()\n\ndt = dt.fit(x, y)",
"Run 10-fold cross validation on the model",
"from sklearn.cross_validation import cross_val_score\n\n# cross-validate on the features and the 1-D target defined earlier\nscores = cross_val_score(dt, x, y.ravel(), cv=10)",
"If you have time, calculate the feature importance and graph based on the code in the slides from last class\nUse this tip for getting the column names from your cursor object",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\n# plot the importance of each feature in the fitted tree\nplt.plot(dt.feature_importances_,'o')\nplt.ylim(0,1)"
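The linked tip refers to the cursor's column metadata; a small sketch, assuming the standard DB-API layout where `cursor.description` is a sequence of 7-item entries whose first field is the column name (the helper name here is illustrative):

```python
def column_names(description):
    """Extract column names from a DB-API cursor.description,
    where each entry is a 7-item sequence starting with the name."""
    return [col[0] for col in description]

# hypothetical usage with the pg8000 cursor from the earlier query:
# labels = column_names(cursor.description)[:11]
# plt.xticks(range(11), labels, rotation=90)
```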
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
christophreimer/pytesmo
|
docs/setup_validation_ASCAT_ISMN.ipynb
|
bsd-3-clause
|
[
"Example soil moisture validation: ASCAT - ISMN\nThis example shows how to set up the pytesmo validation framework to perform the validation either normally or using the parallel processing tools from IPython.",
"import os\n\nimport pytesmo.validation_framework.temporal_matchers as temporal_matchers\nimport pytesmo.validation_framework.metric_calculators as metrics_calculators\n\nfrom datetime import datetime\n\nfrom pytesmo.io.sat.ascat import AscatH25_SSM\nfrom pytesmo.io.ismn.interface import ISMN_Interface\nfrom pytesmo.validation_framework.validation import Validation\n\nfrom examples.data_preparation_ASCAT_ISMN import DataPreparation",
"Initialize ASCAT reader",
"ascat_data_folder = os.path.join('/media/sf_R', 'Datapool_processed', 'WARP', 'WARP5.5',\n 'IRMA1_WARP5.5_P2', 'R1', '080_ssm', 'netcdf')\nascat_grid_folder = os.path.join('/media/sf_R', 'Datapool_processed', 'WARP',\n 'ancillary', 'warp5_grid')\n\nascat_reader = AscatH25_SSM(ascat_data_folder, ascat_grid_folder)\nascat_reader.read_bulk = True\nascat_reader._load_grid_info()",
"Initialize ISMN reader",
"ismn_data_folder = os.path.join('/media/sf_D', 'ISMN', 'data')\nismn_reader = ISMN_Interface(ismn_data_folder)",
"Create the variable jobs, which is a list containing either cell numbers (for a cell-based process) or tuples of grid point information (gpi, longitude, latitude). For ISMN, gpi is replaced by idx, an index used to read time series of variables such as soil moisture. DO NOT CHANGE the name jobs because it will be searched for during the parallel processing!",
"jobs = []\n\nids = ismn_reader.get_dataset_ids(variable='soil moisture', min_depth=0, max_depth=0.1)\nfor idx in ids:\n metadata = ismn_reader.metadata[idx]\n jobs.append((idx, metadata['longitude'], metadata['latitude']))",
"Create the variable save_path which is a string representing the path where the results will be saved. DO NOT CHANGE the name save_path because it will be searched during the parallel processing!",
"save_path = os.path.join('/media/sf_D', 'validation_framework', 'test_ASCAT_ISMN')",
"Create the validation object.",
"datasets = {'ISMN': {'class': ismn_reader, 'columns': ['soil moisture'],\n 'type': 'reference', 'args': [], 'kwargs': {}},\n 'ASCAT': {'class': ascat_reader, 'columns': ['sm'], 'type': 'other',\n 'args': [], 'kwargs': {}, 'grids_compatible': False,\n 'use_lut': False, 'lut_max_dist': 30000}\n }\n\nperiod = [datetime(2007, 1, 1), datetime(2014, 12, 31)]\n\nprocess = Validation(datasets=datasets, data_prep=DataPreparation(),\n temporal_matcher=temporal_matchers.BasicTemporalMatching(window=1/24.0, reverse=True),\n scaling='lin_cdf_match', scale_to_other=True,\n metrics_calculator=metrics_calculators.BasicMetrics(),\n period=period, cell_based_jobs=False)",
"If you decide to use IPython parallel processing to perform the validation, please ADD the start_processing function to your code. Then move to pytesmo.validation_framework.start_validation, change the path to your setup code, and start the validation.",
"def start_processing(job):\n try:\n return process.calc(job)\n except RuntimeError:\n return process.calc(job)",
"If you choose to perform the validation normally, please ADD the uncommented main method to your code.",
"# if __name__ == '__main__':\n# \n# from pytesmo.validation_framework.results_manager import netcdf_results_manager\n# \n# for job in jobs:\n# results = process.calc(job)\n# netcdf_results_manager(results, save_path)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Ivanhehe/Sharings
|
affectiveComputing/ComparisonAnalysis.ipynb
|
mit
|
[
"import pandas as pd\nimport numpy as np\nfrom statistics import mode\nfrom scipy.spatial.distance import euclidean\nfrom dtw import dtw\nimport os\nimport cv2 \n\nfrom collections import Counter\nimport re\nimport datetime\nimport random\n\nimport matplotlib.pyplot as plt\nimport matplotlib.transforms as transforms\nimport seaborn as sns\nsns.set_style(\"whitegrid\", {'axes.grid' : False})\n\nimport tflearn \nfrom tflearn.layers.conv import conv_2d, max_pool_2d\nfrom tflearn.layers.core import input_data, dropout, fully_connected\nfrom tflearn.layers.estimator import regression\nfrom tflearn.layers.normalization import local_response_normalization\n\nimport tensorflow as tf",
"Objectives\n\nChoose two players who reach the same score (stopped forcibly)\nChoose two players who play for the same amount of time (stopped forcibly)\nCalculate their statistics (variance, average, mode, etc.)\nVisualize HR, emotion, and the collection of emoji\nCompare the movement of the birds (the players' HR) to find their similarity using the Dynamic Time Warping algorithm",
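The last objective relies on Dynamic Time Warping. The notebook itself uses the `dtw` package; as a hedged, standalone sketch of the underlying dynamic program (with a simple absolute-difference cost, chosen only for illustration):

```python
import numpy as np

def dtw_distance(a, b):
    # classic O(n*m) dynamic-programming DTW with an absolute-difference cost
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: warping absorbs the repeated sample
```

Because warping aligns samples non-linearly, two heart-rate series with similar shapes but different pacing get a small distance even when a plain Euclidean comparison would not.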
"# all the function we need to parse the data\ndef extract_split_data(data): \n content = re.findall(\"\\[(.*?)\\]\", data)\n timestamps = []\n values = []\n for c in content[0].split(\",\"):\n c = (c.strip()[1:-1])\n if len(c)>21: \n x, y = c.split(\"#\")\n values.append(int(x))\n timestamps.append(y) \n return timestamps, values\n\ndef de_timestampe(time):\n \n # get year month date\n y = time.split()[0].split(\"-\")[0]\n m = time.split()[0].split(\"-\")[1]\n d = time.split()[0].split(\"-\")[2]\n # get hour minute second\n h = time.split()[1].split(\":\")[0]\n mi = time.split()[1].split(\":\")[1]\n s = time.split()[1].split(\":\")[2]\n \n t = m + \" \" + d + \" \" + h + \":\" + mi + \":\" + s + \" \" + y\n good_format = datetime.datetime.strptime(t, '%m %d %H:%M:%S.%f %Y' )\n return good_format\n\ndef de_movement(movement):\n moves = []\n\n for m in movement:\n if len(m[1:-2]) > 1:\n for y in m[1:-2].split(\",\"):\n moves.append(float(y))\n return moves\n\ndef chop_video(url):\n vidcap = cv2.VideoCapture(url)\n vidcap.set(cv2.CAP_PROP_POS_MSEC,6000) \n #success,image = vidcap.read()\n count = 0\n success = True\n while success:\n success,image = vidcap.read()\n (h, w) = image.shape[:2]\n M = cv2.getRotationMatrix2D((w/2,h/2),-90,1)\n rotated = cv2.warpAffine(image,M,(w,h))\n cropped = rotated[100:550, 80:400]\n cv2.imwrite(\"converted1/frame%d.jpg\" % count, cropped) # save frame as JPEG file\n count += 1\n\ndef process_pred_data():\n dirname = \"/Users/xueguoliang/myGithub/affectiveComputing/converted1\"\n # Load every image file in the provided directory\n filenames = [os.path.join(dirname, fname)\n for fname in os.listdir(dirname) if fname.split(\".\")[1] == 'jpg']\n \n # Read every filename as an RGB image\n imgs = [cv2.imread(fname,cv2.IMREAD_GRAYSCALE) for fname in filenames]\n # Then resize the square image to 48 x 48 pixels\n imgs = [cv2.resize(img_i, (48, 48)) for img_i in imgs]\n # Finally make our list of 3-D images a 4-D array with the first dimension 
the number of images:\n imgs = np.array(imgs).astype(np.float32) \n np.save('pred_data.npy', imgs)\n\n\ndef emotion_predict(x):\n \n MODEL = None\n with tf.Graph().as_default():\n network = input_data(shape=[None, IMG_SIZE, IMG_SIZE, 1], name='input')\n\n network = conv_2d(network, 96, 11, strides=4, activation='relu')\n network = max_pool_2d(network, 3, strides=2)\n network = local_response_normalization(network)\n network = conv_2d(network, 256, 5, activation='relu')\n network = max_pool_2d(network, 3, strides=2)\n network = local_response_normalization(network)\n network = conv_2d(network, 384, 3, activation='relu')\n network = conv_2d(network, 384, 3, activation='relu')\n network = conv_2d(network, 256, 3, activation='relu')\n network = max_pool_2d(network, 3, strides=2)\n network = local_response_normalization(network)\n network = fully_connected(network, 4096, activation='tanh')\n network = dropout(network, 0.5)\n network = fully_connected(network, 4096, activation='tanh')\n network = dropout(network, 0.5)\n network = fully_connected(network, 7, activation='softmax')\n network = regression(network, optimizer='momentum',loss='categorical_crossentropy',learning_rate=LR, name='targets')\n\n model = tflearn.DNN(network, tensorboard_dir='alex_bird')\n\n model.load(\"affective-bird-0.001-alexnet_15.model\")\n MODEL = model\n \n \n predict_y = MODEL.predict(x.reshape(-1,IMG_SIZE,IMG_SIZE,1))\n new_y = (np.argmax(predict_y, axis=1)).astype(np.uint8)\n \n return new_y\n\ndef get_track_emoj(data):\n \n content = re.findall(\"\\[(.*?)\\]\", data)\n e_timestamp = []\n #print (len(content[0]))\n if len(content[0])>0:\n for c in content[0].split(\",\"):\n c = (c.strip()[1:-1])\n e_timestamp.append(c)\n return e_timestamp\n\nplayer1 = pd.read_csv(\"/Users/xueguoliang/Desktop/finalData/FlappyBird-1ec48f0fbc8d80edc56051dd46c7070d-2017-07-06-20-48.csv\", delimiter=\";\")\nplayer2 = 
pd.read_csv(\"/Users/xueguoliang/Desktop/finalData/FlappyBird-f2b801830aba82769b39d29f2afddd10-2017-07-07-20-07.csv\", delimiter=\";\")\n\n#chop_video('/Users/xueguoliang/Desktop/finalData/VideoRecording-2017-07-06-20-48-51.mp4') \n#process_pred_data()\n\npred_data = np.load('pred_data.npy')\n\n# hyperparameter\nIMG_SIZE = 48\nLR = 1e-3\n\nresult = emotion_predict(pred_data)",
"Heart rate analysis for player1",
"# playing span\ns1 = player1['TimeStarted'].values[0]\ne1 = player1['TimeEnded'].values[-1]\nsx1 = player1['TimeStarted'].values[-1]\ndiff1 = (de_timestampe(e1) - de_timestampe(s1)) # difference in seconds\ndiffx1 = (de_timestampe(e1) - de_timestampe(sx1))\n\n# get timestamp and HR\ntimes1 = []\nrates1 = []\nflags = [0]\npos = 0\n\nfor session in player1['Heartbeats']: \n time, rate = extract_split_data(session)\n pos += len(time)-1\n if pos>0:\n flags.append(pos)\n times1 += time \n rates1 += rate\n \n\nprint (\"Player1\")\nprint (\"Time: {} minutes, {} ~ {}\".format(round(diff1.seconds/60,2), s1, e1))\nprint (\"Scores: {}\".format(player1[\"Score\"].values))\nprint (\"Emoj Scores: {}\".format(player1[\"EmojiScore\"].values))\nprint (\"Game Sessions: {}\".format(player1.shape[0]))\nprint (\"Variance of HR: {}\".format(np.var(rates1)))\nprint (\"Average of HR: {}\".format(np.mean(rates1)))\nprint (\"Mode of HR: {}\".format(mode(rates1)))",
"Emoji collection analysis for player1",
"e_timestamp = []\nfor session in player1['EmojiTimestamps']: \n e_timestamp += get_track_emoj(session)\n \nxi = []\ntrack = [] \n\nfor i,t in enumerate(times1):\n\n for e in e_timestamp:\n if abs((de_timestampe(e)-de_timestampe(t)).seconds) < 1:\n xi.append(i) \n track.append(int(rates1[i])) \n \nfig, ax = plt.subplots(figsize=(15,8))\nmarkers_on = track\nplt.plot(rates1)\nplt.scatter(xi,track,c=\"r\",s=50)\n#plt.xticks(x,times1, rotation=\"60\")\nplt.title(\"Heartbeats - EmojiCollection\")\nax.set_xlabel(\"time(s)\")\nax.set_ylabel(\"beats\")\n\nplt.show()\n\n# plot\nx1 = diffx1.seconds\nfig, ax1 = plt.subplots(figsize=(15,8))\nplt.title(\"Heartbeats of player1\")\n\n#plt.scatter(timestamps1, rates1)\nax2 = ax1.twinx()\n\nax1.plot(rates1)\nax1.tick_params('y', colors='b')\n\nemotions = []\ni=0\nwhile(i<=len(result)):\n emotions.append(int(result[i]))\n i = i+len(result)//len(rates1)\n\nax2.scatter(range(0,len(emotions)),emotions,color=\"r\",s=50,alpha=.4)\nax2.tick_params('y', colors='r')\n\n#plt.ylim([70,150])\nfor f in flags:\n plt.axvline(x=f, color='y', linestyle='--')\n\n#plt.text(x1,120, str(x1)+\" >>>\", size=15, fontweight='bold')\n\nax1.set_xlabel(\"time(s)\")\nax1.set_ylabel(\"Beats\", color=\"b\")\n\nax2.set_ylabel('Emotion', color=\"r\")\nax2.set_yticklabels([\"\",\"Angry\", \"Disgust\", \"Fear\", \"Happy\", \"Sad\", \"Surprise\", \"Neutral\",\"\"])\n\nplt.show()\n\nliter = [\"Angry\", \"Disgust\", \"Fear\", \"Happy\", \"Sad\", \"Surprise\", \"Neutral\"]\nfinal_result = [liter[i] for i in result]\nes = []\nfs = []\nrs = Counter(final_result)\nfor v in rs:\n es.append(v)\n fs.append(rs[v])\n\nsns.barplot(es, fs)\nplt.title(\"Emotional Distribution of Player1\")\nplt.show()",
"Heart rate analysis for player2",
"# playing span\ns2 = player2['TimeStarted'].values[0]\ne2 = player2['TimeEnded'].values[-1]\nsx2 = player2['TimeStarted'].values[-1]\ndiff2 = (de_timestampe(e2) - de_timestampe(s2)) # difference in second\ndiffx2 = (de_timestampe(e2) - de_timestampe(sx2)) # difference in seconds\n\n# get timestamp and HR\ntimes2 = []\nrates2 = []\n\nfor session in player2['Heartbeats']: \n time, rate = extract_split_data(session)\n times2 += time \n rates2 += rate\n \nprint (\"Player2\")\nprint (\"Time: {} minutes, {} ~ {}\".format(round(diff2.seconds/60,2), s2.split()[1], e2.split()[1]))\nprint (\"Game Sessions: {}\".format(player2.shape[0]))\nprint (\"Scores: {}\".format(player2[\"Score\"].values))\nprint (\"Emoj Scores: {}\".format(player2[\"EmojiScore\"].values))\nprint (\"Variance of HR: {}\".format(np.var(rates2)))\nprint (\"Average of HR: {}\".format(np.mean(rates2)))\nprint (\"Mode of HR: {}\".format(mode(rates2)))\n\n# plot\ntimestamps2 = pd.to_datetime(times2)\nx2 = diffx2.seconds\n\nfig, ax = plt.subplots(figsize=(15,8))\nplt.title(\"Heartbeats of player2\")\n#plt.scatter(timestamps1, rates1)\nsns.tsplot(rates2)\nplt.ylim([65,90])\n#plt.xticks(x, times, rotation=\"60\")\n\n\nax.set_xlabel(\"time(s)\")\nax.set_ylabel(\"beats\")\n\nplt.show()",
"Playing Pattern",
"m1 = player1[\"Movement\"]\nm2 = player2[\"Movement\"]\nprint (m1[:5])\nprint (m2[:5])\n\ny1 = de_movement(m1)\ny2 = de_movement(m2)\n\nfig, ax = plt.subplots(figsize=(15,8))\nplt.title(\"Comparison between birds\")\n#plt.scatter(timestamps1, rates1)\nplt.plot(y1, color=\"b\", label=\"player1\", alpha=.6)\nplt.plot(y2, color=\"g\", label=\"player2\", alpha=.4)\nplt.xlim([0,100])\nax.set_xlabel(\"time(s)\")\nax.set_ylabel(\"y\")\nplt.legend()\nplt.show()\n\nyy1 = (y1-np.mean(y1))/np.std(y1)\nyy2 = (y2-np.mean(y2))/np.std(y2)\n\ndist, cost, acc, path = dtw(yy1, yy2, dist=euclidean)\ndist1, cost1, acc1, path1 = dtw(yy1[:300], yy2[:300], dist=euclidean)\n\nprint(\"Whole Game Sessions: {}\".format(dist))\nprint(\"During Same Period: {}\".format(dist1))\n\n%pylab inline\nimshow(acc1.T, origin='lower', cmap=cm.gray, interpolation='nearest')\nplot(path1[0], path1[1], 'w')\nxlim((-0.5, acc1.shape[0]-0.5))\nylim((-0.5, acc1.shape[1]-0.5))\n\n# similarity for own movement\nfrom itertools import islice\n\ndef window(seq, n=2):\n \"Returns a sliding window (of width n) over data from the iterable\"\n \" s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ... \"\n it = iter(seq)\n result = tuple(islice(it, n))\n if len(result) == n:\n yield result \n for elem in it:\n result = result[1:] + (elem,)\n yield result\n\nseq = yy1[:100]\nsub = window(seq, 10)\nprint (sub)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
CalPolyPat/phys202-2015-work
|
assignments/assignment05/MatplotlibEx03.ipynb
|
mit
|
[
"Matplotlib Exercise 3\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Contour plots of 2d wavefunctions\nThe wavefunction of a 2d quantum well is:\n$$ \\psi_{n_x,n_y}(x,y) = \\frac{2}{L}\n \\sin{\\left( \\frac{n_x \\pi x}{L} \\right)} \n \\sin{\\left( \\frac{n_y \\pi y}{L} \\right)} $$\nThis is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.\nDefine a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.",
"def well2d(x, y, nx, ny, L=1.0):\n \"\"\"Compute the 2d quantum well wave function.\"\"\"\n return 2/L*np.sin(nx*np.pi*x/L)*np.sin(ny*np.pi*y/L)\n\npsi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)\nassert len(psi)==10\nassert psi.shape==(10,)",
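As a quick sanity check (not part of the original exercise text), the prefactor $2/L$ should make $\psi_{n_x,n_y}$ normalized; integrating $|\psi|^2$ over the well with a midpoint rule recovers 1:

```python
import numpy as np

def well2d(x, y, nx, ny, L=1.0):
    # same formula as above, repeated so this check is self-contained
    return 2 / L * np.sin(nx * np.pi * x / L) * np.sin(ny * np.pi * y / L)

L, n = 1.0, 400
xs = (np.arange(n) + 0.5) * L / n          # midpoints of n equal cells
xx, yy = np.meshgrid(xs, xs)
# Riemann sum of |psi|^2 over the L x L well; should equal 1
norm = np.sum(well2d(xx, yy, 3, 2, L) ** 2) * (L / n) ** 2
print(round(norm, 6))  # 1.0
```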
"The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:\n\nUse $n_x=3$, $n_y=2$ and $L=1$.\nUse the limits $[0,1]$ for the x and y axis.\nCustomize your plot to make it effective and beautiful.\nUse a non-default colormap.\nAdd a colorbar to your visualization.\n\nFirst make a plot using one of the contour functions:",
"f=plt.figure(figsize=(10,10))\nx=np.linspace(0,1,100)\ny=np.linspace(0,1,100)\nxx, yy=np.meshgrid(x, y)\nz=well2d(xx,yy,3,2,1)\nplt.contourf(x,y,z,50,cmap=plt.cm.get_cmap(\"hot\"))\nplt.colorbar(label=r\"$\\Psi (x,y)$\")\nplt.xlabel(\"X Position\")\nplt.ylabel(\"Y Position\")\nplt.title(\"The wavefunction of a 2D infinite well\")\n\nassert True # use this cell for grading the contour plot",
"Next make a visualization using one of the pcolor functions:",
"f=plt.figure(figsize=(10,10))\nx=np.linspace(0,1,100)\ny=np.linspace(0,1,100)\nxx, yy=np.meshgrid(x, y)\nz=well2d(xx,yy,3,2,1)\nplt.pcolor(x,y,z,cmap=\"RdBu\")\nplt.colorbar(label=r\"$\\Psi (x,y)$\")\nplt.xlabel(\"X Position\")\nplt.ylabel(\"Y Position\")\nplt.title(\"The wavefunction of a 2D infinite well\")\n\nassert True # use this cell for grading the pcolor plot"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
projectmesa/mesa-examples
|
examples/Schelling/.ipynb_checkpoints/analysis-checkpoint.ipynb
|
apache-2.0
|
[
"Schelling Segregation Model\nBackground\nThe Schelling (1971) segregation model is a classic of agent-based modeling, demonstrating how agents following simple rules lead to the emergence of qualitatively different macro-level outcomes. Agents are randomly placed on a grid. There are two types of agents, one constituting the majority and the other the minority. All agents want a certain number (generally, 3) of their 8 surrounding neighbors to be of the same type in order for them to be happy. Unhappy agents will move to a random available grid space. While individual agents do not have a preference for a segregated outcome (e.g. they would be happy with 3 similar neighbors and 5 different ones), the aggregate outcome is nevertheless heavily segregated.\nImplementation\nThis is a demonstration of running a Mesa model in an IPython Notebook. The actual model and agent code are implemented in Schelling.py, in the same directory as this notebook. Below, we will import the model class, instantiate it, run it, and plot the time series of the number of happy agents.",
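The happiness rule described above can be sketched as follows. This is only an illustration of the rule, not the actual code in Schelling.py; the agent-type encoding and the default threshold of 3 are assumptions:

```python
def is_happy(agent_type, neighbor_types, homophily=3):
    # an agent is happy when at least `homophily` of its neighbors share its type
    similar = sum(1 for t in neighbor_types if t == agent_type)
    return similar >= homophily

# a minority agent (type 1) with two same-type neighbors out of eight is unhappy...
print(is_happy(1, [0, 0, 0, 0, 0, 0, 1, 1]))  # False
# ...but with three same-type neighbors it stays put
print(is_happy(1, [0, 0, 0, 0, 0, 1, 1, 1]))  # True
```

Note the rule is a threshold, not a preference for uniformity: the agent in the second call is happy with five different-type neighbors, yet the aggregate dynamics still segregate.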
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom model import SchellingModel",
"Now we instantiate a model instance: a 10x10 grid, with an 80% chance of an agent being placed in each cell, approximately 20% of agents set as minorities, and agents wanting at least 3 similar neighbors.",
"model = SchellingModel(10, 10, 0.8, 0.2, 3)",
"We want to run the model until all the agents are happy with where they are. However, there's no guarantee that a given model instantiation will ever settle down. So let's run it for either 100 steps or until it stops on its own, whichever comes first:",
"while model.running and model.schedule.steps < 100:\n model.step()\nprint(model.schedule.steps) # Show how many steps have actually run",
"The model has a DataCollector object, which checks and stores how many agents are happy at the end of each step. It can also generate a pandas DataFrame of the data it has collected:",
"model_out = model.datacollector.get_model_vars_dataframe()\n\nmodel_out.head()",
"Finally, we can plot the 'happy' series:",
"model_out.happy.plot()",
"For testing purposes, here is a table giving each agent's x and y values at each step.",
"x_positions = model.datacollector.get_agent_vars_dataframe()\n\nx_positions.head()",
"Effect of Homophily on segregation\nNow, we can do a parameter sweep to see how segregation changes with homophily.\nFirst, we create a function which takes a model instance and returns what fraction of agents are segregated -- that is, have no neighbors of the opposite type.",
"from mesa.batchrunner import BatchRunner\n\ndef get_segregation(model):\n '''\n Find the % of agents that only have neighbors of their same type.\n '''\n segregated_agents = 0\n for agent in model.schedule.agents:\n segregated = True\n for neighbor in model.grid.neighbor_iter(agent.pos):\n if neighbor.type != agent.type:\n segregated = False\n break\n if segregated:\n segregated_agents += 1\n return segregated_agents / model.schedule.get_agent_count()",
"Now, we set up the batch run, with a dictionary of fixed and changing parameters. Let's hold everything fixed except for Homophily.",
"parameters = {\"height\": 10, \"width\": 10, \"density\": 0.8, \"minority_pc\": 0.2, \n \"homophily\": range(1,9)}\n\nmodel_reporters = {\"Segregated_Agents\": get_segregation}\n\nparam_sweep = BatchRunner(SchellingModel, parameters, iterations=10, \n max_steps=200,\n model_reporters=model_reporters)\n\nparam_sweep.run_all()\n\ndf = param_sweep.get_model_vars_dataframe()\n\nplt.scatter(df.homophily, df.Segregated_Agents)\nplt.grid(True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/en-snapshot/hub/tutorials/cord_19_embeddings.ipynb
|
apache-2.0
|
[
"Copyright 2019 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"Exploring the TF-Hub CORD-19 Swivel Embeddings\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/cord_19_embeddings\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/cord_19_embeddings.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/hub/blob/master/examples/colab/cord_19_embeddings.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/cord_19_embeddings.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/tensorflow/cord-19/swivel-128d/1\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>\n\nThe CORD-19 Swivel text embedding module from TF-Hub (https://tfhub.dev/tensorflow/cord-19/swivel-128d/1)\n was built to support researchers analyzing natural languages text related to COVID-19.\nThese embeddings were trained on the titles, authors, abstracts, body texts, and\nreference titles of articles in the CORD-19 dataset.\nIn this colab we will:\n- Analyze semantically similar words in the embedding space\n- Train a classifier on the SciCite dataset using the CORD-19 embeddings\nSetup",
"import functools\nimport itertools\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nimport pandas as pd\n\nimport tensorflow.compat.v1 as tf\ntf.disable_eager_execution()\ntf.logging.set_verbosity('ERROR')\n\nimport tensorflow_datasets as tfds\nimport tensorflow_hub as hub\n\ntry:\n from google.colab import data_table\n def display_df(df):\n return data_table.DataTable(df, include_index=False)\nexcept ModuleNotFoundError:\n # If google-colab is not available, just display the raw DataFrame\n def display_df(df):\n return df",
"Analyze the embeddings\nLet's start off by analyzing the embedding by calculating and plotting a correlation matrix between different terms. If the embedding learned to successfully capture the meaning of different words, the embedding vectors of semantically similar words should be close together. Let's take a look at some COVID-19 related terms.",
"# Use the inner product between two embedding vectors as the similarity measure\ndef plot_correlation(labels, features):\n corr = np.inner(features, features)\n corr /= np.max(corr)\n sns.heatmap(corr, xticklabels=labels, yticklabels=labels)\n\n\nwith tf.Graph().as_default():\n # Load the module\n query_input = tf.placeholder(tf.string)\n module = hub.Module('https://tfhub.dev/tensorflow/cord-19/swivel-128d/1')\n embeddings = module(query_input)\n\n with tf.train.MonitoredTrainingSession() as sess:\n\n # Generate embeddings for some terms\n queries = [\n # Related viruses\n \"coronavirus\", \"SARS\", \"MERS\",\n # Regions\n \"Italy\", \"Spain\", \"Europe\",\n # Symptoms\n \"cough\", \"fever\", \"throat\"\n ]\n\n features = sess.run(embeddings, feed_dict={query_input: queries})\n plot_correlation(queries, features)",
"We can see that the embedding successfully captured the meaning of the different terms. Each word is similar to the other words of its cluster (i.e. \"coronavirus\" highly correlates with \"SARS\" and \"MERS\"), while they are different from terms of other clusters (i.e. the similarity between \"SARS\" and \"Spain\" is close to 0).\nNow let's see how we can use these embeddings to solve a specific task.\nSciCite: Citation Intent Classification\nThis section shows how one can use the embedding for downstream tasks such as text classification. We'll use the SciCite dataset from TensorFlow Datasets to classify citation intents in academic papers. Given a sentence with a citation from an academic paper, classify whether the main intent of the citation is as background information, use of methods, or comparing results.",
"#@title Set up the dataset from TFDS\n\nclass Dataset:\n \"\"\"Build a dataset from a TFDS dataset.\"\"\"\n def __init__(self, tfds_name, feature_name, label_name):\n self.dataset_builder = tfds.builder(tfds_name)\n self.dataset_builder.download_and_prepare()\n self.feature_name = feature_name\n self.label_name = label_name\n \n def get_data(self, for_eval):\n splits = THE_DATASET.dataset_builder.info.splits\n if tfds.Split.TEST in splits:\n split = tfds.Split.TEST if for_eval else tfds.Split.TRAIN\n else:\n SPLIT_PERCENT = 80\n split = \"train[{}%:]\".format(SPLIT_PERCENT) if for_eval else \"train[:{}%]\".format(SPLIT_PERCENT)\n return self.dataset_builder.as_dataset(split=split)\n\n def num_classes(self):\n return self.dataset_builder.info.features[self.label_name].num_classes\n\n def class_names(self):\n return self.dataset_builder.info.features[self.label_name].names\n\n def preprocess_fn(self, data):\n return data[self.feature_name], data[self.label_name]\n\n def example_fn(self, data):\n feature, label = self.preprocess_fn(data)\n return {'feature': feature, 'label': label}, label\n\n\ndef get_example_data(dataset, num_examples, **data_kw):\n \"\"\"Show example data\"\"\"\n with tf.Session() as sess:\n batched_ds = dataset.get_data(**data_kw).take(num_examples).map(dataset.preprocess_fn).batch(num_examples)\n it = tf.data.make_one_shot_iterator(batched_ds).get_next()\n data = sess.run(it)\n return data\n\n\nTFDS_NAME = 'scicite' #@param {type: \"string\"}\nTEXT_FEATURE_NAME = 'string' #@param {type: \"string\"}\nLABEL_NAME = 'label' #@param {type: \"string\"}\nTHE_DATASET = Dataset(TFDS_NAME, TEXT_FEATURE_NAME, LABEL_NAME)\n\n#@title Let's take a look at a few labeled examples from the training set\nNUM_EXAMPLES = 20 #@param {type:\"integer\"}\ndata = get_example_data(THE_DATASET, NUM_EXAMPLES, for_eval=False)\ndisplay_df(\n pd.DataFrame({\n TEXT_FEATURE_NAME: [ex.decode('utf8') for ex in data[0]],\n LABEL_NAME: [THE_DATASET.class_names()[x] for x in 
data[1]]\n }))",
"Training a citation intent classifier\nWe'll train a classifier on the SciCite dataset using an Estimator. Let's set up the input_fns to read the dataset into the model.",
"def preprocessed_input_fn(for_eval):\n data = THE_DATASET.get_data(for_eval=for_eval)\n data = data.map(THE_DATASET.example_fn, num_parallel_calls=1)\n return data\n\n\ndef input_fn_train(params):\n data = preprocessed_input_fn(for_eval=False)\n data = data.repeat(None)\n data = data.shuffle(1024)\n data = data.batch(batch_size=params['batch_size'])\n return data\n\n\ndef input_fn_eval(params):\n data = preprocessed_input_fn(for_eval=True)\n data = data.repeat(1)\n data = data.batch(batch_size=params['batch_size'])\n return data\n\n\ndef input_fn_predict(params):\n data = preprocessed_input_fn(for_eval=True)\n data = data.batch(batch_size=params['batch_size'])\n return data",
"Let's build a model which uses the CORD-19 embeddings with a classification layer on top.",
"def model_fn(features, labels, mode, params):\n # Embed the text\n embed = hub.Module(params['module_name'], trainable=params['trainable_module'])\n embeddings = embed(features['feature'])\n\n # Add a linear layer on top\n logits = tf.layers.dense(\n embeddings, units=THE_DATASET.num_classes(), activation=None)\n predictions = tf.argmax(input=logits, axis=1)\n\n if mode == tf.estimator.ModeKeys.PREDICT:\n return tf.estimator.EstimatorSpec(\n mode=mode,\n predictions={\n 'logits': logits,\n 'predictions': predictions,\n 'features': features['feature'],\n 'labels': features['label']\n })\n \n # Set up a multi-class classification head\n loss = tf.nn.sparse_softmax_cross_entropy_with_logits(\n labels=labels, logits=logits)\n loss = tf.reduce_mean(loss)\n\n if mode == tf.estimator.ModeKeys.TRAIN:\n optimizer = tf.train.GradientDescentOptimizer(learning_rate=params['learning_rate'])\n train_op = optimizer.minimize(loss, global_step=tf.train.get_or_create_global_step())\n return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)\n\n elif mode == tf.estimator.ModeKeys.EVAL:\n accuracy = tf.metrics.accuracy(labels=labels, predictions=predictions)\n precision = tf.metrics.precision(labels=labels, predictions=predictions)\n recall = tf.metrics.recall(labels=labels, predictions=predictions)\n\n return tf.estimator.EstimatorSpec(\n mode=mode,\n loss=loss,\n eval_metric_ops={\n 'accuracy': accuracy,\n 'precision': precision,\n 'recall': recall,\n })\n\n\n#@title Hyperparmeters { run: \"auto\" }\n\nEMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/1' #@param {type: \"string\"}\nTRAINABLE_MODULE = False #@param {type: \"boolean\"}\nSTEPS = 8000#@param {type: \"integer\"}\nEVAL_EVERY = 200 #@param {type: \"integer\"}\nBATCH_SIZE = 10 #@param {type: \"integer\"}\nLEARNING_RATE = 0.01 #@param {type: \"number\"}\n\nparams = {\n 'batch_size': BATCH_SIZE,\n 'learning_rate': LEARNING_RATE,\n 'module_name': EMBEDDING,\n 'trainable_module': 
TRAINABLE_MODULE\n}",
"Train and evaluate the model\nLet's train and evaluate the model to see the performance on the SciCite task",
"estimator = tf.estimator.Estimator(functools.partial(model_fn, params=params))\nmetrics = []\n\nfor step in range(0, STEPS, EVAL_EVERY):\n estimator.train(input_fn=functools.partial(input_fn_train, params=params), steps=EVAL_EVERY)\n step_metrics = estimator.evaluate(input_fn=functools.partial(input_fn_eval, params=params))\n print('Global step {}: loss {:.3f}, accuracy {:.3f}'.format(step, step_metrics['loss'], step_metrics['accuracy']))\n metrics.append(step_metrics)\n\nglobal_steps = [x['global_step'] for x in metrics]\nfig, axes = plt.subplots(ncols=2, figsize=(20,8))\n\nfor axes_index, metric_names in enumerate([['accuracy', 'precision', 'recall'],\n ['loss']]):\n for metric_name in metric_names:\n axes[axes_index].plot(global_steps, [x[metric_name] for x in metrics], label=metric_name)\n axes[axes_index].legend()\n axes[axes_index].set_xlabel(\"Global Step\")",
"We can see that the loss quickly decreases while the accuracy rapidly increases. Let's plot some examples to check how the prediction relates to the true labels:",
"predictions = estimator.predict(functools.partial(input_fn_predict, params))\n\nfirst_10_predictions = list(itertools.islice(predictions, 10))\n\ndisplay_df(\n pd.DataFrame({\n TEXT_FEATURE_NAME: [pred['features'].decode('utf8') for pred in first_10_predictions],\n LABEL_NAME: [THE_DATASET.class_names()[pred['labels']] for pred in first_10_predictions],\n 'prediction': [THE_DATASET.class_names()[pred['predictions']] for pred in first_10_predictions]\n }))",
"We can see that for this random sample, the model predicts the correct label most of the time, indicating that it can embed scientific sentences pretty well.\nWhat's next?\nNow that you've gotten to know a bit more about the CORD-19 Swivel embeddings from TF-Hub, we encourage you to participate in the CORD-19 Kaggle competition to contribute to gaining scientific insights from COVID-19 related academic texts.\n\nParticipate in the CORD-19 Kaggle Challenge\nLearn more about the COVID-19 Open Research Dataset (CORD-19)\nSee documentation and more about the TF-Hub embeddings at https://tfhub.dev/tensorflow/cord-19/swivel-128d/1\nExplore the CORD-19 embedding space with the TensorFlow Embedding Projector"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
swirlingsand/deep-learning-foundations
|
gans/gan_mnist/.ipynb_checkpoints/Intro_to_GANs_Solution-checkpoint.ipynb
|
mit
|
[
"Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.",
"%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')",
"Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.",
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real') \n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z",
"Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform best with a $tanh$ output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.",
"def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n with tf.variable_scope('generator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(z, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n # Logits and tanh output\n logits = tf.layers.dense(h1, out_dim, activation=None)\n out = tf.tanh(logits)\n \n return out",
"Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.",
"def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(x, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n logits = tf.layers.dense(h1, 1, activation=None)\n out = tf.sigmoid(logits)\n \n return out, logits",
"Hyperparameters",
"# Size of input image to discriminator\ninput_size = 784\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Smoothing \nsmooth = 0.1",
"Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).",
"tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Build the model\ng_model = generator(input_z, input_size)\n# g_model is the generator output\n\nd_model_real, d_logits_real = discriminator(input_real)\nd_model_fake, d_logits_fake = discriminator(g_model, reuse=True)",
"Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.",
"# Calculate losses\nd_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \n labels=tf.ones_like(d_logits_real) * (1 - smooth)))\nd_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \n labels=tf.zeros_like(d_logits_fake)))\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,\n labels=tf.ones_like(d_logits_fake)))",
"Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.",
"# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [var for var in t_vars if var.name.startswith('generator')]\nd_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)",
"Training",
"!mkdir checkpoints\n\nbatch_size = 100\nepochs = 100\nsamples = []\nlosses = []\n# Only save generator variables\nsaver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)",
"Training loss\nHere we'll check out the training losses for the generator and discriminator.",
"fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()",
"Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.",
"def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)",
"These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.",
"_ = view_samples(-1, samples)",
"Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!",
"rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)",
"It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!",
"saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n_ = view_samples(0, [gen_samples])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
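The two small tricks the GAN notebook relies on — the leaky ReLU activation and label smoothing — can be sketched without TensorFlow. This is an illustrative re-implementation of the same formulas, not the graph ops the notebook builds:

```python
def leaky_relu(x, alpha=0.01):
    # f(x) = max(alpha * x, x): identity for positive x, small slope for negative x
    return max(alpha * x, x)

def smoothed_real_labels(n, smooth=0.1):
    # real-image labels pulled down from 1.0 to 1 - smooth, as in the d_loss_real cell
    return [1.0 * (1 - smooth)] * n

print(leaky_relu(3.0))          # 3.0
print(leaky_relu(-2.0))         # -0.02
print(smoothed_real_labels(3))  # [0.9, 0.9, 0.9]
```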
whitead/numerical_stats
|
unit_11/hw_2017/problem_set_1.ipynb
|
gpl-3.0
|
[
"Problem Set Instructions\nFor the following problems identify the dimensionality, if it's convex optimization, if there are constraints/bounds and if you're being asked to find a root or minimum. Then list the matching type of optimization method (the name, not the python function name) you should use and rearrange any equations to match the method (e.g., transform your constraints so that they are satisfied at 0 or transform your equations so that they are solved when equal to 0). In the second Python cell solve the problem. Make sure that you print out a string describing exactly the correct answer. Do not simply print the result of the optimization call. For example:\npython\nresult = minimize(...)\nprint('The x value that minimizes the function is {:.5f}'.format(result.x))\nProblem 1\nUsing optimization, solve the following equation:\n$$\n5x^3 - 2x^2 = 11\n$$\n1D, convex, root-finding\n$$\n5x^3 - 2x^2 - 11 = 0\n$$",
"from scipy.optimize import newton\n\nx = newton(lambda x: 5 * x**3 - 2 * x**2 - 11, x0=1)\nprint('x = {:.5f}'.format(x))",
"Problem 2\nUsing optimization solve the following equation:\n$$\n\\int_{-\\infty}^x e^{-s^2}\\,ds = 0.25\n$$\n1D, convex, root-finding",
"from scipy.integrate import quad\nimport numpy as np\n\nx = newton(lambda x: quad(lambda y: np.exp(-y**2), -np.inf,x)[0] - 0.25, x0=0)\nprint('x = {:.3f}'.format(x))",
"Problem 3\nFind the maximum value of $g(x,y)$ where both $x$ and $y$ are between 0 and 1:\n$$\ng(x,y) = \\exp\\left(-\\frac{(x - 0.2)^2}{4}\\right)\\exp\\left(-\\frac{(x - y)^2}{5}\\right) \\exp\\left(-\\frac{(y - 0.7)^2}{4}\\right)\n$$\n2D, convex, minimization",
"from scipy.optimize import minimize\ndef obj(z):\n x = z[0]\n y = z[1]\n #return negative to allow max\n return -np.exp(-(x - 0.2)**2 / 4) * np.exp(-(x - y)**2 / 5) * np.exp(-(y - 0.7)**2 / 4)\nresult = minimize(obj, x0=[0.5, 0.5], bounds=[(0, 1), (0,1)])\nprint('The maximizing x,y are x = {:.3f}, y = {:.3f}'.format(*result.x))",
"Problem 4\n$x$ and $y$ lie inside a disc with radius $3 \\leq r \\leq 5$. Find the point within the disc that minimizes the distance to (-6, 2). Modify the code to add your optimum point along with an entry in the legend. Complete the problem in Cartesian coordinates.",
"import matplotlib.pyplot as plt\nimport matplotlib\n\n#use nice style with larger plot size\nmatplotlib.style.use(['seaborn-white', 'seaborn-talk'])\n#set-up our points\ntheta = np.linspace(0, 2 * np.pi, 100)\nr = np.repeat(3, len(theta))\n#plot the disc boundaries\nplt.polar(theta, r, linestyle='--', color='#333333')\nplt.polar(theta, r + 2, linestyle='--', color='#333333')\n#plot the inside of the disc\nplt.fill_between(theta, r, r + 2, color='#AAAAAA')\n#plot the point\nplt.plot(np.arctan2(2, -6), np.sqrt((-6)**2 + 2**2), 'ro', label='objective')\n#give some whitespace\nplt.gca().set_rmax(10)\n#add legend\nplt.legend(loc='best')\nplt.show()",
"2D, convex, constrained, minimization. Constraints:\n$$\nx^2 + y^2 - 3^2 \\geq 0\n$$\n$$\n-x^2 - y^2 + 5^2 \\geq 0\n$$",
"#Optimization Code\n\n### BEGIN SOLUTION\nineq_1 = lambda x: x[0]**2 + x[1]**2 - 3**2 \nineq_2 = lambda x: -(x[0]**2 + x[1]**2 - 5**2)\nconstraints = [{'type':'ineq', 'fun':ineq_1},\n {'type':'ineq', 'fun':ineq_2}]\n\nresult = minimize(lambda x: (x[0] - -6)**2 + (x[1] - 2)**2, constraints=constraints, x0=[0,0])\nprint('The minimum coordinates are x = {:.3f} and y = {:.3f}'.format(*result.x))\n###END SOLUTION\n\n#Your plot Code\n\n### BEGIN SOLUTION\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n#use nice style with larger plot size\nmatplotlib.style.use(['seaborn-white', 'seaborn-talk'])\n#set-up our points\ntheta = np.linspace(0, 2 * np.pi, 100)\nr = np.repeat(3, len(theta))\n#plot the disc boundaries\nplt.polar(theta, r, linestyle='--', color='#333333')\nplt.polar(theta, r + 2, linestyle='--', color='#333333')\n#plot the inside of the disc\nplt.fill_between(theta, r, r + 2, color='#AAAAAA')\n#plot the point\nplt.plot(np.arctan2(2, -6), np.sqrt((-6)**2 + 2**2), 'ro', label='objective')\nplt.plot(np.arctan2(result.x[1], result.x[0]), np.sqrt(result.x[1]**2 + result.x[0]**2), 'gX', label='optimum')\n#give some whitespace\nplt.gca().set_rmax(10)\n#add legend\nplt.legend(loc='upper center')\nplt.show()\n### END SOLUTION",
"Problem 5\nRepeat the previous problem except now you must minimize the distance to three points: (-6, 2), (4,2), (-7, 0)",
"#optimization\n\n### BEGIN SOLUTION\ndef obj(x):\n s = 0\n for p in [[-6,2], [4,2], [-7, 0]]:\n s += (x[0] - p[0])**2 + (x[1] - p[1])**2\n return s\nresult = minimize(obj, constraints=constraints, x0=[0,0])\nprint('The minimum coordinates are x = {:.3f} and y = {:.3f}'.format(*result.x))\n### END SOLUTION\n\n#Your plot Code\n\n### BEGIN SOLUTION\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n#use nice style with larger plot size\nmatplotlib.style.use(['seaborn-white', 'seaborn-talk'])\n#set-up our points\ntheta = np.linspace(0, 2 * np.pi, 100)\nr = np.repeat(3, len(theta))\n#plot the disc boundaries\nplt.polar(theta, r, linestyle='--', color='#333333')\nplt.polar(theta, r + 2, linestyle='--', color='#333333')\n#plot the inside of the disc\nplt.fill_between(theta, r, r + 2, color='#AAAAAA')\n#plot the point\nplt.plot(np.arctan2(2, -6), np.sqrt((-6)**2 + 2**2), 'ro', label='objective')\nplt.plot(np.arctan2(2, 4), np.sqrt((4)**2 + 2**2), 'ro')\nplt.plot(np.arctan2(0, -7), np.sqrt((-7)**2 + 0**2), 'ro',)\nplt.plot(np.arctan2(result.x[1], result.x[0]), np.linalg.norm(result.x), 'gX', label='optimum')\n#give some whitespace\nplt.gca().set_rmax(10)\n#add legend\nplt.legend(loc='upper center')\nplt.show()\n### END SOLUTION",
"Problem 6\nThe free energy of mixing is given by the following equation in phase equilibrium theory:\n$$\n\\Delta F = x\\ln x + (1 - x)\\ln (1 - x) + \\chi_{AB}x(1 - x) + \\beta x\n$$\nwhere x is the mole fraction of component A, $\\chi_{AB}$ is the interaction parameter, and $\\beta$ is a system correction. Find the mole fraction of component A at which the free energy of mixing is minimized. Use $\\chi_{AB} = 3$ and $\\beta = 0.05$. Use basinhopping.\n1D, non-convex (see plot below), bounded, minimization",
"#make a plot to see if it's convex\nchi = 3\nx = np.linspace(0.01,0.99, 100)\nF = x * np.log(x) + (1 - x) * np.log(1 - x) + chi * x * (1 - x) + 0.05 * x\nplt.plot(x,F)\n\n#looks nonconvex\nfrom scipy.optimize import basinhopping\n\ndef f(x):\n return x * np.log(x) + (1 - x) * np.log(1 - x) + 3 * x * (1 - x) + 0.05 * x\n\nresult = basinhopping(f, x0=0.5, minimizer_kwargs={'bounds': [(0.001,0.999)]})\nprint('The lowest free energy of mixing is at x = {:.5f}'.format(*result.x))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
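Problem 1's call to `scipy.optimize.newton` can be illustrated with a hand-rolled Newton iteration. This is a sketch using the analytic derivative, not what scipy does internally (when no derivative is supplied, scipy's `newton` falls back to the secant method):

```python
def newton_root(f, fprime, x0, tol=1e-10, max_iter=100):
    # classic Newton-Raphson: x_{n+1} = x_n - f(x_n) / f'(x_n)
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# solve 5x^3 - 2x^2 - 11 = 0, as in Problem 1
f = lambda x: 5 * x**3 - 2 * x**2 - 11
fprime = lambda x: 15 * x**2 - 4 * x
x = newton_root(f, fprime, x0=1.0)
print('x = {:.5f}'.format(x))
```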
mromanello/SunoikisisDC_NER
|
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-G3.ipynb
|
gpl-3.0
|
[
"Welcome\nThis notebook accompanies the Sunoikisis Digital Classics common session on Named Entity Extraction, see https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-I.\nIn this notebook we are going to experiment with three different methods for extracting named entities from a Latin text.\nLibrary imports\nExternal modules and libraries can be imported using import statements.\nLet's import the Natural Language ToolKit (NLTK), the Classical Language ToolKit (CLTK), MyCapytain and some local libraries that are used in this notebook.",
"########\n# NLTK #\n########\nimport nltk\nfrom nltk.tag import StanfordNERTagger\n########\n# CLTK #\n########\nimport cltk\nfrom cltk.tag.ner import tag_ner\n##############\n# MyCapytain #\n##############\nimport MyCapytain \nfrom MyCapytain.resolvers.cts.api import HttpCTSResolver\nfrom MyCapytain.retrievers.cts5 import CTS\nfrom MyCapytain.common.constants import Mimetypes\n#################\n# other imports #\n#################\nimport sys\nsys.path.append(\"/opt/nlp/pymodules/\")\nfrom idai_journals.nlp import sub_leaves",
"And more precisely, we are using the following versions:",
"print(nltk.__version__)\n\nprint(cltk.__version__)\n\nprint(MyCapytain.__version__)",
"Let's grab some text\nTo start with, we need some text from which we'll try to extract named entities using various methods and libraries.\nThere are several ways of doing this e.g.:\n1. copy and paste the text from Perseus or the Latin Library into a text document, and read it into a variable\n2. load a text from one of the Latin corpora available via cltk (cfr. this blog post)\n3. or load it from Perseus by leveraging its Canonical Text Services API\nLet's go for #3 :)\nWhat's CTS?\nCTS URNs stand for Canonical Text Service Uniform Resource Names.\nYou can think of a CTS URN like a social security number for texts (or parts of texts).\n\nHere are some examples of CTS URNs with different levels of granularity:\n- urn:cts:latinLit:phi0448 (Caesar)\n- urn:cts:latinLit:phi0448.phi001 (Caesar's De Bello Gallico)\n- urn:cts:latinLit:phi0448.phi001.perseus-lat2 DBG Latin edition\n- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1 DBG Latin edition, book 1\n- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1.1.1 DBG Latin edition, book 1, chapter 1, section 1\nHow do I find out the CTS URN of a given author or text? The Perseus Catalog is your friend! (cfr. e.g. http://catalog.perseus.org/catalog/urn:cts:latinLit:phi0448)\nQuerying a CTS API\nThe URN of the Latin edition of Caesar's De Bello Gallico is urn:cts:latinLit:phi0448.phi001.perseus-lat2.",
"my_passage = \"urn:cts:latinLit:phi0448.phi001.perseus-lat2\"",
"With this information, we can query a CTS API and get some information about this text.\nFor example, we can \"discover\" its canonical text structure, an essential information to be able to cite this text.",
"# We set up a resolver which communicates with an API available in Leipzig\nresolver = HttpCTSResolver(CTS(\"http://cts.dh.uni-leipzig.de/api/cts/\"))\n\n# We require some metadata information\ntextMetadata = resolver.getMetadata(\"urn:cts:latinLit:phi0448.phi001.perseus-lat2\")\n# Texts in CTS Metadata have one interesting property : its citation scheme.\n# Citation are embedded objects that carries information about how a text can be quoted, what depth it has\nprint([citation.name for citation in textMetadata.citation])",
"But we can also query the same API and get back the text of a specific text section, for example the entire book 1.\nTo do so, we need to append the indication of the reference scope (i.e. book 1) to the URN.",
"my_passage = \"urn:cts:latinLit:phi0448.phi001.perseus-lat2:1\"",
"So we retrieve the first book of the De Bello Gallico by passing its CTS URN (that we just stored in the variable my_passage) to the CTS API, via the resolver provided by MyCapytain:",
"passage = resolver.getTextualNode(my_passage)",
"At this point the passage is available in various formats: text, but also TEI XML, etc.\nThus, we need to specify that we are interested in getting the text only:",
"de_bello_gallico_book1 = passage.export(Mimetypes.PLAINTEXT)",
"Let's check that the text is there by printing the content of the variable de_bello_gallico_book1 where we stored it:",
"print(de_bello_gallico_book1)",
"The text that we have just fetched by using a programming interface (API) can also be viewed in the browser.\nOr even imported as an iframe into this notebook!",
"from IPython.display import IFrame\nIFrame('http://cts.dh.uni-leipzig.de/read/latinLit/phi0448/phi001/perseus-lat2/1', width=1000, height=350)",
"Let's see how many words (tokens, more properly) there are in Caesar's De Bello Gallico I:",
"len(de_bello_gallico_book1.split(\" \"))",
"Very simple baseline\nNow let's write what in NLP jargon is called a baseline, that is a method for extracting named entities that can serve as a term of comparison to evaluate the accuracy of other methods. \nBaseline method: \n- cycle through each token of the text\n- if the token starts with a capital letter it's a named entity (only one type, i.e. Entity)",
"\"T\".istitle()\n\n\"t\".istitle()\n\n# we need a list to store the tagged tokens\ntagged_tokens = []\n\n# tokenisation is done by using the string method `split(\" \")` \n# that splits a string upon white spaces\nfor n, token in enumerate(de_bello_gallico_book1.split(\" \")):\n if(token.istitle()):\n tagged_tokens.append((token, \"Entity\"))\n else:\n tagged_tokens.append((token, \"O\")) ",
"Let's have a look at the first 50 tokens that we just tagged:",
"tagged_tokens[:50]",
"For convenience we can also wrap our baseline code into a function that we call extract_baseline. Let's define it:",
"def extract_baseline(input_text):\n \"\"\"\n :param input_text: the text to tag (string)\n :return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag\n \"\"\"\n # we need a list to store the tagged tokens\n tagged_tokens = []\n\n # tokenisation is done by using the string method `split(\" \")` \n # that splits a string upon white spaces\n for n, token in enumerate(input_text.split(\" \")):\n if(token.istitle()):\n tagged_tokens.append((token, \"Entity\"))\n else:\n tagged_tokens.append((token, \"O\")) \n return tagged_tokens",
"And now we can call it like this:",
"tagged_tokens_baseline = extract_baseline(de_bello_gallico_book1)\n\ntagged_tokens_baseline[-50:]",
"We can modify slightly our function so that it prints the snippet of text where an entity is found:",
"def extract_baseline(input_text):\n \"\"\"\n :param input_text: the text to tag (string)\n :return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag\n \"\"\"\n # we need a list to store the tagged tokens\n tagged_tokens = []\n\n # tokenisation is done by using the string method `split(\" \")` \n # that splits a string upon white spaces\n for n, token in enumerate(input_text.split(\" \")):\n if(token.istitle()):\n tagged_tokens.append((token, \"Entity\"))\n context = input_text.split(\" \")[n-5:n+5]\n print(\"Found entity \\\"%s\\\" in context \\\"%s\\\"\"%(token, \" \".join(context)))\n else:\n tagged_tokens.append((token, \"O\")) \n return tagged_tokens\n\ntagged_text_baseline = extract_baseline(de_bello_gallico_book1)\n\ntagged_text_baseline[:50]",
"NER with CLTK\nThe CLTK library has some basic support for the extraction of named entities from Latin and Greek texts (see CLTK's documentation).\nThe current implementation (as of version 0.1.47) uses a lookup-based method.\nFor each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities:\n- list of Latin proper nouns: https://github.com/cltk/latin_proper_names_cltk\n- list of Greek proper nouns: https://github.com/cltk/greek_proper_names_cltk\nLet's run CLTK's tagger (it takes a moment):",
"%%time\ntagged_text_cltk = tag_ner('latin', input_text=de_bello_gallico_book1)",
"Let's have a look at the output, only the first 10 tokens (by using the list slicing notation):",
"tagged_text_cltk[:10]",
"The output looks slightly different from the one of our baseline function (the size of the tuples in the list varies). \nBut we can write a function to fix this, we call it reshape_cltk_output:",
"def reshape_cltk_output(tagged_tokens):\n reshaped_output = []\n for tagged_token in tagged_tokens:\n if(len(tagged_token)==1):\n reshaped_output.append((tagged_token[0], \"O\"))\n else:\n reshaped_output.append((tagged_token[0], tagged_token[1]))\n return reshaped_output",
"We apply this function to CLTK's output:",
"tagged_text_cltk = reshape_cltk_output(tagged_text_cltk)",
"And the resulting output looks now ok:",
"tagged_text_cltk[:20]",
"Now let's compare the two lists of tagged tokens by using a python function called zip, which allows us to read multiple lists simultaneously:",
"list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20]))",
"But, as you can see, the two lists are not aligned.\nThis is due to how the CLTK function tokenises the text. The comma after \"tres\" becomes a token on its own, whereas when we tokenise by white space the comma is attached to \"tres\" (i.e. \"tres,\").\nA solution to this is to pass to the tag_ner function the text already tokenised by white space.",
"tagged_text_cltk = reshape_cltk_output(tag_ner('latin', input_text=de_bello_gallico_book1.split(\" \")))\n\nlist(zip(tagged_text_baseline[:20], tagged_text_cltk[:20]))",
"NER with NLTK",
"stanford_model_italian = \"/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/ner-ita-nogpe-noiob_gaz_wikipedia_sloppy.ser.gz\"\n\nner_tagger = StanfordNERTagger(stanford_model_italian)\n\ntagged_text_nltk = ner_tagger.tag(de_bello_gallico_book1.split(\" \"))",
"Let's have a look at the output",
"tagged_text_nltk[:20]",
"Wrap up\nAt this point we can \"compare\" the output of the three different methods we used, again by using the zip function.",
"list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]))\n\nfor baseline_out, cltk_out, nltk_out in zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]):\n print(\"Baseline: %s\\nCLTK: %s\\nNLTK: %s\\n\"%(baseline_out, cltk_out, nltk_out))",
"Exercise\nExtract the named entities from the English translation of the De Bello Gallico book 1.\nThe CTS URN for this translation is urn:cts:latinLit:phi0448.phi001.perseus-eng2:1.\nModify the code above to use the English model of the Stanford tagger instead of the Italian one.\nHint:",
"stanford_model_english = \"/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/english.muc.7class.distsim.crf.ser.gz\""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
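The capitalisation baseline from the NER notebook is self-contained enough to run on any string; here it is applied to the opening words of the De Bello Gallico (the snippet is typed in directly rather than fetched over the CTS API):

```python
def extract_baseline(input_text):
    # tag each whitespace-separated token: title-cased tokens become "Entity"
    tagged_tokens = []
    for token in input_text.split(" "):
        if token.istitle():
            tagged_tokens.append((token, "Entity"))
        else:
            tagged_tokens.append((token, "O"))
    return tagged_tokens

snippet = "Gallia est omnis divisa in partes tres"
tags = extract_baseline(snippet)
print(tags)  # [('Gallia', 'Entity'), ('est', 'O'), ...]
```

As in the notebook, this one-type baseline over-tags sentence-initial words and proper adjectives, which is exactly why it is only a term of comparison for the CLTK and Stanford taggers.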
mohanprasath/Course-Work
|
coursera/databases_and_sql_for_data_science/DB0201EN-Week4-1-1-RealDataPractice-v3-py.ipynb
|
gpl-3.0
|
[
"<a href=\"https://cognitiveclass.ai\"><img src = \"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width = 300, align = \"center\"></a>\n<h1 align=center><font size = 5>Lab: Working with a real world data-set using SQL and Python</font></h1>\n\nIntroduction\nThis notebook shows how to work with a real world dataset using SQL and Python. In this lab you will:\n1. Understand the dataset for Chicago Public School level performance \n1. Store the dataset in a Db2 database on an IBM Cloud instance\n1. Retrieve metadata about tables and columns and query data from mixed case columns\n1. Solve example problems to practice your SQL skills including using built-in database functions\nChicago Public Schools - Progress Report Cards (2011-2012)\nThe city of Chicago released a dataset showing all school level performance data used to create School Report Cards for the 2011-2012 school year. The dataset is available from the Chicago Data Portal: https://data.cityofchicago.org/Education/Chicago-Public-Schools-Progress-Report-Cards-2011-/9xs2-f89t\nThis dataset includes a large number of metrics. Start by familiarizing yourself with the types of metrics in the database: https://data.cityofchicago.org/api/assets/AAD41A13-BE8A-4E67-B1F5-86E711E09D5F?download=true\nNOTE: Do not download the dataset directly from the City of Chicago portal. Instead download a more database friendly version from the link below.\nNow download a static copy of this database and review some of its contents:\nhttps://ibm.box.com/shared/static/0g7kbanvn5l2gt2qu38ukooatnjqyuys.csv\nStore the dataset in a Table\nIn many cases the dataset to be analyzed is available as a .CSV (comma separated values) file, perhaps on the internet. 
To analyze the data using SQL, it first needs to be stored in the database.\nWhile it is easier to read the dataset into a Pandas dataframe and then PERSIST it into the database as we saw in the previous lab, it results in mapping to default datatypes which may not be optimal for SQL querying. For example a long textual field may map to a CLOB instead of a VARCHAR. \nTherefore, it is highly recommended to manually load the table using the database console LOAD tool, as indicated in Week 2 Lab 1 Part II. The only difference with that lab is that in Step 5 of the instructions you will need to click on create \"(+) New Table\" and specify the name of the table you want to create and then click \"Next\". \nNow open the Db2 console, open the LOAD tool, Select / Drag the .CSV file for the CHICAGO PUBLIC SCHOOLS dataset and load the dataset into a new table called SCHOOLS.\n<a href=\"https://cognitiveclass.ai\"><img src = \"https://ibm.box.com/shared/static/uc4xjh1uxcc78ks1i18v668simioz4es.jpg\"></a>\nConnect to the database\nLet us now load the ipython-sql extension and establish a connection with the database",
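As an aside, the CSV-to-table idea above can be sketched locally with only the Python standard library. This is an illustrative sketch using sqlite3, not the Db2 console LOAD tool the lab actually uses; the column names and sample rows are hypothetical stand-ins for the SCHOOLS dataset.

```python
import csv, io, sqlite3

# Hypothetical two-row stand-in for the SCHOOLS CSV file.
csv_text = "Name_of_School,Safety_Score\nAlpha Elementary,99\nBeta High,45\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

con = sqlite3.connect(":memory:")
# Declaring column types explicitly mirrors why the lab prefers the LOAD tool
# over letting a generic loader pick default datatypes.
con.execute("CREATE TABLE SCHOOLS (Name_of_School VARCHAR(100), Safety_Score INTEGER)")
con.executemany(
    "INSERT INTO SCHOOLS VALUES (?, ?)",
    [(r["Name_of_School"], int(r["Safety_Score"])) for r in rows],
)
print(con.execute("SELECT MAX(Safety_Score) FROM SCHOOLS").fetchone()[0])  # → 99
```

In the lab itself, follow the Db2 LOAD instructions so the datatypes of the real dataset are set deliberately.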
"%load_ext sql\n\n# Enter the connection string for your Db2 on Cloud database instance below\n# %sql ibm_db_sa://my-username:my-password@my-hostname:my-port/my-db-name\n%sql ibm_db_sa://",
"Query the database system catalog to retrieve table metadata\nYou can verify that the table creation was successful by retrieving the list of all tables in your schema and checking whether the SCHOOLS table was created",
"# type in your query to retrieve list of all tables in the database for your db2 schema (username)\n",
"Double-click here for a hint\n<!--\nIn Db2 the system catalog table called SYSCAT.TABLES contains the table metadata\n-->\n\nDouble-click here for the solution.\n<!-- Solution:\n\n%sql select TABSCHEMA, TABNAME, CREATE_TIME from SYSCAT.TABLES where TABSCHEMA='YOUR-DB2-USERNAME'\n\nor, you can retrieve a list of all tables where the schema name is not one of the system created ones:\n\n%sql select TABSCHEMA, TABNAME, CREATE_TIME from SYSCAT.TABLES \\\n      where TABSCHEMA not in ('SYSIBM', 'SYSCAT', 'SYSSTAT', 'SYSIBMADM', 'SYSTOOLS', 'SYSPUBLIC')\n\nor, just query for a specific table that you want to verify exists in the database\n%sql select * from SYSCAT.TABLES where TABNAME = 'SCHOOLS'\n\n-->\n\nQuery the database system catalog to retrieve column metadata\nThe SCHOOLS table contains a large number of columns. How many columns does this table have?",
"# type in your query to retrieve the number of columns in the SCHOOLS table\n",
"Double-click here for a hint\n<!--\nIn Db2 the system catalog table called SYSCAT.COLUMNS contains the column metadata\n-->\n\nDouble-click here for the solution.\n<!-- Solution:\n\n%sql select count(*) from SYSCAT.COLUMNS where TABNAME = 'SCHOOLS'\n\n-->\n\nNow retrieve the list of columns in the SCHOOLS table and their column type (datatype) and length.",
"# type in your query to retrieve all column names in the SCHOOLS table along with their datatypes and length\n",
"Double-click here for the solution.\n<!-- Solution:\n\n%sql select COLNAME, TYPENAME, LENGTH from SYSCAT.COLUMNS where TABNAME = 'SCHOOLS'\n\nor\n\n%sql select distinct(NAME), COLTYPE, LENGTH from SYSIBM.SYSCOLUMNS where TBNAME = 'SCHOOLS'\n\n-->\n\nQuestions\n\nIs the column name for the \"SCHOOL ID\" attribute in upper or mixed case?\nWhat is the name of the \"Community Area Name\" column in your table? Does it have spaces?\nAre there any columns in whose names the spaces and parentheses (round brackets) have been replaced by the underscore character \"_\"?\n\nProblems\nProblem 1\nHow many Elementary Schools are in the dataset?\nDouble-click here for a hint\n<!--\nWhich column specifies the school type e.g. 'ES', 'MS', 'HS'?\n-->\n\nDouble-click here for another hint\n<!--\nDoes the column name have mixed case, spaces or other special characters?\nIf so, ensure you use double quotes around the \"Name of the Column\"\n-->\n\nDouble-click here for the solution.\n<!-- Solution:\n\n%sql select count(*) from SCHOOLS where \"Elementary, Middle, or High School\" = 'ES'\n\nCorrect answer: 462\n\n-->\n\nProblem 2\nWhat is the highest Safety Score?\nDouble-click here for a hint\n<!--\nUse the MAX() function\n-->\n\nDouble-click here for the solution.\n<!-- Solution:\n\n%sql select MAX(\"Safety_Score\") AS MAX_SAFETY_SCORE from SCHOOLS\nCorrect answer: 99\n-->\n\nProblem 3\nWhich schools have the highest Safety Score?\nDouble-click here for the solution.\n<!-- Solution:\nIn the previous problem we found out that the highest Safety Score is 99, so we can use that as an input in the where clause:\n\n%sql select \"Name_of_School\", \"Safety_Score\" from SCHOOLS where \"Safety_Score\" = 99\n\nor, a better way:\n\n%sql select \"Name_of_School\", \"Safety_Score\" from SCHOOLS where \\\n   \"Safety_Score\"= (select MAX(\"Safety_Score\") from SCHOOLS)\n\nCorrect answer: several schools with Safety Score of 99.\n-->\n\nProblem 4\nWhat are the top 10 schools with the highest \"Average
Student Attendance\"?\nDouble-click here for the solution.\n<!-- Solution:\n\n%sql select Name_of_School, Average_Student_Attendance from SCHOOLS \\\n order by Average_Student_Attendance desc nulls last limit 10 \n\n-->\n\nProblem 5\nRetrieve the list of 5 Schools with the lowest Average Student Attendance sorted in ascending order based on attendance\nDouble-click here for the solution.\n<!-- Solution:\n\n%sql SELECT \"Name_of_School\", \"Average_Student_Attendance\" \\\n from SCHOOLS \\\n order by \"Average_Student_Attendance\" \\\n fetch first 5 rows only\n\n-->\n\nProblem 6\nNow remove the '%' sign from the above result set for Average Student Attendance column\nDouble-click here for a hint\n<!--\nUse the REPLACE() function to replace '%' with ''\nSee documentation for this function at:\nhttps://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000843.html\n-->\n\nDouble-click here for the solution.\n<!-- Hint:\n\n%sql SELECT Name_of_School, REPLACE(Average_Student_Attendance, '%', '') \\\n from SCHOOLS \\\n order by Average_Student_Attendance \\\n fetch first 5 rows only\n\n-->\n\nProblem 7\nWhich Schools have Average Student Attendance lower than 70%?\nDouble-click here for a hint\n<!--\nThe datatype of the \"Average_Student_Attendance\" column is varchar.\nSo you cannot use it as is in the where clause for a numeric comparison.\nFirst use the CAST() function to cast it as a DECIMAL or DOUBLE\ne.g. 
CAST(\"Column_Name\" as DOUBLE)\nor simply: DECIMAL(\"Column_Name\")\n-->\n\nDouble-click here for another hint\n<!--\nDon't forget the '%' sign needs to be removed before casting\n-->\n\nDouble-click here for the solution.\n<!-- Solution:\n\n%sql SELECT Name_of_School, Average_Student_Attendance \\\n     from SCHOOLS \\\n     where CAST ( REPLACE(Average_Student_Attendance, '%', '') AS DOUBLE ) < 70 \\\n     order by Average_Student_Attendance\n\nor,\n\n%sql SELECT Name_of_School, Average_Student_Attendance \\\n     from SCHOOLS \\\n     where DECIMAL ( REPLACE(Average_Student_Attendance, '%', '') ) < 70 \\\n     order by Average_Student_Attendance\n\n-->\n\nProblem 8\nGet the total College Enrollment for each Community Area\nDouble-click here for a hint\n<!--\nVerify the exact name of the Enrollment column in the database\nUse the SUM() function to add up the Enrollments for each Community Area\n-->\n\nDouble-click here for another hint\n<!--\nDon't forget to group by the Community Area\n-->\n\nDouble-click here for the solution.\n<!-- Solution:\n\n%sql select Community_Area_Name, sum(College_Enrollment) AS TOTAL_ENROLLMENT \\\n   from SCHOOLS \\\n   group by Community_Area_Name \n\n-->\n\nProblem 9\nGet the 5 Community Areas with the least total College Enrollment sorted in ascending order\nDouble-click here for a hint\n<!--\nOrder the previous query and limit the number of rows you fetch\n-->\n\nDouble-click here for the solution.\n<!-- Solution:\n\n%sql select Community_Area_Name, sum(College_Enrollment) AS TOTAL_ENROLLMENT \\\n   from SCHOOLS \\\n   group by Community_Area_Name \\\n   order by TOTAL_ENROLLMENT asc \\\n   fetch first 5 rows only\n\n-->\n\nSummary\nIn this lab you learned how to work with a real world dataset using SQL and Python. You learned how to query columns with spaces or special characters in their names and with mixed case names. You also used built-in database functions and practiced how to sort, limit, and order result sets.\nCopyright © 2018 cognitiveclass.ai. This notebook and its source code are released under the terms of the MIT License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
supergis/git_notebook
|
geospatial/openstreetmap/osm-tag2json.ipynb
|
gpl-3.0
|
[
"#!/usr/bin/python\n#coding=utf-8",
"OSM file tag-based extraction tool.\nby openthings@163.com, 2016-05-04. \nSplits an osm file by tag into separate files, to simplify further processing.\n\nEach tag object is converted to a single line (newlines removed) so that Spark can read it.\nProcessing is done incrementally (recursively), which uses little memory and can handle large files.\n\nFuture work:\nConvert each tag object's data to a dict and save it as one line of JSON.\nConvert each tag object's data to WKT format.",
"import os\nimport lxml\nfrom lxml import etree\nimport xmltodict, sys, gc\nfrom pymongo import MongoClient\n\ngc.enable() #Enable Garbadge Collection\n\nclient = MongoClient()\ndb = client.re\nstreetsDB = db.streets\n\nhwTypes = ['motorway', 'trunk', 'primary', 'secondary', 'tertiary', 'pedestrian', 'unclassified', 'service']",
"Read the osm XML data element by element (recursively/incrementally).\nhttp://www.ibm.com/developerworks/xml/library/x-hiperfparse/",
"def process_element(elem):\n print(\"element:\",str(elem.attrib))\n if (elem.tag==\"node\"): \n fnode.write((etree.tostring(elem).decode('utf-8'))+\"\\r\\n\")\n elif (elem.tag==\"way\"): \n fway.write((etree.tostring(elem).decode('utf-8'))+\"\\r\\n\")\n elif (elem.tag==\"relation\"): \n frelation.write((etree.tostring(elem)).decode('utf-8')+\"\\r\\n\")\n data = etree.tostring(elem)\n \n #data = etree.tostring(elem)\n #data = xmltodict.parse(data)\n #print(data.decode('ascii'))\n #print(str(elem))",
"Fast iterative processing; func is the handler applied to each iterated element.",
"from pprint import *\n\ndef fast_iter(context, func, file, maxline):\n print('Process XML...')\n placement = 0\n try:\n for event, elem in context:\n placement += 1\n if (maxline > 0):\n if (placement >= maxline): break\n print(placement,\"elem: \")\n\n #print(\"element\",str(elem.attrib)) \n data = etree.tostring(elem)\n print(data)\n \n global data2\n data2 = xmltodict.parse(data)\n pprint(data2)\n\n #if (file):\n # file.write(str(elem.attrib) + \"\\n\")\n #else:\n # print(\"file is null.\")\n #func(elem)\n \n elem.clear()\n #while elem.getprevious() is not None:\n # del elem.getparent()[0]\n except Exception as ex:\n print(\"Error:\",ex)\n \n del context",
"Extract the objects with the specified tag and write them to a JSON file.\nosmfile: the input *.osm file \ntagname: 'node', 'way', 'relation'",
"def process_tag(osmfile, tagname, maxline):\n filename_tag = osmfile + \"_\" + tagname + \".json\"\n print(\"Filename output: \",filename_tag)\n ftag = open(filename_tag,\"w+\")\n context = etree.iterparse(osmfile, tag = tagname)\n fast_iter(context,process_element,ftag,maxline)\n ftag.close()\n\nosmfile = '../data/muenchen.osm'\n\n#process_tag(osmfile,'node',5)\nprocess_tag(osmfile,'way',2)\n#process_tag(osmfile,'relation',0)\n\n\npprint(data2)\n\nfor i in data2[\"way\"][\"nd\"]:\n print(\"nd=\",i[\"@ref\"])\n\nfor i in data2[\"way\"][\"tag\"]:\n print(i[\"@k\"],\"=\",i[\"@v\"])\n\nimport json\njsonStr = json.dumps(data2)\npprint(jsonStr)\n\njsonobj = json.loads(jsonStr)\npprint(jsonobj)\n\njsonobj[\"way\"][\"tag\"]"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NEONScience/NEON-Data-Skills
|
tutorials/Python/Hyperspectral/hyperspectral-classification/classification_kmeans_pca_py/classification_kmeans_pca_py.ipynb
|
agpl-3.0
|
[
"syncID: 75f8885948494c0dbe6084099c61dd1e\ntitle: \"Unsupervised Spectral Classification in Python: KMeans & PCA\"\ndescription: \"Learn to classify spectral data using KMeans and Principal Components Analysis (PCA).\"\ndateCreated: 2018-07-10 \nauthors: Bridget Hass\ncontributors: Donal O'Leary\nestimatedTime: 1 hour\npackagesLibraries: numpy, gdal, matplotlib, matplotlib.pyplot\ntopics: hyperspectral-remote-sensing, HDF5, remote-sensing\nlanguagesTool: python\ndataProduct: NEON.DP1.30006, NEON.DP3.30006, NEON.DP1.30008\ncode1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/hyperspectral-classification/classification_kmeans_pca_py/classification_kmeans_pca_py.ipynb\ntutorialSeries: intro-hsi-py-series\nurlTitle: classification-kmeans-pca-python\n\nIn this tutorial, we will use the Spectral Python (SPy) package to run KMeans and Principal Component Analysis unsupervised classification algorithms. \n<div id=\"ds-objectives\" markdown=\"1\">\n\n### Objectives\nAfter completing this tutorial, you will be able to:\n\n* Classify spectral remote sensing data. \n\n### Install Python Packages\n\n* **numpy**\n* **gdal** \n* **matplotlib** \n* **matplotlib.pyplot** \n\n\n### Download Data\n\nThis tutorial uses a 1km AOP Hyperspectral Reflectance 'tile' from the SERC site. <a href=\"https://ndownloader.figshare.com/files/25752968\">\nDownload the spectral classification teaching data subset here</a>.\n\n<a href=\"https://ndownloader.figshare.com/files/25752968\" class=\"link--button link--arrow\">\nDownload Dataset</a>\n\n</div>\n\nIn this tutorial, we will use the Spectral Python (SPy) package to run KMeans and Principal Component Analysis unsupervised classification algorithms. 
\nTo learn more about the Spectral Python packages read: \n\n<a href=\"http://www.spectralpython.net/user_guide.html\" target=\"_blank\">Spectral Python User Guide</a>.\n<a href=\"http://www.spectralpython.net/algorithms.html#unsupervised-classification\" target=\"_blank\">Spectral Python Unsupervised Classification</a>.\n\nKMeans Clustering\nKMeans is an iterative clustering algorithm used to classify unsupervised data (e.g. data without a training set) into a specified number of groups. The algorithm begins with an initial set of randomly determined cluster centers. Each pixel in the image is then assigned to the nearest cluster center (using distance in N-space as the distance metric) and each cluster center is then re-computed as the centroid of all pixels assigned to the cluster. This process repeats until a desired stopping criterion is reached (e.g. max number of iterations). \nRead more on KMeans clustering from <a href=\"http://www.spectralpython.net/algorithms.html#k-means-clustering\" target=\"_blank\">Spectral Python</a>. \nTo visualize how the algorithm works, it's easier to look at a 2D data set. In the example below, watch how the cluster centers shift with progressive iterations. \n<figure>\n  <a href=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/KMeans2D.gif\">\n  <img src=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/KMeans2D.gif\"></a>\n  <figcaption> KMeans clustering demonstration Source: <a href=\"https://sandipanweb.wordpress.com/2017/03/19/hard-soft-clustering-with-k-means-weighted-k-means-and-gmm-em/\" target=\"_blank\">Sandipan Deyn</a>\n  </figcaption>\n</figure>\n\nPrincipal Component Analysis (PCA) - Dimensionality Reduction\nMany of the bands within hyperspectral images are often strongly correlated. The principal components transformation represents a linear transformation of the original image bands to a set of new, uncorrelated features. These new features correspond to the eigenvectors of the image covariance matrix, where the associated eigenvalue represents the variance in the direction of the eigenvector. A very large percentage of the image variance can be captured in a relatively small number of principal components (compared to the original number of bands).\nRead more about PCA with \n<a href=\"http://www.spectralpython.net/algorithms.html#principal-components\" target=\"_blank\">Spectral Python</a>.\nSet up\nTo run this notebook, the following Python packages need to be installed. You can install required packages from command line pip install spectral scikit-learn cvxopt.\nor if already in a Jupyter Notebook, run the following code in a Notebook code cell. \nPackages:\n- pylab\n- spectral\n- scikit-learn (optional)\npython \nimport sys\n!{sys.executable} -m pip install spectral\n!conda install --yes --prefix {sys.prefix} scikit-learn\n!conda install --yes --prefix {sys.prefix} cvxopt\nIn order to make use of the interactive graphics capabilities of spectralpython, such as N-Dimensional Feature Display, you should work in a Python 3.6 environment (as of July 2018). \nFor more, read from <a href=\"http://www.spectralpython.net/graphics.html\" target=\"_blank\">Spectral Python</a>.\nOptional:\nmatplotlib wx backend (for 3-D visualization of PCA, requires Python 3.6)\nFind out more on \n<a href=\"https://stackoverflow.com/questions/42007164/how-to-install-wxpython-phoenix-for-python-3-6\" target=\"_blank\"> StackOverflow</a>. \npython \nconda install -c newville wxpython-phoenix\nManaging Conda Environments\n- nb_conda_kernels package provides a separate jupyter kernel for each conda environment\n- Find out more on \n<a href=\"https://conda.io/docs/user-guide/tasks/manage-environments.html\" target=\"_blank\"> Conda docs</a>. 
\npython \nconda install -c conda-forge nb_conda_kernels\nFirst, import the required packages and set display preferences:",
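The assign-then-recompute loop described in the KMeans section above can be sketched with plain NumPy. This is a toy illustration of the iteration, not the `spectral.kmeans` implementation; the two-blob data and the convergence test are simplified stand-ins.

```python
import numpy as np

def kmeans_sketch(X, k, max_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    # start from k randomly chosen points as initial cluster centers
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(max_iter):
        # assign every point to its nearest center (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the centroid of its assigned points
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break  # no center moved: converged
        centers = new_centers
    return labels, centers

# two well-separated blobs -> two recovered clusters
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
labels, centers = kmeans_sketch(X, 2)
print(sorted(centers.tolist()))  # → [[0.0, 0.0], [10.0, 10.0]]
```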
"from spectral import *\nimport spectral.io.envi as envi\nimport numpy as np\nimport matplotlib\n\n#for clean output, to not print warnings, don't use when developing script\nimport warnings\nwarnings.filterwarnings('ignore')",
"For this example, we will read in a reflectance tile in ENVI format. NEON provides an h5 plugin for ENVI.",
"# You will need to download the example dataset above,\n# extract the files therein,\n# and update the filepaths below per your local machine\nimg = envi.open('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.hdr',\n '/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.dat')",
"Note that the information is stored differently when read in with envi.open. We can find the wavelength information in img.bands.centers. Let's take a look at the first and last wavelengths values:",
"print('First 3 Band Center Wavelengths:',img.bands.centers[:3])\nprint('Last 3 Band Center Wavelengths:',img.bands.centers[-3:])",
"We'll set the Water Vapor Band windows to NaN:",
"img.bands.centers[191:211] = [np.nan] * (211 - 191)\nimg.bands.centers[281:314] = [np.nan] * (314 - 281)\nimg.bands.centers[-10:] = [np.nan] * 10",
"To get a quick look at the img data, use the params method:",
"img.params",
"Metadata information is stored in img.metadata, a dictionary. Let's look at the metadata contents:",
"md = img.metadata\nprint('Metadata Contents:')\nfor item in md:\n print('\\t',item)",
"To access any of these metadata items, use the syntax md['description'] or md['map info']:",
"print('description:',md['description'])\nprint('map info:',md['map info'])",
"You can also use type and len to look at the type and length (or number) of some of the metadata contents:",
"print(type(md['wavelength']))\nprint('Number of Bands:',len(md['wavelength']))",
"Let's look at the data using imshow, a wrapper around matplotlib's imshow for multi-band images:",
"view = imshow(img,bands=(58,34,19),stretch=0.05,title=\"RGB Image of 2017 SERC Tile\")\nprint(view)",
"When dealing with NEON hyperspectral data, we first want to remove the water vapor & noisy bands, keeping only the valid bands. To speed up the classification algorithms for demonstration purposes, we'll look at a subset of the data using read_subimage, a built in method to subset by area and bands. Type help(img.read_subimage) to see how it works.",
"valid_band_range = [i for j in (range(0,191), range(212, 281), range(315,415)) for i in j] #remove water vapor bands\nimg_subset = img.read_subimage(range(400,600),range(400,600),bands=valid_band_range) #subset image by area and bands",
"Plot the subsetted image for reference:",
"view = imshow(img_subset,bands=(58,34,19),stretch=0.01,title=\"RGB Image of 2017 SERC Tile Subset\")",
"Now that we have the image subsetted, let's run the k-means algorithm. Type help(kmeans) to show how the function works. To run the k-means algorithm on the image and create 5 clusters, using a maximum of 50 iterations, use the following syntax:",
"(m,c) = kmeans(img_subset,5,50) ",
"Note that the algorithm terminated after 14 iterations, when the pixels stopped being reassigned. \nData Tip: You can interrupt the algorithm with a keyboard interrupt (CTRL-C) if you notice that the number of reassigned pixels drops off. Kmeans catches the KeyboardInterrupt exception and returns the clusters generated at the end of the previous iteration. If you are running the algorithm interactively, this feature allows you to set the max number of iterations to an arbitrarily high number and then stop the algorithm when the clusters have converged to an acceptable level. If you happen to set the max number of iterations too small (many pixels are still migrating at the end of the final iteration), you can simply call kmeans again to resume processing by passing the cluster centers generated by the previous call as the optional start_clusters argument to the function.\nLet's take a look at the cluster centers c. In this case, these represent spectra of the five clusters of reflectance that the data were grouped into.",
"print(c.shape)",
"c contains 5 groups of spectral curves with 360 bands (the # of bands we've kept after removing the water vapor windows and the last 10 noisy bands). Let's plot these spectral classes:",
"%matplotlib inline\nimport pylab\npylab.figure()\nfor i in range(c.shape[0]):\n    pylab.plot(c[i])\npylab.show()\npylab.title('Spectral Classes from K-Means Clustering')\npylab.xlabel('Bands (with Water Vapor Windows Removed)')\npylab.ylabel('Reflectance')\n\n#%matplotlib notebook\nview = imshow(img_subset, bands=(58,34,19),stretch=0.01, classes=m)\nview.set_display_mode('overlay')\nview.class_alpha = 0.5 #set transparency\nview.show_data",
"Challenges: K-Means\n\nWhat do you think the spectral classes in the figure you just created represent? \nTry using a different number of clusters in the kmeans algorithm (e.g., 3 or 10) to see what spectral classes and classifications result. \n\nPrincipal Component Analysis (PCA)\nMany of the bands within hyperspectral images are often strongly correlated. The principal components transformation represents a linear transformation of the original image bands to a set of new, uncorrelated features. These new features correspond to the eigenvectors of the image covariance matrix, where the associated eigenvalue represents the variance in the direction of the eigenvector. A very large percentage of the image variance can be captured in a relatively small number of principal components (compared to the original number of bands).",
"pc = principal_components(img_subset)\npc_view = imshow(pc.cov)\nxdata = pc.transform(img_subset)",
"In the covariance matrix display, lighter values indicate strong positive covariance, darker values indicate strong negative covariance, and grey values indicate covariance near zero.",
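The eigenvector/eigenvalue description above can be illustrated with a small NumPy sketch: build strongly correlated synthetic "bands", eigendecompose their covariance matrix, and keep just enough components to reach a variance fraction, mirroring the idea behind `pc.reduce(fraction=0.999)`. The data here are synthetic, not the SERC tile.

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=(500, 1))
# three strongly correlated "bands" plus a little independent noise
X = np.hstack([base, 2 * base, -base]) + 0.01 * rng.normal(size=(500, 3))

cov = np.cov(X, rowvar=False)               # band covariance matrix
evals, evecs = np.linalg.eigh(cov)          # eigh returns ascending eigenvalues
evals, evecs = evals[::-1], evecs[:, ::-1]  # reorder to descending variance
explained = np.cumsum(evals) / evals.sum()  # cumulative variance fraction
n_keep = int(np.searchsorted(explained, 0.999)) + 1
print(n_keep)  # far fewer components than bands are needed here
```

With real hyperspectral data the same computation is what makes a few hundred bands compressible to a handful of principal components.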
"pcdata = pc.reduce(num=10).transform(img_subset)\n\npc_0999 = pc.reduce(fraction=0.999)\n\n# How many eigenvalues are left?\nprint(len(pc_0999.eigenvalues))\n\nimg_pc = pc_0999.transform(img_subset)\nprint(img_pc.shape)\n\nv = imshow(img_pc[:,:,:5], stretch_all=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ajgpitch/qutip-notebooks
|
examples/quantum-gates - Copy.ipynb
|
lgpl-3.0
|
[
"QuTiP example: Quantum Gates and their usage\nAuthor: Anubhav Vardhan (anubhavvardhan@gmail.com)\nFor more information about QuTiP see http://qutip.org",
"%matplotlib inline\n\nfrom IPython.display import Image\n\nfrom numpy import pi\n\nfrom qutip import *",
"Introduction\nhttp://en.wikipedia.org/wiki/Quantum_gate\nGates in QuTiP and their representation\nControlled-PHASE",
"cphase(pi/2)\n\nImage(filename='images/cphase.png')",
"Rotation about X-axis",
"rx(pi/2)\n\nImage(filename='images/rx.png')",
"Rotation about Y-axis",
"ry(pi/2)\n\nImage(filename='images/ry.png')",
"Rotation about Z-axis",
"rz(pi/2)\n\nImage(filename='images/rz.png')",
"CNOT",
"cnot()\n\nImage(filename='images/cnot.png')",
"CSIGN",
"csign()\n\nImage(filename='images/csign.png')",
"Berkeley",
"berkeley()\n\nImage(filename='images/berkeley.png')",
"SWAPalpha",
"swapalpha(pi/2)\n\nImage(filename='images/swapalpha.png')",
"FREDKIN",
"fredkin()\n\nImage(filename='images/fredkin.png')",
"TOFFOLI",
"toffoli()\n\nImage(filename='images/toffoli.png')",
"SWAP",
"swap()\n\nImage(filename='images/swap.png')",
"ISWAP",
"iswap()\n\nImage(filename='images/iswap.png')",
"SQRTiSWAP",
"sqrtiswap()\n\nImage(filename='images/sqrtiswap.png')",
"SQRTSWAP",
"sqrtswap()\n\nImage(filename='images/sqrtswap.png')",
"SQRTNOT",
"sqrtnot()\n\nImage(filename='images/sqrtnot.png')",
"HADAMARD",
"snot()\n\nImage(filename='images/snot.png')",
"PHASEGATE",
"phasegate(pi/2)\n\nImage(filename='images/phasegate.png')",
"GLOBALPHASE",
"globalphase(pi/2)\n\nImage(filename='images/globalphase.png')",
"Mølmer–Sørensen gate",
"molmer_sorensen(pi/2)",
"Qubit rotation gate",
"qrot(pi/2, pi/4)",
"Expanding gates to larger qubit registers\nThe examples above show how to generate matrix representations of the gates implemented in QuTiP, in their minimal qubit requirements. If the same gate is to be represented in a qubit register of size $N$, the optional keyword argument N can be specified when calling the gate function. For example, to generate the matrix for the CNOT gate for an $N=3$ bit register:",
"cnot(N=3)\n\nImage(filename='images/cnot310.png')",
"Furthermore, the control and target qubits (when applicable) can also be similarly specified using keyword arguments control and target (or in some cases controls or targets):",
"cnot(N=3, control=2, target=0)\n\nImage(filename='images/cnot302.png')",
"Setup of a Qubit Circuit\nThe gates implemented in QuTiP can be used to build any qubit circuit using the class QubitCircuit. The output can be obtained in the form of a unitary matrix or a latex representation.\nIn the following example, we take a SWAP gate. It is known that a swap gate is equivalent to three CNOT gates applied in the given format.",
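The identity stated above (a SWAP equals three CNOTs with alternating control and target) can also be checked independently of QuTiP with plain NumPy. The explicit matrices below assume the basis ordering |q0 q1⟩ = 00, 01, 10, 11; this is only a sanity-check sketch, not QuTiP's own machinery.

```python
import numpy as np

# basis ordering |q0 q1> : 00, 01, 10, 11
cnot_01 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])   # CNOT: control q0, target q1
cnot_10 = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])   # CNOT: control q1, target q0
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# three alternating CNOTs compose to a SWAP
print(np.array_equal(cnot_01 @ cnot_10 @ cnot_01, swap))  # → True
```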
"N = 2\nqc0 = QubitCircuit(N)\nqc0.add_gate(\"SWAP\", [0, 1], None)\nqc0.png\n\nU_list0 = qc0.propagators()\nU0 = gate_sequence_product(U_list0)\nU0\n\nqc1 = QubitCircuit(N)\nqc1.add_gate(\"CNOT\", 0, 1)\nqc1.add_gate(\"CNOT\", 1, 0)\nqc1.add_gate(\"CNOT\", 0, 1)\nqc1.png\n\nU_list1 = qc1.propagators()\nU1 = gate_sequence_product(U_list1)\nU1",
"In place of manually converting the SWAP gate to CNOTs, it can be automatically converted using an inbuilt function in QubitCircuit",
"qc2 = qc0.resolve_gates(\"CNOT\")\nqc2.png\n\nU_list2 = qc2.propagators()\nU2 = gate_sequence_product(U_list2)\nU2",
"Example of basis transformation",
"qc3 = QubitCircuit(3)\nqc3.add_gate(\"CNOT\", 1, 0)\nqc3.add_gate(\"RX\", 0, None, pi/2, r\"\\pi/2\")\nqc3.add_gate(\"RY\", 1, None, pi/2, r\"\\pi/2\")\nqc3.add_gate(\"RZ\", 2, None, pi/2, r\"\\pi/2\")\nqc3.add_gate(\"ISWAP\", [1, 2])\nqc3.png\n\nU3 = gate_sequence_product(qc3.propagators())\nU3",
"The transformation can either be only in terms of 2-qubit gates:",
"qc4 = qc3.resolve_gates(\"CNOT\")\nqc4.png\n\nU4 = gate_sequence_product(qc4.propagators())\nU4\n\nqc5 = qc3.resolve_gates(\"ISWAP\")\nqc5.png\n\nU5 = gate_sequence_product(qc5.propagators())\nU5",
"Or the transformation can be in terms of any 2 single qubit rotation gates along with the 2-qubit gate.",
"qc6 = qc3.resolve_gates([\"ISWAP\", \"RX\", \"RY\"])\nqc6.png\n\nU6 = gate_sequence_product(qc6.propagators())\nU6\n\nqc7 = qc3.resolve_gates([\"CNOT\", \"RZ\", \"RX\"])\nqc7.png\n\nU7 = gate_sequence_product(qc7.propagators())\nU7",
"Resolving non-adjacent interactions\nInteractions between non-adjacent qubits can be resolved by QubitCircuit to a series of adjacent interactions, which is useful for systems such as spin chain models.",
"qc8 = QubitCircuit(3)\nqc8.add_gate(\"CNOT\", 2, 0)\nqc8.png\n\nU8 = gate_sequence_product(qc8.propagators())\nU8\n\nqc9 = qc8.adjacent_gates()\nqc9.png\n\nU9 = gate_sequence_product(qc9.propagators())\nU9\n\nqc10 = qc9.resolve_gates(\"CNOT\")\nqc10.png\n\nU10 = gate_sequence_product(qc10.propagators())\nU10",
"User defined gates\nA user defined gate can be defined by a python function that takes at most one parameter and returns a Qobj; the dimensions of the Qobj have to match the qubit system.",
"import numpy as np\ndef user_gate1(arg_value):\n    # controlled rotation X\n    mat = np.zeros((4, 4), dtype=complex)\n    mat[0, 0] = mat[1, 1] = 1.\n    mat[2:4, 2:4] = rx(arg_value).full()\n    return Qobj(mat, dims=[[2, 2], [2, 2]])\n\ndef user_gate2():\n    # S gate\n    mat = np.array([[1., 0],\n                    [0., 1.j]])\n    return Qobj(mat, dims=[[2], [2]])",
"To let the QubitCircuit process those gates, one can modify its attribute QubitCircuit.user_gates, which is a python dictionary in the form {name: gate_function}.",
"qc = QubitCircuit(2)\nqc.user_gates = {\"CTRLRX\": user_gate1, \n \"S\" : user_gate2}",
"When calling the add_gate method, the target qubits and the argument need to be given.",
"# qubit 0 controls qubit 1\nqc.add_gate(\"CTRLRX\", targets=[0,1], arg_value=pi/2)\n# qubit 1 controls qubit 0\nqc.add_gate(\"CTRLRX\", targets=[1,0], arg_value=pi/2)\n# a gate can also be added using the Gate class\ng_T = Gate(\"S\", targets=[1])\nqc.add_gate(g_T)\nprops = qc.propagators()\n\nprops[0] # qubit 0 controls qubit 1\n\nprops[1] # qubit 1 controls qubit 0\n\nprops[2] # S gate acts on qubit 1",
"Software versions",
"from qutip.ipynbtools import version_table\nversion_table()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
steinam/teacher
|
jup_notebooks/datenbanken/Nordwind_11FI3_On_Paper.ipynb
|
mit
|
[
"Versicherung on Paper",
"%load_ext sql\n\n%sql mysql://steinam:steinam@localhost/versicherung_complete",
"Wanted: a duplicate-free list of the manufacturers' countries. 3 P",
"%%sql \n-- my solution\n\nselect distinct(Land) from Fahrzeughersteller;\n\n%%sql\n-- your solution\nSELECT DISTINCT Land\nFROM fahrzeughersteller\n",
"List all vehicle types and the number of vehicles of each type, but only if more than 2 vehicles of that type exist. Sort the output by vehicle type. 4 P",
"%%sql\n-- my solution\nselect fahrzeugtyp.Bezeichnung, count(fahrzeug.ID) as Anzahl\nfrom fahrzeugtyp left join fahrzeug\non fahrzeugtyp.ID = fahrzeug.Fahrzeugtyp_ID\ngroup by fahrzeugtyp.Bezeichnung\nhaving count(fahrzeug.ID) > 2\n\n%%sql\n-- your solution\nselect\n\tft.Bezeichnung,\n\t(select count(f.ID)\n\t from fahrzeug f\n\t where f.Fahrzeugtyp_ID = ft.ID) as Anzahl # correlated subquery as the table link\nfrom fahrzeugtyp ft\norder by ft.Bezeichnung;\n\n\n",
"Determine the last and first names of the employees, including the department name, whose department is located in Dortmund or Bochum.",
"%%sql\n-- my solution\n\nselect Name, vorname, Bezeichnung from Mitarbeiter inner join Abteilung \non Mitarbeiter.Abteilung_ID = Abteilung.ID \nwhere Abteilung.Ort in('Dortmund', 'Bochum')\n\n%%sql\n-- your solution\nselect \n\tconcat(m.Name, ' ',m.Vorname) as Mitarbeiter, # concatenate last and first name\n\tab.Bezeichnung as Abteilung\nfrom mitarbeiter m, abteilung ab\nwhere\n\tm.Abteilung_ID = ab.ID and\n\tupper(ab.Ort) in ('DORTMUND', 'BOCHUM'); # compare the city in upper case to improve the match rate\n\t\n\t\n",
"For each vehicle manufacturer (the ID is sufficient) and each year, find the smallest and the largest damage amount. \nIf possible, also output the difference between the two values in the respective result set. Otherwise, write a separate SQL statement for this task. 5 pts",
"%%sql\n\n-- my solution\nselect fahrzeughersteller.id, year(Datum), min(zuordnung_sf_fz.schadenshoehe), max(zuordnung_sf_fz.Schadenshoehe), \n(max(zuordnung_sf_fz.schadenshoehe) - min(zuordnung_sf_fz.schadenshoehe)) as Differenz\nfrom fahrzeughersteller left join fahrzeugtyp \n on fahrzeughersteller.id = fahrzeugtyp.hersteller_ID\n inner join fahrzeug on fahrzeugtyp.id = fahrzeug.fahrzeugtyp_id\n inner join zuordnung_sf_fz\n on fahrzeug.id = zuordnung_sf_fz.fahrzeug_id\n inner join schadensfall \n on zuordnung_sf_fz.Schadensfall_ID = schadensfall.ID\ngroup by fahrzeughersteller.id, year(Datum)\n\n\n%%sql\n\n-- your solution\n\nselect \n\tf.id,\n\tyear(s.Datum) as Jahr, # use the year() function to extract the year from the date\n\t(select min(Schadenshoehe) from schadensfall where year(Datum) = Jahr ) as Min, # subselect for min\n\t(select max(Schadenshoehe) from schadensfall where year(Datum) = Jahr ) as MAX # subselect for max\n # the difference (Max - Min) would be computed here\nfrom fahrzeug f, zuordnung_sf_fz z, schadensfall s\nwhere\n\tz.Fahrzeug_ID = f.ID and \n\ts.ID = z.Schadensfall_ID\ngroup by f.ID, year(s.Datum);\n",
"Show all employees and their license plates who drive an Opel as their company car. 4 pts",
"%%sql\n\n-- my solution\nselect Mitarbeiter.Name, dienstwagen.Kennzeichen\nfrom Mitarbeiter inner join dienstwagen\non mitarbeiter.id = dienstwagen.Mitarbeiter_id\ninner join fahrzeugtyp \n on dienstwagen.fahrzeugtyp_Id = fahrzeugtyp.id\n inner join fahrzeughersteller\n on fahrzeugtyp.hersteller_id = fahrzeughersteller.id\nwhere fahrzeughersteller.Name = 'Opel'\n\n\n\n%%sql\n\n-- your solution\n\nselect \n\tconcat(m.Name, ' ',m.Vorname) as Mitarbeiter,\n\td.Kennzeichen,\n\tfh.Name as Hersteller\nfrom mitarbeiter m, dienstwagen d, fahrzeugtyp ft, fahrzeughersteller fh\nwhere \n\td.Mitarbeiter_ID = m.ID and\n\tft.ID = d.Fahrzeugtyp_ID and\n\tfh.ID = ft.Hersteller_ID and\n\tupper(fh.Name) = 'OPEL';\n",
"Which vehicles have caused damages whose damage total is higher than the average damage amount? 5 pts",
"%%sql\n-- my solution\nselect fahrzeug.kennzeichen, sum(schadenshoehe)\nfrom fahrzeug inner join zuordnung_sf_fz\n on fahrzeug.id = zuordnung_sf_fz.Fahrzeug_ID\ngroup by fahrzeug.kennzeichen\nhaving sum(schadenshoehe) > (select avg(schadenshoehe) from zuordnung_sf_fz)\n\n%%sql\n\n-- your solution\nselect \n\t\tf.ID , \n\t\tf.Kennzeichen,\n\t\ts.Schadenshoehe\nfrom fahrzeug f, zuordnung_sf_fz z, schadensfall s \nwhere\n\tz.Fahrzeug_ID = f.ID and \n\ts.ID = z.Schadensfall_ID and \n\ts.Schadenshoehe > ( select avg(Schadenshoehe) from schadensfall ) # get the average via a subselect\n\tgroup by f.ID;\n",
"Which employees are older than the average age of all employees? 4 pts",
"%%sql\n\nselect Mitarbeiter.Name, Mitarbeiter.Geburtsdatum\nfrom Mitarbeiter\nwhere Geburtsdatum < (select avg(Geburtsdatum) from Mitarbeiter) -- older = born earlier, hence <\norder by Mitarbeiter.Name\n\n-- or alternatively\n-- where (now() - Geburtsdatum) > (select now() - (select avg(geburtsdatum) from mitarbeiter));\n\n%%sql\nselect \n\tconcat(m.Name, ' ',m.Vorname) as Mitarbeiter,\n\tm.Geburtsdatum\nfrom mitarbeiter m \nwhere \n\tm.Geburtsdatum < ( select avg(Geburtsdatum) from mitarbeiter);\n\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
juneeseo/foundations-homework
|
05/spotify_api.ipynb
|
mit
|
[
"import requests\nresponse = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist&country=US&limit=50')\ndata = response.json()\n# data['artists'].keys()\n# data['artists']['items'][0]\n\ndata['artists']['items'][0]\n\n#1 50 Lils and their popularity\n\nitems = data['artists']['items']\nfor item in items:\n print (item['name'], item['popularity'])",
"2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format \"GENRE_1, GENRE_2, GENRE_3\". If there are no genres, print \"No genres listed\".\nTip: \"how to join a list Python\" might be a helpful search",
"\nitems = data['artists']['items']\nfor item in items:\n if len(item['genres'])==0:\n print(\"No genres listed\")\n else:\n print(','.join(item['genres']))\n\n\n# AGGREGATION PROBLEM\nall_genres = []\n\n# THE LOOP\nfor item in items:\n print (\"All genres so far: \", all_genres)\n # THE CONDITIONAL\n print (\"Current artist has: \", item['genres'])\n all_genres = all_genres + item['genres']\nprint ('++++++++++++++++++++++++++++++++')\nprint (\"All final genres: \", all_genres)\n\nfor genre in all_genres:\n genre_count = all_genres.count(genre)\n print (genre, genre_count)\nunique_genres = set(all_genres)\nprint ('+++++++++++++++++++++++')\nfor genre in unique_genres:\n genre_count = all_genres.count(genre)\n print(genre, genre_count)\nprint (\"dirty south rap is the most represented genre\")",
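The flatten-and-count pattern above can be condensed with `collections.Counter`; this is a sketch over hypothetical genre lists (the live API response is not reproduced here, so `artist_genres` is made up to stand in for `item['genres']`):

```python
from collections import Counter

# Hypothetical per-artist genre lists, standing in for item['genres']
artist_genres = [["dirty south rap", "trap"], [], ["dirty south rap"]]

# Flatten all per-artist lists into one list, then count each genre
all_genres = [g for genres in artist_genres for g in genres]
counts = Counter(all_genres)

most_common_genre, n = counts.most_common(1)[0]
```

`Counter.most_common(1)` returns the single `(genre, count)` pair with the highest count, replacing the manual set-and-count loop.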
"3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?",
"# popularity\nitems = data['artists']['items']\n# AGGREGATION PROBLEM\nmost_popular_name = \"\"\nmost_popular_score = 0\nfor item in items:\n print (\"Looking at\", item['name'], \"who has popularity of\", item['popularity'])\n print (\"Comparing\", item['popularity'], \"to\", most_popular_score)\n # THE CONDITIONAL\n if item['popularity'] > most_popular_score:\n if item['name'] == 'Lil Wayne':\n pass\n else:\n most_popular_name = item['name']\n most_popular_score = item['popularity']\n else:\n pass\nprint ('++++++++++++++++++++')\nprint (most_popular_name, most_popular_score)\n\ndata['artists']['items'][0]['followers']['total']\n\n# Followers\n# popularity\nitems = data['artists']['items']\n# AGGREGATION PROBLEM\nartist_most_followers = \"\"\nmost_followers = 0\nfor item in items:\n print (\"Looking at\", item['name'], \"who has\", item['followers']['total'], \"followers\")\n print (\"Comparing\", item['followers']['total'], \"to\", most_followers)\n # THE CONDITIONAL\n if item['followers']['total'] > most_followers:\n if item['name'] == 'Lil Wayne':\n pass\n else:\n artist_most_followers = item['name']\n most_followers = item['followers']['total']\n else:\n pass\nprint ('++++++++++++++++++++')\nprint (artist_most_followers, most_followers)",
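Each of the two "highest besides Lil Wayne" loops above can also be written as a single `max()` call with a key function; a sketch over made-up items shaped like `data['artists']['items']` (names and numbers are hypothetical):

```python
# Made-up stand-ins for data['artists']['items']
items = [
    {"name": "Lil Wayne", "popularity": 86, "followers": {"total": 5_000_000}},
    {"name": "Lil Yachty", "popularity": 79, "followers": {"total": 900_000}},
    {"name": "Lil' Kim", "popularity": 62, "followers": {"total": 400_000}},
]

# Exclude Lil Wayne once, then pick the maxima by different keys
others = [i for i in items if i["name"] != "Lil Wayne"]
most_popular = max(others, key=lambda i: i["popularity"])
most_followed = max(others, key=lambda i: i["followers"]["total"])
```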
"4) print a list of Lil's that are more popular than Lil' Kim.",
"items = data['artists']['items']\nfor item in items:\n if item['name'] == \"Lil' Kim\":\n print (item['name'], item['popularity'])\n else: pass\n\n\nlil_kim_popularity = 62\n\n# AGGREGATION PROBLEM\nmore_popular_than_lil_kim = []\n\n# THE LOOP\nfor item in items:\n if item['popularity'] > lil_kim_popularity:\n more_popular_than_lil_kim.append(item['name'])\n else:\n pass\n\nfor item in more_popular_than_lil_kim:\n print(item)\nmore_popular_string = \", \".join(more_popular_than_lil_kim)\nprint(more_popular_string)",
"5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.",
"#5\n# Lil Wayne and Lil June\n\nitems = data['artists']['items']\nfor item in items:\n print (item['name'], item['id'])\n\nimport requests\nlil_wayne_id = '55Aa2cqylxrFIXC767Z865'\nlil_june_id = '3GH3KD2078kLPpEkN1UN26'\n\n# Build the URL from the ID variables; the bare identifiers were not valid Python\nlil_wayne_response = requests.get('https://api.spotify.com/v1/artists/' + lil_wayne_id + '/top-tracks?country=US')\nlil_wayne_data = lil_wayne_response.json()\nlil_june_response = requests.get('https://api.spotify.com/v1/artists/' + lil_june_id + '/top-tracks?country=US')\nlil_june_data = lil_june_response.json()\n\nlil_wayne_data\n\n# Lil Wayne Top Tracks\nwayne_response = requests.get('https://api.spotify.com/v1/artists/55Aa2cqylxrFIXC767Z865/top-tracks')\nwayne_data = wayne_response.json()\n\nwayne_data\n\nwayne_data.keys()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Chipe1/aima-python
|
mdp_apps.ipynb
|
mit
|
[
"APPLICATIONS OF MARKOV DECISION PROCESSES\n\nIn this notebook we will take a look at some indicative applications of Markov decision processes. \nWe will cover content from mdp.py, for Chapter 17 Making Complex Decisions of Stuart Russell's and Peter Norvig's book Artificial Intelligence: A Modern Approach.",
"from mdp import *\nfrom notebook import psource, pseudocode",
"CONTENTS\n\nSimple MDP\nState dependent reward function\nState and action dependent reward function\nState, action and next state dependent reward function\n\n\nGrid MDP\nPathfinding problem\n\n\nPOMDP\nTwo state POMDP\n\n\n\nSIMPLE MDP\n\nState dependent reward function\nMarkov Decision Processes are formally described as processes that follow the Markov property which states that \"The future is independent of the past given the present\". \nMDPs formally describe environments for reinforcement learning and we assume that the environment is fully observable. \nLet us take a toy example MDP and solve it using the functions in mdp.py.\nThis is a simple example adapted from a similar problem by Dr. David Silver, tweaked to fit the limitations of the current functions.\n\nLet's say you're a student attending lectures in a university.\nThere are three lectures you need to attend on a given day.\n<br>\nAttending the first lecture gives you 4 points of reward.\nAfter the first lecture, you have a 0.6 probability to continue into the second one, yielding 6 more points of reward.\n<br>\nBut, with a probability of 0.4, you get distracted and start using Facebook instead and get a reward of -1.\nFrom then onwards, you really can't let go of Facebook and there's just a 0.1 probability that you will concentrate back on the lecture.\n<br>\nAfter the second lecture, you have an equal chance of attending the next lecture or just falling asleep.\nFalling asleep is the terminal state and yields you no reward, but continuing on to the final lecture gives you a big reward of 10 points.\n<br>\nFrom there on, you have a 40% chance of going to study and reach the terminal state, \nbut a 60% chance of going to the pub with your friends instead.\nYou end up drunk and don't know which lecture to attend, so you go to one of the lectures according to the probabilities given above.\n<br> \nWe now have an outline of our stochastic environment and we need to maximize our reward by solving this MDP.\n<br>\n<br>\nWe first have to define our Transition Matrix as a nested dictionary to fit the requirements of the MDP class.",
"t = {\n 'leisure': {\n 'facebook': {'leisure':0.9, 'class1':0.1},\n 'quit': {'leisure':0.1, 'class1':0.9},\n 'study': {},\n 'sleep': {},\n 'pub': {}\n },\n 'class1': {\n 'study': {'class2':0.6, 'leisure':0.4},\n 'facebook': {'class2':0.4, 'leisure':0.6},\n 'quit': {},\n 'sleep': {},\n 'pub': {}\n },\n 'class2': {\n 'study': {'class3':0.5, 'end':0.5},\n 'sleep': {'end':0.5, 'class3':0.5},\n 'facebook': {},\n 'quit': {},\n 'pub': {},\n },\n 'class3': {\n 'study': {'end':0.6, 'class1':0.08, 'class2':0.16, 'class3':0.16},\n 'pub': {'end':0.4, 'class1':0.12, 'class2':0.24, 'class3':0.24},\n 'facebook': {},\n 'quit': {},\n 'sleep': {}\n },\n 'end': {}\n}",
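One easy mistake in a hand-written transition dictionary is a row whose probabilities do not sum to 1. The validator below is not part of the original notebook; it checks a toy dictionary shaped like `t` (empty action entries are skipped, matching the notebook's convention):

```python
def check_stochastic(t, tol=1e-9):
    """Return (state, action) pairs whose outgoing probabilities don't sum to 1."""
    bad = []
    for state, actions in t.items():
        for action, dist in actions.items():
            # Empty dicts mean "action unavailable" in this representation
            if dist and abs(sum(dist.values()) - 1.0) > tol:
                bad.append((state, action))
    return bad

# Toy model shaped like the notebook's `t`
t_demo = {
    'class1': {'study': {'class2': 0.6, 'leisure': 0.4}, 'pub': {}},
    'leisure': {'quit': {'leisure': 0.1, 'class1': 0.9}},
}
```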
"We now need to define the reward for each state.",
"rewards = {\n 'class1': 4,\n 'class2': 6,\n 'class3': 10,\n 'leisure': -1,\n 'end': 0\n}",
"This MDP has only one terminal state.",
"terminals = ['end']",
"Let's now set the initial state to Class 1.",
"init = 'class1'",
"We will write a CustomMDP class to extend the MDP class for the problem at hand. \nThis class will implement the T method to implement the transition model. This is the exact same class as given in mdp.ipynb.",
"class CustomMDP(MDP):\n\n def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):\n # All possible actions.\n actlist = []\n for state in transition_matrix.keys():\n actlist.extend(transition_matrix[state])\n actlist = list(set(actlist))\n print(actlist)\n\n MDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma)\n self.t = transition_matrix\n self.reward = rewards\n for state in self.t:\n self.states.add(state)\n\n def T(self, state, action):\n if action is None:\n return [(0.0, state)]\n else: \n return [(prob, new_state) for new_state, prob in self.t[state][action].items()]",
"We now need an instance of this class.",
"mdp = CustomMDP(t, rewards, terminals, init, gamma=.9)",
"The utility of each state can be found by value_iteration.",
"value_iteration(mdp)",
"Now that we can compute the utility values, we can find the best policy.",
"pi = best_policy(mdp, value_iteration(mdp, .01))",
"pi stores the best action for each state.",
"print(pi)",
"We can confirm that this is the best policy by verifying this result against policy_iteration.",
"policy_iteration(mdp)",
"Everything looks perfect, but let us look at another possibility for an MDP.\n<br>\nTill now we have only dealt with rewards that the agent gets while it is in a particular state.\nWhat if we want different rewards for a state depending on the action that the agent takes next?\nThe agent then gets the reward during its transition to the next state.\n<br>\nFor the sake of clarity, we will call this the transition reward and we will call this kind of MDP a dynamic MDP. \nThis is not a conventional term; we just use it to minimize confusion between the two.\n<br>\nThe next section deals with how to create and solve a dynamic MDP.\nState and action dependent reward function\nLet us consider a very similar problem, but this time, we do not have rewards on states; \ninstead, we have rewards on the transitions between states. \nThis state diagram will make it clearer.\n\nThe scenario is very similar to the previous problem, but we have different rewards for the same state depending on the action taken.\n<br>\nTo deal with this, we just need to change the R method of the MDP class, but to prevent confusion, we will write a new similar class DMDP.",
"class DMDP:\n\n \"\"\"A Markov Decision Process, defined by an initial state, transition model,\n and reward model. We also keep track of a gamma value, for use by\n algorithms. The transition model is represented somewhat differently from\n the text. Instead of P(s' | s, a) being a probability number for each\n state/state/action triplet, we instead have T(s, a) return a\n list of (p, s') pairs. The reward function is very similar.\n We also keep track of the possible states,\n terminal states, and actions for each state.\"\"\"\n\n def __init__(self, init, actlist, terminals, transitions={}, rewards={}, states=None, gamma=.9):\n if not (0 < gamma <= 1):\n raise ValueError(\"An MDP must have 0 < gamma <= 1\")\n\n if states:\n self.states = states\n else:\n self.states = set()\n self.init = init\n self.actlist = actlist\n self.terminals = terminals\n self.transitions = transitions\n self.rewards = rewards\n self.gamma = gamma\n\n def R(self, state, action):\n \"\"\"Return a numeric reward for this state and this action.\"\"\"\n if (self.rewards == {}):\n raise ValueError('Reward model is missing')\n else:\n return self.rewards[state][action]\n\n def T(self, state, action):\n \"\"\"Transition model. From a state and an action, return a list\n of (probability, result-state) pairs.\"\"\"\n if(self.transitions == {}):\n raise ValueError(\"Transition model is missing\")\n else:\n return self.transitions[state][action]\n\n def actions(self, state):\n \"\"\"Set of actions that can be performed in this state. By default, a\n fixed list of actions, except for terminal states. Override this\n method if you need to specialize by state.\"\"\"\n if state in self.terminals:\n return [None]\n else:\n return self.actlist",
"The transition model will be the same",
"t = {\n 'leisure': {\n 'facebook': {'leisure':0.9, 'class1':0.1},\n 'quit': {'leisure':0.1, 'class1':0.9},\n 'study': {},\n 'sleep': {},\n 'pub': {}\n },\n 'class1': {\n 'study': {'class2':0.6, 'leisure':0.4},\n 'facebook': {'class2':0.4, 'leisure':0.6},\n 'quit': {},\n 'sleep': {},\n 'pub': {}\n },\n 'class2': {\n 'study': {'class3':0.5, 'end':0.5},\n 'sleep': {'end':0.5, 'class3':0.5},\n 'facebook': {},\n 'quit': {},\n 'pub': {},\n },\n 'class3': {\n 'study': {'end':0.6, 'class1':0.08, 'class2':0.16, 'class3':0.16},\n 'pub': {'end':0.4, 'class1':0.12, 'class2':0.24, 'class3':0.24},\n 'facebook': {},\n 'quit': {},\n 'sleep': {}\n },\n 'end': {}\n}",
"The reward model will be a dictionary very similar to the transition dictionary with a reward for every action for every state.",
"r = {\n 'leisure': {\n 'facebook':-1,\n 'quit':0,\n 'study':0,\n 'sleep':0,\n 'pub':0\n },\n 'class1': {\n 'study':-2,\n 'facebook':-1,\n 'quit':0,\n 'sleep':0,\n 'pub':0\n },\n 'class2': {\n 'study':-2,\n 'sleep':0,\n 'facebook':0,\n 'quit':0,\n 'pub':0\n },\n 'class3': {\n 'study':10,\n 'pub':1,\n 'facebook':0,\n 'quit':0,\n 'sleep':0\n },\n 'end': {\n 'study':0,\n 'pub':0,\n 'facebook':0,\n 'quit':0,\n 'sleep':0\n }\n}",
"The MDP has only one terminal state",
"terminals = ['end']",
"Let's now set the initial state to Class 1.",
"init = 'class1'",
"We will write a CustomDMDP class to extend the DMDP class for the problem at hand.\nThis class will implement everything that the previous CustomMDP class implements along with a new reward model.",
"class CustomDMDP(DMDP):\n \n def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):\n actlist = []\n for state in transition_matrix.keys():\n actlist.extend(transition_matrix[state])\n actlist = list(set(actlist))\n print(actlist)\n \n DMDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma)\n self.t = transition_matrix\n self.rewards = rewards\n for state in self.t:\n self.states.add(state)\n \n \n def T(self, state, action):\n if action is None:\n return [(0.0, state)]\n else:\n return [(prob, new_state) for new_state, prob in self.t[state][action].items()]\n \n def R(self, state, action):\n if action is None:\n return 0\n else:\n return self.rewards[state][action]",
"One thing we haven't thought about yet is that the value_iteration algorithm won't work now that the reward model is changed.\nIt will be quite similar to the one we currently have nonetheless.\nThe Bellman update equation now is defined as follows\n$$U(s)=\\max_{a\\epsilon A(s)}\\bigg[R(s, a) + \\gamma\\sum_{s'}P(s'\\ |\\ s,a)U(s')\\bigg]$$\nIt is not difficult to see that the update equation we have been using till now is just a special case of this more generalized equation. \nWe also need to max over the reward function now as the reward function is action dependent as well.\n<br>\nWe will use this to write a function to carry out value iteration, very similar to the one we are familiar with.",
"def value_iteration_dmdp(dmdp, epsilon=0.001):\n U1 = {s: 0 for s in dmdp.states}\n R, T, gamma = dmdp.R, dmdp.T, dmdp.gamma\n while True:\n U = U1.copy()\n delta = 0\n for s in dmdp.states:\n U1[s] = max([(R(s, a) + gamma*sum([(p*U[s1]) for (p, s1) in T(s, a)])) for a in dmdp.actions(s)])\n delta = max(delta, abs(U1[s] - U[s]))\n if delta < epsilon * (1 - gamma) / gamma:\n return U",
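As a self-contained sanity check (independent of the notebook's classes), the same R(s, a) update can be run on a two-state toy problem whose answer is known in closed form; the state names, actions, and rewards below are made up for illustration:

```python
def value_iteration_sa(states, actlist, T, R, terminals, gamma=0.9, epsilon=0.001):
    """Value iteration with an action-dependent reward R(s, a)."""
    U1 = {s: 0 for s in states}
    while True:
        U = U1.copy()
        delta = 0
        for s in states:
            # Terminal states admit only the null action, as in the notebook
            acts = [None] if s in terminals else actlist
            U1[s] = max(R(s, a) + gamma * sum(p * U[s1] for p, s1 in T(s, a))
                        for a in acts)
            delta = max(delta, abs(U1[s] - U[s]))
        if delta < epsilon * (1 - gamma) / gamma:
            return U1

# Two states: every real action from 'work' ends the episode in 'done'
T = lambda s, a: [(0.0, s)] if a is None else [(1.0, 'done')]
R = lambda s, a: 0 if a is None else (5 if a == 'study' else 1)
U = value_iteration_sa(['work', 'done'], ['study', 'slack'], T, R, ['done'])
# U['work'] should be exactly 5: the best one-step reward with no future term
```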
"We're all set.\nLet's instantiate our class.",
"dmdp = CustomDMDP(t, r, terminals, init, gamma=.9)",
"Calculate utility values by calling value_iteration_dmdp.",
"value_iteration_dmdp(dmdp)",
"These are the expected utility values for our new MDP.\n<br>\nAs you might have guessed, we cannot use the old best_policy function to find the best policy.\nSo we will write our own.\nBut, before that we need a helper function to calculate the expected utility value given a state and an action.",
"def expected_utility_dmdp(a, s, U, dmdp):\n return dmdp.R(s, a) + dmdp.gamma*sum([(p*U[s1]) for (p, s1) in dmdp.T(s, a)])",
"Now we write our modified best_policy function.",
"from utils import argmax\ndef best_policy_dmdp(dmdp, U):\n pi = {}\n for s in dmdp.states:\n pi[s] = argmax(dmdp.actions(s), key=lambda a: expected_utility_dmdp(a, s, U, dmdp))\n return pi",
"Find the best policy.",
"pi = best_policy_dmdp(dmdp, value_iteration_dmdp(dmdp, .01))\nprint(pi)",
"From this, we can infer that value_iteration_dmdp tries to minimize the negative reward. \nSince we don't have rewards for states now, the algorithm takes the action that would try to avoid getting negative rewards and takes the lesser of two evils if all rewards are negative.\nYou might also want to have state rewards alongside transition rewards. \nPerhaps you can do that yourself now that the difficult part has been done.\n<br>\nState, action and next-state dependent reward function\nFor truly stochastic environments, \nwe have noticed that taking an action from a particular state doesn't always do what we want it to. \nInstead, for every action taken from a particular state, \nit might be possible to reach a different state each time depending on the transition probabilities. \nWhat if we want different rewards for each state, action and next-state triplet? \nMathematically, we now want a reward function of the form R(s, a, s') for our MDP. \nThis section shows how we can tweak the MDP class to achieve this.\n<br>\nLet's now take a different problem statement. \nThe one we are working with is a bit too simple.\nConsider a taxi that serves three adjacent towns A, B, and C.\nEach time the taxi discharges a passenger, the driver must choose from three possible actions:\n1. Cruise the streets looking for a passenger.\n2. Go to the nearest taxi stand.\n3. Wait for a radio call from the dispatcher with instructions.\n<br>\nSubject to the constraint that the taxi driver cannot do the third action in town B because of distance and poor reception.\nLet's model our MDP.\n<br>\nThe MDP has three states, namely A, B and C.\n<br>\nIt has three actions, namely 1, 2 and 3.\n<br>\nAction sets:\n<br>\n$K_{a}$ = {1, 2, 3}\n<br>\n$K_{b}$ = {1, 2}\n<br>\n$K_{c}$ = {1, 2, 3}\n<br>\nWe have the following transition probability matrices:\n<br>\n<br>\nAction 1: Cruising streets\n<br>\n$\\\n P^{1} = \n \\left[ {\\begin{array}{ccc}\n \\frac{1}{2} & \\frac{1}{4} & \\frac{1}{4} \\\n \\frac{1}{2} & 0 & \\frac{1}{2} \\\n \\frac{1}{4} & \\frac{1}{4} & \\frac{1}{2} \\\n \\end{array}}\\right] \\\n \\\n $\n<br>\n<br>\nAction 2: Waiting at the taxi stand \n<br>\n$\\\n P^{2} = \n \\left[ {\\begin{array}{ccc}\n \\frac{1}{16} & \\frac{3}{4} & \\frac{3}{16} \\\n \\frac{1}{16} & \\frac{7}{8} & \\frac{1}{16} \\\n \\frac{1}{8} & \\frac{3}{4} & \\frac{1}{8} \\\n \\end{array}}\\right] \\\n \\\n $\n<br>\n<br>\nAction 3: Waiting for dispatch \n<br>\n$\\\n P^{3} =\n \\left[ {\\begin{array}{ccc}\n \\frac{1}{4} & \\frac{1}{8} & \\frac{5}{8} \\\n 0 & 1 & 0 \\\n \\frac{3}{4} & \\frac{1}{16} & \\frac{3}{16} \\\n \\end{array}}\\right] \\\n \\\n $\n<br>\n<br>\nFor the sake of readability, we will call the states A, B and C and the actions 'cruise', 'stand' and 'dispatch'.\nWe will now build the transition model as a dictionary using these matrices.",
"t = {\n 'A': {\n 'cruise': {'A':0.5, 'B':0.25, 'C':0.25},\n 'stand': {'A':0.0625, 'B':0.75, 'C':0.1875},\n 'dispatch': {'A':0.25, 'B':0.125, 'C':0.625}\n },\n 'B': {\n 'cruise': {'A':0.5, 'B':0, 'C':0.5},\n 'stand': {'A':0.0625, 'B':0.875, 'C':0.0625},\n 'dispatch': {'A':0, 'B':1, 'C':0}\n },\n 'C': {\n 'cruise': {'A':0.25, 'B':0.25, 'C':0.5},\n 'stand': {'A':0.125, 'B':0.75, 'C':0.125},\n 'dispatch': {'A':0.75, 'B':0.0625, 'C':0.1875}\n }\n}",
"The reward matrices for the problem are as follows:\n<br>\n<br>\nAction 1: Cruising streets\n<br>\n$\\\n R^{1} = \n \\left[ {\\begin{array}{ccc}\n 10 & 4 & 8 \\\n 14 & 0 & 18 \\\n 10 & 2 & 8 \\\n \\end{array}}\\right] \\\n \\\n $\n<br>\n<br>\nAction 2: Waiting at the taxi stand\n<br>\n$\\\n R^{2} = \n \\left[ {\\begin{array}{ccc}\n 8 & 2 & 4 \\\n 8 & 16 & 8 \\\n 6 & 4 & 2\\\n \\end{array}}\\right] \\\n \\\n $\n<br>\n<br>\nAction 3: Waiting for dispatch\n<br>\n$\\\n R^{3} = \n \\left[ {\\begin{array}{ccc}\n 4 & 6 & 4 \\\n 0 & 0 & 0 \\\n 4 & 0 & 8\\\n \\end{array}}\\right] \\\n \\\n $\n<br>\n<br>\nWe now build the reward model as a dictionary using these matrices.",
"r = {\n 'A': {\n 'cruise': {'A':10, 'B':4, 'C':8},\n 'stand': {'A':8, 'B':2, 'C':4},\n 'dispatch': {'A':4, 'B':6, 'C':4}\n },\n 'B': {\n 'cruise': {'A':14, 'B':0, 'C':18},\n 'stand': {'A':8, 'B':16, 'C':8},\n 'dispatch': {'A':0, 'B':0, 'C':0}\n },\n 'C': {\n 'cruise': {'A':10, 'B':2, 'C':18},\n 'stand': {'A':6, 'B':4, 'C':2},\n 'dispatch': {'A':4, 'B':0, 'C':8}\n }\n}",
"The Bellman update equation now is defined as follows\n$$U(s)=\\max_{a\\epsilon A(s)}\\sum_{s'}P(s'\\ |\\ s,a)(R(s'\\ |\\ s,a) + \\gamma U(s'))$$\nIt is not difficult to see that all the update equations we have used till now are just special cases of this more generalized equation. \nIf we did not have next-state-dependent rewards, the first term inside the summation exactly sums up to R(s, a) or the state-reward for a particular action and we would get the update equation used in the previous problem.\nIf we did not have action-dependent rewards, the first term inside the summation sums up to R(s) or the state-reward and we would get the first update equation used in mdp.ipynb.\n<br>\nFor example, suppose the reward is the same r units regardless of the action and the next state, and the transition probabilities to the 4 possible next states are 0.1, 0.2, 0.3 and 0.4.\nThe first term inside the summation then evaluates to (0.1 + 0.2 + 0.3 + 0.4)r = r, which is equal to R(s) in the first update equation.\n<br>\nThere are many ways to write value iteration for this situation, but we will go with the most intuitive method, one that can be implemented with minor alterations to the existing value_iteration algorithm.\n<br>\nOur DMDP class will be slightly different.\nMore specifically, the R method will have one more index to go through now that we have three levels of nesting in the reward model.\nWe will call the new class DMDP2 as I have run out of creative names.",
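The collapse described here — with a reward that is constant across next states, the inner term reduces to that constant — can be checked with one line of arithmetic; the probabilities match the text's example and the reward value r = 7.0 is made up:

```python
# P(s'|s,a) over four possible next states; they sum to 1
probs = [0.1, 0.2, 0.3, 0.4]
r = 7.0  # the same reward r for every next state (hypothetical value)

# sum_s' P(s'|s,a) * R(s,a,s') with constant R collapses to r itself
expected_reward = sum(p * r for p in probs)
```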
"class DMDP2:\n\n \"\"\"A Markov Decision Process, defined by an initial state, transition model,\n and reward model. We also keep track of a gamma value, for use by\n algorithms. The transition model is represented somewhat differently from\n the text. Instead of P(s' | s, a) being a probability number for each\n state/state/action triplet, we instead have T(s, a) return a\n list of (p, s') pairs. The reward function is very similar.\n We also keep track of the possible states,\n terminal states, and actions for each state.\"\"\"\n\n def __init__(self, init, actlist, terminals, transitions={}, rewards={}, states=None, gamma=.9):\n if not (0 < gamma <= 1):\n raise ValueError(\"An MDP must have 0 < gamma <= 1\")\n\n if states:\n self.states = states\n else:\n self.states = set()\n self.init = init\n self.actlist = actlist\n self.terminals = terminals\n self.transitions = transitions\n self.rewards = rewards\n self.gamma = gamma\n\n def R(self, state, action, state_):\n \"\"\"Return a numeric reward for this state, this action and the next state_\"\"\"\n if (self.rewards == {}):\n raise ValueError('Reward model is missing')\n else:\n return self.rewards[state][action][state_]\n\n def T(self, state, action):\n \"\"\"Transition model. From a state and an action, return a list\n of (probability, result-state) pairs.\"\"\"\n if(self.transitions == {}):\n raise ValueError(\"Transition model is missing\")\n else:\n return self.transitions[state][action]\n\n def actions(self, state):\n \"\"\"Set of actions that can be performed in this state. By default, a\n fixed list of actions, except for terminal states. Override this\n method if you need to specialize by state.\"\"\"\n if state in self.terminals:\n return [None]\n else:\n return self.actlist",
"Only the R method is different from the previous DMDP class.\n<br>\nOur traditional custom class will be required to implement the transition model and the reward model.\n<br>\nWe call this class CustomDMDP2.",
"class CustomDMDP2(DMDP2):\n \n def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):\n actlist = []\n for state in transition_matrix.keys():\n actlist.extend(transition_matrix[state])\n actlist = list(set(actlist))\n print(actlist)\n \n DMDP2.__init__(self, init, actlist, terminals=terminals, gamma=gamma)\n self.t = transition_matrix\n self.rewards = rewards\n for state in self.t:\n self.states.add(state)\n \n def T(self, state, action):\n if action is None:\n return [(0.0, state)]\n else:\n return [(prob, new_state) for new_state, prob in self.t[state][action].items()]\n \n def R(self, state, action, state_):\n if action is None:\n return 0\n else:\n return self.rewards[state][action][state_]",
"We can finally write value iteration for this problem.\nThe latest update equation will be used.",
"def value_iteration_taxi_mdp(dmdp2, epsilon=0.001):\n U1 = {s: 0 for s in dmdp2.states}\n R, T, gamma = dmdp2.R, dmdp2.T, dmdp2.gamma\n while True:\n U = U1.copy()\n delta = 0\n for s in dmdp2.states:\n U1[s] = max([sum([(p*(R(s, a, s1) + gamma*U[s1])) for (p, s1) in T(s, a)]) for a in dmdp2.actions(s)])\n delta = max(delta, abs(U1[s] - U[s]))\n if delta < epsilon * (1 - gamma) / gamma:\n return U",
"These algorithms can be made more pythonic by using cleverer list comprehensions.\nWe can also write the variants of value iteration in such a way that all problems are solved using the same base class, regardless of the reward function and the number of arguments it takes.\nQuite a few things can be done to refactor the code and reduce repetition, but we have done it this way for the sake of clarity.\nPerhaps you can try this as an exercise.\n<br>\nWe now need to define terminals and initial state.",
"terminals = ['end'] # note: no state named 'end' exists in the taxi MDP, so no state is actually terminal\ninit = 'A'",
"Let's instantiate our class.",
"dmdp2 = CustomDMDP2(t, r, terminals, init, gamma=.9)\n\nvalue_iteration_taxi_mdp(dmdp2)",
"These are the expected utility values for the states of our MDP.\nLet's proceed to write a helper function to find the expected utility and another to find the best policy.",
"def expected_utility_dmdp2(a, s, U, dmdp2):\n return sum([(p*(dmdp2.R(s, a, s1) + dmdp2.gamma*U[s1])) for (p, s1) in dmdp2.T(s, a)])\n\nfrom utils import argmax\ndef best_policy_dmdp2(dmdp2, U):\n pi = {}\n for s in dmdp2.states:\n pi[s] = argmax(dmdp2.actions(s), key=lambda a: expected_utility_dmdp2(a, s, U, dmdp2))\n return pi",
"Find the best policy.",
"pi = best_policy_dmdp2(dmdp2, value_iteration_taxi_mdp(dmdp2, .01))\nprint(pi)",
"We have successfully adapted the existing code to a different scenario yet again.\nThe takeaway from this section is that you can convert the vast majority of reinforcement learning problems into MDPs and solve for the best policy using simple yet efficient tools.\nGRID MDP\n\nPathfinding Problem\nMarkov Decision Processes can be used to find the best path through a maze. Let us consider this simple maze.\n\nThis environment can be formulated as a GridMDP.\n<br>\nTo make the grid matrix, we will consider the state-reward to be -0.1 for every state.\n<br>\nState (1, 1) will have a reward of -5 to signify that this state is to be prohibited.\n<br>\nState (9, 9) will have a reward of +5.\nThis will be the terminal state.\n<br>\nThe matrix can be generated using the GridMDP editor or we can write it ourselves.",
"grid = [\n [None, None, None, None, None, None, None, None, None, None, None], \n [None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None, +5.0, None], \n [None, -0.1, None, None, None, None, None, None, None, -0.1, None], \n [None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None], \n [None, -0.1, None, None, None, None, None, None, None, None, None], \n [None, -0.1, None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None], \n [None, -0.1, None, None, None, None, None, -0.1, None, -0.1, None], \n [None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None, -0.1, None], \n [None, None, None, None, None, -0.1, None, -0.1, None, -0.1, None], \n [None, -5.0, -0.1, -0.1, -0.1, -0.1, None, -0.1, None, -0.1, None], \n [None, None, None, None, None, None, None, None, None, None, None]\n]",
"We have only one terminal state, (9, 9)",
"terminals = [(9, 9)]",
"We define our maze environment below",
"maze = GridMDP(grid, terminals)",
"To solve the maze, we can use the best_policy function along with value_iteration.",
"pi = best_policy(maze, value_iteration(maze))",
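The `value_iteration` call above hides a short algorithm: repeatedly apply the Bellman update until the utilities stop changing. Below is a minimal, self-contained sketch of that loop, run on a toy two-state MDP invented purely for illustration (it is not the maze, and `value_iteration_sketch` is not the library's implementation):

```python
# Minimal value iteration on a toy MDP. States, transitions and rewards
# here are made up for illustration; T[s][a] is a list of
# (probability, next_state) pairs and R[s] is the reward for state s.
def value_iteration_sketch(states, actions, T, R, gamma=0.9, epsilon=0.01):
    U = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        U_new = {}
        for s in states:
            # Bellman update: state reward plus the best expected
            # discounted utility over all actions.
            U_new[s] = R[s] + gamma * max(
                sum(p * U[s1] for p, s1 in T[s][a]) for a in actions
            )
            delta = max(delta, abs(U_new[s] - U[s]))
        U = U_new
        # Standard stopping criterion guaranteeing max-norm error < epsilon.
        if delta < epsilon * (1 - gamma) / gamma:
            return U

# Toy chain: from 'a' you can 'stay' or 'go' to the absorbing state 'b'.
states = ['a', 'b']
actions = ['stay', 'go']
T = {
    'a': {'stay': [(1.0, 'a')], 'go': [(1.0, 'b')]},
    'b': {'stay': [(1.0, 'b')], 'go': [(1.0, 'b')]},  # 'b' is absorbing
}
R = {'a': -0.1, 'b': 1.0}
U = value_iteration_sketch(states, actions, T, R)
```

With these numbers the utility of `'a'` is driven by the `'go'` action, since reaching `'b'` and collecting its reward forever dominates staying.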
"This is the heatmap generated by the GridMDP editor using value_iteration on this environment\n<br>\n\n<br>\nLet's print out the best policy",
"from utils import print_table\nprint_table(maze.to_arrows(pi))",
"As you can infer, we can find the path to the terminal state starting from any given state using this policy.\nAll maze problems can be solved by formulating them as MDPs.\nPOMDP\nTwo-state POMDP\nLet's consider a problem where we have two doors, one to our left and one to our right.\nOne of these doors opens to a room with a tiger in it, and the other one opens to an empty hall.\n<br>\nWe will call our two states 0 and 1 for left and right respectively.\n<br>\nThe possible actions we can take are as follows:\n<br>\n1. Open-left: Open the left door.\nRepresented by 0.\n2. Open-right: Open the right door.\nRepresented by 1.\n3. Listen: Listen carefully to one side and possibly hear the tiger breathing.\nRepresented by 2.\n<br>\nThe possible observations we can get are as follows:\n<br>\n1. TL: Tiger seems to be at the left door.\n2. TR: Tiger seems to be at the right door.\n<br>\nThe reward function is as follows:\n<br>\nWe get +10 reward for opening the door to the empty hall and we get -100 reward for opening the other door and setting the tiger free.\n<br>\nListening costs us -1 reward.\n<br>\nWe want to minimize our chances of setting the tiger free.\nOur transition probabilities can be defined as:\n<br>\n<br>\nAction 0 (Open left door)\n$\\\n P(0) = \n \\left[ {\\begin{array}{cc}\n 0.5 & 0.5 \\\n 0.5 & 0.5 \\\n \\end{array}}\\right] \\\n \\\n $\nAction 1 (Open right door)\n$\\\n P(1) = \n \\left[ {\\begin{array}{cc}\n 0.5 & 0.5 \\\n 0.5 & 0.5 \\\n \\end{array}}\\right] \\\n \\\n $\nAction 2 (Listen)\n$\\\n P(2) = \n \\left[ {\\begin{array}{cc}\n 1.0 & 0.0 \\\n 0.0 & 1.0 \\\n \\end{array}}\\right] \\\n \\\n $\n<br>\n<br>\nOur observation probabilities can be defined as:\n<br>\n<br>\n$\\\n O(0) = \n \\left[ {\\begin{array}{ccc}\n Open left & TL & TR \\\n Tiger: left & 0.5 & 0.5 \\\n Tiger: right & 0.5 & 0.5 \\\n \\end{array}}\\right] \\\n \\\n $\n$\\\n O(1) = \n \\left[ {\\begin{array}{ccc}\n Open right & TL & TR \\\n Tiger: left & 0.5 & 0.5 \\\n Tiger: right & 
0.5 & 0.5 \\\n \\end{array}}\\right] \\\n \\\n $\n$\\\n O(2) = \n \\left[ {\\begin{array}{ccc}\n Listen & TL & TR \\\n Tiger: left & 0.85 & 0.15 \\\n Tiger: right & 0.15 & 0.85 \\\n \\end{array}}\\right] \\\n \\\n $\n<br>\n<br>\nThe rewards of this POMDP are defined as:\n<br>\n<br>\n$\\\n R(0) = \n \\left[ {\\begin{array}{cc}\n Openleft & Reward \\\n Tiger: left & -100 \\\n Tiger: right & +10 \\\n \\end{array}}\\right] \\\n \\\n $\n$\\\n R(1) = \n \\left[ {\\begin{array}{cc}\n Openright & Reward \\\n Tiger: left & +10 \\\n Tiger: right & -100 \\\n \\end{array}}\\right] \\\n \\\n $\n$\\\n R(2) = \n \\left[ {\\begin{array}{cc}\n Listen & Reward \\\n Tiger: left & -1 \\\n Tiger: right & -1 \\\n \\end{array}}\\right] \\\n \\\n $\n<br>\nBased on these matrices, we will initialize our variables.\nLet's first define our transition state.",
"t_prob = [[[0.5, 0.5], \n [0.5, 0.5]], \n \n [[0.5, 0.5], \n [0.5, 0.5]], \n \n [[1.0, 0.0], \n [0.0, 1.0]]]",
"Followed by the observation model.",
"e_prob = [[[0.5, 0.5], \n [0.5, 0.5]], \n \n [[0.5, 0.5], \n [0.5, 0.5]], \n \n [[0.85, 0.15], \n [0.15, 0.85]]]",
"And the reward model.",
"rewards = [[-100, 10], \n [10, -100], \n [-1, -1]]",
"Let's now define our states, observations and actions.\n<br>\nWe will use gamma = 0.95 for this example.\n<br>",
"# 0: open-left, 1: open-right, 2: listen\nactions = ('0', '1', '2')\n# 0: left, 1: right\nstates = ('0', '1')\n\ngamma = 0.95",
"We have all the required variables to instantiate an object of the POMDP class.",
"pomdp = POMDP(actions, t_prob, e_prob, rewards, states, gamma)",
"We can now find the utility function by running pomdp_value_iteration on our pomdp object.",
"utility = pomdp_value_iteration(pomdp, epsilon=3)\nutility\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef plot_utility(utility):\n open_left = utility['0'][0]\n open_right = utility['1'][0]\n listen_left = utility['2'][0]\n listen_right = utility['2'][-1]\n left = (open_left[0] - listen_left[0]) / (open_left[0] - listen_left[0] + listen_left[1] - open_left[1])\n right = (open_right[0] - listen_right[0]) / (open_right[0] - listen_right[0] + listen_right[1] - open_right[1])\n \n colors = ['g', 'b', 'k']\n for action in utility:\n for value in utility[action]:\n plt.plot(value, color=colors[int(action)])\n plt.vlines([left, right], -10, 35, linestyles='dashed', colors='c')\n plt.ylim(-10, 35)\n plt.xlim(0, 1)\n plt.text(left/2 - 0.35, 30, 'open-left')\n plt.text((right + left)/2 - 0.04, 30, 'listen')\n plt.text((right + 1)/2 + 0.22, 30, 'open-right')\n plt.show()\n\nplot_utility(utility)",
"Hence, we get a piecewise-continuous utility function consistent with the given POMDP."
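The belief thresholds visible in the plot come from Bayes' rule: every `listen` observation multiplies the current belief by the observation likelihoods (the 0.85/0.15 entries of $O(2)$ above) and renormalizes. Here is a minimal pure-Python sketch of that update, independent of the `POMDP` class (the function name is ours, not the library's):

```python
# Bayesian belief update for the tiger POMDP's 'listen' action.
# b is P(tiger is behind the left door); per O(2) above,
# P(hear-left | tiger-left) = 0.85 and P(hear-left | tiger-right) = 0.15.
def update_belief(b, heard_left, p_correct=0.85):
    like_left = p_correct if heard_left else 1 - p_correct
    like_right = (1 - p_correct) if heard_left else p_correct
    numerator = like_left * b
    return numerator / (numerator + like_right * (1 - b))

# Starting from a uniform belief, two consistent 'hear left' observations
# push the belief well past the open-left threshold in the plot.
b = 0.5
b = update_belief(b, heard_left=True)  # belief rises above 0.5
b = update_belief(b, heard_left=True)  # and keeps rising
```

This is why listening is worth its -1 cost: each observation sharpens the belief until opening a door becomes the better expected action.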
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n
|
site/en-snapshot/lite/models/convert/metadata_writer_tutorial.ipynb
|
apache-2.0
|
[
"Copyright 2021 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TensorFlow Lite Metadata Writer API\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/models/convert/metadata_writer_tutorial\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/convert/metadata_writer_tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/convert/metadata_writer_tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/models/convert/metadata_writer_tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n\n</table>\n\nTensorFlow Lite Model Metadata is a standard model description format. It contains rich semantics for general model information, inputs/outputs, and associated files, which makes the model self-descriptive and exchangeable.\nModel Metadata is currently used in the following two primary use cases:\n1. Enable easy model inference using TensorFlow Lite Task Library and codegen tools. Model Metadata contains the mandatory information required during inference, such as label files in image classification, sampling rate of the audio input in audio classification, and tokenizer type to process input string in Natural Language models.\n\nEnable model creators to include documentation, such as description of model inputs/outputs or how to use the model. 
Model users can view this documentation via visualization tools such as Netron.\n\nTensorFlow Lite Metadata Writer API provides an easy-to-use API to create Model Metadata for popular ML tasks supported by the TFLite Task Library. This notebook shows examples of how the metadata should be populated for the following tasks:\n\nImage classifiers\nObject detectors\nImage segmenters\nNatural language classifiers\nAudio classifiers\n\nMetadata writers for BERT natural language classifiers and BERT question answerers are coming soon.\nIf you want to add metadata for use cases that are not supported, please use the Flatbuffers Python API. See the tutorials here.\nPrerequisites\nInstall the TensorFlow Lite Support Pypi package.",
"!pip install tflite-support-nightly",
"Create Model Metadata for Task Library and Codegen\n<a name=image_classifiers></a>\nImage classifiers\nSee the image classifier model compatibility requirements for more details about the supported model format.\nStep 1: Import the required packages.",
"from tflite_support.metadata_writers import image_classifier\nfrom tflite_support.metadata_writers import writer_utils",
"Step 2: Download the example image classifier, mobilenet_v2_1.0_224.tflite, and the label file.",
"!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/mobilenet_v2_1.0_224.tflite -o mobilenet_v2_1.0_224.tflite\n!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/labels.txt -o mobilenet_labels.txt",
"Step 3: Create metadata writer and populate.",
"ImageClassifierWriter = image_classifier.MetadataWriter\n_MODEL_PATH = \"mobilenet_v2_1.0_224.tflite\"\n# Task Library expects label files that are in the same format as the one below.\n_LABEL_FILE = \"mobilenet_labels.txt\"\n_SAVE_TO_PATH = \"mobilenet_v2_1.0_224_metadata.tflite\"\n# Normalization parameters are required when preprocessing the image. They are\n# optional if the image pixel values are in range of [0, 255] and the input\n# tensor is quantized to uint8. See the introduction for normalization and\n# quantization parameters below for more details.\n# https://www.tensorflow.org/lite/models/convert/metadata#normalization_and_quantization_parameters\n_INPUT_NORM_MEAN = 127.5\n_INPUT_NORM_STD = 127.5\n\n# Create the metadata writer.\nwriter = ImageClassifierWriter.create_for_inference(\n    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],\n    [_LABEL_FILE])\n\n# Verify the metadata generated by metadata writer.\nprint(writer.get_metadata_json())\n\n# Populate the metadata into the model.\nwriter_utils.save_file(writer.populate(), _SAVE_TO_PATH)",
"<a name=object_detectors></a>\nObject detectors\nSee the object detector model compatibility requirements for more details about the supported model format.\nStep 1: Import the required packages.",
"from tflite_support.metadata_writers import object_detector\nfrom tflite_support.metadata_writers import writer_utils",
"Step 2: Download the example object detector, ssd_mobilenet_v1.tflite, and the label file.",
"!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/ssd_mobilenet_v1.tflite -o ssd_mobilenet_v1.tflite\n!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/labelmap.txt -o ssd_mobilenet_labels.txt",
"Step 3: Create metadata writer and populate.",
"ObjectDetectorWriter = object_detector.MetadataWriter\n_MODEL_PATH = \"ssd_mobilenet_v1.tflite\"\n# Task Library expects label files that are in the same format as the one below.\n_LABEL_FILE = \"ssd_mobilenet_labels.txt\"\n_SAVE_TO_PATH = \"ssd_mobilenet_v1_metadata.tflite\"\n# Normalization parameters are required when preprocessing the image. They are\n# optional if the image pixel values are in range of [0, 255] and the input\n# tensor is quantized to uint8. See the introduction for normalization and\n# quantization parameters below for more details.\n# https://www.tensorflow.org/lite/models/convert/metadata#normalization_and_quantization_parameters\n_INPUT_NORM_MEAN = 127.5\n_INPUT_NORM_STD = 127.5\n\n# Create the metadata writer.\nwriter = ObjectDetectorWriter.create_for_inference(\n    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],\n    [_LABEL_FILE])\n\n# Verify the metadata generated by metadata writer.\nprint(writer.get_metadata_json())\n\n# Populate the metadata into the model.\nwriter_utils.save_file(writer.populate(), _SAVE_TO_PATH)",
"<a name=image_segmenters></a>\nImage segmenters\nSee the image segmenter model compatibility requirements for more details about the supported model format.\nStep 1: Import the required packages.",
"from tflite_support.metadata_writers import image_segmenter\nfrom tflite_support.metadata_writers import writer_utils",
"Step 2: Download the example image segmenter, deeplabv3.tflite, and the label file.",
"!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/deeplabv3.tflite -o deeplabv3.tflite\n!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/labelmap.txt -o deeplabv3_labels.txt",
"Step 3: Create metadata writer and populate.",
"ImageSegmenterWriter = image_segmenter.MetadataWriter\n_MODEL_PATH = \"deeplabv3.tflite\"\n# Task Library expects label files that are in the same format as the one below.\n_LABEL_FILE = \"deeplabv3_labels.txt\"\n_SAVE_TO_PATH = \"deeplabv3_metadata.tflite\"\n# Normalization parameters are required when preprocessing the image. They are\n# optional if the image pixel values are in range of [0, 255] and the input\n# tensor is quantized to uint8. See the introduction for normalization and\n# quantization parameters below for more details.\n# https://www.tensorflow.org/lite/models/convert/metadata#normalization_and_quantization_parameters\n_INPUT_NORM_MEAN = 127.5\n_INPUT_NORM_STD = 127.5\n\n# Create the metadata writer.\nwriter = ImageSegmenterWriter.create_for_inference(\n    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],\n    [_LABEL_FILE])\n\n# Verify the metadata generated by metadata writer.\nprint(writer.get_metadata_json())\n\n# Populate the metadata into the model.\nwriter_utils.save_file(writer.populate(), _SAVE_TO_PATH)",
"<a name=nl_classifiers></a>\nNatural language classifiers\nSee the natural language classifier model compatibility requirements for more details about the supported model format.\nStep 1: Import the required packages.",
"from tflite_support.metadata_writers import nl_classifier\nfrom tflite_support.metadata_writers import metadata_info\nfrom tflite_support.metadata_writers import writer_utils",
"Step 2: Download the example natural language classifier, movie_review.tflite, the label file, and the vocab file.",
"!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/movie_review.tflite -o movie_review.tflite\n!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/labels.txt -o movie_review_labels.txt\n!curl -L https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/nl_classifier/vocab.txt -o movie_review_vocab.txt",
"Step 3: Create metadata writer and populate.",
"NLClassifierWriter = nl_classifier.MetadataWriter\n_MODEL_PATH = \"movie_review.tflite\"\n# Task Library expects label files and vocab files that are in the same formats\n# as the ones below.\n_LABEL_FILE = \"movie_review_labels.txt\"\n_VOCAB_FILE = \"movie_review_vocab.txt\"\n# NLClassifier supports tokenizing input strings using the regex tokenizer. See\n# more details about how to set up RegexTokenizer below:\n# https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/metadata_writers/metadata_info.py#L130\n_DELIM_REGEX_PATTERN = r\"[^\\w\\']+\"\n_SAVE_TO_PATH = \"movie_review_metadata.tflite\"\n\n# Create the metadata writer.\nwriter = nl_classifier.MetadataWriter.create_for_inference(\n    writer_utils.load_file(_MODEL_PATH),\n    metadata_info.RegexTokenizerMd(_DELIM_REGEX_PATTERN, _VOCAB_FILE),\n    [_LABEL_FILE])\n\n# Verify the metadata generated by metadata writer.\nprint(writer.get_metadata_json())\n\n# Populate the metadata into the model.\nwriter_utils.save_file(writer.populate(), _SAVE_TO_PATH)",
"<a name=audio_classifiers></a>\nAudio classifiers\nSee the audio classifier model compatibility requirements for more details about the supported model format.\nStep 1: Import the required packages.",
"from tflite_support.metadata_writers import audio_classifier\nfrom tflite_support.metadata_writers import metadata_info\nfrom tflite_support.metadata_writers import writer_utils",
"Step 2: Download the example audio classifier, yamnet.tflite, and the label file.",
"!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_wavin_quantized_mel_relu6.tflite -o yamnet.tflite\n!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_521_labels.txt -o yamnet_labels.txt\n",
"Step 3: Create metadata writer and populate.",
"AudioClassifierWriter = audio_classifier.MetadataWriter\n_MODEL_PATH = \"yamnet.tflite\"\n# Task Library expects label files that are in the same format as the one below.\n_LABEL_FILE = \"yamnet_labels.txt\"\n# Expected sampling rate of the input audio buffer.\n_SAMPLE_RATE = 16000\n# Expected number of channels of the input audio buffer. Note, the Task Library\n# only supports a single channel so far.\n_CHANNELS = 1\n_SAVE_TO_PATH = \"yamnet_metadata.tflite\"\n\n# Create the metadata writer.\nwriter = AudioClassifierWriter.create_for_inference(\n    writer_utils.load_file(_MODEL_PATH), _SAMPLE_RATE, _CHANNELS, [_LABEL_FILE])\n\n# Verify the metadata generated by metadata writer.\nprint(writer.get_metadata_json())\n\n# Populate the metadata into the model.\nwriter_utils.save_file(writer.populate(), _SAVE_TO_PATH)",
"Create Model Metadata with semantic information\nYou can fill in more descriptive information about the model and each tensor through the Metadata Writer API to help improve model understanding. This can be done through the 'create_from_metadata_info' method in each metadata writer. In general, you can fill in data through the parameters of 'create_from_metadata_info', i.e. general_md, input_md, and output_md. See the example below to create rich Model Metadata for image classifiers.\nStep 1: Import the required packages.",
"from tflite_support.metadata_writers import image_classifier\nfrom tflite_support.metadata_writers import metadata_info\nfrom tflite_support.metadata_writers import writer_utils\nfrom tflite_support import metadata_schema_py_generated as _metadata_fb",
"Step 2: Download the example image classifier, mobilenet_v2_1.0_224.tflite, and the label file.",
"!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/mobilenet_v2_1.0_224.tflite -o mobilenet_v2_1.0_224.tflite\n!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/labels.txt -o mobilenet_labels.txt",
"Step 3: Create model and tensor information.",
"model_buffer = writer_utils.load_file(\"mobilenet_v2_1.0_224.tflite\")\n\n# Create general model information.\ngeneral_md = metadata_info.GeneralMd(\n    name=\"ImageClassifier\",\n    version=\"v1\",\n    description=(\"Identify the most prominent object in the image from a \"\n                 \"known set of categories.\"),\n    author=\"TensorFlow Lite\",\n    licenses=\"Apache License. Version 2.0\")\n\n# Create input tensor information.\ninput_md = metadata_info.InputImageTensorMd(\n    name=\"input image\",\n    description=(\"Input image to be classified. The expected image is \"\n                 \"224 x 224, with three channels (red, green, and blue) per \"\n                 \"pixel. Each element in the tensor is a value between min and \"\n                 \"max, where (per-channel) min is [0] and max is [255].\"),\n    norm_mean=[127.5],\n    norm_std=[127.5],\n    color_space_type=_metadata_fb.ColorSpaceType.RGB,\n    tensor_type=writer_utils.get_input_tensor_types(model_buffer)[0])\n\n# Create output tensor information.\noutput_md = metadata_info.ClassificationTensorMd(\n    name=\"probability\",\n    description=\"Probabilities of the 1001 labels respectively.\",\n    label_files=[\n        metadata_info.LabelFileMd(file_path=\"mobilenet_labels.txt\",\n                                  locale=\"en\")\n    ],\n    tensor_type=writer_utils.get_output_tensor_types(model_buffer)[0])",
"Step 4: Create metadata writer and populate.",
"ImageClassifierWriter = image_classifier.MetadataWriter\n# Output path for the model with metadata (otherwise _SAVE_TO_PATH is\n# undefined when this section is run on its own).\n_SAVE_TO_PATH = \"mobilenet_v2_1.0_224_metadata.tflite\"\n\n# Create the metadata writer.\nwriter = ImageClassifierWriter.create_from_metadata_info(\n    model_buffer, general_md, input_md, output_md)\n\n# Verify the metadata generated by metadata writer.\nprint(writer.get_metadata_json())\n\n# Populate the metadata into the model.\nwriter_utils.save_file(writer.populate(), _SAVE_TO_PATH)",
"Read the metadata populated to your model.\nYou can display the metadata and associated files in a TFLite model through the following code:",
"from tflite_support import metadata\n\ndisplayer = metadata.MetadataDisplayer.with_model_file(\"mobilenet_v2_1.0_224_metadata.tflite\")\nprint(\"Metadata populated:\")\nprint(displayer.get_metadata_json())\n\nprint(\"Associated file(s) populated:\")\nfor file_name in displayer.get_packed_associated_file_list():\n print(\"file name: \", file_name)\n print(\"file content:\")\n print(displayer.get_associated_file_buffer(file_name))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/bigquery-oreilly-book
|
05_devel/magics.ipynb
|
apache-2.0
|
[
"Example of using the Jupyter magics for BigQuery\nThis is the recommended way to programmatically access BigQuery when you are in a notebook environment.\nIf you need to create a Python code that works outside of a notebook environment (e.g. in a production environment), you should use code as shown in the Google Cloud Client Library example notebook in this directory.\nInstall library and extensions if needed",
"#!pip install google-cloud-bigquery\n%load_ext google.cloud.bigquery\n\nPROJECT='cloud-training-demos' # CHANGE THIS",
"Run a query",
"%%bigquery --project $PROJECT\nSELECT \n start_station_name \n , AVG(duration) as duration\n , COUNT(duration) as num_trips\nFROM `bigquery-public-data`.london_bicycles.cycle_hire \nGROUP BY start_station_name\nORDER BY num_trips DESC\nLIMIT 5",
"Run a parameterized query",
"PARAMS = {\"num_stations\": 3}\n\n%%bigquery --project $PROJECT --params $PARAMS\nSELECT \n start_station_name \n , AVG(duration) as duration\n , COUNT(duration) as num_trips\nFROM `bigquery-public-data`.london_bicycles.cycle_hire \nGROUP BY start_station_name\nORDER BY num_trips DESC\nLIMIT @num_stations",
"Into a dataframe",
"%%bigquery df --project $PROJECT\nSELECT \n start_station_name \n , AVG(duration) as duration\n , COUNT(duration) as num_trips\nFROM `bigquery-public-data`.london_bicycles.cycle_hire \nGROUP BY start_station_name\nORDER BY num_trips DESC\n\ndf.describe()\n\ndf.plot.scatter('duration', 'num_trips');",
"Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tpin3694/tpin3694.github.io
|
python/pandas_apply_operations_to_groups.ipynb
|
mit
|
[
"Title: Apply Operations To Groups In Pandas\nSlug: pandas_apply_operations_to_groups\nSummary: Apply Operations To Groups In Pandas\nDate: 2016-05-01 12:00\nCategory: Python\nTags: Data Wrangling\nAuthors: Chris Albon \nPreliminaries",
"# import modules\nimport pandas as pd\n\n# Create dataframe\nraw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'], \n 'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'], \n 'name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'], \n 'preTestScore': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],\n 'postTestScore': [25, 94, 57, 62, 70, 25, 94, 57, 62, 70, 62, 70]}\ndf = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'name', 'preTestScore', 'postTestScore'])\ndf\n\n# Create a groupby variable that groups preTestScores by regiment\ngroupby_regiment = df['preTestScore'].groupby(df['regiment'])\ngroupby_regiment",
"\"This grouped variable is now a GroupBy object. It has not actually computed anything yet except for some intermediate data about the group key df['key1']. The idea is that this object has all of the information needed to then apply some operation to each of the groups.\" - Python for Data Analysis\nView a grouping\nUse list() to show what a grouping looks like",
"list(df['preTestScore'].groupby(df['regiment']))",
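The "intermediate data about the group key" that the quote above describes is essentially a mapping from each key to the row positions belonging to it, with no aggregation performed yet. A hedged pure-Python sketch of that idea (these names are illustrative, not pandas internals):

```python
from collections import defaultdict

# A toy illustration of what a GroupBy object conceptually stores:
# group key -> row positions, with nothing computed yet.
regiments = ['Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Scouts']
scores = [4, 24, 3, 4, 2]

groups = defaultdict(list)
for i, key in enumerate(regiments):
    groups[key].append(i)  # only bookkeeping, no aggregation

# Aggregation happens later, on demand, once per group:
means = {key: sum(scores[i] for i in idx) / len(idx)
         for key, idx in groups.items()}
```

Calling `.mean()`, `.describe()`, etc. on a real GroupBy object plays the role of the second step here.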
"Descriptive statistics by group",
"df['preTestScore'].groupby(df['regiment']).describe()",
"Mean of each regiment's preTestScore",
"groupby_regiment.mean()",
"Mean preTestScores grouped by regiment and company",
"df['preTestScore'].groupby([df['regiment'], df['company']]).mean()",
"Mean preTestScores grouped by regiment and company without hierarchical indexing",
"df['preTestScore'].groupby([df['regiment'], df['company']]).mean().unstack()",
"Group the entire dataframe by regiment and company",
"df.groupby(['regiment', 'company']).mean()",
"Number of observations in each regiment and company",
"df.groupby(['regiment', 'company']).size()",
"Iterate operations over groups",
"# Group the dataframe by regiment, and for each regiment,\nfor name, group in df.groupby('regiment'): \n # print the name of the regiment\n print(name)\n # print the data of that regiment\n print(group)",
"Group by columns\nSpecifically in this case: group by the data types of the columns (i.e. axis=1) and then use list() to view what that grouping looks like",
"list(df.groupby(df.dtypes, axis=1))",
"In the dataframe \"df\", group by regiment, take the mean values of the other variables for those groups, then display them with the prefix \"mean_\"",
"df.groupby('regiment').mean().add_prefix('mean_')",
"Create a function to get the stats of a group",
"def get_stats(group):\n return {'min': group.min(), 'max': group.max(), 'count': group.count(), 'mean': group.mean()}",
"Create bins and bin up postTestScore by those bins",
"bins = [0, 25, 50, 75, 100]\ngroup_names = ['Low', 'Okay', 'Good', 'Great']\ndf['categories'] = pd.cut(df['postTestScore'], bins, labels=group_names)",
"Apply the get_stats() function to each postTestScore bin",
"df['postTestScore'].groupby(df['categories']).apply(get_stats).unstack()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fsilva/deputado-histogramado
|
notebooks/Deputado-Histogramado-4.ipynb
|
gpl-3.0
|
[
"Deputado Histogramado\nexpressao.xyz/deputado/\nHow to process the sessions of the Portuguese Parliament\nContents\n\nGathering the dataset\nCounting the most common words\nMaking histograms\nGeographic representations\nSimplifying the dataset and exporting to expressao.xyz/deputado/\n\nWhat happened in the more than 4000 debate sessions of the Portuguese Parliament held since 1976? \nIn this notebook we will try to visualize what went on in the simplest possible way - by counting words and making plots.\nTo obtain the texts of all the sessions we will use demo.cratica.org, where we can easily access every parliamentary session from 1976 to 2015. Then, with a bit of python, pandas and matplotlib, we will analyse what happened.\nTo run this notebook you will need to download it and open it with Jupyter Notebooks (the Anaconda distribution makes it easy to install all the necessary tools - https://www.continuum.io/downloads)\nPart 4 - Regionalization\nLet us try to show which districts of Portugal are mentioned most often in Parliament.\nFor that we will use two very useful and easy-to-use python packages: geopy to look up the locations of the cities, and Basemap for the plotting.\nLet us start by locating all the districts/cities. \nNote that the geocode function accepts addresses and returns GPS coordinates. (!!)",
"# district capitals: find latitude and longitude\n\nimport geopy\nfrom geopy.geocoders import Nominatim\ngeolocator = Nominatim()\n\nlong = []\nlat = []\ncities = ['Açores','Madeira','Aveiro','Beja','Braga','Bragança','Castelo Branco','Coimbra','Évora','Faro','Guarda','Leiria','Lisboa','Portalegre','Porto','Santarém','Setúbal','Viana do Castelo','Vila Real','Viseu']\n#cities = ['Aveiro','Braga','Porto','Lisboa']\n\nfor city in cities:\n    location = geolocator.geocode(city+',Portugal')\n    print(city + ':' + location.address + ' ('+str(location.longitude)+','+str(location.latitude)+')')\n    long.append(location.longitude)\n    lat.append(location.latitude)",
"In the same way as before, let us count every occurrence of each city's name across all the sessions",
"%matplotlib inline\nimport pylab\nimport matplotlib\nimport pandas\nimport numpy\n\n\ndateparse = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d')\nsessoes = pandas.read_csv('sessoes_democratica_org.csv',index_col=0,parse_dates=['data'], date_parser=dateparse)\n\nfrom functools import reduce\n\n# remove false occurrences of 'guarda' and 'porto'\ndef conta_palavras(texto,palavra):\n    return texto.replace('aeroporto','').replace('lopes porto','').replace('guardar','').replace('guardas','').replace('guardado','').replace('aguarda','').replace('vanguarda','').replace('salvaguarda','').replace('guarda nacional','').replace('guarda civil','').count(palavra.lower())\n\ndef conta_ocorrencias(palavra):\n    return numpy.sum(sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavra)))\n    \nprint('Counting...')\ncontagens = []\nfor cidade in cities:\n    contagem = conta_ocorrencias(cidade)\n    print(cidade +' '+str(contagem))\n    contagens.append(contagem)\n\n",
"Now we draw the map and then plot a circle for each city, with colour and size varying with the number of mentions.\nTo draw the Portuguese districts we need a dataset with their 'shapefiles': get it at http://www.gadm.org/country\nAlternatively, the script also runs without the 'shp_info = ...' line.",
"from mpl_toolkits.basemap import Basemap\npylab.figure(figsize=(20,10))\n#map = Basemap(projection='merc',lat_0=40,lon_0=0,resolution='l',llcrnrlon=-10.5, llcrnrlat=36,urcrnrlon=-5.5, urcrnrlat=43) # mainland Portugal\nmap = Basemap(projection='merc',lat_0=40,lon_0=0,resolution='l',llcrnrlon=-32, llcrnrlat=31,urcrnrlon=-5.5, urcrnrlat=43)\n#shp_info = map.readshapefile('PRT_adm0','',drawbounds=True)# country\nshp_info = map.readshapefile('PRT_adm1','',drawbounds=True)# districts\n#shp_info = map.readshapefile('PRT_adm2','',drawbounds=True)# municipalities\n#shp_info = map.readshapefile('PRT_adm3','',drawbounds=True)# parishes\nmap.drawcoastlines(linewidth=0.25)\nmap.fillcontinents(color='#00ff00',lake_color='aqua')\nmap.drawmapboundary(fill_color='aqua')\n\ncontagens = numpy.array(contagens)\nsize = 10+1000*contagens/max(contagens)\ncolor = contagens/max(contagens)\n\nx,y = map(long, lat)\nmap.scatter(x, y,s=size, c=-color, zorder=10, cmap=pylab.autumn())\n\npylab.show()",
"We can clearly see that Madeira, Lisboa and Porto are the most discussed regions, followed by Açores, Braga and Coimbra. We already had these numbers in the table above, but now they are visual. Note that the counts for 'Porto' may be inflated, since the word is used in several different contexts that are hard to isolate/reject.\nCounting and plotting the municipalities of each district is left as an exercise!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ProjectQ-Framework/ProjectQ
|
examples/compiler_tutorial.ipynb
|
apache-2.0
|
[
"ProjectQ Compiler Tutorial\nThe aim of this short tutorial is to give a first introduction to the ProjectQ compiler and show the use of different preconfigured setups. In particular, we will show how to specify the gate set to which the compiler should translate a quantum program. A more extended tutorial will follow soon. Please check out our ProjectQ paper for an introduction to the basic concepts behind our compiler. If you are interested in how to compile to restricted hardware with, e.g., only nearest-neighbour connectivity, please have a look at the mapper_tutorial.ipynb afterwards.\nThe default compiler\nTo compile a quantum program, we begin by creating a compiler called MainEngine and specify the backend for which the compiler should translate the program. For the purpose of this tutorial, we will use a CommandPrinter as a backend to display the compiled algorithm. It works the same for all other backends such as, e.g., the simulator or an interface to real hardware.\nLet's write a small program:",
"import projectq\nfrom projectq.backends import CommandPrinter\nfrom projectq.meta import Control\nfrom projectq.ops import All, CNOT, Measure, QFT, QubitOperator, Rx, TimeEvolution, X\n\n# create the compiler and specify the backend:\neng = projectq.MainEngine(backend=CommandPrinter(accept_input=False))\n\ndef my_quantum_program(eng):\n qubit = eng.allocate_qubit()\n qureg = eng.allocate_qureg(3)\n with Control(eng, qubit):\n hamiltonian = 0.5 * QubitOperator(\"X0 Y1 Z2\")\n TimeEvolution(0.1, hamiltonian) | qureg\n QFT | qureg\n Rx(0.1) | qubit\n CNOT | (qubit, qureg[0])\n All(Measure) | qureg\n Measure | qubit\n eng.flush()\nmy_quantum_program(eng)",
"In the above example, the compiler did nothing because the default compiler (when MainEngine is called without a specific engine_list parameter) translates the individual gates to the gate set supported by the backend. In our case, the backend is a CommandPrinter which supports any type of gate.\nWe can check what happens when the backend is a Simulator by inserting a CommandPrinter as the last compiler engine before the backend, so that every command is printed before it gets sent to the Simulator:",
"from projectq.backends import Simulator\nfrom projectq.setups.default import get_engine_list\n\n# Use the default compiler engines with a CommandPrinter in the end:\nengines2 = get_engine_list() + [CommandPrinter()]\n\neng2 = projectq.MainEngine(backend=Simulator(), engine_list=engines2)\nmy_quantum_program(eng2)",
"As one can see, in this case the compiler had to do a little work because the Simulator does not support a QFT gate. Therefore, it automatically replaces the QFT gate by a sequence of lower-level gates.\nUsing a provided setup and specifying a particular gate set\nProjectQ's compiler is fully modular, so one can easily build a special-purpose compiler. All one has to do is compose a list of compiler engines through which the individual operations will pass in serial order and give this compiler list to the MainEngine as the engine_list parameter.\nFor common compiler needs we try to provide predefined \"setups\" which contain a function get_engine_list which returns a suitable list of compiler engines for the MainEngine. All of our current setups can be found in projectq.setups. For example, there is a setup called restrictedgateset which allows compiling to common restricted gate sets. This is useful, for example, to obtain resource estimates for running a given program on actual quantum hardware which does not support every quantum gate. Let's look at an example:",
"import projectq\nfrom projectq.setups import restrictedgateset\nfrom projectq.ops import All, CNOT, H, Measure, QFT, Rx, Ry, Rz, Toffoli\nengine_list3 = restrictedgateset.get_engine_list(one_qubit_gates=\"any\",\n                                                 two_qubit_gates=(CNOT,),\n                                                 other_gates=(Toffoli,))\neng3 = projectq.MainEngine(backend=CommandPrinter(accept_input=False),\n                           engine_list=engine_list3)\n\ndef my_second_program(eng):\n    qubit = eng.allocate_qubit()\n    qureg = eng.allocate_qureg(3)\n    H | qubit\n    Rx(0.3) | qubit\n    Toffoli | (qureg[:-1], qureg[2])\n    QFT | qureg\n    All(Measure) | qureg\n    Measure | qubit\n    eng.flush()\nmy_second_program(eng3)",
"Please have a look at the documentation of the restrictedgateset setup for details. The above compiler compiles the circuit to gates consisting of any single-qubit gate, the CNOT gate and the Toffoli gate. The gate specifications can either be a gate class, e.g., Rz, or a specific instance, Rz(math.pi). A smaller but still universal gate set would be, for example, CNOT together with Rz and Ry:",
"engine_list4 = restrictedgateset.get_engine_list(one_qubit_gates=(Rz, Ry),\n two_qubit_gates=(CNOT,),\n other_gates=())\neng4 = projectq.MainEngine(backend=CommandPrinter(accept_input=False),\n engine_list=engine_list4)\nmy_second_program(eng4)",
"As mentioned in the documentation of this setup, one cannot (yet) choose an arbitrary gate set but there is a limited choice. If it doesn't work for a specified gate set, the compiler will either raise a NoGateDecompositionError or a RuntimeError: maximum recursion depth exceeded..., which means that for this particular choice of gate set, one would be required to write more decomposition rules to make it work. Also, for some choices of gate set there might be compiler engines that produce more optimized code.\nError messages\nBy default the MainEngine shortens error messages, as most often this is enough information to find the error. To see the full error message one can set verbose=True, i.e.:\nMainEngine(verbose=True)\nDIY: Build a compiler engine list for a specific gate set\nIn this short example, we want to look at how to build our own compiler engine_list for compiling to a restricted gate set. Please have a look at the predefined setups for guidance.\nOne of the important compiler engines to change the gate set is the AutoReplacer. It queries the following engines to check if a particular gate is supported and, if not, it will use decomposition rules to change this gate to supported ones. Most engines just forward this query to the next engine until the backend is reached. The engine after an AutoReplacer is usually a TagRemover which removes previous tags in commands such as, e.g., ComputeTag, which allows a following LocalOptimizer to perform more optimizations (otherwise it would only optimize within a \"compute\" section and not over the boundaries).\nTo specify different intermediate gate sets, one can insert an InstructionFilter into the engine_list after the AutoReplacer in order to return True or False for the queries of the AutoReplacer asking if a specific gate is supported. \nHere is a minimal example of a compiler which compiles to CNOT and single-qubit gates but doesn't perform optimizations (which could be achieved using the LocalOptimizer). 
For more optimized versions, have a look at the restrictedgateset setup:",
"import projectq\nfrom projectq.backends import CommandPrinter\nfrom projectq.cengines import AutoReplacer, DecompositionRuleSet, InstructionFilter\nfrom projectq.ops import All, ClassicalInstructionGate, Measure, QFT, Toffoli, X\nimport projectq.setups.decompositions\n\n# Write a function which, given a Command object, returns whether the command is supported:\ndef is_supported(eng, cmd):\n    if isinstance(cmd.gate, ClassicalInstructionGate):\n        # This is required to allow Measure, Allocate, Deallocate, Flush\n        return True\n    elif isinstance(cmd.gate, X.__class__) and len(cmd.control_qubits) == 1:\n        # Allows a CNOT gate which is an X gate with one control qubit\n        return True\n    elif (len(cmd.control_qubits) == 0 and \n          len(cmd.qubits) == 1 and\n          len(cmd.qubits[0]) == 1):\n        # Gate which has no control qubits, applied to 1 qureg consisting of 1 qubit\n        return True\n    else:\n        return False\n\nrule_set = DecompositionRuleSet(modules=[projectq.setups.decompositions])\nengine_list5 = [AutoReplacer(rule_set), InstructionFilter(is_supported)]\neng5 = projectq.MainEngine(backend=CommandPrinter(accept_input=False),\n                           engine_list=engine_list5)\n\ndef my_third_program(eng):\n    qubit = eng.allocate_qubit()\n    qureg = eng.allocate_qureg(3)\n    Toffoli | (qureg[:2], qureg[2])\n    QFT | qureg\n    All(Measure) | qureg\n    Measure | qubit\n    eng.flush()\nmy_third_program(eng5)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
catalystcomputing/DSIoT-Python-sessions
|
Session4/code/01 Loading EPOS Category Data for modelling.ipynb
|
apache-2.0
|
[
"EPOS Data Set composition\nTerms used for columns in the data\n\nBarcode: https://en.wikipedia.org/wiki/Barcode\nDescription: product description\nUnitRRP: Product's recommended retail price/selling price\nCategoryID: Surrogate key for Category https://en.wikipedia.org/wiki/Surrogate_key\nCategory: Human-readable product categorisation\n\nData files\ntraining_data.dat\nTraining data: 526 data items with 6 categories.\ntest_data.dat\nTest data: 191 data items with 6 categories.",
"# Imports\nfrom sklearn import metrics\nfrom sklearn.tree import DecisionTreeClassifier\nimport pandas as pd\n\n# Training Data\ntraining_raw = pd.read_table(\"../data/training_data.dat\")\ndf_training = pd.DataFrame(training_raw)\ndf_training.head()\n\n# test Data\ntest_raw = pd.read_table(\"../data/test_data.dat\")\ndf_test = pd.DataFrame(test_raw)\ndf_test.head()\n\n# target names\ntarget_categories = ['Unclassified','Art','Aviation','Boating','Camping /Walking /Climbing','Collecting']\ntarget_values = ['1','528','529','530','531','532']\n\n# features\nfeature_names = ['Barcode','Description','UnitRRP']\n\n# Extract features from panda\ntraining_data = df_training[feature_names].values\ntraining_data[:3]\n\n# Extract target results from panda\ntarget = df_training[\"CategoryID\"].values\n\n# Create classifier class\nmodel_dtc = DecisionTreeClassifier()\n\n# train model\nmodel_dtc.fit(training_data, target)",
"We fail here because the Description column is a string.\nLet's try again without the description.",
"# features\nfeature_names_integers = ['Barcode','UnitRRP']\n\n# Extract features from panda (without description)\ntraining_data_integers = df_training[feature_names_integers].values\ntraining_data_integers[:3]\n\n# train model again\nmodel_dtc.fit(training_data_integers, target)\n\n# Extract test data and test the model\ntest_data_integers = df_test[feature_names_integers].values\ntest_target = df_test[\"CategoryID\"].values\nexpected = test_target\npredicted_dtc = model_dtc.predict(test_data_integers)\n\nprint(metrics.classification_report(expected, predicted_dtc, target_names=target_categories))\n\nprint(metrics.confusion_matrix(expected, predicted_dtc))\n\nmetrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None)\n\npredicted_dtc[:5]",
"Let's try a different classifier\nLinear classifiers (SVM, logistic regression, etc.) with SGD training.",
"from sklearn.linear_model import SGDClassifier\n\n# Create classifier class\nmodel_sgd = SGDClassifier()\n\n# train model again\nmodel_sgd.fit(training_data_integers, target)\n\npredicted_sgd = model_sgd.predict(test_data_integers)\n\nprint(metrics.classification_report(expected, predicted_sgd, target_names=target_categories))\n\nprint(metrics.confusion_matrix(expected, predicted_sgd))\n\nmetrics.accuracy_score(expected, predicted_sgd, normalize=True, sample_weight=None)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NYUDataBootcamp/Projects
|
MBA_S16/Berry-Domenico-Comics.ipynb
|
mit
|
[
"Data Bootcamp - Final Project\nChristine Berry and Christina Domenico\nWe analyzed the character demographics and details from the Marvel and DC universes with a few questions in mind: What are the commonalities between characters in the Marvel and DC comic book universes? How are men and women represented in comic books?\nWe found some interesting trends and outliers with regard to representation. Marvel shows greater diversity in sexual orientation, but overall the two franchises have been quite similar (and encouraging) in introducing and positively representing characters.\nNote: This notebook requires internet access to run",
"import sys                             # system module \nimport pandas as pd                    # data package\nimport matplotlib.pyplot as plt        # graphics module  \nimport datetime as dt                  # date and time module\nimport numpy as np                     # foundation for Pandas  \n\n%matplotlib inline                     \n\n# check versions\nprint('Python version: ', sys.version)\nprint('Pandas version: ', pd.__version__)\nprint('Today: ', dt.date.today())\n\n#Import Marvel and DC Wikia datasets\nmurl1='https://raw.githubusercontent.com/fivethirtyeight/'\nmurl2='data/master/comic-characters/marvel-wikia-data.csv'\nmarvelurl=murl1+murl2\n\ndurl1='https://raw.githubusercontent.com/fivethirtyeight/'\ndurl2='data/master/comic-characters/dc-wikia-data.csv'\ndcurl=durl1+durl2\n\nmarvel = pd.read_csv(marvelurl)\ndc = pd.read_csv(dcurl)\n\nprint(\"Marvel is:\",type(marvel),\"with dimensions:\",marvel.shape)\nprint(\"DC is:\",type(dc),\"with dimensions:\",dc.shape)",
"The search for good, evil, and the gender divide\nWe took a look at how Marvel and DC are divided along the lines of gender and good vs. evil.",
"#Clean up and shape the Marvel dataframe\n\n#Set Alignment to Index\nmarvel_men = marvel[['ALIGN','SEX']]\nmarvel_men=marvel_men.set_index('ALIGN')\n\n#Create separate Male and Female columns\ngender = ['Male','Female']\noldmarvel = marvel_men.copy()\n\nvnames=[]\nfor x in gender:\n newname = x\n vnames.append(newname)\n marvel_men[newname]=marvel_men['SEX'].str.contains(x)*1\n\nmarvel_goodbad=marvel_men[vnames]\n\n#summate indices into the common categories\nmarvel_goodbad=marvel_goodbad.groupby(marvel_goodbad.index).sum()\n\n#do the same for DC\n\n#Set Alignment to Index\ndc_men = dc[['ALIGN','SEX']]\ndc_men=dc_men.set_index('ALIGN')\n\n#Create separate Male and Female columns\ngender = ['Male','Female']\nolddc = dc_men.copy()\n\nvnames=[]\nfor x in gender:\n newname = x\n vnames.append(newname)\n dc_men[newname]=dc_men['SEX'].str.contains(x)*1\n\ndc_goodbad=dc_men[vnames]\n\n#summate indices into the common categories\ndc_goodbad=dc_goodbad.groupby(dc_goodbad.index).sum()\n\n#drop Reformed Characters Column, as there are only 3 from the universe\ndc_goodbad=dc_goodbad.drop('Reformed Criminals')\n\n#plot the findings\nmarvel_goodbad.plot(kind='bar', color=['blue','salmon'],title='Marvel Universe')\n\ndc_goodbad.plot(kind='bar', color=['blue','salmon'], title='DC Universe')",
"We can see that the Marvel and DC universes are each dominated by men. What may be surprising, however, is that in both franchises, bad men dominate the universe. On the female side, there are more good females than bad females.\nProportionally, DC has more equal representation of good vs bad characters, while Marvel has almost twice as many bad male characters as good male characters.\nGood, Evil, and Sexual Orientation",
"#Clean up and shape the Marvel dataframe\n\n#Set Alignment to Index\nmarvel_gsm = marvel[['ALIGN','GSM']]\nmarvel_gsm=marvel_gsm.set_index('ALIGN')\n\n#Create separate GSM columns\ngsm = ['Hetero', 'Bisexual','Transvestites',\n 'Homosexual','Pansexual',\n 'Transgender','Genderfluid']\noldmarvel2 = marvel_gsm.copy()\n\nonames=[]\nfor x in gsm:\n newname5 = x\n onames.append(newname5)\n marvel_gsm[newname5]=marvel_gsm['GSM'].str.contains(x)*1\n\nmarvel_orient=marvel_gsm[onames]\nmarvel_orient.head()\n\n#summate indices into the common categories\nmarvel_orient=marvel_orient.groupby(marvel_orient.index).sum()\n\n#Clean up and shape the DC dataframe\n\n#Set Alignment to Index\ndc_gsm = dc[['ALIGN','GSM']]\ndc_gsm=dc_gsm.set_index('ALIGN')\n\n#Create separate GSM columns\ngsm = ['Hetero', 'Bisexual','Transvestites',\n 'Homosexual','Pansexual',\n 'Transgender','Genderfluid']\nolddc2 = dc_gsm.copy()\n\nonames2=[]\nfor x in gsm:\n newname4 = x\n onames2.append(newname4)\n dc_gsm[newname4]=dc_gsm['GSM'].str.contains(x)*1\n\ndc_orient=dc_gsm[onames2]\n\n#summate indices into the common categories\ndc_orient=dc_orient.groupby(dc_orient.index).sum()\n\n#drop Reformed Characters Column, as there are only 3 from the universe\ndc_orient=dc_orient.drop('Reformed Criminals')\n\nmarvel_orient[[1,2,3,4,5,6]].plot(kind='bar',title='Marvel Universe')\n\ndc_orient[[1,3]].plot(kind='bar',color=('blue','red'),title='DC Universe')",
"Our findings here are pretty encouraging! Non-heterosexual characters have not been disproportionately vilified in either Marvel or DC.\nSexual Orientation in Comics Over the Years\nWe'll take a look at how many characters of each orientation were introduced in both Marvel and DC each year, and any trends that surface.",
"# We create a copy of the original dataframes to look at gender\nGenderM = marvel.copy()\nGenderDC = dc.copy()\n\n# Clean up and shape the Marvel dataframe\n\n# First, we'll drop those decimals from the years\n# (the dot is escaped so str.replace does not treat it as a regex wildcard)\nGenderM[\"Year\"] = GenderM[\"Year\"].astype(str)\nGenderM[\"Year\"] = GenderM[\"Year\"].str.replace(\"\\\\.0\",\"\")\nGenderM[\"Year\"] = GenderM[\"Year\"].str.replace(\"nan\",\"0\")\n\n# We also want to streamline the entries, removing \"Characters\"\n# We'll assume any NaN values for orientation are \"Heterosexual\"\nGenderM[\"SEX\"] = GenderM[\"SEX\"].str.replace(\" Characters\",\"\")\nGenderM[\"ALIGN\"] = GenderM[\"ALIGN\"].str.replace(\" Characters\",\"\")\nGenderM[\"GSM\"] = GenderM[\"GSM\"].str.replace(\" Characters\",\"\")\nGenderM[\"GSM\"] = GenderM[\"GSM\"].fillna(\"Heterosexual\")\n\n# Next, we pull out just the relevant columns\nGenderM = GenderM[[\"name\",\"ALIGN\",\"SEX\",\"GSM\",\"Year\"]]\n\n# Here, we clean up the variable names\nnewheadings = [\"Name\",\"Alignment\",\"Gender\",\"Orientation\",\"Year\"]\nGenderM.columns = newheadings\n\n# We eventually want to view trends over time, so \"Year\" needs to become the index\nGenderM = GenderM.set_index(\"Year\")\n\n# Clean up and shape the DC dataframe\n\n# To make it look a little cleaner, we'll drop those decimals from the years\nGenderDC[\"YEAR\"] = GenderDC[\"YEAR\"].astype(str)\nGenderDC[\"YEAR\"] = GenderDC[\"YEAR\"].str.replace(\"\\\\.0\",\"\")\nGenderDC[\"YEAR\"] = GenderDC[\"YEAR\"].str.replace(\"nan\",\"0\")\n\n# We also want to streamline the entries, removing \"Characters\" from the columns we plan to use\n# We'll assume any NaN values for orientation are \"Heterosexual\"\nGenderDC[\"SEX\"] = GenderDC[\"SEX\"].str.replace(\" Characters\",\"\")\nGenderDC[\"ALIGN\"] = GenderDC[\"ALIGN\"].str.replace(\" Characters\",\"\")\nGenderDC[\"GSM\"] = GenderDC[\"GSM\"].str.replace(\" Characters\",\"\")\nGenderDC[\"GSM\"] = GenderDC[\"GSM\"].fillna(\"Heterosexual\")\n\n# Next, we pull out just the 
relevant columns\nGenderDC = GenderDC[[\"name\",\"ALIGN\",\"SEX\",\"GSM\",\"YEAR\"]]\n\n# Let's clean up those column headings\nnewdcheadings = [\"Name\",\"Alignment\",\"Gender\",\"Orientation\",\"Year\"]\nGenderDC.columns = newdcheadings\n\n# We eventually want to view trends over time, so \"Year\" needs to become the index\nGenderDC = GenderDC.set_index(\"Year\")\n\n# A simple bar chart at this stage shows us the large outlier is skewing the scale; we'll get rid of it later\nGenderM[\"Orientation\"].value_counts().plot.barh(alpha=0.5)\n\n# Now we need to change some of the text data to numbers so it is easier plot. First, Marvel.\n\n# Keeping the df GenderM available for other use, we create OrientationM to look specifically at sexual orientations in Marvel\nOrientationM = GenderM.copy()\nOrientationM = OrientationM.reset_index()\n\n# The goal is to separate each orientation into its own column and assign 1s and 0s.\n# First, we'll create new columns for each orientation and get rid of the existing one.\nOrientationM[\"Heterosexual\"] = OrientationM[\"Orientation\"].copy()\nOrientationM[\"Homosexual\"] = OrientationM[\"Orientation\"].copy()\nOrientationM[\"Bisexual\"] = OrientationM[\"Orientation\"].copy()\nOrientationM[\"Transgender\"] = OrientationM[\"Orientation\"].copy()\nOrientationM[\"Genderfluid\"] = OrientationM[\"Orientation\"].copy()\nOrientationM[\"Transvestite\"] = OrientationM[\"Orientation\"].copy()\nOrientationM[\"Pansexual\"] = OrientationM[\"Orientation\"].copy()\nOrientationM = OrientationM.drop(\"Orientation\", 1)\n\n# Now we want to convert the values in each orientation column to a 1 if it matches, a 0 if it does not\nOrientationM[\"Heterosexual\"] = OrientationM[\"Heterosexual\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[1,0,0,0,0,0,0])\nOrientationM[\"Homosexual\"] = 
OrientationM[\"Homosexual\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,1,0,0,0,0,0])\nOrientationM[\"Bisexual\"] = OrientationM[\"Bisexual\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,1,0,0,0,0])\nOrientationM[\"Transgender\"] = OrientationM[\"Transgender\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,0,1,0,0,0])\nOrientationM[\"Genderfluid\"] = OrientationM[\"Genderfluid\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,0,0,1,0,0])\nOrientationM[\"Transvestite\"] = OrientationM[\"Transvestite\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,0,0,0,1,0])\nOrientationM[\"Pansexual\"] = OrientationM[\"Pansexual\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,0,0,0,0,1])\n\n# In preparation for plotting, we set \"Year\" as the index and sorted it\nOrientationM = OrientationM.set_index(\"Year\")\nOrientationM = OrientationM.sort_index()\n\n# Next, we want to group all the values for each year into sums to measure volume of exposure\nOrientationM = OrientationM.groupby(OrientationM.index).sum()\n\n# Now we need to change some of the text data to numbers so it is easier plot. 
Now, DC.\n\n# Keeping the df GenderDC available for future use, we create OrientationDC to look specifically at sexual orientations in DC\nOrientationDC = GenderDC.copy()\n\n# Here again, we want to separate each orientation into its own column and assign 1s and 0s.\n# First, we'll create new columns for each orientation and get rid of the existing one.\n# This will look redundant at first, but we're getting there.\nOrientationDC[\"Heterosexual\"] = OrientationDC[\"Orientation\"].copy()\nOrientationDC[\"Homosexual\"] = OrientationDC[\"Orientation\"].copy()\nOrientationDC[\"Bisexual\"] = OrientationDC[\"Orientation\"].copy()\nOrientationDC[\"Transgender\"] = OrientationDC[\"Orientation\"].copy()\nOrientationDC[\"Genderfluid\"] = OrientationDC[\"Orientation\"].copy()\nOrientationDC[\"Transvestite\"] = OrientationDC[\"Orientation\"].copy()\nOrientationDC[\"Pansexual\"] = OrientationDC[\"Orientation\"].copy()\nOrientationDC = OrientationDC.drop(\"Orientation\", 1)\n\n# Now we want to convert the values in each orientation column to a 1 if it matches, a 0 if it does not\nOrientationDC[\"Heterosexual\"] = OrientationDC[\"Heterosexual\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[1,0,0,0,0,0,0])\nOrientationDC[\"Homosexual\"] = OrientationDC[\"Homosexual\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,1,0,0,0,0,0])\nOrientationDC[\"Bisexual\"] = OrientationDC[\"Bisexual\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,1,0,0,0,0])\nOrientationDC[\"Transgender\"] = OrientationDC[\"Transgender\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,0,1,0,0,0])\nOrientationDC[\"Genderfluid\"] = 
OrientationDC[\"Genderfluid\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,0,0,1,0,0])\nOrientationDC[\"Transvestite\"] = OrientationDC[\"Transvestite\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,0,0,0,1,0])\nOrientationDC[\"Pansexual\"] = OrientationDC[\"Pansexual\"].replace([\"Heterosexual\",\"Homosexual\",\"Bisexual\",\"Transgender\",\"Genderfluid\",\"Transvestites\",\"Pansexual\"],[0,0,0,0,0,0,1])\n\n# \"Year\" is already our index; but it needs to be sorted\nOrientationDC = OrientationDC.sort_index()\n\n# Next, we want to group all the values for each year into sums to measure volume of exposure\nOrientationDC = OrientationDC.groupby(OrientationDC.index).sum()\n\n# In order to better see the granularity, we ignore the overwhelming \"Heterosexual\" values in the Marvel Universe\n# Now let's see what it looks like!\n# For clarity, we've also added a title and made the lines bolder so they are easier to see\nOrientationM = OrientationM.drop(\"Heterosexual\", 1)\nOrientationM.plot(figsize=(12,6), title=\"Marvel Characters by Orientation\", linewidth=2.0)\n\n# In order to better see the granularity, we ignore the overwhelming \"Heterosexual\" values in DC\n# Now let's see what it looks like!\n# For clarity, we've also added a title and made the lines bolder so they are easier to see\nOrientationDC = OrientationDC.drop(\"Heterosexual\", 1)\nOrientationDC.plot(figsize=(12,6), title=\"DC Characters by Orientation\", linewidth=2.0)",
"A brief comparison of the two plots reveals that Marvel has introduced more variation in sexual identity over the years than DC. To get a better idea, though, we'll want to look at them side by side.\nComparing Representation in Marvel vs. DC",
"# Now we can compare the two franchises in combined plots\nax = OrientationM[\"Homosexual\"].plot()\nOrientationDC[\"Homosexual\"].plot(ax=ax)\n\n# Pretty powerful data, but we can make it look better\n# By adding a title, axis labels, legend; changing the colors, and enlarging, we enhance the \"Pow!\" factor\nax = OrientationM[\"Homosexual\"].plot(label=\"Marvel\", color=\"red\", linewidth=2.0)\nOrientationDC[\"Homosexual\"].plot(ax=ax, label=\"DC\", figsize=(12,6), color=\"blue\", linewidth=2.0)\nax.set_title(\"Homosexual Comic Characters\", fontsize=16, fontweight=\"bold\")\nax.set_xlabel(\"Year\", fontsize=12)\nax.set_ylabel(\"Number of Characters Introduced\", fontsize=12)\nlegend = ax.legend(loc='upper center', shadow=True)",
"This shows us that Marvel introduced more homosexual characters as early as the 1940s, but DC has been more representative in recent years.",
"# We can go on to do this for any orientation\nax = OrientationM[\"Bisexual\"].plot(label=\"Marvel\", color=\"green\", linewidth=2.0)\nOrientationDC[\"Bisexual\"].plot(ax=ax, label=\"DC\", figsize=(12,6), color=\"purple\", linewidth=2.0)\nax.set_title(\"Bisexual Comic Characters\", fontsize=16, fontweight=\"bold\")\nax.set_xlabel(\"Year\", fontsize=12)\nax.set_ylabel(\"Number of Characters Introduced\", fontsize=12)\nlegend = ax.legend(loc='upper center', shadow=True)",
"When it comes to bisexual characters, it is harder to find a trend. The two franchises seem to have ebbed and flowed in representation for this group.",
"# We could also use a stacked bar chart to look at the array of introductions per year\nOrientationM.plot.bar(stacked=True, figsize=(16,8), title=\"Marvel Yearly Character Introductions\", fontsize=12)\n\n# Or the same thing for DC...\nOrientationDC.plot.bar(stacked=True, figsize=(16,8), title=\"DC Yearly Character Introductions\", fontsize=12)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
qinwf-nuan/keras-js
|
notebooks/layers/pooling/GlobalMaxPooling2D.ipynb
|
mit
|
[
"import numpy as np\nfrom keras.models import Model\nfrom keras.layers import Input\nfrom keras.layers.pooling import GlobalMaxPooling2D\nfrom keras import backend as K\nimport json\nfrom collections import OrderedDict\n\ndef format_decimal(arr, places=6):\n return [round(x * 10**places) / 10**places for x in arr]\n\nDATA = OrderedDict()",
"GlobalMaxPooling2D\n[pooling.GlobalMaxPooling2D.0] input 6x6x3, data_format='channels_last'",
"data_in_shape = (6, 6, 3)\nL = GlobalMaxPooling2D(data_format='channels_last')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(270)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.GlobalMaxPooling2D.0'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[pooling.GlobalMaxPooling2D.1] input 3x6x6, data_format='channels_first'",
"data_in_shape = (3, 6, 6)\nL = GlobalMaxPooling2D(data_format='channels_first')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(271)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.GlobalMaxPooling2D.1'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[pooling.GlobalMaxPooling2D.2] input 5x3x2, data_format='channels_last'",
"data_in_shape = (5, 3, 2)\nL = GlobalMaxPooling2D(data_format='channels_last')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(272)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.GlobalMaxPooling2D.2'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"export for Keras.js tests",
"print(json.dumps(DATA))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chengts95/homeworkOfPowerSystem
|
power_system/调频计算.ipynb
|
gpl-2.0
|
[
"22. 三个电力系统联合运行如图。该系统A的单位调节功率$K_{A}=100MW/Hz$ ,系统B、C的单位调节功率$K_{B}=K_{C}=200MH/Hz$ 。设系统A中负荷增加100MW。系统B中的发电厂作二次调频增发50MW。试计算,联合系统中的频率变化量$\\Delta f$和联络线上交换功率$P_{ab}$ 、$P_{bc}$ 的变化量。\n<img src=\"./22题图.png\" />",
"Ka=100\nKb=200\nKc=200\ndPla=100\ndPgb=50\nK=Ka+Kb+Kc\ndf = lambda dPl,dPg,K: -(dPla-dPgb)/K\ndf1=df(dPla,dPgb,K)\n\ntrans_power=lambda Ka,Kb,Pa,Pb: (Ka*Pb-Kb*Pa)/(Ka+Kb)\n\nPa=dPla\nPb=dPgb\nPc=0\n#BC作为一个系统\nPab=trans_power(Ka,Kb+Kc,Pa,Pb+Pc)\n#AB作为一个系统\nPbc=trans_power(Kb+Ka,Kc,Pb+Pa,Pc)\ndf1,Pab,Pbc",
"23. 系统A:当负荷增加250MW时,频率下降0.1HZ。系统B:当负荷增加400MW时,频率下降0.1HZ。系统A运行于49.85HZ,系统B运行于50HZ,如用联络线将两系统相连,求联络线上的功率。",
"Ka=2500\nKb=4000\nfa=49.85\nfb=50\ndf2=fb-fa\ndPl=df2*Ka\n\ndfab=-1*dPl/(Ka+Kb)#B下降的频率\nPab=dfab*Kb\n\ntrans_power(Ka,Kb,dPl,0)",
"24. A、B两系统经联络线相连,已知:$K_{GA}=270MW/Hz$ ,$K_{LA}=21MW/Hz$ ,$K_{GB}=480MW/Hz$ ,$K_{LB}=21MW/Hz$ ,$P_{AB}=300MW$ ,系统B负荷增加150MW。1)两系统所有发电机均仅参加一次调频,求系统频率、联络线功率变化量,A、B两系统发电机和负荷功率变化量;2)除一次调频外,A系统设调频厂进行二次调频,联络线最大允许输送功率为400MW,求系统频率的变化量。",
"Ka=291\nKb=501\nPlb=150\ndf=-1*Plb/(Ka+Kb)\n\n\ntrans_power(Ka,Kb,0,Plb)\ndf\n\n100/Ka"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
|
apache-2.0
|
[
"Lab: Chicago taxifare tip prediction with AutoML Tables on Vertex Pipelines using Kubeflow Pipelines SDK\nLearning objectives\n\nPerform exploratory data analysis (EDA) on tabular data using BigQuery.\nCreate a BigQuery dataset for a ML classification task.\nDefine an AutoML tables pipeline using the Kubeflow Pipelines (KFP) SDK for model training, evaluation, and conditional deployment.\nCreate a custom model evaluation component using the KFP SDK.\nIncorporate pre-built KFP components into your pipeline from google_cloud_components.\nQuery your model for online predictions and explanations.\n\nSetup\nDefine constants",
"# Add installed depedencies to Python PATH variable.\nPATH=%env PATH\n%env PATH={PATH}:/home/jupyter/.local/bin\n\nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]\nREGION = 'us-central1'\n\nBQ_DATASET_NAME = 'chicago_taxi'\nBQ_TABLE_NAME = 'chicago_taxi_tips_raw'\nBQ_LOCATION = 'US'\n\n!echo {PROJECT_ID}\n!echo {REGION}",
"Create Cloud Storage bucket for storing Vertex Pipeline artifacts",
"BUCKET_NAME = f\"gs://{PROJECT_ID}-bucket\"\nprint(BUCKET_NAME)\n\n!gsutil ls -al $BUCKET_NAME\n\nUSER = \"dougkelly\" # <---CHANGE THIS\nPIPELINE_ROOT = \"{}/pipeline_root/{}\".format(BUCKET_NAME, USER)\n\nPIPELINE_ROOT",
"Create BigQuery dataset",
"!bq --location=US mk -d \\\n$PROJECT_ID:$BQ_DATASET_NAME",
"Exploratory Data Analysis in BigQuery",
"%%bigquery data\n\nSELECT \n CAST(EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS string) AS trip_dayofweek, \n FORMAT_DATE('%A',cast(trip_start_timestamp as date)) AS trip_dayname,\n COUNT(*) as trip_count,\nFROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`\nWHERE\n EXTRACT(YEAR FROM trip_start_timestamp) = 2015 \nGROUP BY\n trip_dayofweek,\n trip_dayname\nORDER BY\n trip_dayofweek\n;\n\ndata.plot(kind='bar', x='trip_dayname', y='trip_count');",
"Create BigQuery dataset for ML classification task",
"SAMPLE_SIZE = 100000\nYEAR = 2020\n\nsql_script = '''\nCREATE OR REPLACE TABLE `@PROJECT_ID.@DATASET.@TABLE` \nAS (\n WITH\n taxitrips AS (\n SELECT\n trip_start_timestamp,\n trip_seconds,\n trip_miles,\n payment_type,\n pickup_longitude,\n pickup_latitude,\n dropoff_longitude,\n dropoff_latitude,\n tips,\n fare\n FROM\n `bigquery-public-data.chicago_taxi_trips.taxi_trips`\n WHERE 1=1 \n AND pickup_longitude IS NOT NULL\n AND pickup_latitude IS NOT NULL\n AND dropoff_longitude IS NOT NULL\n AND dropoff_latitude IS NOT NULL\n AND trip_miles > 0\n AND trip_seconds > 0\n AND fare > 0\n AND EXTRACT(YEAR FROM trip_start_timestamp) = @YEAR\n )\n\n SELECT\n trip_start_timestamp,\n EXTRACT(MONTH from trip_start_timestamp) as trip_month,\n EXTRACT(DAY from trip_start_timestamp) as trip_day,\n EXTRACT(DAYOFWEEK from trip_start_timestamp) as trip_day_of_week,\n EXTRACT(HOUR from trip_start_timestamp) as trip_hour,\n trip_seconds,\n trip_miles,\n payment_type,\n ST_AsText(\n ST_SnapToGrid(ST_GeogPoint(pickup_longitude, pickup_latitude), 0.1)\n ) AS pickup_grid,\n ST_AsText(\n ST_SnapToGrid(ST_GeogPoint(dropoff_longitude, dropoff_latitude), 0.1)\n ) AS dropoff_grid,\n ST_Distance(\n ST_GeogPoint(pickup_longitude, pickup_latitude), \n ST_GeogPoint(dropoff_longitude, dropoff_latitude)\n ) AS euclidean,\n CONCAT(\n ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickup_longitude,\n pickup_latitude), 0.1)), \n ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropoff_longitude,\n dropoff_latitude), 0.1))\n ) AS loc_cross,\n IF((tips/fare >= 0.2), 1, 0) AS tip_bin,\n IF(ABS(MOD(FARM_FINGERPRINT(STRING(trip_start_timestamp)), 10)) < 9, 'UNASSIGNED', 'TEST') AS data_split\n FROM\n taxitrips\n LIMIT @LIMIT\n)\n'''\n\nsql_script = sql_script.replace(\n '@PROJECT_ID', PROJECT_ID).replace(\n '@DATASET', BQ_DATASET_NAME).replace(\n '@TABLE', BQ_TABLE_NAME).replace(\n '@YEAR', str(YEAR)).replace(\n '@LIMIT', str(SAMPLE_SIZE))\n\n# print(sql_script)\n\nfrom google.cloud import bigquery\n\nbq_client = 
bigquery.Client(project=PROJECT_ID, location=BQ_LOCATION)\njob = bq_client.query(sql_script)\n_ = job.result()",
"Verify data split proportions",
"%%bigquery\n\nSELECT data_split, COUNT(*)\nFROM dougkelly-vertex-demos.chicago_taxi.chicago_taxi_tips_raw\nGROUP BY data_split",
"Create\nImport libraries",
"import json\nimport logging\nfrom typing import NamedTuple\n\nimport kfp\n# from google.cloud import aiplatform\nfrom google_cloud_pipeline_components import aiplatform as gcc_aip\nfrom kfp.v2 import dsl\nfrom kfp.v2.dsl import (ClassificationMetrics, Input, Metrics, Model, Output,\n component)\nfrom kfp.v2.google.client import AIPlatformClient",
"Create and run an AutoML Tabular classification pipeline using Kubeflow Pipelines SDK\nCreate a custom KFP evaluation component",
"@component(\n base_image=\"gcr.io/deeplearning-platform-release/tf2-cpu.2-3:latest\",\n output_component_file=\"components/tables_eval_component.yaml\", # Optional: you can use this to load the component later\n packages_to_install=[\"google-cloud-aiplatform==1.0.0\"],\n)\ndef classif_model_eval_metrics(\n project: str,\n location: str,\n api_endpoint: str,\n thresholds_dict_str: str,\n model: Input[Model],\n metrics: Output[Metrics],\n metricsc: Output[ClassificationMetrics],\n) -> NamedTuple(\"Outputs\", [(\"dep_decision\", str)]): # Return parameter.\n\n \"\"\"This function renders evaluation metrics for an AutoML Tabular classification model.\n It retrieves the classification model evaluation generated by the AutoML Tabular training\n process, does some parsing, and uses that info to render the ROC curve and confusion matrix\n for the model. It also uses given metrics threshold information and compares that to the\n evaluation results to determine whether the model is sufficiently accurate to deploy.\n \"\"\"\n import json\n import logging\n\n from google.cloud import aiplatform\n\n # Fetch model eval info\n def get_eval_info(client, model_name):\n from google.protobuf.json_format import MessageToDict\n\n response = client.list_model_evaluations(parent=model_name)\n metrics_list = []\n metrics_string_list = []\n for evaluation in response:\n print(\"model_evaluation\")\n print(\" name:\", evaluation.name)\n print(\" metrics_schema_uri:\", evaluation.metrics_schema_uri)\n metrics = MessageToDict(evaluation._pb.metrics)\n for metric in metrics.keys():\n logging.info(\"metric: %s, value: %s\", metric, metrics[metric])\n metrics_str = json.dumps(metrics)\n metrics_list.append(metrics)\n metrics_string_list.append(metrics_str)\n\n return (\n evaluation.name,\n metrics_list,\n metrics_string_list,\n )\n\n # Use the given metrics threshold(s) to determine whether the model is \n # accurate enough to deploy.\n def classification_thresholds_check(metrics_dict, 
thresholds_dict):\n for k, v in thresholds_dict.items():\n logging.info(\"k {}, v {}\".format(k, v))\n if k in [\"auRoc\", \"auPrc\"]: # higher is better\n if metrics_dict[k] < v: # if under threshold, don't deploy\n logging.info(\n \"{} < {}; returning False\".format(metrics_dict[k], v)\n )\n return False\n logging.info(\"threshold checks passed.\")\n return True\n\n def log_metrics(metrics_list, metricsc):\n test_confusion_matrix = metrics_list[0][\"confusionMatrix\"]\n logging.info(\"rows: %s\", test_confusion_matrix[\"rows\"])\n\n # log the ROC curve\n fpr = []\n tpr = []\n thresholds = []\n for item in metrics_list[0][\"confidenceMetrics\"]:\n fpr.append(item.get(\"falsePositiveRate\", 0.0))\n tpr.append(item.get(\"recall\", 0.0))\n thresholds.append(item.get(\"confidenceThreshold\", 0.0))\n print(f\"fpr: {fpr}\")\n print(f\"tpr: {tpr}\")\n print(f\"thresholds: {thresholds}\")\n metricsc.log_roc_curve(fpr, tpr, thresholds)\n\n # log the confusion matrix\n annotations = []\n for item in test_confusion_matrix[\"annotationSpecs\"]:\n annotations.append(item[\"displayName\"])\n logging.info(\"confusion matrix annotations: %s\", annotations)\n metricsc.log_confusion_matrix(\n annotations,\n test_confusion_matrix[\"rows\"],\n )\n\n # log textual metrics info as well\n for metric in metrics_list[0].keys():\n if metric != \"confidenceMetrics\":\n val_string = json.dumps(metrics_list[0][metric])\n metrics.log_metric(metric, val_string)\n # metrics.metadata[\"model_type\"] = \"AutoML Tabular classification\"\n\n logging.getLogger().setLevel(logging.INFO)\n aiplatform.init(project=project)\n # extract the model resource name from the input Model Artifact\n model_resource_path = model.uri.replace(\"aiplatform://v1/\", \"\")\n logging.info(\"model path: %s\", model_resource_path)\n\n client_options = {\"api_endpoint\": api_endpoint}\n # Initialize client that will be used to create and send requests.\n client = 
aiplatform.gapic.ModelServiceClient(client_options=client_options)\n eval_name, metrics_list, metrics_str_list = get_eval_info(\n client, model_resource_path\n )\n logging.info(\"got evaluation name: %s\", eval_name)\n logging.info(\"got metrics list: %s\", metrics_list)\n log_metrics(metrics_list, metricsc)\n\n thresholds_dict = json.loads(thresholds_dict_str)\n deploy = classification_thresholds_check(metrics_list[0], thresholds_dict)\n if deploy:\n dep_decision = \"true\"\n else:\n dep_decision = \"false\"\n logging.info(\"deployment decision is %s\", dep_decision)\n\n return (dep_decision,)\n\nimport time\n\nDISPLAY_NAME = \"automl-tab-chicago-taxi-tips-{}\".format(str(int(time.time())))\nprint(DISPLAY_NAME)",
"Define the pipeline",
"@kfp.dsl.pipeline(name=\"automl-tab-chicago-taxi-tips-train\", pipeline_root=PIPELINE_ROOT)\ndef pipeline(\n bq_source: str = \"bq://dougkelly-vertex-demos:chicago_taxi.chicago_taxi_tips_raw\",\n display_name: str = DISPLAY_NAME,\n project: str = PROJECT_ID,\n gcp_region: str = REGION,\n api_endpoint: str = \"us-central1-aiplatform.googleapis.com\",\n thresholds_dict_str: str = '{\"auRoc\": 0.90}',\n):\n dataset_create_op = gcc_aip.TabularDatasetCreateOp(\n project=project, display_name=display_name, bq_source=bq_source\n )\n\n training_op = gcc_aip.AutoMLTabularTrainingJobRunOp(\n project=project,\n display_name=display_name,\n optimization_prediction_type=\"classification\",\n optimization_objective=\"maximize-au-roc\", # binary classification \n budget_milli_node_hours=750,\n training_fraction_split=0.9,\n validation_fraction_split=0.1,\n column_transformations=[\n {\"numeric\": {\"column_name\": \"trip_seconds\"}}, \n {\"numeric\": {\"column_name\": \"trip_miles\"}}, \n {\"categorical\": {\"column_name\": \"trip_month\"}},\n {\"categorical\": {\"column_name\": \"trip_day\"}},\n {\"categorical\": {\"column_name\": \"trip_day_of_week\"}}, \n {\"categorical\": {\"column_name\": \"trip_hour\"}}, \n {\"categorical\": {\"column_name\": \"payment_type\"}},\n {\"numeric\": {\"column_name\": \"euclidean\"}},\n {\"categorical\": {\"column_name\": \"tip_bin\"}},\n ],\n dataset=dataset_create_op.outputs[\"dataset\"],\n target_column=\"tip_bin\",\n )\n \n model_eval_task = classif_model_eval_metrics(\n project,\n gcp_region,\n api_endpoint,\n thresholds_dict_str,\n training_op.outputs[\"model\"],\n )\n\n with dsl.Condition(\n model_eval_task.outputs[\"dep_decision\"] == \"true\",\n name=\"deploy_decision\",\n ):\n\n deploy_op = gcc_aip.ModelDeployOp( # noqa: F841\n model=training_op.outputs[\"model\"],\n project=project,\n machine_type=\"n1-standard-4\",\n )",
"Compile and run the pipeline",
"from kfp.v2 import compiler # noqa: F811\n\ncompiler.Compiler().compile(\n pipeline_func=pipeline, package_path=\"automl-tab-chicago-taxi-tips-train_pipeline.json\"\n)",
"Run the pipeline",
"from kfp.v2.google.client import AIPlatformClient # noqa: F811\n\napi_client = AIPlatformClient(project_id=PROJECT_ID, region=REGION)\n\nresponse = api_client.create_run_from_job_spec(\n \"automl-tab-chicago-taxi-tips-train_pipeline.json\",\n pipeline_root=PIPELINE_ROOT,\n parameter_values={\"project\": PROJECT_ID, \"display_name\": DISPLAY_NAME},\n)",
"Query your deployed model to retrieve online predictions and explanations",
"from google.cloud import aiplatform\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nendpoint = aiplatform.Endpoint(\n endpoint_name=\"2677161280053182464\",\n project=PROJECT_ID,\n location=REGION)\n\n%%bigquery test_df\n\nSELECT\n CAST(trip_month AS STRING) AS trip_month,\n CAST(trip_day AS STRING) AS trip_day,\n CAST(trip_day_of_week AS STRING) AS trip_day_of_week,\n CAST(trip_hour AS STRING) AS trip_hour,\n CAST(trip_seconds AS STRING) AS trip_seconds,\n trip_miles,\n payment_type,\n euclidean\nFROM \n dougkelly-vertex-demos.chicago_taxi.chicago_taxi_tips_raw\nWHERE \n data_split = 'TEST'\n AND tip_bin = 1\n\ntest_instance = test_df.iloc[0]\n\ntest_instance_dict = test_instance.to_dict()\ntest_instance_dict\n\nexplained_prediction = endpoint.explain([test_instance_dict])\n\npd.DataFrame.from_dict(explained_prediction.predictions[0]).plot(kind='bar');\n\npd.DataFrame.from_dict(explained_prediction.explanations[0].attributions[0].feature_attributions, orient='index').plot(kind='barh');",
"Congratulations! Lab wrap-up\nLicense\n<font size=-1>Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gon1213/SDC
|
find_lane_lines/CarND_LaneLines_P1/P1.ipynb
|
gpl-3.0
|
[
"Finding Lane Lines on the Road\n\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\n\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\nNote If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".\n\nThe tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). 
Once you have a working pipeline, try it out on the video stream below.\n\n<figure>\n <img src=\"line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n<p></p>\n<figure>\n <img src=\"laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>",
"#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n%matplotlib inline\n\n#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimesions:', image.shape)\nplt.imshow(image) #call as plt.imshow(gray, cmap='gray') to show a grayscaled image",
"Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:\ncv2.inRange() for color selection\ncv2.fillPoly() for regions selection\ncv2.line() to draw lines on an image given endpoints\ncv2.addWeighted() to coadd / overlay two images\ncv2.cvtColor() to grayscale or change color\ncv2.imwrite() to output images to file\ncv2.bitwise_and() to apply a mask to an image\nCheck out the OpenCV documentation to learn about these and discover even more awesome functionality!\nBelow are some helper functions to help get you started. They should look familiar from the lesson!",
"import math\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\n\ndef draw_lines(img, lines, color=[255, 0, 0], thickness=2):\n \"\"\"\n NOTE: this is the function you might want to use as a starting point once you want to \n average/extrapolate the line segments you detect to map out the full\n extent of the lane (going from the result shown in raw-lines-example.mp4\n to that shown in P1_example.mp4). \n \n Think about things like separating line segments by their \n slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n line vs. 
the right line. Then, you can average the position of each of \n the lines and extrapolate to the top and bottom of the lane.\n \n This function draws `lines` with `color` and `thickness`. \n Lines are drawn on the image inplace (mutates the image).\n If you want to make the lines semi-transparent, think about combining\n this function with the weighted_img() function below\n \"\"\"\n for line in lines:\n for x1,y1,x2,y2 in line:\n cv2.line(img, (x1, y1), (x2, y2), color, thickness)\n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., λ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + λ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, λ)",
"Test on Images\nNow you should build your pipeline to work on the images in the directory \"test_images\"\nYou should make sure your pipeline works well on these images before you try the videos.",
"import os\nos.listdir(\"test_images/\")",
"run your solution on all test_images and make copies into the test_images directory).",
"# TODO: Build your pipeline that will draw lane lines on the test_images\n# then save them to the test_images directory.\n\n",
"Test on Videos\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\nWe can test our solution on two provided videos:\nsolidWhiteRight.mp4\nsolidYellowLeft.mp4",
"# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML\n\ndef process_image(image):\n # NOTE: The output you return should be a color image (3 channel) for processing video below\n # TODO: put your pipeline here,\n # you should return the final output (image with lines are drawn on lanes)\n\n return result",
"Let's try the one with the solid white lane on the right first ...",
"white_output = 'white.mp4'\nclip1 = VideoFileClip(\"solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)",
"Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.",
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))",
"At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.\nNow for the one with the solid yellow lane on the left. This one's more tricky!",
"yellow_output = 'yellow.mp4'\nclip2 = VideoFileClip('solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))",
"Reflections\nCongratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?\nPlease add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!\nSubmission\nIf you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.\nOptional Challenge\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!",
"challenge_output = 'extra.mp4'\nclip2 = VideoFileClip('challenge.mp4')\nchallenge_clip = clip2.fl_image(process_image)\n%time challenge_clip.write_videofile(challenge_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
blakeflei/IntroScientificPythonWithJupyter
|
08 - Signal Processing - Scipy.ipynb
|
bsd-3-clause
|
[
"Scipy Signal Processing\nOne of the greatest strengths of matlab is the included signal processing. The python scipy library has many of these capabilities and some are highlighted below. These have applications in electronics, microscopy, telescopy, radio, and many other fields.\nBy the end of this file you should have seen simple examples of:\n1. Frequency decomposition via Fourier transforms\n2. Feature detection via correlations\n3. Convolution and deconvolution\n4. Digital filtering of signal using infinite impulse response (IIR) and finite impulse response (FIR) filters\nFurther reading:\nhttps://docs.scipy.org/doc/scipy/reference/tutorial/signal.html \nhttps://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate.html \nFilters:\nhttp://www.dummies.com/education/science/science-engineering/real-world-signals-and-systems-case-analog-filter-design-with-a-twist/ \nhttp://radio.feld.cvut.cz/matlab/toolbox/signal/basics27.html \nIIR Filters:\nhttps://dspguru.com/dsp/faqs/iir/basics/ \nhttps://www.dsprelated.com/showarticle/194.php \nFIR Filters:\nhttp://scipy-cookbook.readthedocs.io/items/ApplyFIRFilter.html",
"import numpy as np # Python numpy \nfrom scipy import signal, stats # Python scipy signal package\n\nfrom matplotlib import pyplot as plt # Python matplotlib library\nimport matplotlib.gridspec as gridspec # Multiple plots in a single figure\n\n# Display matplotlib in the notebook\n%matplotlib inline \n\n%cd datafiles",
"Fourier Transform Examples\nFourier transforms are most often used to decompose a signal as a function of time into the frequency components that comprise it, e.g. transforming between time and frequency domains. It's also possible to post-process a filtered signal using Fourier transforms.\nFFTs decompose a single signal into the form of:\n$$ Y = \\frac{1}{2} a_0 \\sum_{n=1}^{\\infty} a_n cos (n x + \\phi_x) $$\nAmplitude Example\nHere, we solve for the individual $a_n$, so we know how strong each of the individual signal components are.",
"# Create signal\nfrq1 = 50 # Frequency 1(hz)\namp1 = 5 # Amplitude 1\nfrq2 = 250 # Frequency 2(hz)\namp2 = 3 # Amplitude 2\nsr = 2000 # Sample rate\ndur = 0.4 # Duration (s) (increasing/decreasing this changes S/N)\n\n# Create signal and timesteps\nX = np.linspace(0, dur-1/sr, int(dur*sr)) # Time\nY_s = amp1*np.cos(X*2*np.pi*frq1 - np.pi/4)+amp2*np.cos(X*2*np.pi*frq2 - np.pi/2) # Signal\n\n# Add noise\nY_sn = Y_s + 40*np.random.rand(len(X)) # Signal + noise\n\nplt.plot(X[1:100], Y_sn[1:100])\nplt.title('Plot of Signal with Noise')\nplt.xlabel('Time (s)')\nplt.ylabel('Amplitude of Signal')\nplt.show()\n\n# Plot Single Sided FT Spectrum\nY_sn_fft = np.fft.fft(Y_sn)\n\n# Update fft output\nFT = np.roll(Y_sn_fft, len(X)//2) # Shift zero freq component to center of spectrum\nSSFT_amp = np.abs(FT)[len(X)//2:] # Use the absolute value for amplitude; spectrum is symmetric - start from zero freq component\nSSFT_amp = 2/len(X) * SSFT_amp # Normalize\n\n# Determine frequencies\nfreqs = sr/len(X)*np.arange(0,len(SSFT_amp))\n\n# Plot\nplt.plot(freqs[1:], SSFT_amp[1:])\nplt.title('Single-Sided Spectrum of Signal')\nplt.xlabel('freq (Hz)')\nplt.ylabel('Freq Amplitude')\nplt.show()",
"The amplitudes don't seem quite right - longer duration increases the signal to noise and gives a better result:",
"# Create signal\nsr = 2000 # Sample rate\ndur = 10 # Increased duration (s) (increasing/decreasing this changes S/N)\nX = np.linspace(0, dur-1/sr, int(dur*sr)) # Time\nY_s = amp1*np.sin(X*2*np.pi*frq1 - np.pi/4) + amp2*np.sin(X*2*np.pi*frq2 + np.pi/2)\nY_sn = Y_s + 40*np.random.rand(len(X))\n\n# Determine Single Sided FT Spectrum\nY_sn_fft = np.fft.fft(Y_sn)\n\n# Update ft output\nFT = np.roll(Y_sn_fft, len(X)//2) # Shift zero freq component to center of spectrum\nSSFT_amp = np.abs(FT)[len(X)//2:] # Use the absolute value for amplitude; spectrum is symmetric - start from zero freq component\nSSFT_amp = 2/len(X) * SSFT_amp # Scale by 2 (using half the spectrum) / number points\n\n# Determine frequencies\nfreqs = sr/len(X)*np.arange(0,len(SSFT_amp))\n\n# Plot\nplt.plot(freqs[1:], SSFT_amp[1:])\nplt.title('Single-Sided Spectrum of Signal')\nplt.xlabel('freq (Hz)')\nplt.ylabel('Freq Amplitude')\nplt.show()\n\n# Create signal\nsr = 2000 # Sample rate\ndur = 10 # Increased duration (s) (increasing/decreasing this changes S/N)\nX = np.linspace(0, dur-1/sr, int(dur*sr)) # Time\nY_s = amp1*np.cos(X*2*np.pi*frq1 - np.pi/4) + amp2*np.cos(X*2*np.pi*frq2 + np.pi/2)\nY_sn = Y_s + 40*np.random.rand(len(X))\n\n# Determine Single Sided FT Spectrum\nY_sn_fft = np.fft.fft(Y_sn)\n\n# Update ft output\nFT = np.roll(Y_sn_fft, len(X)//2) # Shift zero freq component to center of spectrum\nSSFT_amp = np.abs(FT)[len(X)//2:] # Use the absolute value for amplitude; spectrum is symmetric - start from zero freq component\nSSFT_amp = 2/len(X) * SSFT_amp # Scale by 2 (using half the spectrum) / number points\n\n# Determine frequencies\nfreqs = sr/len(X)*np.arange(0,len(SSFT_amp))\n\n# Plot\nplt.plot(freqs[1:], SSFT_amp[1:])\nplt.title('Single-Sided Spectrum of Signal')\nplt.xlabel('freq (Hz)')\nplt.ylabel('Freq Amplitude')\nplt.show()",
"Phase Example\nPhase is the shift of a periodic signal 'left' or 'right'. It is the $\\phi_n$ in the following equation:\n$$ Y = \\frac{1}{2} a_0 + \\sum_{n=1}^{\\infty} a_n \\cos (n x + \\phi_n) $$",
"# We can use the previous signal to get the phase:\n\n# Set a tolerance limit - phase is sensitive to floating point errors \n# (see Gotchas and Optimization for more info):\nFT_trun = FT.copy() # Copy so the original spectrum is not modified\ntol = 1*10**-6 # Truncate signal below tolerance level\nFT_trun[np.abs(FT_trun)<tol] = 0\n\n# Use the angle function (arc tangent of imaginary over real)\nphase = np.angle(FT_trun[len(X)//2:]) # np.angle returns radians\nphase_rad = 1/np.pi * phase # Express the phase in units of pi\n\n# Plot\nplt.plot(freqs[1:], phase_rad[1:])\nplt.title('Phase of Single-Sided Spectrum')\nplt.xlabel('freq (Hz)')\nplt.ylabel('Phase (units of pi)')\nplt.show()",
"This shows the phase at every frequency, but we really only care about the frequencies whose amplitude exceeds a minimum threshold:",
"nonzero_freqs = freqs[SSFT_amp > 1][1:]\nprint('Notable frequencies are: {}'.format(nonzero_freqs))\n\ninds = [list(freqs).index(x) for x in nonzero_freqs] # Return index of nonzero frequencies\nprint('Phase shifts for notable frequencies are: {}'.format(phase_rad[inds]))",
"This is better visualized without noise:",
"# Determine Single Sided FT Spectrum\nY_s_fft = np.fft.fft(Y_s)\n\n# Update ft output\nFT = np.roll(Y_s_fft, len(X)//2) \n\n# Set a tolerance limit - phase is sensitive to floating point errors \n# (see Gotchas and Optimization for more info):\nFT_trun = FT.copy() # Copy so the original spectrum is not modified\ntol = 1*10**-6 # Truncate signal below tolerance level\nFT_trun[np.abs(FT_trun)<tol] = 0\n\n# Use the angle function (arc tangent of imaginary over real)\nphase = np.angle(FT_trun[len(X)//2:]) # np.angle returns radians\nphase_rad = 1/np.pi * phase # Express the phase in units of pi\n\n# Plot\nplt.plot(freqs[1:], phase_rad[1:])\nplt.title('Phase of Single-Sided Spectrum')\nplt.xlabel('freq (Hz)')\nplt.ylabel('Phase (units of pi)')\nplt.show()\n\n# To streamline, we can create a function in Pandas (see Pandas Crash Course for more info):\nimport pandas as pd\n\ndef fft_norm(signal, sr=1):\n    '''Return a normalized fft single sided spectrum.''' \n    signal = signal[: signal.shape[0]//2*2] # Use an even number of data points so can divide FFT in half evenly\n    N = signal.shape[0]\n    freqs = sr*np.arange(0, N//2)/N \n    \n    # FFT\n    fft = np.fft.fft(signal)\n    fft = np.roll(fft, N//2) # Shift zero freq component to center of spectrum\n    \n    # Normalized Amplitude\n    amp_norm = 2/N*np.abs(fft[N//2:])\n    \n    # Phase\n    tol = 1*10**-6 # Truncate signal below tolerance so phase isn't weird\n    fft[np.abs(fft)<tol] = 0\n    phase_rad = np.angle(fft[N//2:])/(np.pi) # Phase in units of pi\n    \n    # Note: the phase columns are in units of pi; multiply 'Phase (Pi Radians)'\n    # by np.pi to recover the phase in plain radians\n    return pd.DataFrame({'Frequency':freqs, 'Amplitude':amp_norm, 'Phase (Pi Radians)':phase_rad, 'Phase (Degrees)':phase_rad*180}).set_index('Frequency')\n\n\nY_ms = Y_s-Y_s.mean() # Mean subtract to remove the offset (0 freq component)\nfft_norm(Y_ms, sr=2000).plot(subplots=True, layout=(3,1))\nplt.show()",
"Notch Filter\nReturning to our original, non-noisy two-component signal, we perform the Fourier transform and set the 250 Hz component to zero:",
"# Fourier transform\nYfft = np.fft.fft(Y_s)\n\nfreqs = sr*np.arange(0,len(Yfft)/2)/len(Y_sn) # Frequencies of the FT\nind250Hz = np.where(freqs==250)[0][0] # Index to get just the 250 Hz signal \n\nY_filt = Yfft.copy() # Copy the original, non-absolute, full spectrum\nfull_w = 200 # Width of the notch, in frequency bins\n\n# Set FT at frequency to zero\nY_filt[ind250Hz-int(full_w/2):ind250Hz+int(full_w/2)] = 0 # Zero the 250 Hz signal (+-) on the lower side\nY_filt[-ind250Hz-int(full_w/2):-ind250Hz+int(full_w/2)] = 0 # Zero the 250 Hz signal (+-) on the upper side\n\n# Determine single sided Fourier transform\nSSFT_filt = Y_filt[:int(len(Y_filt)/2)] # Index the first half\nSSFT_filt = np.abs(SSFT_filt) # Use the absolute Value\nSSFT_filt = SSFT_filt/len(X) * 2 # Normalize and double the values (FFT is wrapped)\n\n# Plot\nplt.plot(freqs[1:], SSFT_filt[1:]) \nplt.title('Single-Sided Spectrum of Signal')\nplt.xlabel('freq (Hz)')\nplt.ylabel('Amplitude of X')\nplt.show()",
"Inverse Fourier transform back, and plot the original filtered signal:",
"# Inverse FFT the original, non-absolute, full spectrum\nY2 = np.fft.ifft(Y_filt) \nY2 = np.real(Y2) # Use the real values to plot the filtered signal\n\n# Plot\nplt.plot(X[:100],Y_s[:100], label='Original')\nplt.plot(X[:100],Y2[:100], label='Filtered')\nplt.title('Two Signals')\nplt.xlabel('Time (s)')\nplt.ylabel('Signal Amplutude')\nplt.legend(loc='best')\nplt.show()",
"While the Fourier amplitudes properly represent the amplitude of frequency components, the power spectral density (proportional to the squared magnitude of the discrete Fourier transform) can be estimated using a periodogram:",
"from scipy import signal\n\n# Determine approx power spectral density\nf, Pxx_den = signal.periodogram(Y_s, sr)\n\n# Plot\nplt.plot(f, Pxx_den)\nplt.xlabel('frequency [Hz]')\nplt.ylabel('PSD')\nplt.show()",
"Correlation\nCorrelations are a measure of the product of two signals as a function of the x-axis shift between them. They are often used to determine similarity between the two signals, e.g. is there some structure or repeating feature that is present in both signals?",
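Before the full electrocardiogram example, here is a minimal numeric sketch (toy arrays, not part of the original notebook) of what cross-correlation computes: at each shift, the overlapping samples are multiplied and summed, so the largest value marks the shift where the two signals line up best.

```python
import numpy as np

# A short template and a longer signal that contains it starting at index 3
template = np.array([1.0, 2.0, 1.0])
sig = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0])

# 'valid' mode computes one product-sum per full-overlap position
corr = np.correlate(sig, template, mode='valid')
best_shift = int(np.argmax(corr))
print(corr)        # [0. 1. 4. 6. 4. 1.]
print(best_shift)  # 3: the template aligns with the signal at index 3
```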
"from matplotlib import gridspec\n\n# Create a signal\nnpts = 200\nheartbeat = np.array([0,1,0,0,4,8,2,-4,0,4,0,1,2,1,0,0,0,0])/8\nxvals = np.linspace(0,len(heartbeat),npts)\nheartbeat = np.interp(xvals,np.arange(0,len(heartbeat)),heartbeat) # Use interpolation to spread the signal out\n\n# Repeat the signal ten times, add some noise:\nhrtbt = np.tile(heartbeat,10)\nhrtbt_noise = hrtbt + np.random.rand(len(hrtbt))\n\n# Plot\nG = gridspec.GridSpec(2, 1) \naxis1 = plt.subplot(G[0, 0])\naxis1.plot(heartbeat)\naxis1.set_title('Single Heartbeat Electrocardiogram')\n\naxis2 = plt.subplot(G[1, 0])\naxis2.plot(hrtbt_noise)\naxis2.set_title('Noisy Electrocardiogram for Repeating Heartbeat')\n\nplt.tight_layout()\nplt.show()",
"The center of the repeating (heartbeat) signal is marked as a centroid:",
"# Find center of each repeating signal\ncent_x = np.arange(1,11)*200 - 100\ncent_y = np.ones(10)*max(hrtbt)\n\n# Plot\nplt.plot(hrtbt[:], label='heartbeat')\nplt.plot(cent_x[:],cent_y[:],'r^', label='Centroid')\nplt.title('Heartbeat Electrocardiogram')\nplt.xlabel('Time')\nplt.ylabel('Volts')\nplt.legend(loc='best')\nplt.show()",
"Correlate the single signal with the repeating, noisy one:",
"# Correlate\ncorr = signal.correlate(hrtbt_noise, heartbeat, mode='same')\n\n# Plot\nplt.plot(corr/max(corr), label='Correlogram')\nplt.plot(cent_x,cent_y,'r^', label='Centroid')\nplt.title('Correlogram')\nplt.xlabel('Delay')\nplt.ylabel('Normalized Volts $^2$')\nplt.legend(loc='best')\nplt.show()",
"The correlogram recovered the central points of the repeating signal. This is because at these points, the signal has the greatest similarity with the single-heartbeat template. In other words, we're recovering the regions that share the greatest amount of similarity with our template.\nConvolution\nConvolution is a process in which the shape of one function is expressed in another. It is useful for adjusting features, or for representing real-world measurements when the response of the filter or instrument is known.\nAs an example, consider a one-dimensional image taken by an optical microscope (here, a sawtooth wave). The microscope's optics impose limitations that can be approximated by a Gaussian point spread function (PSF). The final image is the convolution of the original image and the PSF.",
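A minimal numeric sketch of this idea (toy arrays, assumed only for illustration): convolving an impulse with a kernel stamps the kernel's shape onto the output, which is exactly how a PSF spreads a point of light.

```python
import numpy as np

# A toy 'image' that is a single bright point, and a 3-sample blur kernel
img = np.array([0.0, 0.0, 4.0, 0.0, 0.0])
kernel = np.array([0.25, 0.5, 0.25])

# Full convolution spreads the point into the kernel's shape
out = np.convolve(img, kernel)
print(out)  # [0. 0. 1. 2. 1. 0. 0.]
```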
"# Signal and PSF\norig_sig = signal.sawtooth(2*np.pi*np.linspace(0,3,300))/2+0.5 \npsf = signal.gaussian(101, std=15) \n\n# Convolve\nconvolved = signal.convolve(orig_sig, psf) \n\n# Plot\nG = gridspec.GridSpec(3, 1) \naxis1 = plt.subplot(G[0, 0])\naxis1.plot(orig_sig)\naxis1.set_xlim(0, len(convolved))\naxis1.set_title('Original Pulse')\n\naxis2 = plt.subplot(G[1, 0])\naxis2.plot(psf)\naxis2.set_xlim(0, len(convolved))\naxis2.set_title('Point Spread Function')\n\naxis3 = plt.subplot(G[2, 0])\naxis3.plot(convolved)\naxis3.set_xlim(0, len(convolved))\naxis3.set_title('Convolved Signal')\n\nplt.tight_layout()\nplt.show()",
"Deconvolution\nDeconvolution can be thought of as removing the filter or instrument response. This is pretty common when reconstructing real signals if the response is known.\nIn the microscope example, this would be deconvolving the image with the known response of the instrument to a point source. If it is known how much the entire image is spread out, the original image can be recovered.",
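Under the hood, `scipy.signal.deconvolve` performs polynomial long division, so a quick sanity check (toy arrays, chosen only for illustration) is that deconvolving a convolution recovers the original exactly:

```python
import numpy as np
from scipy import signal

orig = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # 'true' signal
impulse_resp = np.array([1.0, 0.5])          # known instrument response

blurred = signal.convolve(orig, impulse_resp)            # forward blur
recovered, remainder = signal.deconvolve(blurred, impulse_resp)

print(np.allclose(recovered, orig))   # True: the original is recovered
print(np.allclose(remainder, 0))      # True: the division is exact
```

With real, noisy data the division is no longer exact, which is why practical deconvolution needs regularization; this sketch only shows the noiseless identity.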
"# Deconvolve\nrecovered, remainder = signal.deconvolve(convolved, psf)\n\n# Plot\nG = gridspec.GridSpec(3, 1) \naxis1 = plt.subplot(G[0, 0])\naxis1.plot(convolved)\naxis1.set_xlim(0, len(convolved))\naxis1.set_title('Convolved Signal')\n\naxis2 = plt.subplot(G[1, 0])\naxis2.plot(psf)\naxis2.set_xlim(0, len(convolved))\naxis2.set_title('Known Impulse Response')\n\naxis3 = plt.subplot(G[2, 0])\naxis3.plot(recovered)\naxis3.set_xlim(0, len(convolved))\naxis3.set_title('Recovered Pulse')\n\nplt.tight_layout()\nplt.show()",
"Filtering\nFilters receive a signal input and selectively reduce the amplitude of certain frequencies. Working with digital signals, they can broadly be divided into infinite impulse response (IIR) and finite impulse response (FIR) filters.\nAn IIR filter that receives an impulse (a signal of value 1 followed by many zeros) yields a (theoretically) infinite number of non-zero output values.\nThis is in contrast with the finite impulse response (FIR) filter, whose output becomes exactly zero a finite time after the impulse ends.\nIIR and FIR filters also differ in their filter coefficients (b, a), which represent the feed-forward and feedback coefficients, respectively. Feed-forward coefficients (b) are applied to current and past input (x) values, and feedback coefficients (a) are applied to past output (y) values - i.e.:\n$y(n) = b_1x(n) + b_2x(n-1) - a_1y(n-1) - a_2y(n-2) $\nwhere:\n$b_jx(n-j+1)$ are the feed-forward terms, built from input $x$ values\n$a_jy(n-j)$ are the feedback terms (notice the $y$!) \nGenerate Signal\nFirst, we generate a signal and approximate the power spectral density (PSD):",
"frq1 = 250 # Frequency 1(hz)\namp1 = 3 # Amplitude 1\nsr = 2000 # Sample rate\ndur = 1 # Duration (s) (increasing/decreasing this changes S/N)\n\n# Create timesteps, signal and noise\nX = np.linspace(0, dur-1/sr, int(dur*sr)) # Time\nY = amp1*np.sin(X*2*np.pi*frq1) # Signal\nY_noise = Y + 40*np.random.rand(len(X)) # Add noise\n\n# Approx PSD\nf, Pxx_den = signal.periodogram(Y_noise, sr) \n\n# Plot\nG = gridspec.GridSpec(2, 1) \naxis1 = plt.subplot(G[0, 0])\naxis1.plot(X, Y_noise)\naxis1.set_title('Plot of Signal with Noise')\n\naxis2 = plt.subplot(G[1, 0])\naxis2.plot(f, Pxx_den)\naxis2.set_title('Approx PSD')\n\nplt.tight_layout()\nplt.show()",
"Infinite Impulse Response (IIR) filters\nDigital filters inherently account for digital signal limitations, i.e. the sampling frequency. The Nyquist theorem asserts that we can't measure frequencies higher than 1/2 the sampling frequency, and digital filters operate on this principle.\nNext, we create the digital filter and plot its response, using both feed-forward (b) and feedback (a) coefficients:",
"f_order = 10 # Filter order\nf_pass = 'low' # Filter is low pass\nf_freq = 210.0 # Cutoff frequency (Hz)\nf_cutoff = f_freq/(sr/2) # Convert frequency into a fraction of the Nyquist frequency\n\n# Create the filter\nb, a = signal.iirfilter(f_order, f_cutoff, btype=f_pass, ftype='butter') \n\n# Test the filter\nw, h = signal.freqz(b, a, 1000) # Test response of filter across \n                                # frequencies (Use 'freqz' for digital)\nfreqz_hz = w * sr / (2 * np.pi) # Convert frequency to Hz\nresp_db = 20 * np.log10(abs(h)) # Convert response to decibels\n\n# Plot filter response\nplt.semilogx(freqz_hz, resp_db ) \nplt.title('Butterworth Lowpass Frequency Response')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('Amplitude (dB)')\nplt.axis((100, 500, -200, 10))\nplt.grid(which='both', axis='both')\nplt.show()",
"Applying the filter to our signal filters all higher frequencies:",
"# Apply filter to signal\nsig_filtered = signal.filtfilt(b, a, Y_noise) \n\n# Determine approx PSD\nf, Pxx_den_f = signal.periodogram(sig_filtered, sr) \n\n# Plot\nG = gridspec.GridSpec(2, 1) \naxis1 = plt.subplot(G[0, 0])\naxis1.plot(f, Pxx_den)\naxis1.set_title('Approx PSD of Original Signal')\n\naxis2 = plt.subplot(G[1, 0])\naxis2.plot(f, Pxx_den_f)\naxis2.set_title('Approx PSD of Filtered Signal')\n\nplt.tight_layout()\nplt.show()",
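To connect back to the difference equation from the start of this section, here is a sketch (filter coefficients chosen arbitrarily for illustration, not a designed filter) showing that a direct loop over the recurrence matches `scipy.signal.lfilter`, which evaluates the same relation:

```python
import numpy as np
from scipy import signal

b = [0.2, 0.3]    # feed-forward coefficients (applied to inputs x)
a = [1.0, -0.5]   # feedback coefficients (a[0] normalized to 1)
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # impulse input

# Direct evaluation of y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = b[0] * x[n]
    if n >= 1:
        y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]

print(y)                        # [0.2  0.4  0.2  0.1  0.05]
print(signal.lfilter(b, a, x))  # identical output
```

Because of the feedback term, this impulse response keeps halving forever rather than reaching zero - the 'infinite' in IIR.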
"Finite Impulse Response Filters\nA finite impulse response (FIR) filter can be designed by specifying the desired gain within chosen frequency bands (up to the Nyquist frequency, 1/2 of the sampling frequency), and can have an exactly linear phase response. Only feedforward coefficients (b) are used.",
"# Create FIR filter\ntaps = 150 # Analogous to IIR order - indication \n           # of memory, calculation, and \n           # 'filtering'\nfreqs = [0, 150, 300, 500, sr/2.] # FIR frequencies\nny_fract = np.array(freqs)/(sr/2) # Convert frequency to fractions of \n                                  # the Nyquist freq\ngains = [10.0, 1.0, 10.0, 0.0, 0.0] # Gains at each frequency\n\nb = signal.firwin2(taps, ny_fract, gains) # Make the filter (there are no \n                                          # 'a' coefficients)\nw, h = signal.freqz(b) # Check filter response\n\n# Test FIR filter\nfreqz_hz = w * sr / (2 * np.pi) # Convert frequency to Hz\n\n# Plot filter response\nplt.plot(freqz_hz, np.abs(h))\nplt.title('Digital filter frequency response')\nplt.ylabel('Amplitude Response')\nplt.xlabel('Frequency (Hz)')\nplt.grid()\nplt.show()",
"And the effect of the FIR digital filter:",
"# Apply FIR filter\nsig_filtered = signal.filtfilt(b, 1, Y_noise) \n\n# Determine approx PSD\nf, Pxx_den_f = signal.periodogram(sig_filtered, sr)\n\n# Plot\nG = gridspec.GridSpec(2, 1) \naxis1 = plt.subplot(G[0, 0])\naxis1.plot(f, Pxx_den)\naxis1.set_title('Approx PSD of Original Signal')\n\naxis2 = plt.subplot(G[1, 0])\naxis2.plot(f, Pxx_den_f)\naxis2.set_title('Approx PSD of Filtered Signal')\n\nplt.tight_layout()\nplt.show()",
"Interestingly, an FIR filter can be replicated by a convolution: filtering a signal with the FIR coefficients is the same as convolving the signal with them. A set of benchmarks exploring that property is available here: \nhttp://scipy-cookbook.readthedocs.io/items/ApplyFIRFilter.html"
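As a quick check of that equivalence (filter length, cutoff, and test signal chosen arbitrarily), filtering with an FIR filter's `b` coefficients gives the same samples as directly convolving the signal with those coefficients:

```python
import numpy as np
from scipy import signal

np.random.seed(0)
b = signal.firwin(31, 0.3)   # a 31-tap low-pass FIR filter (cutoff as a Nyquist fraction)
x = np.random.rand(200)      # arbitrary test signal

y_lfilter = signal.lfilter(b, 1, x)   # standard FIR filtering
y_conv = np.convolve(x, b)[:len(x)]   # direct convolution, trimmed to input length

print(np.allclose(y_lfilter, y_conv))  # True
```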
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gxxjjj/QuantEcon.py
|
solutions/lakemodel_solutions.ipynb
|
bsd-3-clause
|
[
"Lake Model Solutions\nExcercise 1\nWe begin by initializing the variables and import the necessary modules",
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom quantecon.models import LakeModel\n\nalpha = 0.012\nlamb = 0.2486\nb = 0.001808\nd = 0.0008333\ng = b-d\nN0 = 100.\ne0 = 0.92\nu0 = 1-e0\nT = 50",
"Now construct the class containing the initial conditions of the problem",
"LM0 = LakeModel(lamb,alpha,b,d)\nx0 = LM0.find_steady_state()  # initial conditions\n\nprint(\"Initial Steady State: \", x0)",
"New legislation changes $\\lambda$ to $0.2$",
"LM1 = LakeModel(0.2,alpha,b,d)\n\nxbar = LM1.find_steady_state() # new steady state\nX_path = np.vstack(LM1.simulate_stock_path(x0*N0,T)) # simulate stocks\nx_path = np.vstack(LM1.simulate_rate_path(x0,T)) # simulate rates\nprint(\"New Steady State: \", xbar)",
"Now plot stocks",
"plt.figure(figsize=[10,9])\nplt.subplot(3,1,1)\nplt.plot(X_path[:,0])\nplt.title(r'Employment')\nplt.subplot(3,1,2)\nplt.plot(X_path[:,1])\nplt.title(r'Unemployment')\nplt.subplot(3,1,3)\nplt.plot(X_path.sum(1))\nplt.title(r'Labor Force')",
"And how the rates evolve:",
"plt.figure(figsize=[10,6])\nplt.subplot(2,1,1)\nplt.plot(x_path[:,0])\nplt.hlines(xbar[0],0,T,'r','--')\nplt.title(r'Employment Rate')\nplt.subplot(2,1,2)\nplt.plot(x_path[:,1])\nplt.hlines(xbar[1],0,T,'r','--')\nplt.title(r'Unemployment Rate')",
"We see that it takes 20 periods for the economy to converge to its new steady-state levels.\nExercise 2\nThis next exercise has the economy experiencing a boom in entrances to the labor market and then later returning to the original levels. For 20 periods the economy has a new entry rate into the labor market:",
"bhat = 0.003\nT_hat = 20\nLM1 = LakeModel(lamb,alpha,bhat,d)",
"We simulate for 20 periods at the new parameters",
"X_path1 = np.vstack(LM1.simulate_stock_path(x0*N0,T_hat)) # simulate stocks\nx_path1 = np.vstack(LM1.simulate_rate_path(x0,T_hat)) # simulate rates",
"Now using the state after 20 periods for the new initial conditions we simulate for the additional 30 periods",
"X_path2 = np.vstack(LM0.simulate_stock_path(X_path1[-1,:2],T-T_hat+1)) # simulate stocks\nx_path2 = np.vstack(LM0.simulate_rate_path(x_path1[-1,:2],T-T_hat+1)) # simulate rates",
"Finally we combine these two paths and plot",
"x_path = np.vstack([x_path1,x_path2[1:]]) # note [1:] to avoid doubling period 20\nX_path = np.vstack([X_path1,X_path2[1:]]) # note [1:] to avoid doubling period 20\n\nplt.figure(figsize=[10,9])\nplt.subplot(3,1,1)\nplt.plot(X_path[:,0])\nplt.title(r'Employment')\nplt.subplot(3,1,2)\nplt.plot(X_path[:,1])\nplt.title(r'Unemployment')\nplt.subplot(3,1,3)\nplt.plot(X_path.sum(1))\nplt.title(r'Labor Force')",
"And the rates:",
"plt.figure(figsize=[10,6])\nplt.subplot(2,1,1)\nplt.plot(x_path[:,0])\nplt.hlines(x0[0],0,T,'r','--')\nplt.title(r'Employment Rate')\nplt.subplot(2,1,2)\nplt.plot(x_path[:,1])\nplt.hlines(x0[1],0,T,'r','--')\nplt.title(r'Unemployment Rate')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
whitead/numerical_stats
|
unit_8/lectures/lecture_2.ipynb
|
gpl-3.0
|
[
"Confidence Intervals\nUnit 8, Lecture 2\nNumerical Methods and Statistics\n\nReading\nBulmer: Pages 165-167\n\nProf. Andrew White, March 22 2020\nComputing Confidence Interval for error in population mean with $t$-Distribution\nWe know that the error in the population mean follows a $t$-distribution for small $N$. What if we want a confidence interval for where the true mean lies?\n$$P(a < \\mu < b) = 0.95$$\nOne simplification we can make right away is that we know $\\mu$ will be centered at $\\bar{x}$ and is symmetric:\n$$P( \\bar{x} - y < \\mu < \\bar{x} + y) = 0.95$$\nwhere $y$ is some number we need to find. We can further rewrite this as:\n$$P( - y < \\mu - \\bar{x} < + y) = 0.95$$\nwhich we know follows a $t$-distribution. Note that these are probabilities, which are integrals of the probability distribution.\nHere's a visual to understand what we're after. Note that I'm actually answering this problem to make the graph, so wait until later to try to understand the code!",
"import scipy.stats as ss\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n#make some points for plot\nN = 5\nx = np.linspace(-5,5, 1000)\nT = ss.t.ppf(0.975, df=N-1)\ny = ss.t.pdf(x, df=N-1)\nplt.plot(x,y) \nplt.fill_between(x, y, where= np.abs(x) < T)\nplt.text(0,np.max(y) / 3, 'Area=0.95', fontdict={'size':14}, horizontalalignment='center')\nplt.axvline(T, linestyle='--', color='orange')\nplt.axvline(-T, linestyle='--', color='orange')\nplt.xticks([-T, T], ['-y', 'y'])\nplt.yticks([])\nplt.ylabel(r'$p(\\mu - \\bar{x})$')\nplt.xlabel(r'$\\mu - \\bar{x}$')\nplt.show()",
"This is a very similar problem to the prediction intervals we had before. We know that $p(\\mu - \\bar{x})$ follows a $T(0, \\sigma_x /\\sqrt{N}, N - 1)$ distribution and we can use the same idea as $Z$-scores as we did for prediction intervals\n$$T(y) = \\frac{y - 0}{\\sigma_x / \\sqrt{N}}$$\nThe 'mean' our error in the population mean distribution is 0, because our error in population mean is always centered around 0.\nAfter taking 5 samples, we've found that the sample mean is 45 and the sample standard deviation, $\\sigma_x$ is 3. What is the 95% confidence interval for the true mean, $\\mu$?\nWe can write this more like this:\n$$P(- y < \\mu - \\bar{x} < +y) = 0.95$$\nOur interval will go from 2.5% to 97.5% (95% of probability), so let's find the $T$-value for $-\\infty$ to 2.5% and 97.5% to $\\infty$. Remember that the $T$-value depends on the degrees of freedom, N-1.",
"import scipy.stats\n\n#The lower T Value. YOU MUST GIVE THE SAMPLE NUMBER\nprint(scipy.stats.t.ppf(0.025, 4))\nprint(scipy.stats.t.ppf(0.975, 4))",
"$$T_{low} = \\frac{-y - 0}{\\sigma_x / \\sqrt{N}}$$\n$$T_{low} = -\\frac{y}{\\sigma_x / \\sqrt{N}}$$\n$$y = -T_{low}\\frac{\\sigma_x}{\\sqrt{N}}$$",
"print(-scipy.stats.t.ppf(0.025, 4) * 3 / np.sqrt(5))",
"The final answer is $P(45 - 3.72 < \\mu < 45 + 3.72) = 0.95$ or $45\\pm 3.72$\nComputing Confidence Interval for Error in Population Mean Steps\n\nIs the sample size greater than 25 OR do you know the true (population) standard deviation? If so, then use the standard normal ($Z$); otherwise use the $t$-distribution for your sample size ($T$).\nBuild your interval in probability. For example, a 95% double-sided interval goes from 2.5% to 97.5%\nFind the $Z$ or $T$ values that match your interval. For example, $Z_{low} = -1.96$ to $Z_{high} = 1.96$ is for a double-sided 95% confidence interval. Use the scipy.stats.t.ppf or scipy.stats.norm.ppf function to find them.\nUse the $y = Z \\sigma / \\sqrt{N}$ or $y = T \\sigma_x / \\sqrt{N}$ equation to find the interval values in your particular distribution, where $y$ is the interval width\nReport your answer either as an interval or with the $\\bar{x} \\pm y$ notation.\n\nShortcut Method For Normal\nHere's how to quickly do these steps in Python for sample size greater than 25",
"# DO NOT COPY, JUST GENERATING DATA FOR EXAMPLE\ndata = scipy.stats.norm.rvs(size=100, scale=15, loc=50)\n\n#Check if sample size is big enough.\n#This code will cause an error if it's not\nassert len(data) > 25 \n\nCI = 0.95\nsample_mean = np.mean(data)\n#The ddof argument specifies what the denominator should be (N - x),\n#where x is 1 in this case\nsample_var = np.var(data, ddof=1) \nZ = scipy.stats.norm.ppf((1 - CI) / 2)\ny = -Z * np.sqrt(sample_var / len(data))\n\nprint('{} +/- {}'.format(sample_mean, y))\n ",
"Is that low? Well, remember that the error in the mean scales as the standard deviation divided by the square root of the number of samples.\nShortcut Method For $t$-Distribution\nHere's how to quickly do these steps in Python for sample size less than 25",
"# DO NOT COPY, THIS JUST GENERATES DATA FOR EXAMPLE\ndata = scipy.stats.norm.rvs(size=4, scale=15, loc=50)\n\nCI = 0.95\nsample_mean = np.mean(data)\nsample_var = np.var(data, ddof=1) \nT = scipy.stats.t.ppf((1 - CI) / 2, df=len(data)-1)\ny = -T * np.sqrt(sample_var / len(data))\n\nprint('{} +/- {}'.format(sample_mean, y))\n ",
"Example of Prediction Intervals\nI know that the thickness of a metal slab is distributed according to ${\\cal N}(3.4, 0.75)$. Construct a prediction interval so that a randomly chosen metal slab will lie within it with 95% confidence.\n$$P( \\mu - y < x < \\mu + y) = 0.95$$\nThis is a prediction interval, so we're computing an interval on the distribution itself and we know everything about it.\n$$Z(\\mu + y) = \\frac{\\mu + y - \\mu}{\\sigma} \\Rightarrow y = \\sigma Z$$\n$$Z = 1.96$$\n$$x = \\mu \\pm 1.96 \\sigma = 3.4 \\pm 1.47$$\nA randomly chosen slab will have a thickness of $3.4 \\pm 1.47$ 95% of the time. \nExample 1 of error in population mean with known $\\sigma$\nI measure the thickness of 35 metal slabs and find that $\\bar{x}$, the sample mean, is 3.38. If I know that $\\sigma = 0.75$, construct a confidence interval that will contain the true mean $\\mu$ with 95% confidence.\nWe know that $p(\\bar{x} - \\mu)$ is normally distributed with ${\\cal N}(0, \\sigma / \\sqrt{N})$. We want to find\n$$ P(-y < \\bar{x} - \\mu < +y) = 0.95$$\n$$ Z(+y) = \\frac{y - 0}{\\sigma_e} = \\frac{y}{\\sigma / \\sqrt{N}} \\Rightarrow y = \\frac{\\sigma}{\\sqrt{N}} Z$$\n$$y = \\frac{0.75}{\\sqrt{35}}1.96 = 0.248$$\n$$ \\mu - \\bar{x} = 0 \\pm 0.248$$\n$$ \\mu = 3.38 \\pm 0.248$$\nAt a 95% confidence level, the true mean is $3.38 \\pm 0.248$. \nExample 2 of error in population mean with known $\\sigma$\nI measure the thickness of 11 metal slabs and find that $\\bar{x}$, the sample mean, is 5.64. If I know that $\\sigma = 1.2$, construct a confidence interval that will contain the true mean $\\mu$ with 99% confidence.\nAgain we know that $p(\\bar{x} - \\mu)$ is normally distributed with ${\\cal N}(0, \\sigma / \\sqrt{N})$. 
We want to find\n$$ P(-y < \\bar{x} - \\mu < +y) = 0.99$$\n$$ Z(+y) = \\frac{y - 0}{\\sigma_e} = \\frac{y}{\\sigma / \\sqrt{N}} \\Rightarrow y = \\frac{\\sigma}{\\sqrt{N}} Z$$\n$$y = \\frac{1.2}{\\sqrt{11}}2.575 = 0.932$$\n$$ \\mu - \\bar{x} = 0 \\pm 0.932$$\n$$ \\mu = 5.64 \\pm 0.932$$\nExample 1 of error in population mean with unknown $\\sigma$\nI measure the thickness of 6 metal slabs and find that $\\bar{x}$, the sample mean, is 3.65 and the sample standard deviation is $1.25$. Construct a confidence interval that will contain the true mean $\\mu$ with 90% confidence.\nWe know that $p(\\bar{x} - \\mu)$ follows a $t$-distribution because $N$ is small. It is distributed as $T(0, \\sigma_x / \\sqrt{N}, N-1)$, so\n$$T(+y) = \\frac{y - 0}{\\sigma_x / \\sqrt{N}} \\Rightarrow y = \\frac{\\sigma_x}{\\sqrt{N}} T$$\nWe want to find\n$$ P(-y < \\bar{x} - \\mu < +y) = 0.90$$",
"#Notice it is 95%, so the interval goes from\n#5% to 95% containing 90% of probability\nT = scipy.stats.t.ppf(0.95, df=6-1)\nprint(T)",
"$$ y = \\frac{1.25}{\\sqrt{6}} 2.015 = 1.028 $$\n$$\\mu = 3.65 \\pm 1.028$$\nThe population mean of the slabs is $3.65 \\pm 1.028$ with 90% confidence.\nExample 2 of error in population mean with unknown $\\sigma$\nI measure the thickness of 25 metal slabs and find that $\\bar{x}$, the sample mean, is 3.42 and the sample standard deviation is 0.85. Construct a confidence interval that will contain the true mean $\\mu$ with 90% confidence.\nUnlike the last example, $N$ is now large enough for the central limit theorem to apply, so $p(\\bar{x} - \\mu)$ follows a normal distribution. It is distributed as ${\\cal N}(0, \\sigma_x / \\sqrt{N})$. We want to find\n$$ P(-y < \\bar{x} - \\mu < +y) = 0.90$$\n$$Z(+y) = \\frac{y - 0}{\\sigma_x / \\sqrt{N}} \\Rightarrow y = \\frac{\\sigma_x}{\\sqrt{N}} Z$$\n$$ y = \\frac{0.85}{\\sqrt{25}} 1.645 = 0.28$$\n$$\\mu = 3.42 \\pm 0.28$$\nSingle-Sided Confidence Intervals\nSometimes you may want to bound the population mean on only one side. \nUpper Interval (Lower-Bound)\nAn upper interval covers the upper x% of the probability mass and can be defined as an interval from $(y, \\infty)$, where $y$ acts as a lower bound. A visual is shown below for an upper 90% confidence interval.",
"#make some points for plot\nN = 5\nx = np.linspace(-5,5, 1000)\nT = ss.t.ppf(0.10, df=N-1)\ny = ss.t.pdf(x, df=N-1)\nplt.plot(x,y) \nplt.fill_between(x, y, where= x > T)\nplt.text(0,np.max(y) / 3, 'Area=0.90', fontdict={'size':14}, horizontalalignment='center')\nplt.axvline(T, linestyle='--', color='orange')\nplt.xticks([T], ['lower-bound'])\nplt.yticks([])\nplt.ylabel(r'$p(\\mu - \\bar{x})$')\nplt.xlabel(r'$\\mu - \\bar{x}$')\nplt.show()",
"Lower Interval (Upper-Bound)\nA lower interval covers the lower x% of probability mass. It is defined with an upper bound like so: $(-\\infty, y)$. An example is below:",
"#make some points for plot\nN = 5\nx = np.linspace(-5,5, 1000)\nT = ss.t.ppf(0.90, df=N-1)\ny = ss.t.pdf(x, df=N-1)\nplt.plot(x,y) \nplt.fill_between(x, y, where= x < T)\nplt.text(0,np.max(y) / 3, 'Area=0.90', fontdict={'size':14}, horizontalalignment='center')\nplt.axvline(T, linestyle='--', color='orange')\nplt.xticks([T], ['upper-bound'])\nplt.yticks([])\nplt.ylabel(r'$p(\\mu - \\bar{x})$')\nplt.xlabel(r'$\\mu - \\bar{x}$')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adaptive-learning/flocs
|
analysis/demo/demo.ipynb
|
gpl-2.0
|
[
"Flocs Data Demo\nHow to Export Data\nBoth static and collected data can be exported with the command make export-data-to-csv.\nThis command creates CSV tables for all models registered in flocs/management/commands/export_data_to_csv.py\nin the directory exported-data/<datestamp>/. If there is a need to change which data are exported for a model, modify its export_class attribute (namedtuple) and to_export_tuple method.\nTables\nThere are the following CSV tables:\n- static data: concepts.csv, blocks.csv, instructions.csv, tasks.csv\n- collected data: students.csv, task-instances.csv, attempts.csv",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd",
"Concepts",
"concepts = pd.read_csv('data/concepts.csv')\nconcepts.head()",
"Blocks",
"blocks = pd.read_csv('data/blocks.csv')\nblocks.head()",
"Instructions",
"instructions = pd.read_csv('data/instructions.csv')\ninstructions.head()",
"Tasks",
"tasks = pd.read_csv('data/tasks.csv')\ntasks.head(3)",
"Students",
"students = pd.read_csv('data/students.csv')\nstudents.head()",
"Task Instances",
"task_instances = pd.read_csv('data/task-instances.csv')\ntask_instances.head()",
"Attempts",
"attempts = pd.read_csv('data/attempts.csv')\nattempts.head()",
"Analysis Example\nProblem: Find median of a task solving time for each programming concept.",
"programming_concepts = concepts[concepts.type == 'programming']\nprogramming_concepts\n\nsolved_instances = task_instances[task_instances.solved]\ninstances_concepts = pd.merge(solved_instances, tasks, on='task_id')[['time_spent', 'concepts_ids']]\ninstances_concepts.head()\n\n# unpack concepts IDs\nfrom ast import literal_eval\nconcepts_lists = [literal_eval(c) for c in instances_concepts.concepts_ids]\ntimes = instances_concepts.time_spent\nconcepts_times = pd.DataFrame([(times[i], concept_id)\n                               for i, concepts_list in enumerate(concepts_lists)\n                               for concept_id in concepts_list],\n                              columns=['time', 'concept_id'])\nconcepts_times.head()\n# (If you know how to do this better (ideally a function to unpack any column), let me know.)\n\n# filter programming concepts\nprogramming_concepts_times = pd.merge(concepts_times, programming_concepts)\nprogramming_concepts_times.head()\n\n# calculate median for each programming concept\nmedians = programming_concepts_times.groupby(['concept_id', 'name']).median()\nmedians\n\n# plot\nprogramming_concepts_times['concept'] = programming_concepts_times['name'].apply(lambda x: x.split('_')[-1].lower())\nprogramming_concepts_times[['concept', 'time']].boxplot(by='concept')"
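In reply to the comment above asking for a better way to unpack a list column: newer pandas (0.25+, so treat its availability as an assumption) provides `DataFrame.explode`, which emits one row per list element. A toy sketch mimicking `instances_concepts`:

```python
import pandas as pd
from ast import literal_eval

# Hypothetical miniature of instances_concepts
df = pd.DataFrame({'time_spent': [10, 20],
                   'concepts_ids': ['[1, 2]', '[2]']})

# Parse the stringified lists, then explode to one row per concept
df['concepts_ids'] = df['concepts_ids'].apply(literal_eval)
tidy = (df.explode('concepts_ids')
          .rename(columns={'concepts_ids': 'concept_id'}))
print(tidy)  # rows: (10, 1), (10, 2), (20, 2)
```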
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Kaggle/learntools
|
notebooks/python/raw/ex_6.ipynb
|
apache-2.0
|
[
"You are almost done with the course. Nice job!\nWe have a couple more interesting problems for you before you go. \nAs always, run the setup code below before working on the questions.",
"from learntools.core import binder; binder.bind(globals())\nfrom learntools.python.ex6 import *\nprint('Setup complete.')",
"Let's start with a string lightning round to warm up. What are the lengths of the strings below?\nFor each of the five strings below, predict what len() would return when passed that string. Use the variable length to record your answer, then run the cell to check whether you were right. \n0a.",
"a = \"\"\nlength = ____\nq0.a.check()",
"0b.",
"b = \"it's ok\"\nlength = ____\nq0.b.check()",
"0c.",
"c = 'it\\'s ok'\nlength = ____\nq0.c.check()",
"0d.",
"d = \"\"\"hey\"\"\"\nlength = ____\nq0.d.check()",
"0e.",
"e = '\\n'\nlength = ____\nq0.e.check()",
"1.\nThere is a saying that \"Data scientists spend 80% of their time cleaning data, and 20% of their time complaining about cleaning data.\" Let's see if you can write a function to help clean US zip code data. Given a string, it should return whether or not that string represents a valid zip code. For our purposes, a valid zip code is any string consisting of exactly 5 digits.\nHINT: str has a method that will be useful here. Use help(str) to review a list of string methods.",
"def is_valid_zip(zip_code):\n \"\"\"Returns whether the input string is a valid (5 digit) zip code\n \"\"\"\n pass\n\n# Check your answer\nq1.check()\n\n#%%RM_IF(PROD)%%\ndef is_valid_zip(zip_code):\n \"\"\"Returns whether the input string is a valid (5 digit) zip code\n \"\"\"\n return len(zip_code) == 5 and zip_code.isdigit()\n\nq1.assert_check_passed()\n\n#%%RM_IF(PROD)%%\ndef is_valid_zip(zip_code):\n \"\"\"Returns whether the input string is a valid (5 digit) zip code\n \"\"\"\n return len(zip_code) == 5\n\nq1.assert_check_failed()\n\n#_COMMENT_IF(PROD)_\nq1.hint()\n#_COMMENT_IF(PROD)_\nq1.solution()",
"2.\nA researcher has gathered thousands of news articles. But she wants to focus her attention on articles including a specific word. Complete the function below to help her filter her list of articles.\nYour function should meet the following criteria:\n\nDo not include documents where the keyword string shows up only as a part of a larger word. For example, if she were looking for the keyword “closed”, you would not include the string “enclosed.” \nShe does not want you to distinguish upper case from lower case letters. So the phrase “Closed the case.” would be included when the keyword is “closed”\nDo not let periods or commas affect what is matched. “It is closed.” would be included when the keyword is “closed”. But you can assume there are no other types of punctuation.",
"def word_search(doc_list, keyword):\n \"\"\"\n Takes a list of documents (each document is a string) and a keyword. \n Returns list of the index values into the original list for all documents \n containing the keyword.\n\n Example:\n doc_list = [\"The Learn Python Challenge Casino.\", \"They bought a car\", \"Casinoville\"]\n >>> word_search(doc_list, 'casino')\n >>> [0]\n \"\"\"\n pass\n\n# Check your answer\nq2.check()\n\n#_COMMENT_IF(PROD)_\nq2.hint()\n#_COMMENT_IF(PROD)_\nq2.solution()",
"3.\nNow the researcher wants to supply multiple keywords to search for. Complete the function below to help her.\n(You're encouraged to use the word_search function you just wrote when implementing this function. Reusing code in this way makes your programs more robust and readable - and it saves typing!)",
"def multi_word_search(doc_list, keywords):\n \"\"\"\n Takes list of documents (each document is a string) and a list of keywords. \n Returns a dictionary where each key is a keyword, and the value is a list of indices\n (from doc_list) of the documents containing that keyword\n\n >>> doc_list = [\"The Learn Python Challenge Casino.\", \"They bought a car and a casino\", \"Casinoville\"]\n >>> keywords = ['casino', 'they']\n >>> multi_word_search(doc_list, keywords)\n {'casino': [0, 1], 'they': [1]}\n \"\"\"\n pass\n\n# Check your answer\nq3.check()\n\n#_COMMENT_IF(PROD)_\nq3.solution()",
"Keep Going\nYou've learned a lot. But even the best programmers rely heavily on \"libraries\" of code from other programmers. You'll learn about that in the last lesson."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
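The `word_search` / `multi_word_search` exercises in the notebook above are fully specified by their docstrings and the stated criteria (whole-word match, case-insensitive, ignore trailing periods and commas). One possible solution sketch, not the course's official answer:

```python
def word_search(doc_list, keyword):
    """Return indices of documents containing keyword as a whole word
    (case-insensitive, ignoring trailing periods and commas)."""
    indices = []
    for i, doc in enumerate(doc_list):
        # Normalize each token before comparing whole words.
        tokens = [tok.rstrip('.,').lower() for tok in doc.split()]
        if keyword.lower() in tokens:
            indices.append(i)
    return indices

def multi_word_search(doc_list, keywords):
    """Reuse word_search for each keyword, as the exercise suggests."""
    return {kw: word_search(doc_list, kw) for kw in keywords}

docs = ["The Learn Python Challenge Casino.", "They bought a car and a casino", "Casinoville"]
print(word_search(docs, 'casino'))                  # [0, 1]
print(multi_word_search(docs, ['casino', 'they']))  # {'casino': [0, 1], 'they': [1]}
```

Note that "Casinoville" is correctly excluded, since the token "casinoville" is not equal to "casino".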
AtmaMani/pyChakras
|
islr/verifying_clt_in_regression.ipynb
|
mit
|
[
"Verifying Central Limit Theorem in regression",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')",
"Synthesize the dataset\nCreate 1000 random integers between 0, 100 for X and create y such that\n$$\ny = \\beta_{0} + \\beta_{1}X + \\epsilon\n$$\nwhere\n$$\n\\beta_{0} = 30 \\ and \\ \\beta_{1} = 1.8 \\ and \\ \\epsilon \\ = \\ standard \\ normal \\ error\n$$",
"rand_1kx = np.random.randint(0,100,1000)\nx_mean = np.mean(rand_1kx)\nx_sd = np.std(rand_1kx)\nx_mean\n\npop_intercept = 30\npop_slope = 1.8\nerror_boost = 10\npop_error = np.random.standard_normal(size = rand_1kx.size) * error_boost\n# I added an error booster since without it, the correlation was too high.\n\ny = pop_intercept + pop_slope*rand_1kx + pop_error\ny_mean = np.mean(y)\ny_sd = np.std(y)\ny_mean",
"Make a scatter plot of X and y variables.",
"sns.jointplot(rand_1kx, y)",
"X and y follow a roughly uniform distribution, but the error $\\epsilon$ is generated from a standard normal distribution with a boosting factor. Let us plot its histogram to verify the distribution",
"sns.distplot(pop_error)",
"Predict using population\nLet us predict the coefficients and intercept when using the whole dataset. We will compare this approach with CLT approach of breaking into multiple subsets and averaging the coefficients and intercepts\nUsing whole population",
"from sklearn.linear_model import LinearRegression\nlm = LinearRegression()\n\nX_train_full = rand_1kx.reshape(-1,1)\ny_train_full = y.reshape(-1,1)\n\ny_train_full.shape\n\nlm.fit(X_train_full, y_train_full)\n\n#print the linear model built\npredicted_pop_slope = lm.coef_[0][0]\npredicted_pop_intercept = lm.intercept_[0]\n\nprint(\"y = \" + str(predicted_pop_slope) + \"*X\" + \" + \" + str(predicted_pop_intercept))",
"Prediction with 66% of data",
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(rand_1kx, y, test_size=0.33)\nprint(X_train.size)\n\nfrom sklearn.linear_model import LinearRegression\nlm = LinearRegression()\n\nX_train = X_train.reshape(-1,1)\nX_test = X_test.reshape(-1,1)\ny_train = y_train.reshape(-1,1)\ny_test = y_test.reshape(-1,1)\n\ny_train.shape\n\nlm.fit(X_train, y_train)\n\n#print the linear model built\npredicted_subset_slope = lm.coef_[0][0]\npredicted_subset_intercept = lm.intercept_[0]\n\nprint(\"y = \" + str(predicted_subset_slope) + \"*X\" \n + \" + \" + str(predicted_subset_intercept))",
"Perform predictions and plot the charts",
"y_predicted = lm.predict(X_test)\nresiduals = y_test - y_predicted",
"Fitted vs Actual scatter",
"jax = sns.jointplot(y_test, y_predicted)\njax.set_axis_labels(xlabel='Y', ylabel='Predicted Y')\n\ndax = sns.distplot(residuals)\ndax.set_title('Distribution of residuals')\n\njax = sns.jointplot(y_predicted, residuals)\njax.set_axis_labels(xlabel='Predicted Y', ylabel='Residuals')\n\njax = sns.jointplot(y_test, residuals)\njax.set_axis_labels(xlabel='Y', ylabel='Residuals')",
"Predict using multiple samples",
"pop_df = pd.DataFrame(data={'x':rand_1kx, 'y':y})\npop_df.head()\n\npop_df.shape",
"Select 50 samples of size 50 and perform regression on each",
"sample_slopes = []\nsample_intercepts = []\n\nfor i in range(0,50):\n # perform a choice on dataframe index\n sample_index = np.random.choice(pop_df.index, size=50)\n \n # select the subset using that index\n sample_df = pop_df.iloc[sample_index]\n \n # convert to numpy and reshape the matrix for lm.fit\n sample_x = np.array(sample_df['x']).reshape(-1,1)\n sample_y = np.array(sample_df['y']).reshape(-1,1)\n \n lm.fit(X=sample_x, y=sample_y)\n \n sample_slopes.append(lm.coef_[0][0])\n sample_intercepts.append(lm.intercept_[0])",
"Plot the distribution of sample slopes and intercepts",
"mean_sample_slope = np.mean(sample_slopes)\nmean_sample_intercept = np.mean(sample_intercepts)\n\nfig, ax = plt.subplots(1,2, figsize=(15,6))\n\n# plot sample slopes\nsns.distplot(sample_slopes, ax=ax[0])\nax[0].set_title('Distribution of sample slopes. Mean: ' \n + str(round(mean_sample_slope, 2)))\nax[0].axvline(mean_sample_slope, color='black')\n\n# plot sample slopes\nsns.distplot(sample_intercepts, ax=ax[1])\nax[1].set_title('Distribution of sample intercepts. Mean: ' \n + str(round(mean_sample_intercept,2)))\nax[1].axvline(mean_sample_intercept, color='black')",
"Conclusion\nHere we compare the coefficients and intercepts obtained by different methods to see how CLT adds up.",
"print(\"Predicting using population\")\nprint(\"----------------------------\")\nprint(\"Error in intercept: {}\".format(pop_intercept - predicted_pop_intercept))\nprint(\"Error in slope: {}\".format(pop_slope - predicted_pop_slope))\n\nprint(\"\\n\\nPredicting using subset\")\nprint(\"----------------------------\")\nprint(\"Error in intercept: {}\".format(pop_intercept - predicted_subset_intercept))\nprint(\"Error in slope: {}\".format(pop_slope - predicted_subset_slope))\n\nprint(\"\\n\\nPredicting using a number of smaller samples\")\nprint(\"------------------------------------------------\")\nprint(\"Error in intercept: {}\".format(pop_intercept - mean_sample_intercept))\nprint(\"Error in slope: {}\".format(pop_slope - mean_sample_slope))",
"As we can see, the error is quite small in all 3 cases, especially for the slope. Prediction by averaging over a number of smaller samples gives us a slope much closer to the population value.\nFor the intercept, the least error came from the prediction using the subset, which is interesting, as prediction using the whole population yielded a poorer intercept!\nIn general, for really large datasets that cannot be held in system memory, we can apply the Central Limit Theorem, estimating the slope and intercept by averaging over a number of smaller samples."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
STREAM3/visisc
|
docs/visISC_query_dialog_example.ipynb
|
bsd-3-clause
|
[
"visISC Example: Interactive Query Dialog with Visualization\nIn this example, we will show how you can use the GUI component EventSelectionDialog together with EventSelectionQuery for letting the user select which events to visualize. We start by creating a data set similar to the previous example on <a href=\"visISC_hierachical_frequency_data_example.ipynb\">Visualizing Anomalous Frequency Data with Hierarchical Data</a>, but one that also includes source classes (for instance, machine types). So, the data set becomes quite large, and thereby we need to be able to select a subset of the data that we are most interested in comparing.",
"import pyisc;\nimport visisc;\nimport numpy as np\nimport datetime\nfrom scipy.stats import poisson, norm, multivariate_normal\n%matplotlib wx\n%gui wx\n\nn_sources = 10\nn_source_classes = 10\nn_events = 100\nnum_of_normal_days = 200\nnum_of_anomalous_days = 10\ndata = None\ndays_list = [num_of_normal_days, num_of_anomalous_days]\ndates = []\nfor state in [0,1]: # normal, anomalous data\n num_of_days = days_list[state]\n for k in range(n_source_classes):\n for i in range(n_sources):\n data0 = None\n for j in range(n_events):\n if state == 0:# Normal\n po_dist = poisson(int((10+2*(n_source_classes-k))*(float(j)/n_events/2+0.75))) # from 0.75 to 1.25\n else: # anomalous\n po_dist = poisson(int((20+2*(n_source_classes-k))*(float(j)/n_events+0.5))) # from 0.5 to 1.5\n\n tmp = po_dist.rvs(num_of_days)\n if data0 is None:\n data0 = tmp\n else:\n data0 = np.c_[data0,tmp]\n\n tmp = np.c_[\n [k*n_sources+i] * (num_of_days), # Sources\n [k] * (num_of_days), # Source classes\n [ # Timestamp\n datetime.date(2015,02,24) + datetime.timedelta(d) \n for d in np.array(range(num_of_days)) + (0 if state==0 else num_of_normal_days)\n ], \n [1] * (num_of_days), # Measurement period\n data0, # Event frequency counts\n\n ]\n\n if data is None:\n data = tmp\n else:\n data = np.r_[\n tmp,\n data\n ]\n\n# Column index into the data\nsource_column = 0\nclass_column = 1\ndate_column = 2\nperiod_column = 3\nfirst_event_column = 4\nlast_event_column = first_event_column + n_events",
"Likewise, as before, we need to create an event path function and a severity level function.",
"event_names = [\"event_%i\"%i for i in range(n_events)]\n\ndef event_path(x): # Returns a list of strings with 3 elements\n return [\"Type_%i\"%(x/N) for N in [50, 10]]+[event_names[x]]\n\ndef severity_level(x): # returns 3 different severity levels: 0, 1, 2\n return x-(x/3)*3",
"Next, we need to make a subclass or an instance of visisc.EventSelectionQuery. This class uses the <a href=\"http://docs.enthought.com/traits\">Traits</a> library, which is also used by <a href=\"http://docs.enthought.com/mayavi/mayavi/\">Mayavi</a>, the 3D visualization library that we use for visualizing the data. In the initialization of an instance, we need to set four Trait lists:\nlist_of_source_ids, list_of_source_classes, list_of_event_names, and list_of_event_severity_levels. In addition to that, we need to set period_start_date and period_end_date. In the current version, we also need to programmatically set selected_list_of_source_ids. We also need to implement the execute_query method, similarly to what is shown below. The execute_query method can access the user's selection from selected_list_of_source_ids, selected_list_of_source_classes, selected_list_of_event_names, and selected_list_of_event_severity_levels.",
"class MySelectionQuery(visisc.EventSelectionQuery):\n def __init__(self):\n self.list_of_source_ids = [i for i in range(n_sources*n_source_classes)]\n # Below: a list of pairs with id and name, where the name is shown in the GUI while the id is put into the selection. \n self.list_of_source_classes = [(i, \"class_%i\"%i) for i in range(n_source_classes)] \n self.list_of_event_names = event_names\n # Below: a list of pairs with id and name, where the name is shown in the GUI while the id is put into the selection. \n self.list_of_event_severity_levels = [(i, \"Level %i\"%i) for i in range(3)] \n self.period_start_date = data.T[date_column].min()\n self.period_end_date = data.T[date_column].max()\n \n def execute_query(self):\n query = self\n query.selected_list_of_source_ids = query.list_of_source_ids\n\n data_query = np.array(\n [\n data[i] for i in range(len(data)) if \n data[i][source_column] in query.selected_list_of_source_ids and\n data[i][class_column] in query.selected_list_of_source_classes and\n data[i][date_column] >= query.period_start_date and\n data[i][date_column] <= query.period_end_date\n ]\n )\n\n event_columns = [first_event_column+event_names.index(e) for e in query.selected_list_of_event_names\n if severity_level(first_event_column+event_names.index(e)) in query.selected_list_of_event_severity_levels]\n\n model = visisc.EventDataModel.hierarchical_model(\n event_columns=event_columns,\n get_event_path = event_path,\n get_severity_level = severity_level,\n num_of_severity_levels=3\n )\n\n data_object = model.data_object(\n data_query,\n source_column = source_column,\n class_column = class_column,\n period_column=period_column,\n date_column=date_column\n )\n\n anomaly_detector = model.fit_anomaly_detector(data_object,poisson_onesided=True)\n\n vis = visisc.EventVisualization(model, 13.8,\n start_day=query.period_end_date,# yes confusing, start day in the EventVisualization is backward looking\n precompute_cache=True) # Precompute all anomaly calculations in order to speed up visualization.",
"Given that we have the query class, we can now create and open a query selection dialog where it is possible to customize the labels for source classes and the severity levels.",
"query = MySelectionQuery()\n\ndialog = visisc.EventSelectionDialog(\n query,\n source_class_label=\"Select Machine Types\",\n severity_level_label=\"Select Event Severity Types\"\n)",
"For opening the window, we then make the following call. However, similarly to previous visualization examples, we have to run it outside the Jupyter notebook by calling ipython directly.\ndialog.configure_traits()",
"!ipython --matplotlib=wx --gui=wx -i visISC_query_dialog_example.py",
"The result from running the above statement will look similar to what is shown below.<br/>\n<img width=\"75%\" src=\"query_selection_dialog_1.png\"/><br/>\nBy selecting severity level 0 and class 0, and then pressing the run query button, we will see a window similar to the previous examples:<br/>\n<img width=\"75%\" src=\"query_selection_dialog_2.png\"/><br/>\nIn addition, we can also select which events we want to visualize by typing search-engine-like queries using:<br/>\nAllowed characters: alphanumeric, '_' and '.'<br/>\nSpace indicates OR-separated queries<br/>\n'?' = matches any character<br/>\n'*' = matches any number of characters<br/>\n'^' = matches beginning of event name<br/>\n'\\$' = matches end of event name<br/>\n<img width=\"75%\" src=\"query_selection_dialog_3.png\"/><br/>\nIn the example above, the query \"1\\$ 2\\$\" matches all event names ending with 1 or 2."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
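The wildcard query syntax described at the end of the visisc notebook above (space = OR, '?' = any character, '*' = any run of characters, '^'/'$' = anchors) can be illustrated by translating each term into a regular expression. This is an illustrative sketch with a hypothetical `match_events` helper, not visisc's actual matcher:

```python
import re

def match_events(query, event_names):
    """Return the sorted set of event names matched by a space-separated
    (OR-combined) list of wildcard terms, as described in the notebook."""
    matched = set()
    for term in query.split():
        # Translate each wildcard character; everything else is literal.
        pattern = ''
        for ch in term:
            if ch == '?':
                pattern += '.'       # any single character
            elif ch == '*':
                pattern += '.*'      # any run of characters
            elif ch in '^$':
                pattern += ch        # anchors pass through
            else:
                pattern += re.escape(ch)
        # Unanchored terms may match anywhere in the name, so use search().
        regex = re.compile(pattern)
        matched.update(name for name in event_names if regex.search(name))
    return sorted(matched)

events = ['event_1', 'event_2', 'event_11', 'event_12']
print(match_events('1$ 2$', events))  # ['event_1', 'event_11', 'event_12', 'event_2']
```

This reproduces the notebook's example: the query "1$ 2$" matches all event names ending with 1 or 2.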
dchud/warehousing-course
|
lectures/week-07-20151027-more-sql.ipynb
|
cc0-1.0
|
[
"More SQL\nLet's grab a fairly large dataset, load it into a database, and work with it.\nGetting your data\nCapital Bikeshare trip data is a fun source of transactional data. We can work with one quarter's data to show a few key concepts.\nThe following few cells should feel like old hat to you by now.",
"!wget https://www.capitalbikeshare.com/assets/files/trip-history-data/2013-Q1-Trips-History-Data.zip",
"It's in a zip format, so unzip it:",
"!unzip 2013-Q1-Trips-History-Data.zip",
"How big is it?",
"!wc 2013-Q1-Trips-History-Data.csv",
"What are its columns?",
"!csvcut -n 2013-Q1-Trips-History-Data.csv",
"Okay, let's have a look.",
"!head -5 2013-Q1-Trips-History-Data.csv | csvlook",
"Ah, that's kinda wordy. Let's cut out that first column, which we can compute for ourselves later.",
"!head 2013-Q1-Trips-History-Data.csv | csvcut -C1 | csvlook",
"That's a little bit cleaner, and the rest of the data should be useful. Let's clean up the data by removing that column and renaming the headers so they're a little easier to query.",
"!csvcut -C1 2013-Q1-Trips-History-Data.csv | \\\n header -r \"start_date,end_date,start_station,end_station,bike_id,sub_type\" \\\n > bikeshare.csv",
"Make sure you haven't lost anything!",
"!wc bikeshare.csv",
"Prepping and loading data into the database\nAlright, then, let's get loading.",
"%load_ext sql",
"NOTE: See a bunch of ShimWarnings with a pink background? That's normal. It's just a heads-up about ongoing changes to IPython/Jupyter code. You can keep going.\nFirst, we create a database in mysql. Note: you can do the same thing on the command line by issuing the CREATE DATABASE command part before the pipe within the mysql shell, which you get to with the second part after the pipe. Here we'll pipe the one into the other so it reads well in the notebook.",
"!echo \"CREATE DATABASE bikedb\" | mysql --user=mysqluser --password=mysqlpass ",
"Here's how we connect the notebook up to the mysql database using a username and password. Remember that this shorthand version is possible thanks to the excellent ipython-sql Jupyter extension that we're using, otherwise you'd have to establish the connection, get a cursor, etc., like you've done explicitly in python in your other class. \nNot that there's anything wrong with that.",
"%sql mysql://mysqluser:mysqlpass@localhost/bikedb",
"Very easy, no?\nFirst, clean up if we're not running this for the first time.",
"%%sql\nDROP TABLE IF EXISTS bikeshare;",
"Next, create a table schema using DDL.",
"%%sql\nCREATE TABLE bikeshare (\n start_date DATETIME,\n end_date DATETIME,\n start_station VARCHAR(100),\n end_station VARCHAR(100),\n bike_id CHAR(7),\n sub_type CHAR(10)\n )",
"Just to verify it worked:",
"%%sql\nSELECT COUNT(*)\nFROM bikeshare",
"It worked! We just don't have any data in there yet.\nNow we load the data using LOAD DATA INFILE. You can do pretty much the same thing from the bash shell using mysqlimport and a bunch of options. It'll read better here in the notebook with the options spelled out.\nDocs for LOAD DATA INFILE are available at https://dev.mysql.com/doc/refman/5.1/en/load-data.html.\nNote: this assumes you've placed your bikeshare file in the directory /vagrant.\nNote also: I had to look up the mysql date formatting docs to get this date format conversion correct. It took me a few trials and errors before I got it right. This is an extremely common thing to have to do if you ever spend time wrangling data - every system handles dates in its own way.",
"%%sql\nLOAD DATA INFILE '/vagrant/bikeshare.csv'\nREPLACE\nINTO TABLE bikeshare\nFIELDS TERMINATED BY ','\n OPTIONALLY ENCLOSED BY '\"'\nIGNORE 1 LINES\n(@start_date, @end_date, start_station, end_station, bike_id, sub_type) \nSET start_date = STR_TO_DATE(@start_date, '%c/%e/%Y %k:%i'),\n end_date = STR_TO_DATE(@end_date, '%c/%e/%Y %k:%i')",
"Note: if the above command fails for you with a \"file not found\" error, please read these notes about apparmor. Follow that advice, and add a line like it shows, e.g.:\n/vagrant/* r\n\n...to the file, or whatever path you have your data on, reload apparmor, and try again. I had to do this, and it worked perfectly after I made that change.\nExploring your data\nNow that we've loaded our data, or we think we have, let's just verify it. Should be the same row count as what csvkit and wc gave us.",
"%%sql\nSELECT COUNT(*) \nFROM bikeshare",
"Looks good! Let's look at the data a little.",
"%%sql\nSELECT *\nFROM bikeshare\nLIMIT 5",
"How does MySQL construct this query, or more specifically, what's its execution plan? We can find out with EXPLAIN.\nFor more about how to read MySQL 5.5's query plan, see https://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html.",
"%%sql\nEXPLAIN SELECT COUNT(*)\nFROM bikeshare\nLIMIT 5",
"This says \"using no keys, we're going to just scan roughly 395,390 rows, sans indexes, to answer this query.\"",
"%%sql\nSELECT MAX(start_date)\nFROM bikeshare\n\n%%sql\nEXPLAIN SELECT MAX(start_date)\nFROM bikeshare",
"Pretty much the same thing. You can't get the max without looking at all of the values if there is no index.",
"%%sql\nSELECT COUNT(*)\nFROM bikeshare\nWHERE start_station LIKE \"%dupont%\"\n\n%%sql\nEXPLAIN SELECT COUNT(*)\nFROM bikeshare\nWHERE start_station LIKE \"%dupont%\"",
"Now we see \"using where\" under \"extra\", so we know there's a filter operation, but that's about the only change. What if we add more things to filter on?",
"%%sql\nEXPLAIN SELECT start_station, end_station, COUNT(*)\nFROM bikeshare\nWHERE start_station LIKE \"%dupont%\"\nAND end_station LIKE \"%21st%\"\nAND start_date LIKE \"2013-02-14%\"\nGROUP BY start_station, end_station\nORDER BY start_station, end_station",
"Ah, some more info - it looks like it's using a temporary relation to store intermediate results, perhaps for the GROUP BY, then a sort to handle ORDER BY.\nStill no indexes, though. Let's change that.",
"%%sql\nCREATE INDEX idx_start_station ON bikeshare (start_station)\n\n%%sql\nEXPLAIN SELECT start_station, end_station, COUNT(*)\nFROM bikeshare\nWHERE start_station LIKE \"21st%\"\nAND start_date LIKE \"2013-02-14%\"\nGROUP BY start_station, end_station\nORDER BY start_station, end_station",
"I changed the query a little bit to use the index, do you see the difference? It found search keys in the index, and the row count went down by an order of magnitude. That's the power of indexes.\nIt helps even on simple queries like this.",
"%%sql\nEXPLAIN SELECT DISTINCT start_station\nFROM bikeshare\nORDER BY start_station",
"What's that 201 value for rows? Maybe the actual count of distinct values. We can test that:",
"%%sql\nSELECT COUNT(*) \nFROM (\n SELECT DISTINCT start_station \n FROM bikeshare\n ) made_up_subquery_alias_name",
"There you go, that's exactly the answer.\nHow about that MAX() query we tried a little while back?",
"%%sql\nSELECT MAX(start_date)\nFROM bikeshare\n\n%%sql\nEXPLAIN SELECT MAX(start_date)\nFROM bikeshare",
"Let's create another index on start_date to see what the effect on the query plan will be.",
"%%sql\nCREATE INDEX idx_start_date ON bikeshare (start_date)\n\n%%sql\nSELECT MAX(start_date)\nFROM bikeshare",
"Same result, but...",
"%%sql\nEXPLAIN SELECT MAX(start_date)\nFROM bikeshare",
"That's new! In this case it doesn't have to look at any rows, it can just look at one end of the index. We've optimized away the need to even look at the table.\nLet's go back to COUNT() and try a few more things before we move on.",
"%%sql\nEXPLAIN SELECT COUNT(*)\nFROM bikeshare \n\n%%sql\nEXPLAIN SELECT COUNT(start_date)\nFROM bikeshare \n\n%%sql\nEXPLAIN SELECT COUNT(end_date)\nFROM bikeshare ",
"Do you see what happened there?\nNormalizing attributes\nLet's look at a few tasks you might need to perform if you were normalizing this dataset. Remember that in normalization, we reduce redundancy with the goal of consistency.\nWhat's redundant? Well, the station names for one.",
"%%sql\nSELECT COUNT(DISTINCT start_station)\nFROM bikeshare\n\n%%sql\nSELECT COUNT(DISTINCT end_station)\nFROM bikeshare",
"Hmm, they're different. Let's put them together.",
"%%sql\nSELECT COUNT(DISTINCT station) FROM\n(\n SELECT start_station AS station FROM bikeshare\n UNION\n SELECT end_station AS station FROM bikeshare\n) a",
"We'll create a table to hold the names of stations. Each station name should be represented once, and we'll assign a primary key to each in the form of a unique integer.",
"%%sql\nCREATE TABLE station (\n id SMALLINT NOT NULL AUTO_INCREMENT,\n name VARCHAR(100),\n PRIMARY KEY (id)\n)\n\n%%sql\nSELECT COUNT(*) \nFROM station",
"Looks good. Now we can load the data with an INSERT that draws from our previous query. We can skip specifying the id because MySQL will do that for us.\nNote: every database handles this issue in its own way. This is a nice convenience in MySQL; other database backends require more work.",
"%%sql\nINSERT INTO station (name) \nSELECT DISTINCT station AS name\nFROM\n(\n SELECT start_station AS station FROM bikeshare\n UNION\n SELECT end_station AS station FROM bikeshare\n) a\n\n%%sql\nSELECT * \nFROM station\nLIMIT 10",
"It worked. Now we can update the bikeshare table to add columns for station identifiers.",
"%%sql\nALTER TABLE bikeshare\nADD COLUMN start_station_id SMALLINT\nAFTER start_station",
"Looks good. But what exactly just happened?",
"%%sql\nDESCRIBE bikeshare\n\n%%sql\nSELECT * \nFROM bikeshare\nLIMIT 5",
"What just happened? Why are all the start_station_id values None?\nLet's fill in those values with our new identifiers from the station table.",
"%%sql\nUPDATE bikeshare\nINNER JOIN station\n ON bikeshare.start_station = station.name\nSET bikeshare.start_station_id = station.id\n\n%%sql\nSELECT * FROM bikeshare LIMIT 5\n\n%%sql\nSELECT * FROM station WHERE id = 161",
"Great, now we can drop start_station from bikeshare and save a lot of space.",
"%%sql\nALTER TABLE bikeshare\nDROP COLUMN start_station\n\n%%sql\nDESCRIBE bikeshare\n\n%%sql\nSELECT * FROM bikeshare LIMIT 5",
"Worked!\nAnd we can repeat the process for end_station.",
"%%sql\nALTER TABLE bikeshare\nADD COLUMN end_station_id SMALLINT\nAFTER end_station\n\n%%sql\nUPDATE bikeshare\nINNER JOIN station\n ON bikeshare.end_station = station.name\nSET bikeshare.end_station_id = station.id\n\n%%sql\nALTER TABLE bikeshare\nDROP COLUMN end_station\n\n%%sql\nSELECT * FROM bikeshare LIMIT 5",
"A lot leaner, right?\nJOINs and indexes\nNow let's look at queries that return station names, thus requiring a JOIN across the two tables. Keep in mind our two table schema.",
"%%sql\nDESCRIBE station\n\n%%sql\nDESCRIBE bikeshare",
"Let's try a basic query that looks for the most busy station pairs.",
"%%sql\nSELECT COUNT(*) AS c, start_station_id, end_station_id\nFROM bikeshare\nGROUP BY start_station_id, end_station_id\nORDER BY c DESC\nLIMIT 5",
"Now let's liven it up by joining to station and including station names. We'll need to join twice, using two aliases.\nWorked just fine. Let's look under the hood, though.",
"%%sql\nSELECT COUNT(*) AS c, station_1.name AS start_station, station_2.name AS end_station\nFROM bikeshare, station AS station_1, station AS station_2\nWHERE station_1.id = bikeshare.start_station_id\n AND station_2.id = bikeshare.end_station_id\nGROUP BY bikeshare.start_station_id, bikeshare.end_station_id\nORDER BY c DESC\nLIMIT 5",
"Looks good, and it's in my neighborhood. :)\nLet's look at the query plan for all this:",
"%%sql\nEXPLAIN SELECT COUNT(*) AS c, station_1.name AS start_station, station_2.name AS end_station\nFROM station AS station_1, station AS station_2, bikeshare\nWHERE bikeshare.start_station_id = station_1.id\n AND bikeshare.end_station_id = station_2.id\nGROUP BY bikeshare.start_station_id, bikeshare.end_station_id\nORDER BY c DESC\nLIMIT 5",
"Not bad, but it's doing a full table scan on bikeshare. Let's see if some indexes would help with the two joins.",
"%%sql\nCREATE INDEX idx_start_station_id ON bikeshare (start_station_id)\n\n%%sql\nCREATE INDEX idx_end_station_id ON bikeshare (end_station_id)\n\n%%sql\nEXPLAIN SELECT COUNT(*) AS c, station_1.name AS s1_name, station_2.name AS s2_name\nFROM bikeshare, station AS station_1, station AS station_2\nWHERE station_1.id = bikeshare.start_station_id\n AND station_2.id = bikeshare.end_station_id\nGROUP BY bikeshare.start_station_id, bikeshare.end_station_id\nORDER BY c DESC\nLIMIT 5",
"Well, it's hard to say how much better this will perform without a lot more data. A COUNT operation simply needs to be able to count everything, if the level of granularity it's counting doesn't already have an easy lookup like we saw before. Sometimes you just don't feel the pain of scale until you hit a scaling threshold that varies with the shape of your data.\nBut - see the possible_keys in the first row? That means the optimizer sees the indexes present and will attempt to use those to at least organize the query a little better than it would be able to do without them.\nLet's try one more thing - we can create an index on multiple columns that matches our query more precisely. It's inefficient to look up one column, then another; after all, we're looking for combinations of both. A multiple-column index can precompute that.",
"%%sql\nCREATE INDEX idx_stations ON bikeshare (start_station_id, end_station_id)\n\n%%sql\nEXPLAIN SELECT COUNT(*) AS c, station_1.name AS s1_name, station_2.name AS s2_name\nFROM bikeshare, station AS station_1, station AS station_2\nWHERE station_1.id = bikeshare.start_station_id\n AND station_2.id = bikeshare.end_station_id\nGROUP BY bikeshare.start_station_id, bikeshare.end_station_id\nORDER BY c DESC\nLIMIT 5",
"Finally, looks like a big difference!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
aflaxman/siaman16-va-minitutorial
|
2-tutorial-notebook-solutions/4-va_csmf.ipynb
|
gpl-3.0
|
[
"We won't work through this notebook\nWe won't have time. But I thought I'd include it, in case you want to see exactly how I implement my population-level quality metric.",
"import numpy as np, pandas as pd",
"Let's put the CSMF Accuracy calculation right at the top",
"def measure_prediction_quality(csmf_pred, y_test):\n \"\"\"Calculate population-level prediction quality (CSMF Accuracy)\n \n Parameters\n ----------\n csmf_pred : pd.Series, predicted distribution of causes\n y_test : array-like, labels for test dataset\n \n Results\n -------\n csmf_acc : float\n \"\"\"\n \n csmf_true = pd.Series(y_test).value_counts() / float(len(y_test))\n csmf_acc = 1 - np.sum(np.absolute(csmf_pred - csmf_true)) / (2*(1 - csmf_true.min()))\n # cccsmf_acc = (csmf_acc - 0.632) / (1 - 0.632)\n\n return csmf_acc",
"How can I test this?",
"csmf_pred = pd.Series({'cause_1': .5, 'cause_2': .5})\ny_test = ['cause_1', 'cause_2']\nmeasure_prediction_quality(csmf_pred, y_test)\n\ncsmf_pred = pd.Series({'cause_1': 0., 'cause_2': 1.})\ny_test = ['cause_1']*1000 + ['cause_2']\nmeasure_prediction_quality(csmf_pred, y_test)",
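As an extra sanity check, the two extremes of the metric can be verified by hand (a self-contained restatement of the formula above, not the notebook's exact function): a perfect prediction scores 1, and putting all mass on the rarest true cause scores 0.

```python
import numpy as np
import pandas as pd

# CSMF accuracy = 1 - sum_j |pred_j - true_j| / (2 * (1 - min_j true_j))
def csmf_accuracy(csmf_pred, y_test):
    csmf_true = pd.Series(y_test).value_counts() / float(len(y_test))
    return 1 - np.sum(np.absolute(csmf_pred - csmf_true)) / (2 * (1 - csmf_true.min()))

# A perfect prediction scores 1.0 ...
perfect = csmf_accuracy(pd.Series({'c1': 0.5, 'c2': 0.5}), ['c1', 'c2'])

# ... and putting all mass on the rarest cause scores 0.0, the worst case.
worst = csmf_accuracy(pd.Series({'c1': 0.0, 'c2': 1.0}),
                      ['c1', 'c1', 'c1', 'c2'])
print(perfect, worst)
```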
"Things we don't have time for\nAn approach to really do the cross-validation out of sample:",
"val = {}\nmodule = 'Adult'\nval[module] = pd.read_csv('../3-data/phmrc_cleaned.csv')\n\ndef get_data(module):\n X = np.array(val[module].filter(regex='(^s[0-9]+|age|sex)').fillna(0))\n y = np.array(val[module].gs_text34)\n site = np.array(val[module].site)\n \n return X, y, site\n\nX, y, site = get_data(module)\nX.shape\n\ndef my_resample(X, y, N2, csmf_new):\n \"\"\"\"Randomly resample X and y so that resampled cause distribution follows\n csmf_new and there are N2 samples total\n \n Parameters\n ----------\n X : array-like, feature vectors\n y : array-like, corresponding labels\n N2 : int, number of samples in resampled results\n csmf_new : pd.Series, distribution of resampled data\n \n Results\n -------\n X_new : array-like, resampled feature vectors\n y_new : array-like, corresponding resampled labels\n \"\"\"\n \n N, I = X.shape\n assert len(y) == N, 'X and y must have same length' \n\n causes = csmf_new.index\n J, = causes.shape # trailing comma for sneaky numpy reasons\n \n # generate count of examples for each cause according to csmf_new\n cnt_new = np.random.multinomial(N2, csmf_new)\n \n # replace y_new with original values\n y_new = []\n for cnt, cause in zip(cnt_new, causes):\n for n_j in range(cnt):\n y_new.append(cause)\n y_new = np.array(y_new)\n \n # resample rows of X appropriately\n X_new = np.zeros((len(y_new), I))\n for j in causes:\n new_rows, = np.where(y_new == j) # trailing comma for sneaky numpy reasons\n candidate_rows, = np.where(y == j) # trailing comma for sneaky numpy reasons\n \n assert len(candidate_rows) > 0, 'must have examples of each resampled cause'\n old_rows = np.random.choice(candidate_rows, size=len(new_rows), replace=True)\n X_new[new_rows,] = X[old_rows,]\n return X_new, y_new\n\ndef random_allocation(X_train, y_train):\n \"\"\" make predictions by random allocation\"\"\"\n clf = sklearn.base.BaseEstimator()\n def my_predict(X_test):\n N = len(X_test)\n J = float(len(np.unique(y_train)))\n \n y_pred = np.ones((N, J)) / 
J\n csmf_pred = pd.Series(y_pred.sum(axis=0),\n index=np.unique(y_train)) / N\n return csmf_pred\n clf.my_predict = my_predict\n return clf\n\ndef my_key(module, clf):\n return '{}-{}'.format(module, clf)\n\nimport sklearn.model_selection\n\nresults = []\ndef measure_csmf_acc(my_fit_predictor, replicates=10):\n \"\"\" my_fit_predictor : function that takes X,y returns clf object with my_predict method\n clf.my_predict takes X_test, return csmf_pred\n \n Results\n -------\n stores calculation in results dict,\n returns calc for adults\n \"\"\"\n X, y, site = get_data(module)\n acc = []\n\n np.random.seed(12345) # set seed for reproducibility\n cv = sklearn.model_selection.StratifiedShuffleSplit(n_iter=replicates, test_size=0.25)\n for train_index, test_index in cv.split(X, y):\n # make train test split\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]\n\n # resample train set for equal class weights\n J = len(np.unique(y))\n csmf_flat = pd.Series(np.ones(J)/J, index=np.unique(y))\n X_train, y_train = my_resample(X_train, y_train, J*100, csmf_flat)\n\n clf = my_fit_predictor(X_train, y_train)\n\n # resample test set to have uninformative cause distribution\n csmf_rand = pd.Series(np.random.dirichlet(np.ones(J)), index=np.unique(y))\n X_test_resamp, y_test_resamp = my_resample(X_test, y_test, J*100, csmf_rand)\n\n # make predictions\n csmf_pred = clf.my_predict(X_test_resamp)\n\n # test predictions\n csmf_acc = measure_prediction_quality(csmf_pred, y_test_resamp)\n\n results.append({'csmf_acc':csmf_acc, 'key':my_key(module, clf)})\n\n df = pd.DataFrame(results)\n g = df.groupby('key')\n return g.csmf_acc.describe().unstack()\n\nbaseline_csmf_acc = measure_csmf_acc(random_allocation)\nbaseline_csmf_acc\n\nimport sklearn.naive_bayes\n\ndef nb_pr_allocation(X_train, y_train):\n clf = sklearn.naive_bayes.BernoulliNB()\n clf.fit(X_train, y_train)\n \n def my_predict(X_test):\n y_pred = clf.predict_proba(X_test)\n csmf_pred 
= pd.Series(y_pred.sum(axis=0), index=clf.classes_) / float(len(y_pred))\n return csmf_pred\n clf.my_predict = my_predict\n return clf\n \nmeasure_csmf_acc(nb_pr_allocation)"
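The core trick in my_resample — drawing per-cause counts from a multinomial so the resampled labels follow a target CSMF — can be illustrated on its own (toy labels, not the PHMRC dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Skewed source labels: 90% cause 'a', 10% cause 'b'.
y = np.array(['a'] * 90 + ['b'] * 10)

# Target distribution we want the resampled data to follow.
target = pd.Series({'a': 0.5, 'b': 0.5})

# Draw how many examples of each cause to emit, then emit them.
counts = rng.multinomial(1000, target.values)
y_new = np.repeat(target.index.to_numpy(), counts)

resampled = pd.Series(y_new).value_counts(normalize=True)
print(resampled.round(2).to_dict())
```

With 1000 draws the resampled fractions land very close to the 50/50 target even though the source data was 90/10.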
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
RobinKa/tfga
|
notebooks/em.ipynb
|
mit
|
[
"%load_ext autoreload\n%autoreload 2\n\nimport tensorflow as tf\n# Make tensorflow not take over the entire GPU memory\nfor gpu in tf.config.experimental.list_physical_devices('GPU'):\n tf.config.experimental.set_memory_growth(gpu, True)\n\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom IPython.display import Video\nfrom tfga import GeometricAlgebra\n\nnp.set_printoptions(precision=2, suppress=True)",
"Introduction\nClassical electromagnetism is most often described using Maxwell's equations. Instead, we can also describe it using a Lagrange density and an action which is the spacetime integral over the Lagrange density.\nThe field is represented by a 4-vector in the spacetime algebra where the first component is the electric potential and the last three components are the magnetic vector potential. Such a 4-vector is given at every point in spacetime. The Lagrangian density at a spacetime point $X = (t, x, y, z)$ for such a 4-vector field $A(X)$ when the speed of light $c = 1$ and without external sources is given by the following equation:\n$\\mathcal{L}(A, X) = \\langle \\nabla_X A(X) * \\widetilde{\\nabla_X A}(X) \\rangle_0$\nThe principle of stationary action then states that the classical solution of the field is achieved when the action\n$S(A) = \\int_{X}{\\mathcal{L}(A, X) dX}$\ndoes not change anymore, that is $\\delta S(A) = 0$.\nGoal\nBelow we will obtain an entire space-time field configuration $A(X)$ given only some boundary conditions and a function for the action given $A$. We will then use TensorFlow's optimizer to\nfind a field configuration that makes the action stationary.\nCreate the spacetime algebra\nHere we initialize a tfga.GeometricAlgebra instance with bases $e_0=e_t, e_1=e_x, e_2=e_y, e_3=e_z$ and corresponding metric $[-1, 1, 1, 1]$. We will use this\nwhen calculating the action later as we need the geometric product and reversion operations.",
"ga = GeometricAlgebra([-1, 1, 1, 1])",
"Calculate the action\nNow we create a function which returns the action $S$ given a field configuration $A(X)$ on a discretized spacetime lattice of size $[N, N, N, N]$. We use the following boundary conditions for $A(X)$:\n$A_{t=-1} = 0, A_{t=N} = 0$\n$A_{x=-1} = 10 \\sin(4 \\pi t / N) e_0, A_{x=N} = -5 e_0$\n$A_{y=-1} = 0, A_{y=N} = 0$\n$A_{z=-1} = 0, A_{z=N} = 0$\nAs a reminder, $e_0$ is the electric potential part of the 4-vector, so we have a periodic sine electric potential with amplitude 10 that changes over time (two periods in total) at the lower x boundary, and a constant negative electric potential of -5 at the upper x boundary.",
"def get_action(config_a_variable):\n # config_a_variable will be of shape [N, N, N, N, 4].\n # The last axis' values are the e0, e1, e2, e3 parts of the multivector.\n\n # Finite differences in each direction using padding.\n # Example with zero padding (ie. zeros on the boundary):\n # 1 2 3\n # 1 2 3 0 padded right\n # - 0 1 2 3 padded left\n # = 1 1 1-3 padded right - padded left\n # As spacing we use 1 so we don't need to divide by anything here.\n \n # Also use the boundary conditions in the padded values here.\n # This gets a bit verbose because of the pad syntax esepcially since we only want to pad the\n # first index of the last axis with non-zeros.\n\n # Create time-varying boundary condition. Start with sine of shape [N].\n # Then reshape to [N, 1, N, N, 1] which we can concatenate with the\n # original values.\n pad_values = 10.0 * tf.sin(2.0 * tf.range(grid_size[0], dtype=tf.float32) * 2.0 * tf.constant(np.pi, dtype=tf.float32) / grid_size[0])\n pad_values = tf.expand_dims(pad_values, axis=-1)\n pad_values = tf.expand_dims(pad_values, axis=-1)\n pad_values = tf.expand_dims(pad_values, axis=-1)\n pad_values = tf.expand_dims(pad_values, axis=-1)\n pad_values = tf.tile(pad_values, [1, 1, grid_size[0], grid_size[0], 1])\n\n config_left_pad_x = tf.concat([\n tf.concat([pad_values, config_a_variable[..., :1]], axis=1),\n tf.pad(config_a_variable[..., 1:], [[0, 0], [1, 0], [0, 0], [0, 0], [0, 0]]),\n ], axis=-1)\n\n config_right_pad_x = tf.concat([\n tf.pad(config_a_variable[..., :1], [[0, 0], [0, 1], [0, 0], [0, 0], [0, 0]], constant_values=-5),\n tf.pad(config_a_variable[..., 1:], [[0, 0], [0, 1], [0, 0], [0, 0], [0, 0]]),\n ], axis=-1)\n\n config_left_pad_y = tf.concat([\n tf.pad(config_a_variable[..., :1], [[0, 0], [0, 0], [1, 0], [0, 0], [0, 0]]),\n tf.pad(config_a_variable[..., 1:], [[0, 0], [0, 0], [1, 0], [0, 0], [0, 0]]),\n ], axis=-1)\n\n config_dt_a = (\n tf.pad(config_a_variable, [[0, 1], [0, 0], [0, 0], [0, 0], [0, 0]]) -\n 
tf.pad(config_a_variable, [[1, 0], [0, 0], [0, 0], [0, 0], [0, 0]])\n )\n config_dx_a = config_right_pad_x - config_left_pad_x\n config_dy_a = (\n tf.pad(config_a_variable, [[0, 0], [0, 0], [0, 1], [0, 0], [0, 0]]) -\n config_left_pad_y\n )\n config_dz_a = (\n tf.pad(config_a_variable, [[0, 0], [0, 0], [0, 0], [0, 1], [0, 0]]) -\n tf.pad(config_a_variable, [[0, 0], [0, 0], [0, 0], [1, 0], [0, 0]])\n )\n\n # Convert to multivectors so we can use GA ops we need in the Lagrangian:\n # the geometric product and reversion.\n config_dt_a = ga.from_tensor_with_kind(config_dt_a, \"vector\")\n config_dx_a = ga.from_tensor_with_kind(config_dx_a, \"vector\")\n config_dy_a = ga.from_tensor_with_kind(config_dy_a, \"vector\")\n config_dz_a = ga.from_tensor_with_kind(config_dz_a, \"vector\")\n\n # Sum all the derivatives according to the action / Lagrangian and return a single scalar value\n return (\n tf.reduce_sum(ga.geom_prod(config_dt_a, ga.reversion(config_dt_a))[..., 0]) +\n tf.reduce_sum(ga.geom_prod(config_dx_a, ga.reversion(config_dx_a))[..., 0]) +\n tf.reduce_sum(ga.geom_prod(config_dy_a, ga.reversion(config_dy_a))[..., 0]) +\n tf.reduce_sum(ga.geom_prod(config_dz_a, ga.reversion(config_dz_a))[..., 0])\n )",
"Initialize the 4-vector field variable randomly",
"grid_size = [16, 16, 16, 16]\n\nconfig_a_variable = tf.Variable(tf.random.normal([*grid_size, 4], seed=0))",
"Optimize the 4-vector field variable to make the action stationary\nIn order to make the action stationary we use a loss function that is minimal when the action is stationary (i.e. the gradient of the action with respect to the field configuration is 0).\nWe use the mean-squared error to create such a loss function, although other functions such as the absolute value would work too.\nWe use TensorFlow's Adam optimizer to find a field configuration which minimizes the loss.",
"optimizer = tf.optimizers.Adam(0.01)\n\n@tf.function\ndef train_step(config_a_variable):\n # Principle of stationary action:\n # Minimize the distance of gradient of the action to zero with respect to our field\n with tf.GradientTape() as tape_outer:\n tape_outer.watch(config_a_variable)\n with tf.GradientTape() as tape:\n tape.watch(config_a_variable)\n loss = get_action(config_a_variable)\n\n grads = tape.gradient(loss, [config_a_variable])\n grads_mse = tf.reduce_mean(tf.square(grads))\n grads2 = tape_outer.gradient(grads_mse, [config_a_variable])\n optimizer.apply_gradients(zip(grads2, [config_a_variable]))\n\nfor i in range(3000):\n train_step(config_a_variable)",
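The minimize-the-squared-gradient trick can be illustrated dependency-free in one dimension (a hypothetical toy problem, not the notebook's spacetime field): for the discrete free action $S(u) = \sum_i (u_{i+1} - u_i)^2$ with fixed endpoints, the stationary configuration is the straight line between the boundary values.

```python
import numpy as np

# Toy 1-D analogue of the stationary-action optimization above.
# Action: S(u) = sum_i (u[i+1] - u[i])^2 with endpoints u[0], u[-1] fixed.
n = 11
rng = np.random.default_rng(0)
u = np.zeros(n)
u[0], u[-1] = 0.0, 1.0
u[1:-1] = rng.normal(size=n - 2)  # random initial field, as in the notebook

def grad_action(u):
    """dS/du at interior points (endpoints are boundary conditions)."""
    return 2.0 * (2.0 * u[1:-1] - u[:-2] - u[2:])

# Hessian of S w.r.t. the interior points: a constant tridiagonal matrix.
H = (4.0 * np.eye(n - 2)
     - 2.0 * np.eye(n - 2, k=1)
     - 2.0 * np.eye(n - 2, k=-1))

# Minimize loss = |dS/du|^2 by gradient descent; since dS/du is linear in u
# with (symmetric) Jacobian H, the gradient of the loss is 2 * H @ g.
# This mirrors the nested-GradientTape construction above.
for _ in range(20000):
    g = grad_action(u)
    u[1:-1] -= 0.01 * 2.0 * H @ g

print(np.round(u, 3))  # approaches the straight line np.linspace(0, 1, n)
```

The learning rate 0.01 sits below the stability bound for this quadratic problem; the solution converges to the linear interpolant, the discrete solution of the 1-D Laplace equation.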
"Extract and visualize the optimized electric field\nNow we can take the result, that is, $A$ at every spacetime point, and visualize it. Obviously we can't visualize a 4-dimensional 4-vector field. However, we can look at\nindividual 2D slices of the electric potential field, which is the first component of the 4-vector, where the other two coordinates take on a specific value.",
"# Plot electric potential slices. We are not plotting the boundaries here.\n\nplt.figure(figsize=(7, 7))\nplt.imshow(config_a_variable[..., 0, 0, 0])\nplt.colorbar()\nplt.title(\"Electric potential in TX plane Y=0, Z=0\")\nplt.xlabel(\"X\")\nplt.ylabel(\"T\")\nplt.show()\n\nplt.figure(figsize=(7, 7))\nplt.imshow(config_a_variable[..., 0, 5, :, :, 0])\nplt.colorbar()\nplt.title(\"Electric potential in YZ plane T=0, X=5\")\nplt.xlabel(\"Z\")\nplt.ylabel(\"Y\")\nplt.show()\n\nplt.figure(figsize=(7, 7))\nplt.imshow(config_a_variable[..., 2, :, :, 0, 0])\nplt.colorbar()\nplt.title(\"Electric potential in XY plane T=2, Z=0\")\nplt.xlabel(\"Y\")\nplt.ylabel(\"X\")\nplt.show()",
"In the first figure we can see the potential close to X=0 (where we applied the sine boundary condition) changing over time.\nThe second figure shows the YZ slice at T=0, X=5 where the potential is almost constant but we still have a radial symmetry.\nThe last figure shows the XY slice at T=2, Z=0, where the potential takes its maximum value around X=0 if we look at the first figure. We can also see that on the upper X boundary we have a negative potential, as we applied a constant negative electric potential as the boundary condition there.\nWe can also visualize the XY slices over time in a video. For this I saved the XY slices at all times and converted them to a webm using ffmpeg. Here we can see the electric potential close to X=0 changing over time as we expected from the boundary condition. (Direct link: em_output/electric_potential.webm)",
"Video(\"./em_output/electric_potential.webm\")",
"Next we can look at the electric vector field corresponding to the electric potential: $E = -\\nabla_{x,y,z} \\langle A(X) \\rangle_{e0} - \\nabla_t \\langle A(X) \\rangle_{e1,e2,e3}$",
"def draw_electric_field_xy(t, z):\n # Extract XY slice of electric potential [T=t, X, Y, Z=0, 0]\n electric_potential = config_a_variable[t, :, :, z, 0]\n magnetic_potential_t = config_a_variable[t, :, :, z, 1:]\n magnetic_potential_t2 = config_a_variable[t+1, :, :, z, 1:]\n\n # The electric field can be obtained from the 4-vector potential:\n # E = - (d/dx, d/dy, d/dz) <A>_e0 - d/dt <A>_e1,e2,e3\n # We can use finite differences again to approximate the derivatives.\n # We also need to get rid of the last element of the respective other axis,\n # since we couldn't calculate the last finite difference as that would\n # require using the boundary condition (which is possible, but would require extra code).\n\n # Start with -(d/dx, d/dy, d/dz) <A>_e0\n ex = -(electric_potential[1:, :-1] - electric_potential[:-1, :-1])\n ey = -(electric_potential[:-1, 1:] - electric_potential[:-1, :-1])\n \n # Calculate d/dt <A>_e1,e2,e3 and add it to the previous calculation\n dt_mag_a = -(magnetic_potential_t2[-1, :-1] - magnetic_potential_t[:-1, :-1])\n\n ex += dt_mag_a[..., 0]\n ey += dt_mag_a[..., 1]\n\n ys, xs = np.meshgrid(np.arange(ex.shape[0]), np.arange(ex.shape[1]))\n\n plt.figure(figsize=(10, 10))\n plt.quiver(ys, xs, ey, ex, scale=10, scale_units=\"inches\")\n plt.xlabel(\"Y\")\n plt.ylabel(\"X\")\n plt.title(\"Electric field XY at T=%d, Z=%d\" % (t, z))\n\ndraw_electric_field_xy(t=2, z=0)\nplt.show()",
"And again I made a video showing all the time slices. (Direct link: em_output/electric_field.webm)",
"Video(\"./em_output/electric_field.webm\")"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
anilcs13m/MachineLearning_Mastering
|
songrecommender/.ipynb_checkpoints/Song recommender-checkpoint.ipynb
|
gpl-2.0
|
[
"Building a song recommender\nFire up GraphLab Create",
"import graphlab",
"Load music data",
"song_data = graphlab.SFrame('song_data.gl/')",
"Explore data\nMusic data shows how many times a user listened to a song, as well as the details of the song.",
"song_data.head(5)",
"Showing the most popular songs in the dataset",
"graphlab.canvas.set_target('ipynb')\n\nsong_data['song'].show()\n\nlen(song_data)",
"Count number of unique users in the dataset",
"users = song_data['user_id'].unique()\n\nlen(users)",
"Create a song recommender",
"train_data,test_data = song_data.random_split(.8,seed=0)",
"Simple popularity-based recommender",
"popularity_model = graphlab.popularity_recommender.create(train_data,\n user_id='user_id',\n item_id='song')",
"Use the popularity model to make some predictions\nA popularity model makes the same prediction for all users, so it provides no personalization.",
"popularity_model.recommend(users=[users[0]])\n\npopularity_model.recommend(users=[users[1]])",
"Build a song recommender with personalization\nWe now create a model that allows us to make personalized recommendations to each user.",
"personalized_model = graphlab.item_similarity_recommender.create(train_data,\n user_id='user_id',\n item_id='song')",
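Conceptually, an item-similarity recommender scores unseen items by their similarity to the items a user has already interacted with. A minimal sketch using cosine similarity on a toy play-count matrix (not GraphLab's actual algorithm, which adds normalization and regularization):

```python
import numpy as np

# Toy user-by-song play-count matrix (rows = users, columns = songs s0..s3).
counts = np.array([
    [3, 0, 1, 0],
    [2, 0, 0, 1],
    [0, 4, 0, 2],
    [0, 3, 1, 2],
], dtype=float)

# Cosine similarity between song columns.
norms = np.linalg.norm(counts, axis=0)
sim = (counts.T @ counts) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)  # a song shouldn't recommend itself

# Score songs for user 0 by similarity to the songs they played,
# then mask out songs they have already listened to.
user = counts[0]
scores = sim @ user
scores[user > 0] = -np.inf

best = int(np.argmax(scores))
print(best)  # song s3
```

Because the scores depend on each user's own play history, different users get different recommendations, unlike the popularity model.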
"Applying the personalized model to make song recommendations\nAs you can see, different users get different recommendations now.",
"personalized_model.recommend(users=[users[0]])\n\npersonalized_model.recommend(users=[users[1]])",
"We can also apply the model to find similar songs to any song in the dataset",
"personalized_model.get_similar_items(['With Or Without You - U2'])\n\npersonalized_model.get_similar_items(['Chan Chan (Live) - Buena Vista Social Club'])",
"Quantitative comparison between the models\nWe now formally compare the popularity and the personalized models using precision-recall curves.",
"if graphlab.version[:3] >= \"1.6\":\n model_performance = graphlab.compare(test_data, [popularity_model, personalized_model], user_sample=0.05)\n graphlab.show_comparison(model_performance,[popularity_model, personalized_model])\nelse:\n %matplotlib inline\n model_performance = graphlab.recommender.util.compare_models(test_data, [popularity_model, personalized_model], user_sample=.05)",
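Under the hood, the precision-recall comparison is built from per-user precision@k and recall@k values. A hand-computable sketch for a single user (toy song IDs):

```python
# Minimal sketch of precision@k and recall@k for one user's top-5
# recommendations (toy IDs; the comparison above aggregates these
# over a sample of users and a range of cutoffs k).
recommended = ['s1', 's2', 's3', 's4', 's5']  # model's top-5 list
relevant = {'s2', 's5', 's9'}                 # songs the user actually played

hits = [s for s in recommended if s in relevant]
precision_at_5 = len(hits) / len(recommended)  # fraction of recs that hit
recall_at_5 = len(hits) / len(relevant)        # fraction of plays recovered
print(precision_at_5, round(recall_at_5, 3))   # 0.4 0.667
```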
"The curve shows that the personalized model provides much better performance."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/training-data-analyst
|
blogs/explainable_ai/AI_Explanations_on_CAIP.ipynb
|
apache-2.0
|
[
"AI Explanations: Explaining a tabular data model\nOverview\nIn this tutorial we will perform the following steps:\n\nBuild and train a Keras model.\nExport the Keras model as a TF 1 SavedModel and deploy the model on Cloud AI Platform.\nCompute explanations for our model's predictions using Explainable AI on Cloud AI Platform.\n\nDataset\nThe dataset used for this tutorial was created from a BigQuery Public Dataset: NYC 2018 Yellow Taxi data.\nObjective\nThe goal is to train a model using the Keras API that predicts how much a customer will have to pay (fares + tolls) for a taxi ride given the pickup location, dropoff location, the day of the week, and the hour of the day.\nThis tutorial focuses more on deploying the model to AI Explanations than on the design of the model itself. We will be using preprocessed data for this lab. If you wish to know more about the data and how it was preprocessed please see this notebook.\nBefore you begin\nThis notebook was written with running in Google Colaboratory in mind. The notebook will run on Cloud AI Platform Notebooks or in your local environment if the proper packages are installed.\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type and select GPU for Hardware Accelerator.\nAuthenticate your GCP account\nIf you are using AI Platform Notebooks, your environment is already\nauthenticated. You should skip this step.\nBe sure to change the PROJECT_ID below to your project before running the cell!",
"import os\n\nPROJECT_ID = \"michaelabel-gcp-training\" \nos.environ[\"PROJECT_ID\"] = PROJECT_ID",
"If you are using Colab, run the cell below and follow the instructions\nwhen prompted to authenticate your account via oAuth. Ignore the error message related to tensorflow-serving-api.",
"import sys\nimport warnings\nwarnings.filterwarnings('ignore')\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' \n# If you are running this notebook in Colab, follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nif 'google.colab' in sys.modules:\n from google.colab import auth as google_auth\n google_auth.authenticate_user()\n !pip install witwidget --quiet\n !pip install tensorflow==1.15.2 --quiet\n !gcloud config set project $PROJECT_ID\n\nelif \"DL_PATH\" in os.environ:\n !sudo pip install tabulate --quiet\n",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a training job using the Cloud SDK, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. AI Platform runs\nthe code from this package. In this tutorial, AI Platform also saves the\ntrained model that results from your job in the same bucket. You can then\ncreate an AI Platform model version based on this output in order to serve\nonline predictions.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.\nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Cloud\nAI Platform services are\navailable. Note that you may\nnot use a Multi-Regional Storage bucket for training with AI Platform.",
"BUCKET_NAME = \"michaelabel-gcp-training-ml\" \nREGION = \"us-central1\"\n\nos.environ['BUCKET_NAME'] = BUCKET_NAME\nos.environ['REGION'] = REGION",
"Run the following cell to create your Cloud Storage bucket if it does not already exist.",
"%%bash\nexists=$(gsutil ls -d | grep -w gs://${BUCKET_NAME}/)\n\nif [ -n \"$exists\" ]; then\n echo -e \"Bucket gs://${BUCKET_NAME} already exists.\"\n \nelse\n echo \"Creating a new GCS bucket.\"\n gsutil mb -l ${REGION} gs://${BUCKET_NAME}\n echo -e \"\\nHere are your current buckets:\"\n gsutil ls\nfi",
"Import libraries for creating model\nImport the libraries we'll be using in this tutorial. This tutorial has been tested with TensorFlow 1.15.2.",
"%tensorflow_version 1.x\nimport tensorflow as tf \nimport tensorflow.feature_column as fc\nimport pandas as pd\nimport numpy as np \nimport json\nimport time\n\n# Should be 1.15.2\nprint(tf.__version__)",
"Downloading and preprocessing data\nIn this section you'll download the data to train and evaluate your model from a public GCS bucket. The original data has been preprocessed from the public BigQuery dataset linked above.",
"%%bash\n# Copy the data to your notebook instance\nmkdir taxi_preproc\ngsutil cp -r gs://cloud-training/bootcamps/serverlessml/taxi_preproc/*_xai.csv ./taxi_preproc\nls -l taxi_preproc",
"Read the data with Pandas\nWe'll use Pandas to read the training and validation data into a DataFrame. We will only use the first 7 columns of the csv files for our models.",
"CSV_COLUMNS = ['fare_amount', 'dayofweek', 'hourofday', 'pickuplon',\n 'pickuplat', 'dropofflon', 'dropofflat']\n\nDAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']\nDTYPES = ['float32', 'str' , 'int32', 'float32' , 'float32' , 'float32' , 'float32' ]\n\ndef prepare_data(file_path):\n\n df = pd.read_csv(file_path, usecols = range(7), names = CSV_COLUMNS,\n dtype = dict(zip(CSV_COLUMNS, DTYPES)), skiprows=1)\n \n labels = df['fare_amount'] \n df = df.drop(columns=['fare_amount'])\n\n df['dayofweek'] = df['dayofweek'].map(dict(zip(DAYS, range(7)))).astype('float32')\n\n return df, labels\n\ntrain_data, train_labels = prepare_data('./taxi_preproc/train_xai.csv')\nvalid_data, valid_labels = prepare_data('./taxi_preproc/valid_xai.csv')\n\n# Preview the first 5 rows of training data\ntrain_data.head()",
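The dayofweek encoding above relies on pandas Series.map with a name-to-integer dict; a tiny standalone illustration using the same DAYS list as the notebook:

```python
import pandas as pd

DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']

# Series.map with a name-to-integer dict turns day names into floats,
# exactly as prepare_data does for the 'dayofweek' column.
s = pd.Series(['Tue', 'Sun', 'Sat'])
mapped = s.map(dict(zip(DAYS, range(7)))).astype('float32')
print(mapped.tolist())  # [2.0, 0.0, 6.0]
```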
"Build, train, and evaluate our model with Keras\nWe'll use tf.keras to build our ML model that takes our features as input and predicts the fare amount.\nBut first, we will do some feature engineering. We will be utilizing tf.feature_column and tf.keras.layers.Lambda to implement our feature engineering in the model graph to simplify our serving_input_fn later.",
"# Create functions to compute engineered features in later Lambda layers\ndef euclidean(params):\n lat1, lon1, lat2, lon2 = params\n londiff = lon2 - lon1\n latdiff = lat2 - lat1\n return tf.sqrt(londiff*londiff + latdiff*latdiff)\n\nNUMERIC_COLS = ['pickuplon', 'pickuplat', 'dropofflon', 'dropofflat', 'hourofday', 'dayofweek']\n\ndef transform(inputs):\n\n transformed = inputs.copy()\n\n transformed['euclidean'] = tf.keras.layers.Lambda(euclidean, name='euclidean')([\n inputs['pickuplat'],\n inputs['pickuplon'],\n inputs['dropofflat'],\n inputs['dropofflon']])\n \n feat_cols = {colname: fc.numeric_column(colname)\n for colname in NUMERIC_COLS}\n\n feat_cols['euclidean'] = fc.numeric_column('euclidean')\n\n print(\"BEFORE TRANSFORMATION\")\n print(\"INPUTS:\", inputs.keys())\n\n print(\"AFTER TRANSFORMATION\")\n print(\"TRANSFORMED:\", transformed.keys())\n print(\"FEATURES\", feat_cols.keys()) \n\n return transformed, feat_cols\n\ndef build_model():\n\n raw_inputs = {\n colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')\n for colname in NUMERIC_COLS\n }\n \n transformed, feat_cols = transform(raw_inputs)\n\n dense_inputs = tf.keras.layers.DenseFeatures(feat_cols.values(),\n name = 'dense_input')(transformed)\n\n h1 = tf.keras.layers.Dense(64, activation='relu', name='h1')(dense_inputs)\n h2 = tf.keras.layers.Dense(32, activation='relu', name='h2')(h1)\n output = tf.keras.layers.Dense(1, activation='linear', name = 'output')(h2)\n\n model = tf.keras.models.Model(raw_inputs, output)\n\n return model\n\nmodel = build_model()\nmodel.summary()\n\n# Compile the model and see a summary\noptimizer = tf.keras.optimizers.Adam(0.001)\n\nmodel.compile(loss='mean_squared_error', optimizer=optimizer,\n metrics = [tf.keras.metrics.RootMeanSquaredError()])\n\ntf.keras.utils.plot_model(model, to_file='model_plot.png', show_shapes=True, \n show_layer_names=True, rankdir=\"TB\")",
"Create an input data pipeline with tf.data\nPer best practices, we will use tf.data to create our input data pipeline. Our data is all in an in-memory DataFrame, so we will use tf.data.Dataset.from_tensor_slices to create our pipeline.",
"def load_dataset(features, labels, mode):\n\n dataset = tf.data.Dataset.from_tensor_slices(({\"dayofweek\" : features[\"dayofweek\"],\n \"hourofday\" : features[\"hourofday\"],\n \"pickuplat\" : features[\"pickuplat\"],\n \"pickuplon\" : features[\"pickuplon\"],\n \"dropofflat\" : features[\"dropofflat\"],\n \"dropofflon\" : features[\"dropofflon\"]},\n labels\n ))\n\n\n if mode == tf.estimator.ModeKeys.TRAIN:\n dataset = dataset.repeat().batch(256).shuffle(256*10)\n else:\n dataset = dataset.batch(256)\n\n return dataset.prefetch(1)\n\n\ntrain_dataset = load_dataset(train_data, train_labels, tf.estimator.ModeKeys.TRAIN)\nvalid_dataset = load_dataset(valid_data, valid_labels, tf.estimator.ModeKeys.EVAL)",
"Train the model\nNow we train the model. We will specify the number of epochs for which to train the model and tell the model how many steps to expect per epoch.",
"tf.keras.backend.get_session().run(tf.tables_initializer(name='init_all_tables'))\n\nsteps_per_epoch = 426433 // 256\n\nmodel.fit(train_dataset, steps_per_epoch=steps_per_epoch, validation_data=valid_dataset, epochs=10)\n\n# Send test instances to model for prediction\npredict = model.predict(valid_dataset, steps = 1)\npredict[:5]",
"Export the model as a TF 1 SavedModel\nIn order to deploy our model in a format compatible with AI Explanations, we'll follow the steps below to convert our Keras model to a TF Estimator, and then use the export_saved_model method to generate the SavedModel and save it in GCS.",
"## Convert our Keras model to an estimator\nkeras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='export')\n\nprint(model.input)\n\n# We need this serving input function to export our model in the next cell\nserving_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(\n model.input\n)\n\n\nexport_path = keras_estimator.export_saved_model(\n 'gs://' + BUCKET_NAME + '/explanations',\n serving_input_receiver_fn=serving_fn\n).decode('utf-8')",
"Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. We'll use this information when we deploy our model to AI Explanations in the next section.",
"!saved_model_cli show --dir $export_path --all",
"Deploy the model to AI Explanations\nIn order to deploy the model to AI Explanations, we need to generate an explanation_metadata.json file and upload it to the Cloud Storage bucket with our SavedModel. Then we'll deploy the model using gcloud.\nPrepare explanation metadata\nWe need to tell AI Explanations the names of the input and output tensors our model is expecting, which we print below.\nThe value for input_baselines tells the explanations service what the baseline input should be for our model. Here we're using the median for all of our input features. That means the baseline prediction for this model will be the fare our model predicts for the median of each feature in our dataset.",
"# Print the names of our tensors\nprint('Model input tensors: ', model.input)\nprint('Model output tensor: ', model.output.name)\n\nbaselines_med = train_data.median().values.tolist()\nbaselines_mode = train_data.mode().values.tolist()\nprint(baselines_med)\nprint(baselines_mode)\n\nexplanation_metadata = {\n \"inputs\": {\n \"dayofweek\": {\n \"input_tensor_name\": \"dayofweek:0\",\n \"input_baselines\": [baselines_mode[0][0]] # Thursday\n },\n \"hourofday\": {\n \"input_tensor_name\": \"hourofday:0\",\n \"input_baselines\": [baselines_mode[0][1]] # 8pm\n },\n \"dropofflon\": {\n \"input_tensor_name\": \"dropofflon:0\",\n \"input_baselines\": [baselines_med[4]] \n },\n \"dropofflat\": {\n \"input_tensor_name\": \"dropofflat:0\",\n \"input_baselines\": [baselines_med[5]] \n },\n \"pickuplon\": {\n \"input_tensor_name\": \"pickuplon:0\",\n \"input_baselines\": [baselines_med[2]] \n },\n \"pickuplat\": {\n \"input_tensor_name\": \"pickuplat:0\",\n \"input_baselines\": [baselines_med[3]] \n },\n },\n \"outputs\": {\n \"dense\": {\n \"output_tensor_name\": \"output/BiasAdd:0\"\n }\n },\n \"framework\": \"tensorflow\"\n }\n\nprint(explanation_metadata)",
"Since this is a regression model (predicting a numerical value), the baseline prediction will be the same for every example we send to the model. If this were instead a classification model, each class would have a different baseline prediction.",
"# Write the json to a local file\nwith open('explanation_metadata.json', 'w') as output_file:\n json.dump(explanation_metadata, output_file)\n\n!gsutil cp explanation_metadata.json $export_path",
"Create the model\nNow we will create our model on Cloud AI Platform if it does not already exist.",
"MODEL = 'taxifare_explain'\nos.environ[\"MODEL\"] = MODEL\n\n%%bash\nexists=$(gcloud ai-platform models list | grep ${MODEL})\n\nif [ -n \"$exists\" ]; then\n echo -e \"Model ${MODEL} already exists.\"\n \nelse\n echo \"Creating a new model.\"\n gcloud ai-platform models create ${MODEL}\nfi",
"Create the model version\nCreating the version will take ~5-10 minutes. Note that your first deploy may take longer.",
"# Each time you create a version the name should be unique\nimport datetime\nnow = datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\")\nVERSION_IG = 'v_IG_{}'.format(now)\nVERSION_SHAP = 'v_SHAP_{}'.format(now)\n\n# Create the version with gcloud\n!gcloud beta ai-platform versions create $VERSION_IG \\\n--model $MODEL \\\n--origin $export_path \\\n--runtime-version 1.15 \\\n--framework TENSORFLOW \\\n--python-version 3.7 \\\n--machine-type n1-standard-4 \\\n--explanation-method 'integrated-gradients' \\\n--num-integral-steps 25\n\n!gcloud beta ai-platform versions create $VERSION_SHAP \\\n--model $MODEL \\\n--origin $export_path \\\n--runtime-version 1.15 \\\n--framework TENSORFLOW \\\n--python-version 3.7 \\\n--machine-type n1-standard-4 \\\n--explanation-method 'sampled-shapley' \\\n--num-paths 50\n\n# Make sure the model deployed correctly. State should be `READY` in the following log\n!gcloud ai-platform versions describe $VERSION_IG --model $MODEL\n!echo \"---\"\n!gcloud ai-platform versions describe $VERSION_SHAP --model $MODEL",
"Getting predictions and explanations on deployed model\nNow that your model is deployed, you can use the AI Platform Prediction API to get feature attributions. We'll pass it a single test example here and see which features were most important in the model's prediction. Here we'll use gcloud to call our deployed model.\nFormat our request for gcloud\nTo use gcloud to make our AI Explanations request, we need to write the JSON to a file. Our example here is for a ride from the Google office in downtown Manhattan to LaGuardia Airport at 5pm on a Tuesday afternoon.\nNote that we had to write our day of the week at \"3\" instead of \"Tue\" since we encoded the days of the week outside of our model and serving input function.",
"# Format data for prediction to our model\n!rm taxi-data.txt\n!touch taxi-data.txt\nprediction_json = {\"dayofweek\": \"3\", \"hourofday\": \"17\", \"pickuplon\": \"-74.0026\", \"pickuplat\": \"40.7410\", \"dropofflat\": \"40.7790\", \"dropofflon\": \"-73.8772\"}\nwith open('taxi-data.txt', 'a') as outfile:\n json.dump(prediction_json, outfile)\n\n# Preview the contents of the data file\n!cat taxi-data.txt",
"Making the explain request\nNow we make the explaination requests. We will go ahead and do this here for both integrated gradients and SHAP using the prediction JSON from above.",
"resp_obj = !gcloud beta ai-platform explain --model $MODEL --version $VERSION_IG --json-instances='taxi-data.txt'\nresponse_IG = json.loads(resp_obj.s)\nresp_obj\n\nresp_obj = !gcloud beta ai-platform explain --model $MODEL --version $VERSION_SHAP --json-instances='taxi-data.txt'\nresponse_SHAP = json.loads(resp_obj.s)\nresp_obj",
"Understanding the explanations response\nFirst let's just look at the difference between our predictions using our baselines and our predicted taxi fare for the example.",
"explanations_IG = response_IG['explanations'][0]['attributions_by_label'][0]\nexplanations_SHAP = response_SHAP['explanations'][0]['attributions_by_label'][0]\n\npredicted = round(explanations_SHAP['example_score'], 2)\nbaseline = round(explanations_SHAP['baseline_score'], 2 )\nprint('Baseline taxi fare: ' + str(baseline) + ' dollars')\nprint('Predicted taxi fare: ' + str(predicted) + ' dollars')",
"Next let's look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed our model prediction up by that amount, and vice versa for negative attribution values. Which features seem like they're the most important...well it seems like the location features are the most important!",
"from tabulate import tabulate\n\nfeature_names = valid_data.columns.tolist()\nattributions_IG = explanations_IG['attributions']\nattributions_SHAP = explanations_SHAP['attributions']\nrows = []\nfor feat in feature_names:\n rows.append([feat, prediction_json[feat], attributions_IG[feat], attributions_SHAP[feat]])\nprint(tabulate(rows,headers=['Feature name', 'Feature value', 'Attribution value (IG)', 'Attribution value (SHAP)']))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kellerberrin/OSM-QSAR
|
Notebooks/OSM_Results/OSM Prelim Results.ipynb
|
mit
|
[
"OSM COMPETITION: A Meta Model that optimally combines the outputs of other models.\nThe aim of the competition is to develop a computational model that predicts which molecules will block the malaria parasite's ion pump, PfATP4.\nSubmitted by James McCulloch - james.duncan.mcculloch@gmail.com\nFinal Results. The DNN meta model combines probability maps of molecular structure and EC50 classifiers and has a predictive score of AUC = 0.89 on the test molecules. This model (\"osm\" in the software) is selected for the competition.\nOther \"off-the-shelf\" meta models from SKLearn have predictive scores of AUC [0.81, 0.85] (see below) and support the results obtained from the meta DNN.\nWhat is a Meta Model?\n\n\nEach predictive model based on fingerprints or another SMILE based description vector such as DRAGON brings a certain amount of predictive power to the task of assessing likely molecular activity against PfATP4.\n\n\nWhat the meta model does is combine the predictive power of each model in an optimal way to produce a more predictive composite model.\n\n\nIt does this by taking as it's input the probability maps (the outputs) of other classifiers,\n\n\nThe two models chosen as inputs to the meta model are:\n\n\nA Neural Network model that uses the DRAGON molecular descriptor to estimate molecular PfATP4 ion activity directly. This model had modest predictive power of AUC=0.77. See the first notebook for details. \n\n\nA logistic classifier that uses the Morgan fingerprints (mol radius = 5) to predict the EC50 <= 500 nMol class. This model was discussed in notebook 2 and has a predictive power of AUC=0.93 for the test molecules. Crucially, this predictive power is for EC50 only, not PfATP4 ion activity. For the test set, EC50 and PfATP4 ion activity are closely correlated because these molecules have similar structures and were designed to be active against PfATP4. 
However, other molecules from the training set with different structures have different sites of activity, and membership of the EC50 <= 500 nMol class is not predictive of PfATP4 ion activity.\n\n\n\n\nFinding the Optimal Final Model.\nA DNN and a variety of SKLearn classifiers were trained as Meta Models against the probability maps of the 2 models described above, and the resultant Area Under Curve (AUC) statistics against the test molecules are tabulated below. Note the meta model is a binary classifier [ACTIVE, INACTIVE] for ion activity; it does not attempt to classify molecules as [PARTIAL].\nResults Summary",
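"The stacking idea described above — training a second-stage classifier on the probability maps emitted by the base models — can be sketched with an off-the-shelf learner. This is an illustration on synthetic probability maps, not the notebook's actual DNN meta model; all data here is generated for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)  # ACTIVE = 1, INACTIVE = 0

# Synthetic base-model outputs: each column is one classifier's
# P(ACTIVE) probability map, noisy but correlated with the label.
p_dragon = np.clip(y * 0.6 + rng.normal(0.2, 0.2, n), 0, 1)
p_ec50 = np.clip(y * 0.7 + rng.normal(0.15, 0.2, n), 0, 1)
X_meta = np.column_stack([p_dragon, p_ec50])

# The meta model learns how much weight to give each base classifier.
meta = LogisticRegression().fit(X_meta, y)
print('meta accuracy:', meta.score(X_meta, y))
```

The learned coefficients indicate how much each base model contributes; a deeper meta model (like the DNN used here) can additionally capture interactions between the probability maps.",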
"from IPython.display import display\nimport pandas as pd\nprint(\"Meta Results\")\nmeta_results = pd.read_csv(\"./meta_results.csv\")\ndisplay(meta_results)\n",
"Where the META MODELs are as follows:\n\n\nDNN - A Deep Neural Network classifier [16, 32, 32, 16, 2] from the Keras toolkit. Cross-entropy loss function.\n\n\nNBC - A Naive Bayes Classifier\n\n\nSVMC - A support vector machine classifier.\n\n\nLOGC - A Logistic classifier.\n\n\nModelling.\nThe Meta Models run on Linux and Windows under Python 2.7 and 3.5 (Mac untested):\n\n\nDownload (follow the readme setup) the entire directory tree from google drive here: https://github.com/kellerberrin/OSM-QSAR. Detailed instructions will be also posted so that the withheld molecules can be tested against the optimal model with minimum hassle. The pre-trained DRAGON classifier \"ION_DRAGON_625.krs\" must be in the model directories. In addition, for the \"osm\" model, the pre-trained meta model \"ION_META_40.krs\" should be in the \"osm\" model directory. The software should give sensible error messages if they are missing. \n\n\nMake sure you have setup and activated the python anaconda environment as described in \"readme.md\".\n\n\nFor the optimal OSM meta model (--help for flag descriptions) the following cmd was used (the clean flag is optional it removes previous results from the model directory):\n$python OSM_QSAR.py --classify osm --load ION_META --epoch 40 --train 0 [--clean]\nFor the svmc SKLearn meta model (--help for flag descriptions) the following cmd was used (the clean flag is optional it removes previous results from the model directory):\n$python OSM_QSAR.py --classify osm_sk [--clean]\nWe visualize the training set probability maps by normalizing them to the unit interval [0, 1] and sorting them in descending order.",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom pylab import *\nfrom sklearn.preprocessing import minmax_scale\n\ndef sort_map(column):\n array = minmax_scale(train_results[column])\n return array[np.argsort(-array)]\n\nscale = 1.0\nfig = plt.figure(num=None, figsize=(8 * scale, 6 * scale), dpi=80, facecolor='w', edgecolor='k')\nfor map in all_active: plt.plot(sort_map(map), label=map)\nxlabel(\"molecules\")\nylabel(\"normalized probability\")\ntitle(\" Training Set [ACTIVE] Probability Maps\")\nlegend(loc=1); # upper right corner\n\n\ndef mol_label_list(data_frame): # Function to produce rdkit mols and associated molecular labels\n id = data_frame[\"ID\"].tolist()\n klass = data_frame[\"ACTUAL_500\"].tolist()\n potency = data_frame[\"EC50\"].tolist()\n ion_activity = data_frame[\"ION_ACTIVITY\"].tolist()\n map_prob = data_frame[\"M5_500_250\"].tolist()\n labels = []\n for idx in range(len(id)):\n labels.append(\"{} {} {} {} {:5.0f} ({:5.4f})\".format(idx+1, id[idx],\n klass[idx][0], ion_activity[idx][0],\n potency[idx]*1000, map_prob[idx]))\n smiles = data_frame[\"SMILE\"].tolist()\n mols = [Chem.MolFromSmiles(smile) for smile in smiles]\n return mols, labels\n\n\nfrom rdkit import Chem\nfrom rdkit.Chem import Draw\nfrom rdkit.Chem.Draw import IPythonConsole\nfrom rdkit import rdBase\nIPythonConsole.ipython_useSVG=True",
"ION_ACTIVITY [ACTIVE] in EC50_200",
"ion_active = ec50_200_active.loc[train_results[\"ION_ACTIVITY\"] == \"ACTIVE\"].sort_values(\"EC50\")\nmols, labels = mol_label_list(ion_active)\nDraw.MolsToGridImage(mols,legends=labels,molsPerRow=4)",
"ION_ACTIVITY [ACTIVE] Exemplar molecules that were added to the training set when moving from EC50_200 to EC50_500\nCommentary\nThese molecules have the same Triazole arm as we noticed in the previous notebook when trying to classifiy the molecular ION_ACTIVITY using D840_ACTIVE (DRAGON). This structure is also well represented in the test molecules.\nION_ACTIVITY [ACTIVE] Exemplar molecules that were added to the training set when moving from EC50_500 to EC50_1000\nThe results of the EC50_500 classification of the test molecules.",
"sorted = test_results.sort_values(\"M5_500_250\", ascending=False)\nmols, labels = mol_label_list(sorted)\nDraw.MolsToGridImage(mols,legends=labels,molsPerRow=4)",
"Commentary"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |